# llm-backend
Backend API for:
- LLM multiplexer (OpenAI / Anthropic / xAI (Grok))
- Personal chat database (chats/messages + LLM call log)
## Stack
- Node.js + TypeScript
- Fastify (HTTP)
- Prisma + SQLite (dev)
## Quick start

    cd llm-backend
    cp .env.example .env
    npm run db:migrate
    npm run dev

Then open the docs at http://localhost:8787/docs.
## Auth

Set ADMIN_TOKEN and send requests with:

    Authorization: Bearer <ADMIN_TOKEN>

If ADMIN_TOKEN is not set, the server runs in open mode (intended for local dev only).
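A minimal client-side sketch of attaching that header. The helper name `authHeaders` is illustrative, not part of the API; the token value would come from your own environment.

```typescript
// Sketch: build the headers a client would send to this backend.
// `authHeaders` is a hypothetical helper, not something the server exports.
function authHeaders(adminToken?: string): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  // In open mode (no ADMIN_TOKEN configured), the header is simply omitted.
  if (adminToken) headers["Authorization"] = `Bearer ${adminToken}`;
  return headers;
}

// Example usage against a local dev server (Node 18+ global fetch):
// await fetch("http://localhost:8787/v1/chats", {
//   headers: authHeaders(process.env.ADMIN_TOKEN),
// });
```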
## Env

- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- XAI_API_KEY
## API

- GET /health
- GET /v1/chats
- POST /v1/chats
- GET /v1/chats/:chatId
- POST /v1/chats/:chatId/messages
- POST /v1/chat-completions
- POST /v1/chat-completions/stream (SSE)
POST /v1/chat-completions body example:

    {
      "chatId": "<optional chat id>",
      "provider": "openai",
      "model": "gpt-4.1-mini",
      "messages": [
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Say hi"}
      ],
      "temperature": 0.2,
      "maxTokens": 256
    }
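The body example above can be wrapped in a small typed client. This is a sketch under assumptions: the field names mirror the example request, but the response shape (`res.json()` contents) is not specified here, so check /docs for the actual schema.

```typescript
// Sketch of a typed client for POST /v1/chat-completions.
// Types are inferred from the README's body example; they are assumptions,
// not the server's published schema.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface CompletionRequest {
  chatId?: string;
  provider: "openai" | "anthropic" | "xai";
  model: string;
  messages: ChatMessage[];
  temperature?: number;
  maxTokens?: number;
}

// Build a request body matching the example above.
function buildRequest(messages: ChatMessage[], model = "gpt-4.1-mini"): CompletionRequest {
  return { provider: "openai", model, messages, temperature: 0.2, maxTokens: 256 };
}

// POST the request; response parsing is left loose since the schema
// is documented at /docs, not here.
async function complete(baseUrl: string, req: CompletionRequest, token?: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/v1/chat-completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Example usage:
// const reply = await complete(
//   "http://localhost:8787",
//   buildRequest([{ role: "user", content: "Say hi" }]),
//   process.env.ADMIN_TOKEN,
// );
```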
## Next steps (planned)
- Better streaming protocol compatibility (OpenAI-style chunks + cancellation)
- Tool/function calling normalization
- User accounts + per-device API keys
- Postgres support + migrations for prod
- Attachments + embeddings + semantic search