# llm-backend

Backend API for:

- LLM multiplexer (OpenAI / Anthropic / xAI (Grok))
- Personal chat database (chats/messages + LLM call log)

## Stack

- Node.js + TypeScript
- Fastify (HTTP)
- Prisma + SQLite (dev)

## Quick start

```bash
cd llm-backend
cp .env.example .env
npm run db:migrate
npm run dev
```

Open docs: `http://localhost:8787/docs`

## Auth

Set `ADMIN_TOKEN` and send:

`Authorization: Bearer <ADMIN_TOKEN>`

If `ADMIN_TOKEN` is not set, the server runs in open mode (dev).
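For example, assuming the dev server from the quick start is running on port 8787 (the port implied by the docs URL), an authenticated request might look like:

```bash
# Hypothetical example: set ADMIN_TOKEN in your shell to match the server's value.
curl -H "Authorization: Bearer $ADMIN_TOKEN" http://localhost:8787/health
```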
## Env

- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `XAI_API_KEY`
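A minimal `.env` sketch, using the variables listed above plus `ADMIN_TOKEN` from the Auth section; all values here are placeholders:

```bash
# .env — placeholder values, never commit real keys
ADMIN_TOKEN=change-me
OPENAI_API_KEY=sk-placeholder
ANTHROPIC_API_KEY=sk-ant-placeholder
XAI_API_KEY=xai-placeholder
```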
## API

- `GET /health`
- `GET /v1/chats`
- `POST /v1/chats`
- `GET /v1/chats/:chatId`
- `POST /v1/chats/:chatId/messages`
- `POST /v1/chat-completions`

`POST /v1/chat-completions` body example:
```json
{
  "chatId": "<optional chat id>",
  "provider": "openai",
  "model": "gpt-4.1-mini",
  "messages": [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Say hi"}
  ],
  "temperature": 0.2,
  "maxTokens": 256
}
```
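Assuming the server is running locally on port 8787, a body like the one above could be sent as follows (the `Authorization` header is only needed when `ADMIN_TOKEN` is set):

```bash
# Hypothetical request sketch; adjust host, port, and token to your setup.
curl -X POST http://localhost:8787/v1/chat-completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -d '{
    "provider": "openai",
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "Say hi"}],
    "temperature": 0.2,
    "maxTokens": 256
  }'
```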
## Next steps (planned)

- SSE streaming (`/v1/chat-completions:stream`)
- Tool/function calling normalization
- User accounts + per-device API keys
- Postgres support + migrations for prod
- Attachments + embeddings + semantic search