Sybil-2/docs/api/rest.md
2026-02-14 21:20:14 -08:00

REST API Contract

Base URL: /api when served behind the web proxy, or the server root directly in local/dev.

Authentication:

  • If ADMIN_TOKEN is set on the server, send Authorization: Bearer <token> with every request.
  • If ADMIN_TOKEN is unset, the API is open; this is intended for local/dev use only.
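The two auth modes above can be handled with a small client-side helper. This is a sketch, not part of the contract; `buildHeaders` and its `adminToken` parameter are illustrative names (read the token from wherever your deployment stores it):

```typescript
// Build request headers for this API. Attach Authorization only when an
// admin token is configured; otherwise the API is running in open mode.
function buildHeaders(adminToken?: string): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (adminToken) {
    headers["Authorization"] = `Bearer ${adminToken}`;
  }
  return headers;
}
```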

Content type:

  • Requests with bodies use application/json.
  • Responses are JSON unless noted otherwise.

Health + Auth

GET /health

  • Response: { "ok": true }

GET /v1/auth/session

  • Response: { "authenticated": true, "mode": "open" | "token" }

Models

GET /v1/models

  • Response:
{
  "providers": {
    "openai": { "models": ["gpt-4.1-mini"], "loadedAt": "2026-02-14T00:00:00.000Z", "error": null },
    "anthropic": { "models": ["claude-3-5-sonnet-latest"], "loadedAt": null, "error": null },
    "xai": { "models": ["grok-3-mini"], "loadedAt": null, "error": null }
  }
}
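A client typically wants the response above flattened into usable (provider, model) pairs, skipping providers whose catalog failed to load. A sketch, with types inferred from the sample response (the `ProviderModels` and `availableModels` names are this example's, not the API's):

```typescript
// Shape of one provider entry in the GET /v1/models response, per the sample.
interface ProviderModels {
  models: string[];
  loadedAt: string | null;
  error: string | null;
}

// Flatten the providers map into (provider, model) pairs, skipping any
// provider that reported an error loading its model list.
function availableModels(
  resp: { providers: Record<string, ProviderModels> },
): Array<{ provider: string; model: string }> {
  const out: Array<{ provider: string; model: string }> = [];
  for (const [provider, info] of Object.entries(resp.providers)) {
    if (info.error !== null) continue; // catalog failed to load; skip
    for (const model of info.models) out.push({ provider, model });
  }
  return out;
}
```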

Chats

GET /v1/chats

  • Response: { "chats": ChatSummary[] }

POST /v1/chats

  • Body: { "title"?: string }
  • Response: { "chat": ChatSummary }

DELETE /v1/chats/:chatId

  • Response: { "deleted": true }
  • Not found: 404 { "message": "chat not found" }

GET /v1/chats/:chatId

  • Response: { "chat": ChatDetail }

POST /v1/chats/:chatId/messages

  • Body:
{
  "role": "system|user|assistant|tool",
  "content": "string",
  "name": "optional",
  "metadata": {}
}
  • Response: { "message": Message }
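Before POSTing a message body, a client can validate it against the role union above. A sketch assuming the field shapes in the sample body (`isValidMessage` is a hypothetical helper, not a server-provided check):

```typescript
// Role union from the contract above.
const ROLES = ["system", "user", "assistant", "tool"] as const;
type Role = (typeof ROLES)[number];

interface NewMessage {
  role: Role;
  content: string;
  name?: string;
  metadata?: Record<string, unknown>;
}

// Type guard: checks the two required fields before sending the request.
function isValidMessage(body: unknown): body is NewMessage {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return typeof b.content === "string" && ROLES.includes(b.role as Role);
}
```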

Chat Completions (non-streaming)

POST /v1/chat-completions

  • Body:
{
  "chatId": "optional-chat-id",
  "provider": "openai|anthropic|xai",
  "model": "string",
  "messages": [
    { "role": "system|user|assistant|tool", "content": "string", "name": "optional" }
  ],
  "temperature": 0.2,
  "maxTokens": 256
}
  • Response:
{
  "chatId": "chat-id-or-null",
  "provider": "openai",
  "model": "gpt-4.1-mini",
  "message": { "role": "assistant", "content": "..." },
  "usage": { "inputTokens": 10, "outputTokens": 20, "totalTokens": 30 },
  "raw": {}
}

Behavior notes:

  • If chatId is present, the server validates that the chat exists before running the completion.
  • For calls with a chatId, the server stores only the new non-assistant messages from the provided history, to avoid duplicates.
  • The server persists the final assistant output and call metadata (LlmCall) in the DB.
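Assembling the request body above can be sketched as follows. The defaults for temperature and maxTokens mirror the sample values only; the contract does not mandate them, and `buildCompletionRequest` is an illustrative helper:

```typescript
interface CompletionRequest {
  chatId?: string;
  provider: "openai" | "anthropic" | "xai";
  model: string;
  messages: Array<{ role: string; content: string; name?: string }>;
  temperature: number;
  maxTokens: number;
}

// Build a POST /v1/chat-completions body, omitting chatId entirely when the
// call is not tied to a stored chat.
function buildCompletionRequest(
  provider: CompletionRequest["provider"],
  model: string,
  messages: CompletionRequest["messages"],
  opts: { chatId?: string; temperature?: number; maxTokens?: number } = {},
): CompletionRequest {
  return {
    ...(opts.chatId ? { chatId: opts.chatId } : {}),
    provider,
    model,
    messages,
    temperature: opts.temperature ?? 0.2, // sample default, not contractual
    maxTokens: opts.maxTokens ?? 256, // sample default, not contractual
  };
}
```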

Searches

GET /v1/searches

  • Response: { "searches": SearchSummary[] }

POST /v1/searches

  • Body: { "title"?: string, "query"?: string }
  • Response: { "search": SearchSummary }

DELETE /v1/searches/:searchId

  • Response: { "deleted": true }
  • Not found: 404 { "message": "search not found" }

GET /v1/searches/:searchId

  • Response: { "search": SearchDetail }

POST /v1/searches/:searchId/run

  • Body:
{
  "query": "optional override",
  "title": "optional override",
  "type": "auto|fast|deep|instant",
  "numResults": 10,
  "includeDomains": ["example.com"],
  "excludeDomains": ["example.org"]
}
  • Response: { "search": SearchDetail }

Search run notes:

  • The backend executes an Exa search and an Exa answer call.
  • It persists the answer text and citations plus the ranked results.
  • If both the search and the answer fail, the endpoint returns an error.
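Since every field in the run body is an optional override, a client should drop undefined keys so the search's stored query/title are used. A sketch (`buildRunBody` is an illustrative helper name):

```typescript
type SearchType = "auto" | "fast" | "deep" | "instant";

// Body for POST /v1/searches/:searchId/run; every field is an optional override.
interface RunSearchBody {
  query?: string;
  title?: string;
  type?: SearchType;
  numResults?: number;
  includeDomains?: string[];
  excludeDomains?: string[];
}

// Copy only the overrides that were actually provided, so absent fields
// fall back to the values stored on the search.
function buildRunBody(overrides: RunSearchBody): RunSearchBody {
  const body: RunSearchBody = {};
  for (const [key, value] of Object.entries(overrides)) {
    if (value !== undefined) (body as Record<string, unknown>)[key] = value;
  }
  return body;
}
```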

Type Shapes

ChatSummary

{ "id": "...", "title": null, "createdAt": "...", "updatedAt": "..." }

Message

{
  "id": "...",
  "createdAt": "...",
  "role": "system|user|assistant|tool",
  "content": "...",
  "name": null
}

ChatDetail

{
  "id": "...",
  "title": null,
  "createdAt": "...",
  "updatedAt": "...",
  "messages": [Message]
}

SearchSummary

{ "id": "...", "title": null, "query": null, "createdAt": "...", "updatedAt": "..." }

SearchDetail

{
  "id": "...",
  "title": "...",
  "query": "...",
  "createdAt": "...",
  "updatedAt": "...",
  "requestId": "...",
  "latencyMs": 123,
  "error": null,
  "answerText": "...",
  "answerRequestId": "...",
  "answerCitations": [],
  "answerError": null,
  "results": []
}
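For typed clients, the shapes above translate directly into TypeScript interfaces. Nullability is inferred from the samples (e.g. `title` shown as `null`), so this is a conservative sketch rather than a generated schema:

```typescript
interface ChatSummary {
  id: string;
  title: string | null;
  createdAt: string;
  updatedAt: string;
}

interface Message {
  id: string;
  createdAt: string;
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  name: string | null;
}

// ChatDetail is a ChatSummary plus its messages.
interface ChatDetail extends ChatSummary {
  messages: Message[];
}

interface SearchSummary {
  id: string;
  title: string | null;
  query: string | null;
  createdAt: string;
  updatedAt: string;
}
```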

For streaming contracts, see docs/api/streaming-chat.md.