For developers

Build against Eldric
like you would against OpenAI.

The public API is OpenAI-compatible. Tooling written for the OpenAI SDK works against an Eldric server with one line changed — the base URL. The full reference lives in the documents below.


Three documents

Pick the one that fits how deep you need to go.

api-public.md

Edge-reachable endpoints only. One line per endpoint. The first place to look if you are integrating from outside the LAN.

api-reference.md

The complete endpoint surface, grouped by component (Edge, Controller, Router, Data, Agent, Media, Comm, Science, Training, IoT, Swarm, NOVA, etc.).

Features.md

Eighteen domains, every shipped feature, and what is currently work-in-progress. The catalogue behind every page on the site.


Quick start

Five-minute setup.

1. Install

Download the package for your distribution from repo.eldric.ai. For Fedora / RHEL:

sudo dnf install eldric-controller eldric-edge eldric-workerd eldric-datad

2. Configure

Start the four services. Each has a sane default config under /etc/eldric/. The controller listens on port 8880, the edge on 443.
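Assuming the systemd unit names match the package names from the install step (check with `systemctl list-unit-files` if they differ), the four services can be enabled and started in one command:

```shell
# Enable at boot and start immediately; unit names assumed to match the packages.
sudo systemctl enable --now eldric-controller eldric-edge eldric-workerd eldric-datad

# Confirm the controller (8880) and edge (443) are listening on their default ports.
ss -tlnp | grep -E ':8880|:443'
```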

3. Create the admin user

Open the chat interface in your browser (the edge serves it at /chat). On a fresh install you are prompted to create the first user. That first user becomes the system administrator — they get the full set of tenant, user, license and configuration controls. There is no shipped default password; the admin is whoever signs up first. Make sure the right person on your team signs up first, and lock down the initial signup once they are in.

4. First API call

From the admin UI, mint an API key. Then, once an inference worker is running and a model is loaded:

curl https://eldric.local/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.3-70b",
    "messages": [{"role":"user","content":"Hello."}],
    "stream": true
  }'
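With "stream": true, an OpenAI-compatible endpoint returns server-sent events: a sequence of "data:" lines, each carrying a JSON chunk, terminated by a "data: [DONE]" sentinel. A minimal sketch of accumulating the assistant text from such a stream (the chunk shape shown follows the OpenAI streaming format, which the compatibility claim implies):

```python
import json

def collect_stream(lines):
    """Accumulate assistant text from OpenAI-style SSE 'data:' lines."""
    text = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

# Two chunks shaped like the OpenAI streaming format:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo."}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # prints "Hello."
```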

5. Use the OpenAI SDK

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://eldric.local/v1",
    api_key=os.environ["API_KEY"],  # the key minted in step 4
)
resp = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Hello."}],
)
print(resp.choices[0].message.content)

That is it. Existing OpenAI tooling — LangChain, LlamaIndex, Continue, Cursor, whatever you have — works unchanged after pointing the base URL at Eldric.


Going deeper

Beyond chat.

Agents & workflows

Build a multi-step ReAct loop with /api/v1/agent/chat. Decompose complex queries with /api/v1/agent/decompose. Run multiple agents in parallel with /api/v1/agent/multi.
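A sketch of calling the agent endpoint with the standard-library urllib — the body fields beyond model and messages (here a hypothetical max_steps cap on ReAct iterations) are assumptions; api-reference.md defines the exact schema:

```python
import json
from urllib import request

BASE = "https://eldric.local"
API_KEY = "sk-..."  # key minted in the admin UI

# Illustrative body; "max_steps" is a hypothetical field, check api-reference.md.
payload = {
    "model": "llama-3.3-70b",
    "messages": [{"role": "user", "content": "What changed in the Q3 report?"}],
    "max_steps": 8,
}

req = request.Request(
    f"{BASE}/api/v1/agent/chat",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# response = request.urlopen(req)  # uncomment against a live server
```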

Training

Create a fine-tune job at /api/v1/jobs with LoRA, QLoRA, SFT, or DPO. The dataset can be local JSONL or pulled from a data worker.
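For the local-JSONL route, a sketch of writing a small SFT dataset — one chat transcript per line. The messages-per-line record shape used here is the common convention, not a confirmed Eldric schema; api-reference.md defines what /api/v1/jobs actually expects:

```python
import json

# Two toy SFT examples; record shape assumed (messages per line).
examples = [
    {"messages": [
        {"role": "user", "content": "What port does the controller use?"},
        {"role": "assistant", "content": "8880."},
    ]},
    {"messages": [
        {"role": "user", "content": "What port does the edge use?"},
        {"role": "assistant", "content": "443."},
    ]},
]

# JSONL: one JSON object per line, no enclosing array.
with open("sft-dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```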

Plugins

Plugins are Python or JavaScript add-ons that the chat shell loads at runtime. Five plugin types: Tool, Filter, Pipe, Action, Widget. Install from the marketplace at /api/v1/marketplace/catalog.
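As a rough sketch of the Tool type, a hypothetical skeleton — the class attributes and method names here are illustrative only, not the real plugin contract, which api-reference.md documents:

```python
# Hypothetical Tool-plugin skeleton; names are assumptions, not the real API.
class WeatherTool:
    """A Tool plugin exposes a callable the model can invoke mid-turn."""

    name = "get_weather"
    description = "Return current conditions for a city."

    def run(self, city: str) -> dict:
        # A real plugin would call an external service here.
        return {"city": city, "conditions": "unknown"}

tool = WeatherTool()
print(tool.run("Oslo"))  # prints {'city': 'Oslo', 'conditions': 'unknown'}
```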

Webhooks

Subscribe to events at /api/v1/webhooks/subscriptions. Each outbound POST is signed with HMAC-SHA256. Subscriptions whose deliveries repeatedly fail are disabled automatically once the failure threshold is reached.
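Verifying an HMAC-SHA256 signature on the receiving end takes a few lines of standard-library Python. The header that carries the signature, and whether it is hex- or base64-encoded, are assumptions here — check api-reference.md for the exact convention:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Recompute HMAC-SHA256 over the raw body; compare in constant time."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Simulate a delivery: the server signs the raw body with the shared secret.
secret = b"whsec_example"
body = b'{"event":"job.completed","id":"123"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, sig))  # prints True
```

Always verify against the raw request bytes, not a re-serialized JSON object — re-encoding can reorder keys or change whitespace and break the signature.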