A short guide to applying Eldric to a domain it doesn't already cover out of the box. The principle is the same everywhere — ingest, consolidate, distil, deploy, query. The specifics are yours; the platform provides the lanes.
Get your data into the platform. Documents go through the chunked-upload protocol into a knowledge base; structured data via the database connector; sensor feeds through the IoT worker; messages through the communication worker. The data stays on your hardware from the moment it lands.
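As an illustration of the ingest leg, a client-side sketch of chunked upload. Everything below (endpoint path, chunk size, header names) is an assumption standing in for the real chunked-upload protocol; consult your deployment's API reference for the actual contract.

```python
# Client-side sketch of chunked document upload. Endpoint path, chunk
# size, and header names are illustrative assumptions, not the
# documented protocol.
import requests

BASE = "https://eldric.local"            # your controller
KB_ID = "my-knowledge-base"              # target knowledge base
CHUNK = 8 * 1024 * 1024                  # 8 MiB per part (assumed)

def upload(path: str, token: str) -> None:
    with open(path, "rb") as f:
        index = 0
        while data := f.read(CHUNK):
            requests.post(
                f"{BASE}/api/v1/kb/{KB_ID}/chunks",          # assumed path
                headers={"Authorization": f"Bearer {token}",
                         "X-Chunk-Index": str(index)},       # assumed header
                data=data,
                timeout=60,
            ).raise_for_status()
            index += 1
```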
Normalise across sources. A pharma lab pulls patient records from one EHR, clinical guidelines from another corpus, and study protocols from a third. The data worker brings them under one tenant with consistent metadata, so a single query reaches all three.
Push the right knowledge into the right shape. Vector embeddings for exact retrieval; matrix memory for compressed, generalising recall; a fine-tuned small model for your house terminology. The distillation pipeline (§50) reads from a vector source, prompts an LLM for question-answer pairs, and writes both sides as outer-product associations into a portable .emm file.
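To make the outer-product step concrete, a minimal numpy sketch: each question-answer embedding pair is stored as a rank-1 association, and recall is a single matrix-vector product. The dimension and the recall rule are illustrative assumptions; the .emm file layout itself is platform-defined.

```python
# Outer-product associative memory in miniature. Store (question, answer)
# embedding pairs as a sum of rank-1 outer products; recall by
# matrix-vector product.
import numpy as np

dim = 1024                      # embedding dimension (assumed)
M = np.zeros((dim, dim))        # the matrix memory

def store(q_vec: np.ndarray, a_vec: np.ndarray) -> None:
    """Associate a question embedding with an answer embedding."""
    global M
    M = M + np.outer(a_vec, q_vec)      # one rank-1 association

def recall(q_vec: np.ndarray) -> np.ndarray:
    """Approximate the answer embedding for a (possibly unseen) question."""
    return M @ q_vec                    # generalises across similar questions
```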
The .emm matrix-memory file is a single binary that captures the domain knowledge you've consolidated. It moves between clusters, ships to edge devices, gets versioned alongside your code. Take the memory of your hospital with you to the field clinic; deploy the memory of your factory line to the robot.
Push the AI to where the work happens. A data centre cluster for the central knowledge worker. A Raspberry Pi at the line for soft real-time inference. An on-board kernel on the robot for offline operation when the radio link drops. Same binary, same memory format, different scale.
The chat interface, the OpenAI-compatible API, agent invocation, scientific tool dispatch, voice — all hit the same underlying memory and the same agents. A query returns the answer in your terminology, cited from your sources, with the audit trail compliance demands.
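Because the API is OpenAI-compatible, any standard client works against it. A minimal sketch with the openai Python client; the base URL path and the model name are assumptions for your own deployment.

```python
# Query the platform through its OpenAI-compatible endpoint.
# Base URL path and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="https://eldric.local/v1",   # assumed path
                api_key="YOUR_TENANT_KEY")

resp = client.chat.completions.create(
    model="house-model",                               # your fine-tuned model
    messages=[{"role": "user",
               "content": "Summarise our SOP for batch release."}],
)
print(resp.choices[0].message.content)
```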
The Science worker has sixteen categories with twenty-eight built-in sources today (NASA, CERN, PubMed, GBIF, …). Your domain might not be in there — but the custom category is the plugin entry point. Register a new source via POST /api/v1/science/sources with the catalogue metadata, point at your endpoint, and the dispatch flow at /api/v1/science/tools/execute picks it up alongside the built-ins.
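A sketch of the registration-plus-dispatch round trip. The two endpoint paths are the documented ones; the payload field names are illustrative assumptions about the catalogue metadata shape.

```python
# Register a custom Science source, then dispatch a query through it.
# Endpoint paths are documented; body fields are assumptions.
import requests

BASE = "https://eldric.local"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

requests.post(f"{BASE}/api/v1/science/sources", headers=HEADERS, json={
    "name": "internal-variantdb",                 # assumed field names
    "category": "custom",
    "endpoint": "https://variantdb.lab.internal/query",
    "description": "In-house variant database",
}, timeout=30).raise_for_status()

result = requests.post(f"{BASE}/api/v1/science/tools/execute",
                       headers=HEADERS, json={
    "source": "internal-variantdb",               # assumed field names
    "query": "BRCA1 c.68_69delAG",
}, timeout=60)
print(result.json())
```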
Every customer gets KB upload from day one: the Admin Console → Knowledge Bases → New KB flow takes PDF, DOCX, Markdown, plain text, and HTML. Chunking and embedding happen locally; the vectors stay on your data worker. RAG queries from the chat shell or via /api/v1/agent/knowledge-bases/{id}/search retrieve and cite.
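A retrieval call might look like the following. The endpoint path is the documented one; the request body and response shape are assumptions.

```python
# RAG retrieval against a knowledge base. Endpoint path is documented;
# request body and response fields are illustrative assumptions.
import requests

r = requests.post(
    "https://eldric.local/api/v1/agent/knowledge-bases/clinical-guidelines/search",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={"query": "contraindications for drug X", "top_k": 5},   # assumed body
    timeout=30,
)
r.raise_for_status()
for hit in r.json().get("results", []):    # assumed response shape
    print(hit)                             # each hit carries text plus citation
```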
POST /api/v1/agent/generate-training on the Agent worker reads from a KB and produces training samples in Alpaca, OpenAI, or DPO format. Hand the dataset to the Training worker for LoRA, QLoRA, SFT, or DPO. The output is a fine-tuned small model that speaks your terminology, and it never leaves your network.
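A sketch of kicking off sample generation. The endpoint and the three format names are documented; the body field names are assumptions.

```python
# Generate training samples from a KB. Endpoint and format names are
# documented; the request body fields are illustrative assumptions.
import requests

resp = requests.post(
    "https://eldric.local/api/v1/agent/generate-training",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={"knowledge_base": "institutional-cases",   # assumed field
          "format": "alpaca",                        # alpaca | openai | dpo
          "samples": 5000},                          # assumed field
    timeout=120,
)
resp.raise_for_status()
# Alpaca-format samples are {"instruction", "input", "output"} records.
print(resp.json())
```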
Plugins are Python or JavaScript add-ons the Edge plugin host loads at runtime. Five types: Tool (server-side capability the LLM can call), Filter (pre/post-LLM message processing), Pipe (virtual models / custom backends), Action (client-side UI extension), Widget (client-side UI panel). Install from the marketplace catalogue at /api/v1/marketplace/catalog or drop your own into the plugins directory.
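As a sketch of the Tool type, the smallest useful server-side plugin. The class shape shown here is hypothetical; only the five plugin types and the plugins-directory install path come from the documentation, so consult the Edge plugin host reference for the real contract.

```python
# Hypothetical Tool plugin: a server-side capability the LLM can call.
# The name/description/run interface is an assumed contract, shown only
# to illustrate the shape of a Tool-type plugin.

class UnitConvertTool:
    """Convert a value between metric mass units."""
    name = "unit_convert"
    description = "Convert a value between mg, g, and kg."

    def run(self, value: float, from_unit: str, to_unit: str) -> float:
        factors = {"mg": 1e-3, "g": 1.0, "kg": 1e3}   # factors to grams
        return value * factors[from_unit] / factors[to_unit]
```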
The Agent Builder takes a description and generates an agent — tool definitions, system prompt, test cases. The Agent Generator does the same from a domain template (finance, medical, devops, legal, science, support). Generated agents land in agents/<name>/ as a self-contained package you version-control.
Subscribe to platform events at /api/v1/webhooks/subscriptions. Each outbound POST is HMAC-SHA256-signed; a subscription auto-disables once its failed deliveries cross the threshold. Wire your downstream LIMS, ticketing system, or monitoring stack into the platform's event bus.
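Verifying a delivery on the receiving side is standard HMAC practice. A minimal sketch, assuming the signature arrives hex-encoded in a request header (the header name and encoding are assumptions).

```python
# Verify an incoming webhook delivery. HMAC-SHA256 signing is documented;
# the header name and hex encoding are assumptions about the wire format.
import hashlib
import hmac

def verify(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Recompute the body signature and compare in constant time."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```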
A genomics lab at a research hospital wants AI assistance for variant interpretation, but its patient genomes are bound by GDPR + HIPAA and cannot leave the institution. Here is what they did, step by step.
One DGX-class node in the genomics lab plus a small data worker on adjacent storage. curl https://repo.eldric.ai/install.sh | sudo bash, then dnf install eldric-aios on each. Controller on the DGX, data worker on the storage box. License file from license@core.at for the Professional tier.
Three KBs created via the Admin Console: clinical-guidelines (ACMG variant-interpretation standards, hospital protocols), recent-literature (PubMed-cached papers from the last two years), institutional-cases (de-identified prior case write-ups). Chunked-upload protocol handled the multi-GB PDF batches.
The Science worker enables BLAST, Ensembl, GTEx, and OpenFDA out of the box. The BLAST databases sit locally on the data worker: query metadata leaves the institution, sequence data never does. The lab adds two custom sources via the §43 registry: their internal variant database, and the consortium-shared frequency table they have access to.
The lab's variant-interpretation reasoning style is specific: short verdict, structured rationale, ACMG criteria explicitly cited. POST /api/v1/agent/generate-training on the institutional-cases KB produced 4,800 training samples; a QLoRA run on Llama-3.2-8B gave the lab its house model in about twelve hours.
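For readers who want to picture the fine-tuning step, here is what a comparable QLoRA setup looks like in open tooling (peft + transformers). This is a sketch only; the Training worker has its own configuration surface, and the model path is a placeholder.

```python
# A comparable QLoRA setup in open tooling -- illustrative only; the
# Training worker's actual configuration may differ.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)

model = AutoModelForCausalLM.from_pretrained("path/to/base-model",  # local weights
                                             quantization_config=bnb)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)   # ready for SFT on the generated samples
```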
The combined knowledge — ACMG guidelines + recent literature + institutional patterns — is distilled into a hierarchical .emm file (domain: genomics, rank 256, dim 1024). The matrix memory sits next to the vector store. Compressed recall is fast at high concurrency; exact-citation retrieval still hits the vector store when needed.
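The footprint is part of what makes the file portable. A back-of-envelope calculation, assuming a factorised U Vᵀ layout at the stated rank and dimension (the actual .emm layout may differ):

```python
# Rough footprint for a rank-256, dim-1024 matrix memory, assuming a
# factorised U @ V.T layout -- an assumption, not the .emm spec.
dim, rank = 1024, 256
params = 2 * dim * rank                  # U: dim x rank, V: dim x rank
print(params, "parameters ->", params * 4 / 2**20, "MiB at float32")
# 524288 parameters -> 2.0 MiB at float32
```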
Geneticists query through the webchat with the variant-interp tenant active. Agentic RAG walks across all three KBs plus the Science-worker BLAST output. Audit ledger captures every AI-assisted decision for the IRB. Patient genomes never crossed the LAN boundary; only the answer did.
The portable matrix-memory file is the key to running domain-aware AI outside the data centre. A field clinic with a Raspberry Pi gets the same .emm file as the central hospital; a factory edge gateway gets a manufacturing-tuned variant; a robot on the floor gets the policy-execution slice via the xLSTM daemon. Same binary, same memory format, different deployment scale.
Operations details (signed-artifact verification, version pinning, store-and-forward for offline operation, replication policy to keep replicas current) are covered in the Edge runtime documentation: see the for-enterprises page for the architectural overview and the features catalogue §14 for the per-feature surface.
Sixteen industry pages cover the sectors with the most established deployment patterns: hospitals, banks, law firms, insurance, pharma, genomics, factories, automotive, robotics, logistics, trading desks, space agencies and observatories, earth observation, materials science, neuroscience, CRISPR.
If your domain isn't on that list yet, the five-step principle above still applies. Write to office@eldric.ai with a sketch of what you want to do; we'll tell you which extension points fit, what the worked example would look like for your sector, and whether the result needs a 5.0 patch or fits the current release.