Find the right doc for your question — grouped by what you're doing, not by what the system is called. The pointers go to the deeper documents that already exist (API reference, features catalogue, sector pages, troubleshooting), not to duplicates.
Install in five minutes on any of three platform paths: Linux server, macOS workstation, or multi-node cluster.
From `dnf install` to a working cluster — admin signup, license, first KB, first chat.
Nine common failure modes (GPG check failures, port conflicts, admin conflicts, macOS Gatekeeper, license-signature errors) with recovery steps.
Cluster-wide RPM upgrades orchestrated through the controller. Drain → install → restart per node. Reference: api-reference.md §19.
Local-destination snapshots of cluster state — controller, vector, memory, configs. Verify + restore per snapshot. Reference: api-reference.md §18.
Internal CA plus Let's Encrypt issuance and renewal via certbot. Generate, deploy, rotate. Cluster-wide push from the master node. Reference: features §13.4.
Walks a 4.x data tree (vectors / memory / agent / comm / science) and replays it into the 5.0 syscall surface. Companion CLI: scripts/migrate-4x-to-5.0.sh.
rsync-over-SSH replication between data workers. Covers vector-storage, matrix-memory, and file-storage tenants. SSH key auth.
Outbound event publisher with HMAC-SHA256 signing. Failed deliveries auto-disable after threshold. Reference: for-developers.
OpenTelemetry exporter — spans, counters, histograms. Off by default; admin opt-in. Low-cardinality path normalisation built in.
Hash-chained log of every AI-assisted decision. Defensible record for compliance reviews. Reference: features §13.5.
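The hash-chaining idea can be sketched in a few lines. This is a minimal illustration of the technique, not the product's storage format: entry fields, function names, and the use of JSON are assumptions for the example.

```python
import hashlib
import json

def append_entry(chain, entry):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev, "hash": digest})

def verify_chain(chain):
    """Recompute every link; any tampered entry breaks the chain."""
    prev = "0" * 64
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        if link["prev"] != prev or link["hash"] != digest:
            return False
        prev = link["hash"]
    return True

chain = []
append_entry(chain, {"actor": "agent-7", "decision": "approve"})
append_entry(chain, {"actor": "agent-7", "decision": "escalate"})
print(verify_chain(chain))             # True
chain[0]["entry"]["decision"] = "deny"
print(verify_chain(chain))             # False — the chain detects the edit
```

Because each hash commits to the previous one, altering any past decision invalidates every later link, which is what makes the record defensible in a review.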
Controller serves controller-overview, swarm-management, cluster-ops, and knowledge-browser dashboards. Access at /dashboard.
OpenAI-compatible API, plugin authoring, webhooks, agents — drop-in replacement for OpenAI tooling.
Complete endpoint surface — Edge, Controller, Router, Data, Agent, Media, Comm, Science, Training, IoT, Swarm, NOVA, xLSTM, Cluster Operations.
Edge-reachable endpoints only — one line per endpoint. The starting point for external integration.
Five plugin types: Tool, Filter, Pipe, Action, Widget. Python or JS. Manifest + valves + main code. Marketplace at /api/v1/marketplace/catalog.
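A plugin bundle pairs a manifest with its main code. The shape below is a hypothetical sketch of what such a manifest and a basic validity check might look like; the field names (`valves`, `entrypoint`, etc.) are illustrative assumptions, not the documented schema.

```python
# Hypothetical manifest; field names are assumptions for illustration.
manifest = {
    "name": "weather_lookup",
    "type": "Tool",                    # one of the five plugin types
    "language": "python",              # Python or JS per the docs
    "valves": {"api_timeout_s": 10},   # user-tunable settings
    "entrypoint": "main.py",
}

REQUIRED = {"name", "type", "entrypoint"}
PLUGIN_TYPES = {"Tool", "Filter", "Pipe", "Action", "Widget"}

def validate(m):
    """Reject manifests missing required fields or using an unknown type."""
    missing = REQUIRED - m.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if m["type"] not in PLUGIN_TYPES:
        raise ValueError(f"unknown plugin type: {m['type']}")
    return True

print(validate(manifest))  # True
```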
Agent Builder uses LLM reasoning to design novel agents; Agent Generator uses domain templates for deterministic mass production. Both write to agents/<name>/.
Subscribe at /api/v1/webhooks/subscriptions. HMAC-SHA256 signing on every outbound POST. Failed-delivery auto-disable.
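A receiver verifies the HMAC-SHA256 signature before trusting a delivery. The sketch below shows the standard pattern with Python's stdlib; the exact header name and secret-exchange mechanism are not specified here, so treat those details as assumptions.

```python
import hmac
import hashlib

def sign(secret: bytes, body: bytes) -> str:
    """Hex HMAC-SHA256 over the raw request body."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(secret: bytes, body: bytes, signature: str) -> bool:
    """Constant-time comparison against the received signature."""
    return hmac.compare_digest(sign(secret, body), signature)

secret = b"webhook-secret"
body = b'{"event": "kb.updated"}'
sig = sign(secret, body)
print(verify(secret, body, sig))         # True
print(verify(secret, b"tampered", sig))  # False
```

`hmac.compare_digest` matters here: a plain `==` comparison would leak timing information about the signature.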
Upload via Admin Console → KBs → New KB. PDF, DOCX, Markdown, text, HTML. Chunking + embedding local. RAG query via chat shell or API.
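Chunking before embedding usually means splitting a document into overlapping windows so context is not lost at chunk boundaries. A minimal character-based sketch, with sizes chosen for illustration only (the product's actual chunking parameters are not stated here):

```python
def chunk(text: str, size: int = 200, overlap: int = 40):
    """Split text into fixed-size chunks, each overlapping the last."""
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

parts = chunk("abcdefghij", size=4, overlap=2)
print(parts)  # ['abcd', 'cdef', 'efgh', 'ghij']
```

Each chunk then gets its own embedding vector; the overlap means a sentence straddling a boundary still appears whole in at least one chunk.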
Multi-tenant vector store on the data worker. SQLite, FAISS, ChromaDB, or memory backend. Reference: api-reference.md §6.
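Whatever the backend, the core retrieval operation is nearest-neighbour search by cosine similarity. A brute-force sketch (the real backends index this; the example is the concept, not the implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query, store, k=2):
    """Rank stored (id, vector) pairs by similarity to the query."""
    ranked = sorted(store, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

store = [("doc-a", [1.0, 0.0]),
         ("doc-b", [0.0, 1.0]),
         ("doc-c", [0.7, 0.7])]
print(top_k([1.0, 0.1], store))  # ['doc-a', 'doc-c']
```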
Hierarchical associative memory — domain → project → run. Outer-product updates + matrix-vector recall. Portable .emm v3 format. Reference: api-reference.md §7.
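"Outer-product updates + matrix-vector recall" is the classic associative-memory scheme: storing adds the outer product of value and key to a matrix, recall multiplies the matrix by a key. A minimal sketch in plain Python (the .emm format and hierarchy are not modelled here; this shows only the update/recall math):

```python
def store(M, key, value):
    """Outer-product update: M[i][j] += value[i] * key[j]."""
    for i in range(len(value)):
        for j in range(len(key)):
            M[i][j] += value[i] * key[j]

def recall(M, key):
    """Matrix-vector recall: returns M @ key."""
    return [sum(M[i][j] * key[j] for j in range(len(key)))
            for i in range(len(M))]

dim = 3
M = [[0.0] * dim for _ in range(dim)]
store(M, key=[1, 0, 0], value=[0, 1, 0])  # associate key -> value
store(M, key=[0, 0, 1], value=[1, 0, 0])
print(recall(M, [1, 0, 0]))  # [0.0, 1.0, 0.0]
```

With orthogonal keys, recall is exact, as above; with correlated keys the associations interfere, which is why real systems care about key encoding.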
Six backends (Unsloth, Axolotl, TRL, DeepSpeed, MLX, llama.cpp), eight methods (LoRA, QLoRA, SFT, DPO, RLHF, PPO, full fine-tune, distillation). Reference: api-reference.md §13.
Model → EMM knowledge distillation. Vector source → LLM Q&A pairs → matrix memory associations. Sync run via /api/v1/distill/run.
Round orchestration — controller broadcasts to participants, workers train locally, shards aggregate. Reference: features §9.5.
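The aggregation step at the end of a round is typically federated averaging: each worker's weights are averaged, weighted by how many samples it trained on. A sketch of that step under the assumption the product uses sample-weighted averaging (the docs do not spell out the aggregation rule here):

```python
def fed_avg(updates):
    """Sample-weighted average of worker weight vectors.

    updates: list of (weights, sample_count) pairs, one per worker.
    """
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total
            for i in range(dim)]

updates = [([1.0, 2.0], 100),   # worker 1: 100 local samples
           ([3.0, 4.0], 300)]   # worker 2: 300 local samples
print(fed_avg(updates))  # [2.5, 3.5]
```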
TLS termination, auth, rate limits, embedded webchat, plugin host. Ports 443/80. Reference: api-reference.md §1-3.
Cluster orchestration, licensing, topology push, cluster operations. Port 8880. Reference: api-reference.md §18-19.
Intelligent request routing — eight load-balancing strategies, intent classification, theme detection, AI-powered decisions. Port 8881. Reference: api-reference.md §5.
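Two of the eight strategies are simple enough to sketch: round-robin cycles through workers, least-connections picks the worker with the fewest in-flight requests. The class below is an illustration of those two policies only, not the router's actual interface:

```python
import itertools

class Router:
    """Sketch of two load-balancing strategies."""

    def __init__(self, workers):
        self.workers = workers
        self._rr = itertools.cycle(workers)
        self.active = {w: 0 for w in workers}  # in-flight request counts

    def round_robin(self):
        return next(self._rr)

    def least_connections(self):
        return min(self.workers, key=lambda w: self.active[w])

r = Router(["w1", "w2", "w3"])
print([r.round_robin() for _ in range(4)])  # ['w1', 'w2', 'w3', 'w1']
r.active["w1"] = 5
r.active["w2"] = 1
print(r.least_connections())                # 'w3'
```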
Multiple backends: Ollama, vLLM, TGI, Triton, OpenAI-compatible, llama.cpp. Port 8890. Reference: api-reference.md §4.
Multi-backend cloud gateway — Ollama Cloud, OpenAI, Anthropic, xAI, Groq, DeepSeek, more. Port 8889.
File storage, NFS, databases, vector/RAG, matrix memory. Multi-tenant with quotas. Port 8892.
Agentic RAG, multi-agent orchestration, query decomposition, training data generation. Port 8893.
STT (Whisper, OpenAI), TTS (Piper, ElevenLabs, OpenAI), audio analysis, video processing, voice chat. Port 8894.
Email, SMS, WhatsApp, Signal, Teams, XMPP, VoIP. AI auto-response. Port 8895.
Source Registry (§43) — 16 categories, 28 built-in sources. Bioinformatics, pharma, CRISPR, LIMS. Port 8897.
Six backends, eight methods, training chains, latent reasoning. GPU inventory + multi-GPU. Port 8898.
Direct GGUF/xLSTM loading without Ollama/vLLM. Speculative decoding, continuous batching, pipeline parallelism. Port 8883.
Consumer IoT (Netatmo, HomeKit, Matter) + industrial protocols (OPC-UA, Modbus, MQTT Sparkplug B). Port 8891.
Multi-agent orchestration with six topology types (hierarchical, P2P, ring, star, mesh, hybrid). Ports 8885-8887.
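A topology type boils down to which agents may talk to which. Two of the six are easy to sketch as neighbour maps; the function names and the convention that the first agent is the star's hub are assumptions for illustration:

```python
def ring_neighbors(agents):
    """Each agent is linked to its two ring neighbours."""
    n = len(agents)
    return {a: [agents[(i - 1) % n], agents[(i + 1) % n]]
            for i, a in enumerate(agents)}

def star_neighbors(agents):
    """First agent acts as the hub; all others are spokes."""
    hub, *spokes = agents
    links = {hub: list(spokes)}
    links.update({s: [hub] for s in spokes})
    return links

print(ring_neighbors(["a", "b", "c"]))
# {'a': ['c', 'b'], 'b': ['a', 'c'], 'c': ['b', 'a']}
print(star_neighbors(["hub", "a", "b"]))
# {'hub': ['a', 'b'], 'a': ['hub'], 'b': ['hub']}
```

Mesh would link every pair, and hierarchical/hybrid layer these patterns; the orchestrator's routing then follows whichever neighbour map is active.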
Goal system, tri-memory, reasoning, meta-learner, sandboxed self-modification. Port 8899. Off by default.
Per-tenant boundary on storage, KBs, agents, projects. Cross-tenant access denied at the gateway. Tenant guard documented in features §16.1.
Four roles — Viewer, Developer, Admin, SuperAdmin. Capability-gated API surface. Per-tenant assignment.
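The two checks above compose naturally: deny cross-tenant access first, then gate on the role's capabilities. The capability names in this sketch are illustrative assumptions; the real capability set is documented in features §16.1.

```python
# Illustrative role-capability map; not the documented capability set.
CAPS = {
    "Viewer":     {"read"},
    "Developer":  {"read", "write"},
    "Admin":      {"read", "write", "manage"},
    "SuperAdmin": {"read", "write", "manage", "cluster"},
}

def authorize(user, tenant, capability):
    """Tenant boundary first (as at the gateway), then role capabilities."""
    if user["tenant"] != tenant:
        return False
    return capability in CAPS[user["role"]]

alice = {"tenant": "acme", "role": "Developer"}
print(authorize(alice, "acme", "write"))   # True
print(authorize(alice, "acme", "manage"))  # False — needs Admin
print(authorize(alice, "globex", "read"))  # False — cross-tenant
```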
Hash-chained log of every AI-assisted decision. Defensible record for HIPAA / GDPR / 21 CFR Part 11 reviews.
GLP and FDA 21 CFR Part 11 compliance flags on the Science worker. IRB-conformant audit trail.
Full on-premises stack with no external dependencies. Cloud workers and federated learning are optional.
Ed25519-signed license files. Free, Standard, Professional, Enterprise tiers. Hardware-binding optional. License email: license@core.at.
Per-sector workload patterns, compliance support, deployment shape. Each page has a workloads grid, a compliance row, and an honest-scope statement.
View today's service logs: `journalctl -u eldric-aios --since today --no-pager`