Articles
Memory · architecture
A million tokens isn’t memory
Context windows have climbed from 4K to 2M tokens — still a tiny fraction of a working lab's 40 years of papers, notebooks, and failed experiments. Why Eldric pairs a vector store with xLSTM-inspired Matrix Memory (v4 Gated DeltaNet) for compressed associative recall across decades.
2026-04-23 · ~8 min read
Privacy-first · use case
When “AI” and “confidential” have to coexist
Law, HR, family offices, journalism, research. Value-props + SWOT + 30/90/180-day rollout plan. Zero-egress architecture, session.local scrub, hash-chained audit. Runs on 3×4090 for ~€12–15k one-time — displaces €30–60k/year of SaaS AI that still can’t see your data.
2026-04-24 · ~7 min read
Banking · use case
AI that can read the mainframe without ever leaving the vault
Db2 z/OS over DRDA via the shipped ODBC layer today, hash-chained audit log, multi-tenant Chinese walls, DORA + EU AI Act primitives by construction. Full SWOT, AI differentiator, hardware bill-of-materials for a regional bank — a 3×4090 node deploys against a 500-seat retail bank.
2026-04-24 · ~8 min read
Insurance · use case
Claims, fraud, and the archive nobody has time to read
Matrix Memory v4 Gated DeltaNet compresses decades of fraud-outcome pairs; data.pageindex tree-walks policy documents; vector retrieval anchors citations. Solvency II audit by construction. Value-props grid, SWOT, 30/90/180 plan, hardware setup.
2026-04-24 · ~8 min read
Inference providers · use case
The control plane your inference business already needs
Stop rebuilding tenant, quota, routing, auth, memory, audit, chat UI, billing hooks. Whitelabel-ready webchat + 14 dashboards + modular backend abstraction. Starter rack is a single GPU node; scale out by adding inference-role hosts. SWOT includes the honest limits.
2026-04-24 · ~7 min read
HPC · use case
Put the AI where the compute already is
Role-modular architecture maps cleanly onto Leonardo (CINECA), LUMI (CSC), MareNostrum 5 (BSC). edge+router on login nodes, data on storage pods, inference on Grace Hopper partitions. Same binary, different flag. Sovereign-AI posture by default.
2026-04-24 · ~8 min read
Science · use case
An AI that lives in the lab, not at the vendor
140+ scientific APIs as plugins, OPC-UA / Modbus instrument ingest, per-domain Matrix Memory sized for the problem (genomics 256/1024, particle_physics 512/1024). Cross-cohort recall so PhD graduations don’t drain the archive. Value-props, SWOT, disk bill.
2026-04-24 · ~8 min read
Universities · use case
One cluster. Every faculty. No shadow IT.
Faculty tenants as code, real identity service, GDPR-shaped defaults, sovereign-AI posture by construction. One contract, one DPA, one audit log — replacing N vendor subscriptions. A 3×4090 node covers a 10–20k-student university in full pilot.
2026-04-24 · ~7 min read
Labs & pharma · use case
AI that passes the validation step
Science Worker (CRISPR, docking, ADMET, AlphaFold) + 21 CFR Part 11 primitives in one kernel. Hash-chained audit = validation evidence. Single signed RPM keeps DQ/IQ/OQ tractable. SWOT is honest about what isn’t certified yet. Hardware plan from sandbox to IND-enabling.
2026-04-24 · ~8 min read
Automotive & robotics · use case
The shop-floor AI that doesn’t phone home
OPC-UA + Modbus + MQTT Sparkplug B in the iot module directly — no SCADA-to-REST middleware. ECU archive retrieval, Matrix Memory for cross-model-year failure patterns. IATF 16949-shaped audit. Plant-edge node runs on 3×4090, no WAN dependency.
2026-04-24 · ~8 min read
Data access · architecture
Connect to everything — NFS, SQL, z/OS, APIs
alpha.3 ships NFS + SQLite + universal ODBC — enough to reach PostgreSQL, MySQL, Oracle, MSSQL, Db2 LUW, and Db2 z/OS out of the box. Streaming, NoSQL, object storage, and native mainframe messaging are the actual Phase-1 / Phase-2 roadmap; extensions cover everything in between.
2026-04-23 · ~7 min read
Industrial AI · positioning
Industrial AI is six markets — one is wide open
Predictive maintenance, quality inspection, process optimisation, anomaly detection, supply-chain forecasting — all five are mature. The sixth category, operations assistants grounded in plant telemetry, is where a private-cloud LLM with on-prem retrieval wins.
2026-04-23 · ~5 min read