Research

The work behind the work.

Eldric ships engineering, but the architecture rests on research. xLSTM from the lab of Sepp Hochreiter. The NOVA experimental module for self-improving agents. Matrix memory inspired by mLSTM. Sleep-cycle memory consolidation in the dream engine. Distillation from large transformers into small targeted models. Each piece is brain-inspired in software — and each piece is a step toward running on the neuromorphic chips Intel and IBM put into production this year.


Why neuromorphic, why now.

The "brute force" era of AI is hitting walls. Datacenter power is the new bottleneck. Intel Loihi 3 (4 nm, 8 million neurons, 64 billion synapses, 1.2 W) shipped in early 2026. IBM NorthPole moved into production on vision-heavy enterprise workloads with roughly 25× the energy efficiency of an H100. The neuromorphic era is here in hardware; the software needs to catch up.

Eldric was designed on brain-inspired principles long before that hardware was available. The matrix memory uses Hebbian-style outer-product updates — the same operation by which biological synapses strengthen. The dream engine consolidates memory on sleep cadences, mirroring how human episodic memory consolidates overnight. NOVA splits memory into episodic, semantic and procedural — the cognitive-neuroscience taxonomy. xLSTM carries forward the LSTM lineage that has shaped sequence understanding for thirty years.

Today these run on standard GPUs — the same RTX cards your existing infrastructure has. Tomorrow they target Loihi 3, NorthPole, and whatever comes after. The same Eldric, the same workloads, dramatically lower power. We are getting the runtime ready for that move now.

Honest scope.

To be clear: Eldric does not today run a spiking neural network on a Loihi chip. There is no SNN execution path in the codebase. We do not claim neuromorphic hardware support that does not exist. What we do claim is set out in the sections that follow.

xLSTM — extended LSTM.

Sepp Hochreiter co-invented LSTM in 1997. In 2024 his lab at JKU Linz published xLSTM (arXiv:2405.04517), an extension of LSTM that, on a number of benchmarks, scales beyond transformer context windows while keeping memory and time linear in the sequence length. NXAI (his industrial spin-off) builds models on this architecture.
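
For intuition, here is a minimal sketch in the spirit of the matrix-memory (mLSTM) recurrence described in that paper: the state is one d×d matrix plus one d-vector, updated per token, so per-step cost and memory do not grow with sequence length. It is deliberately simplified (scalar gates, no exponential-gate stabilisation, no heads) and purely illustrative; it is not NXAI's or Eldric's code.

    import numpy as np

    def mlstm_step(C, n, q, k, v, i_gate, f_gate):
        # Matrix memory: decayed by the forget gate, written with an
        # outer product of value and key, scaled by the input gate.
        C = f_gate * C + i_gate * np.outer(v, k)
        # Normaliser vector keeps the read-out well scaled.
        n = f_gate * n + i_gate * k
        # Retrieval for this step: query the matrix memory.
        h = C @ q / max(abs(float(n @ q)), 1.0)
        return C, n, h

    # The state stays one d x d matrix and one d-vector no matter how long
    # the sequence runs: constant memory, linear total time.
    rng = np.random.default_rng(0)
    d = 8
    C, n = np.zeros((d, d)), np.zeros(d)
    for _ in range(1000):
        q, k, v = rng.standard_normal((3, d))
        C, n, h = mlstm_step(C, n, q, k, v, i_gate=1.0, f_gate=0.9)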

Eldric integrates the xLSTM runtime: training-pipeline support, native inference through eldric-inferenced, and a planned dedicated daemon (eldric-xlstmd) for the policy / forecast / encode / retrieve workloads where xLSTM dominates. We are not NXAI and we have no formal partnership with the Linz lab. We are an independent platform that runs xLSTM models alongside transformers, mixtures-of-experts, and everything else. Where we cite a paper, we cite the public arXiv version.

On the question "Can I download an xLSTM today?", the honest answer: pre-trained xLSTM checkpoints for vision (pLSTM, the perceptive extension) are still in distillation. The training pipeline is shipped; the public weights are not yet final. We will publish dates when we know them.

NOVA — the experimental module.

NOVA is an optional experimental module that sits on top of Eldric. It implements four loosely coupled components, each pursuing a different research direction.

NOVA is research-grade. We ship it because the work matures faster when the code is in real use than when it sits in a notebook. If you want to experiment with goal-driven agents on private infrastructure, NOVA is the place to start.

Matrix memory — mLSTM-inspired.

The data worker maintains a hierarchical associative memory built from outer-product updates, M = decay·M + importance·(v⊗k), organised in Domain → Project → Run levels. Compressed, generalising recall sits alongside the exact vector store. The on-disk format is .emm: a 128-byte header, 64 KB blocks, a CRC32 per block, and a write-ahead log plus checkpoint for crash safety. The update rule is inspired by mLSTM; the format is productionised so it can be backed up, replicated, and verified.
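
As an illustration of that update rule, a toy version in NumPy with one memory per level of the hierarchy. This is an explanatory sketch under simplified assumptions (fixed decay, dense in-memory matrices), not the .emm implementation.

    import numpy as np

    class MatrixMemory:
        # Toy associative memory: M = decay * M + importance * (v outer k).
        def __init__(self, dim, decay=0.99):
            self.M = np.zeros((dim, dim))
            self.decay = decay

        def write(self, key, value, importance=1.0):
            self.M = self.decay * self.M + importance * np.outer(value, key)

        def read(self, key):
            # Generalising recall: returns a blend of everything stored
            # under keys pointing in a similar direction.
            return self.M @ key

    # One memory per level; a write lands at every level, so recall can be
    # as broad (Domain) or as specific (Run) as the query requires.
    dim = 64
    levels = {name: MatrixMemory(dim) for name in ("domain", "project", "run")}

    def write_hierarchical(key, value, importance=1.0):
        for memory in levels.values():
            memory.write(key, value, importance)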

Distillation pipeline.

The model-to-EMM distillation flow takes a corpus, prompts an LLM to generate question/answer pairs over it, embeds both sides, and writes the pair as an outer-product association into matrix memory. The result is a compact, queryable representation of the corpus that does not require re-running the LLM for every query. Useful when you have a large fixed knowledge base and want fast recall at a fraction of the inference cost.
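
A hedged sketch of that flow, reusing the toy MatrixMemory above and taking hypothetical generate_qa_pairs and embed helpers as parameters in place of the real LLM and embedding calls; it shows the shape of the pipeline, not Eldric's actual API.

    def distil_corpus_to_emm(corpus_chunks, memory, generate_qa_pairs, embed):
        # corpus_chunks     : iterable of text passages from the fixed corpus
        # memory            : associative memory with write(key, value) / read(key)
        # generate_qa_pairs : hypothetical helper that prompts an LLM for
        #                     (question, answer) pairs about one passage
        # embed             : hypothetical helper that maps text to a vector
        for chunk in corpus_chunks:
            for question, answer in generate_qa_pairs(chunk):
                q_vec = embed(question)   # key: how the knowledge will be asked for
                a_vec = embed(answer)     # value: what should come back
                memory.write(key=q_vec, value=a_vec)

    # At query time no LLM call is needed: embed the query and read the memory.
    def recall(memory, embed, query):
        return memory.read(embed(query))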

What we are honest about not having.

The European AI community.

We are in Vienna. We follow the work coming out of the Linz lab through its public papers and releases. We attend the Bavarian / Austrian AI meet-ups and the EuroSPI / DSGVO compliance forums. If you are a researcher and you want to use Eldric to host a model in a way that complies with European data law without giving up modern capabilities, write to office@eldric.ai — academic and research licensing is available on friendly terms.