Materials Science & Quantum Computing

Materials Project database queries, crystallographic data, quantum algorithm simulation, and NIST chemistry data — accelerating next-generation materials design.

Materials Project · Quantum Algorithms · COD Database · NIST Chemistry · On-Premise

  • 150K+ materials
  • Quantum circuit simulation
  • Crystal structures
  • $0 cloud cost

Distributed GPU Cluster

Eldric runs on any GPU — from a single RTX 3090 to NVIDIA H100/H200 datacenter cards. Connect labs, datacenters, and remote researchers into one cluster that spans cities or continents. Workers register through the Edge TLS gateway over the internet, or use the built-in tunnel for NAT traversal — no VPN needed. Adding a node is one command: the worker auto-registers and is immediately available for inference.

Mixed GPU Research Cluster

  • Controller :8880 (orchestration)
  • Router :8881 (AI decisions, xLSTM predictor, Swarm LLM)
  • Inference Workers (auto-register, any GPU, mix freely): RTX 3090 (24GB) with Ollama :8890 · RTX 4090 (24GB) with xLSTM + Ollama :8890 · H100 (80GB HBM3) with vLLM + xLSTM :8890 · Apple M4 Ultra with MLX :8890 · add more, each auto-registers
  • Specialized Workers (all auto-register with the controller): Science :8897 (140+ APIs, LIMS) · Training :8898 (xLSTM, LoRA, DPO) · Data :8892 (NFS, RAG, Vector) · Edge :443 (TLS, Auth, Web) · IoT :8891 (OPC-UA, TinyML) · Media :8894 (STT, TTS, Video)
  • Location-independent: nodes connect over the internet via the Edge TLS gateway from university datacenters, corporate on-prem sites, research labs, home offices/remote sites, and cloud/colo datacenters.

Mix any hardware · Any location · Workers auto-register over the internet · xLSTM on every node
# Node 1: Controller + Router + Data Worker (any machine)
./eldric-controller --port 8880 &
./eldric-routerd --controller http://localhost:8880 &
./eldric-datad --nfs --vector --controller http://localhost:8880 &

# Add a GPU worker — any machine, any GPU.
# It auto-registers with the controller and is immediately available.
./eldric-workerd --backend ollama --controller http://node1:8880

# Add more workers the same way. Mix any hardware:
ssh rtx-box "./eldric-workerd --backend ollama --controller http://node1:8880"
ssh h100-node "./eldric-workerd --backend vllm --controller http://node1:8880"
ssh mac-studio "./eldric-workerd --backend mlx --controller http://node1:8880"

# Science Worker (auto-registers like any other worker)
./eldric-scienced --controller http://node1:8880

# Split a 70B model across all workers (VRAM-proportional sharding)
curl -X POST http://node1:8880/api/v1/pipeline/deploy \
  -d '{"model_path":"/mnt/models/llama-70B-Q4.gguf",
       "workers":["wrk-1","wrk-2","wrk-3","wrk-4"],
       "strategy":"vram_proportional"}'
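The `vram_proportional` strategy in the deploy call above suggests that model layers are divided in proportion to each worker's VRAM. A rough sketch of that idea (the worker names and the exact rounding rule here are illustrative, not Eldric's actual algorithm):

```python
# Sketch of VRAM-proportional layer sharding, as hypothesized for the
# "strategy": "vram_proportional" pipeline deploy. Illustrative only.

def shard_layers(num_layers: int, worker_vram_gb: dict) -> dict:
    """Assign contiguous layer ranges proportional to each worker's VRAM."""
    total = sum(worker_vram_gb.values())
    shares = {w: num_layers * v / total for w, v in worker_vram_gb.items()}
    # Round down, then hand leftover layers to the largest fractional shares.
    counts = {w: int(s) for w, s in shares.items()}
    leftover = num_layers - sum(counts.values())
    for w in sorted(shares, key=lambda w: shares[w] - counts[w], reverse=True)[:leftover]:
        counts[w] += 1
    plan, start = {}, 0
    for w, c in counts.items():
        plan[w] = (start, start + c)  # half-open [start, end) layer range
        start += c
    return plan

# Split an 80-layer model (e.g. a Llama-70B-class network) across a mixed cluster
plan = shard_layers(80, {"rtx3090": 24, "rtx4090": 24, "h100": 80})
```

With 24 + 24 + 80 GB of VRAM, the H100 takes 50 of the 80 layers and each 24 GB card takes 15 — the proportional split the strategy name implies.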

xLSTM + Transformer Mixture Models

Train xLSTM on materials property datasets for bandgap prediction and new material discovery. Transformer attention captures inter-element relationships; xLSTM memory stores composition-property mappings across the periodic table.

Architecture

Sepp Hochreiter's xLSTM extended with Transformer layers for domain-specific tasks.

  • sLSTM: exponential gating for long-range dependencies
  • mLSTM: matrix memory with covariance update
  • Transformer: cross-attention for multi-feature correlation
  • Mixture: xLSTM temporal + Transformer relational
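The sLSTM's exponential gating follows the xLSTM paper (Beck et al., 2024): exponential input/forget gates are paired with a normalizer state and a log-space stabilizer so activations stay finite over long sequences. A minimal numpy sketch of one stabilized step:

```python
import numpy as np

def slstm_step(c, n, m, z, i_pre, f_pre, o):
    """One stabilized sLSTM step (after Beck et al., 2024).

    c: cell state, n: normalizer, m: stabilizer state,
    z: cell input, i_pre/f_pre: pre-activation gates, o: output gate.
    """
    m_new = np.maximum(f_pre + m, i_pre)   # log-space stabilizer
    i = np.exp(i_pre - m_new)              # stabilized exponential input gate
    f = np.exp(f_pre + m - m_new)          # stabilized exponential forget gate
    c_new = f * c + i * z
    n_new = f * n + i
    h = o * c_new / n_new                  # normalized hidden state
    return c_new, n_new, m_new, h

# Even with extreme gate pre-activations, the state never overflows:
c = n = m = np.zeros(4)
for _ in range(100):
    c, n, m, h = slstm_step(c, n, m, z=np.ones(4),
                            i_pre=np.full(4, 50.0),
                            f_pre=np.full(4, 50.0), o=np.ones(4))
```

Without the `m` stabilizer, `exp(50)` would overflow after a few steps; with it, the gates are rescaled in log space and `h` remains well-behaved.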

Training Configuration

# Train via Eldric Training Worker API
curl -X POST http://controller:8880/api/v1/training/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "name": "xlstm-bandgap-predictor",
    "base_model": "xlstm-250m",
    "method": "sft",
    "backend": "xlstm",
    "dataset": {"path": "/data/materials-project-bandgaps.jsonl", "format": "alpaca"},
    "hyperparams": {"epochs": 50, "batch_size": 32, "learning_rate": 2e-4},
    "model_config": {
      "architecture": "xlstm_transformer_mixture",
      "xlstm_layers": 6,
      "transformer_layers": 4,
      "hidden_size": 256,
      "context_length": 2048
    }
  }'
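The job above reads `/data/materials-project-bandgaps.jsonl` in `alpaca` format. Assuming the standard Alpaca fields (`instruction`, `input`, `output`), one JSONL record could be built like this — the property values are illustrative, not real Materials Project data:

```python
import json

# One training record in the standard Alpaca format, which the
# "format": "alpaca" dataset setting above presumably expects.
record = {
    "instruction": "Predict the band gap (eV) of the given material.",
    "input": "formula: Li2O; spacegroup: Fm-3m; density: 2.01 g/cm3",
    "output": "4.53",
}

line = json.dumps(record)  # one JSON object per line in the .jsonl file
```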

API Examples

All endpoints are served by the Science Worker (:8897). Requests are routed through the Edge server for TLS and authentication in production.

Search materials by property

POST http://science-worker:8897/api/v1/materials/search

# Request body:
{"formula": "Li*O*", "bandgap_min": 1.5, "bandgap_max": 3.0}
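To make the query semantics concrete, here is a toy local filter over the same fields: `formula` is treated as a shell-style wildcard pattern, and `bandgap_min`/`bandgap_max` bound the band gap in eV. The sample entries and values are invented for illustration:

```python
from fnmatch import fnmatch

# Invented sample data; real queries hit the Materials Project-backed index.
materials = [
    {"formula": "Li2O",   "bandgap": 5.0},
    {"formula": "LiCoO2", "bandgap": 2.1},
    {"formula": "Si",     "bandgap": 1.1},
]

def search(materials, formula, bandgap_min, bandgap_max):
    """Wildcard formula match plus an inclusive band-gap window."""
    return [m for m in materials
            if fnmatch(m["formula"], formula)
            and bandgap_min <= m["bandgap"] <= bandgap_max]

hits = search(materials, "Li*O*", 1.5, 3.0)  # LiCoO2 only: Li2O's gap is out of range
```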

Quantum circuit simulation

POST http://science-worker:8897/api/v1/quantum/simulate

# Request body:
{"circuit": "h 0; cx 0 1; measure 0 1", "shots": 1024}
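The `circuit` string uses a simple semicolon-separated gate syntax. A toy statevector simulator covering just this fragment (illustrative only; the Science Worker's simulator is assumed to support a much richer gate set):

```python
import numpy as np

def simulate(circuit: str, shots: int = 1024, seed: int = 0) -> dict:
    """Toy statevector simulator for the 'h 0; cx 0 1; measure 0 1' syntax."""
    ops = [tok.split() for tok in circuit.split(";") if tok.strip()]
    n = 1 + max(int(q) for op in ops for q in op[1:])
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                      # start in |00...0>
    measured = list(range(n))

    def bit(s, q):                      # value of qubit q in basis index s
        return (s >> (n - 1 - q)) & 1   # qubit 0 = most significant bit

    for op in ops:
        name, qs = op[0], [int(q) for q in op[1:]]
        if name == "h":                 # Hadamard: |b> -> (|0> + (-1)^b |1>)/sqrt(2)
            mask = 1 << (n - 1 - qs[0])
            new = np.zeros_like(state)
            for s, amp in enumerate(state):
                if amp == 0:
                    continue
                sign = -1.0 if s & mask else 1.0
                new[s & ~mask] += amp / np.sqrt(2)
                new[s | mask] += sign * amp / np.sqrt(2)
            state = new
        elif name == "cx":              # CNOT: flip target where control is 1
            cm, tm = 1 << (n - 1 - qs[0]), 1 << (n - 1 - qs[1])
            new = np.zeros_like(state)
            for s, amp in enumerate(state):
                new[s ^ tm if s & cm else s] += amp
            state = new
        elif name == "measure":         # record which qubits to sample
            measured = qs

    probs = np.abs(state) ** 2
    samples = np.random.default_rng(seed).choice(2 ** n, size=shots, p=probs)
    counts = {}
    for s in samples:
        key = "".join(str(bit(int(s), q)) for q in measured)
        counts[key] = counts.get(key, 0) + 1
    return counts

# Bell pair: the two measurements are perfectly correlated
counts = simulate("h 0; cx 0 1; measure 0 1", shots=1024)
```

For the Bell-pair request body shown above, only the correlated outcomes "00" and "11" appear, each with probability 1/2.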

Crystal structure from COD

GET http://science-worker:8897/api/v1/cod/structure/1000041

NIST chemistry data

GET http://science-worker:8897/api/v1/nist/compound/water
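Pulling the four routes together, a minimal stdlib-only client sketch (URL construction only; in production these calls would go through the Edge server for TLS and authentication, per the note above):

```python
from urllib.parse import urljoin

# Science Worker base URL, copied from the examples above.
BASE = "http://science-worker:8897/api/v1/"

def endpoint(*parts: str) -> str:
    """Join path segments onto the Science Worker API base."""
    return urljoin(BASE, "/".join(parts))

search_url   = endpoint("materials", "search")        # POST, JSON body
simulate_url = endpoint("quantum", "simulate")        # POST, JSON body
cod_url      = endpoint("cod", "structure", "1000041")  # GET
nist_url     = endpoint("nist", "compound", "water")    # GET
```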

Distributed Inference Documentation

Learn how to split large models across multiple workers with pipeline parallelism.

Distributed Inference Docs