CRISPR & Gene Editing

Design guide RNAs, predict off-target effects, simulate base and prime editing — with complete audit trails for regulatory compliance.

Guide RNA Design · Off-Target Analysis · Base Editing · Prime Editing · GLP Compliant

  • 500+ guides/day
  • <3s off-target analysis
  • Base + prime editing modes
  • Full audit trail

Distributed GPU Cluster

Eldric runs on any GPU — from a single RTX 3090 to NVIDIA H100/H200 datacenter cards. Connect labs, datacenters, and remote researchers into one cluster that spans cities or continents. Workers register through the Edge TLS gateway over the internet, or use the built-in tunnel for NAT traversal — no VPN needed. Adding a node is one command: the worker auto-registers and is immediately available for inference.

Mixed GPU Research Cluster (architecture overview):

  • Controller :8880 (orchestration)
  • Router :8881 (AI decisions, xLSTM predictor, Swarm LLM)
  • Inference Workers — auto-register, any GPU, mix freely: RTX 3090 24GB (Ollama :8890), RTX 4090 24GB (xLSTM + Ollama :8890), H100 80GB HBM3 (vLLM + xLSTM :8890), Apple M4 Ultra (MLX :8890); add more — each auto-registers
  • Specialized Workers — all auto-register with the controller: Science :8897 (140+ APIs, LIMS), Training :8898 (xLSTM, LoRA, DPO), Data :8892 (NFS, RAG, Vector), Edge :443 (TLS, Auth, Web), IoT :8891 (OPC-UA, TinyML), Media :8894 (STT, TTS, Video)
  • Location-independent — nodes connect over the internet via the Edge TLS gateway: university datacenter, corporate on-prem, research lab, home office / remote, cloud / colo datacenter

Mix any hardware · Any location · Workers auto-register over the internet · xLSTM on every node
# Node 1: Controller + Router + Data Worker (any machine)
./eldric-controller --port 8880 &
./eldric-routerd --controller http://localhost:8880 &
./eldric-datad --nfs --vector --controller http://localhost:8880 &

# Add a GPU worker — any machine, any GPU.
# It auto-registers with the controller and is immediately available.
./eldric-workerd --backend ollama --controller http://node1:8880

# Add more workers the same way. Mix any hardware:
ssh rtx-box "./eldric-workerd --backend ollama --controller http://node1:8880"
ssh h100-node "./eldric-workerd --backend vllm --controller http://node1:8880"
ssh mac-studio "./eldric-workerd --backend mlx --controller http://node1:8880"

# Science Worker (auto-registers like any other worker)
./eldric-scienced --controller http://node1:8880

# Split a 70B model across all workers (VRAM-proportional sharding)
curl -X POST http://node1:8880/api/v1/pipeline/deploy \
  -d '{"model_path":"/mnt/models/llama-70B-Q4.gguf",
       "workers":["wrk-1","wrk-2","wrk-3","wrk-4"],
       "strategy":"vram_proportional"}'

xLSTM + Transformer Mixture Models

Train xLSTM on CRISPR outcome datasets for editing efficiency prediction. The model learns sequence context around the cut site (PAM + spacer + flanking) to predict on-target activity and off-target risk.
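To make the featurization concrete, here is a minimal sketch of how a cut-site context window (flanking + spacer + PAM) could be extracted and one-hot encoded as model input. The window sizes, layout, and function names are illustrative assumptions, not Eldric's actual pipeline.

```python
# Illustrative sketch: one-hot encode the sequence window around a Cas9 cut
# site (flank + 20-nt spacer + NGG PAM + flank) as xLSTM input features.
# Window sizes and layout are assumptions, not Eldric's featurization.

BASES = "ACGT"

def context_window(genome: str, cut_site: int, spacer_len: int = 20,
                   pam_len: int = 3, flank: int = 10) -> str:
    """Extract flank + spacer + PAM + flank around the cut site.

    For SpCas9 the blunt cut falls 3 nt upstream of the PAM, so the
    spacer ends at cut_site + 3 and the PAM follows immediately.
    """
    spacer_end = cut_site + 3              # SpCas9 cuts 3 nt 5' of the PAM
    start = spacer_end - spacer_len - flank
    end = spacer_end + pam_len + flank
    return genome[max(start, 0):end]

def one_hot(seq: str) -> list[list[int]]:
    """Map each base to a 4-dim indicator vector (unknown bases -> zeros)."""
    return [[1 if b == base else 0 for base in BASES] for b in seq]

genome = "TTTT" + "ACGT" * 10 + "TGGAAAAAAAAAA"  # toy sequence, PAM 'TGG' at 44
window = context_window(genome, cut_site=41)
features = one_hot(window)
print(len(window), len(features), len(features[0]))
```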

Architecture

Sepp Hochreiter's xLSTM extended with Transformer layers for domain-specific tasks.

  • sLSTM: exponential gating for long-range dependencies
  • mLSTM: matrix memory with covariance update
  • Transformer: cross-attention for multi-feature correlation
  • Mixture: xLSTM temporal + Transformer relational
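The sLSTM component above can be illustrated with a scalar-state sketch of one recurrence step, showing the exponential input/forget gates, the normalizer state, and the log-space stabilizer described in the xLSTM paper. This is a toy single-cell version with pre-activations passed in directly, not Eldric's trained model.

```python
import math

# Minimal sketch of one sLSTM cell step with exponential gating and the
# log-space stabilizer from the xLSTM paper. Scalar state, no weight
# matrices: gate pre-activations are passed in directly. Illustrative only.

def slstm_step(z, i_pre, f_pre, o_pre, state):
    """One recurrence step: (c, n, m) -> (c', n', m'); returns (h, state)."""
    c, n, m = state
    m_new = max(f_pre + m, i_pre)           # stabilizer keeps exp() bounded
    i = math.exp(i_pre - m_new)             # exponential input gate
    f = math.exp(f_pre + m - m_new)         # exponential forget gate
    c_new = f * c + i * z                   # cell state
    n_new = f * n + i                       # normalizer state
    o = 1.0 / (1.0 + math.exp(-o_pre))      # sigmoid output gate
    h = o * (c_new / n_new)                 # normalized hidden state
    return h, (c_new, n_new, m_new)

state = (0.0, 0.0, -float("inf"))           # initial (c, n, m)
for z in [0.5, -1.0, 2.0]:
    h, state = slstm_step(z, i_pre=1.0, f_pre=2.0, o_pre=0.0, state=state)
print(round(h, 4))
```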

Training Configuration

# Train via Eldric Training Worker API
curl -X POST http://controller:8880/api/v1/training/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "name": "xlstm-crispr-efficiency",
    "base_model": "xlstm-250m",
    "method": "sft",
    "backend": "xlstm",
    "dataset": {"path": "/data/crispr-outcomes.jsonl", "format": "alpaca"},
    "hyperparams": {"epochs": 25, "batch_size": 32, "learning_rate": 3e-4},
    "model_config": {
      "architecture": "xlstm_transformer_mixture",
      "xlstm_layers": 6,
      "transformer_layers": 2,
      "hidden_size": 256,
      "context_length": 4096
    }
  }'

API Examples

All endpoints are served by the Science Worker (:8897). Requests are routed through the Edge server for TLS and authentication in production.

Design guide RNAs for target gene

POST http://science-worker:8897/api/v1/crispr/design
# Request body:
{"gene":"BRCA1","organism":"human","pam":"NGG","max_guides":10}
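A minimal client-side sketch of the core of NGG guide design: scan a target sequence for SpCas9 PAM sites and take the 20-nt protospacer immediately 5' of each PAM. Function names and the toy sequence are hypothetical; the Science Worker's scoring and filtering are not shown.

```python
import re

# Illustrative sketch of NGG guide design: find SpCas9 PAM sites ([ACGT]GG)
# and take the 20-nt protospacer immediately upstream of each.
# Server-side activity scoring and filtering are not modeled here.

def design_guides(seq: str, max_guides: int = 10):
    guides = []
    # zero-width lookahead so overlapping PAM sites are all found
    for m in re.finditer(r"(?=[ACGT]GG)", seq):
        pam_start = m.start()
        if pam_start >= 20:                  # need a full 20-nt spacer
            guides.append({
                "spacer": seq[pam_start - 20:pam_start],
                "pam": seq[pam_start:pam_start + 3],
                "position": pam_start,
            })
    return guides[:max_guides]

seq = "ACGTACGTACGTACGTACGTACGTAGGTTTT"   # toy target with one AGG PAM
for g in design_guides(seq):
    print(g["position"], g["spacer"], g["pam"])
```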

Predict off-target sites

POST http://science-worker:8897/api/v1/crispr/offtargets
# Request body:
{"guide_sequence":"ATCGATCGATCGATCGATCG","genome":"GRCh38","max_mismatches":4}
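The filter behind `max_mismatches` can be sketched as a simple Hamming-distance test over candidate sites. A real search runs against an indexed genome (e.g. GRCh38); the candidate list here is a stand-in to show what the threshold does.

```python
# Illustrative sketch of off-target filtering: compare a guide against
# candidate genomic sites and keep those within max_mismatches.
# Candidate sequences are toy examples, not real genomic hits.

def mismatches(guide: str, site: str) -> int:
    return sum(a != b for a, b in zip(guide, site))

def off_targets(guide, candidates, max_mismatches=4):
    return [(site, mismatches(guide, site))
            for site in candidates
            if mismatches(guide, site) <= max_mismatches]

guide = "ATCGATCGATCGATCGATCG"
candidates = [
    "ATCGATCGATCGATCGATCG",   # perfect match (on-target)
    "ATCGATCGATCGATCGATGG",   # 1 mismatch
    "TTCGTTCGATCGATCGATGG",   # 3 mismatches, within threshold
    "GGGGGGGGATCGATCGATCG",   # too many mismatches, filtered out
]
hits = off_targets(guide, candidates)
print(hits)
```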

Base editing simulation

POST http://science-worker:8897/api/v1/crispr/base-edit
# Request body:
{"guide":"ATCGATCG...","editor":"ABE8e","target_position":6}
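What the simulation computes can be sketched locally: an adenine base editor such as ABE8e converts A to G within an editing window of the protospacer. The window bounds below are typical published values for illustration, not Eldric's exact model.

```python
# Illustrative sketch of base-editing simulation: an adenine base editor
# (e.g. ABE8e) converts A -> G within an editing window of the protospacer,
# positions counted 1-based from the PAM-distal end. Window is a typical
# value used for illustration only.

def simulate_abe(protospacer: str, window=(4, 8)) -> str:
    """Return the edited protospacer with A -> G inside the window."""
    lo, hi = window
    return "".join(
        "G" if base == "A" and lo <= pos <= hi else base
        for pos, base in enumerate(protospacer, start=1)
    )

protospacer = "ATCGATCGATCGATCGATCG"
edited = simulate_abe(protospacer)
print(edited)   # only the A at position 5 falls in the window
```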

Prime editing design

POST http://science-worker:8897/api/v1/crispr/prime-edit
# Request body:
{"target_sequence":"...","desired_edit":"G>A","position":42}
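A sketch of what the request specifies: apply a point edit like "G>A" at a 1-based position of the target, then derive the sequence the pegRNA's reverse-transcriptase template would need to encode. The RT-template window length and helper names are placeholders, not the server's algorithm.

```python
# Illustrative sketch of prime-editing design inputs: apply a point edit
# ("ref>alt") at a 1-based position and derive a toy RT-template window.
# Helper names and the 13-nt template length are hypothetical.

def apply_edit(target: str, edit: str, position: int) -> str:
    ref, alt = edit.split(">")
    idx = position - 1
    assert target[idx] == ref, "reference base mismatch at edit position"
    return target[:idx] + alt + target[idx + 1:]

def rt_template(edited: str, position: int, length: int = 13) -> str:
    """Sequence the reverse-transcriptase template must encode (toy window)."""
    idx = position - 1
    return edited[idx:idx + length]

target = "ACGT" * 12                      # 48-nt toy target
edited = apply_edit(target, "G>A", position=43)
print(edited[40:46], rt_template(edited, 43))
```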

Distributed Inference Documentation

Learn how to split large models across multiple workers with pipeline parallelism.
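The "vram_proportional" strategy from the deployment example above can be sketched as follows: divide the model's layers across workers in proportion to each worker's VRAM. Worker names and sizes are hypothetical; the real scheduler lives in the controller.

```python
# Illustrative sketch of VRAM-proportional pipeline sharding: assign each
# worker a contiguous range of layers sized by its share of total VRAM.
# Worker names/sizes are hypothetical examples.

def shard_layers(n_layers: int, vram_gb: dict[str, int]) -> dict[str, range]:
    total = sum(vram_gb.values())
    shards, start = {}, 0
    items = list(vram_gb.items())
    for i, (worker, vram) in enumerate(items):
        if i == len(items) - 1:
            count = n_layers - start          # last worker takes the rest
        else:
            count = round(n_layers * vram / total)
        shards[worker] = range(start, start + count)
        start += count
    return shards

# 80 transformer layers (e.g. a 70B-class model) over a mixed cluster
shards = shard_layers(80, {"wrk-1": 24, "wrk-2": 24, "wrk-3": 80, "wrk-4": 32})
for w, r in shards.items():
    print(w, r.start, r.stop)
```

Every layer is assigned exactly once, and the largest-VRAM worker carries the largest contiguous slice of the pipeline.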
