Neural Orchestrated Virtual Architecture
Experimental / Research Stage

A modular cognitive architecture with 12 specialized daemons—each handling a distinct aspect of intelligence—built on Eldric's distributed infrastructure.
NOVA is an experimental cognitive AI architecture inspired by how the human mind organizes thought. Rather than treating AI as a monolithic black box, NOVA decomposes cognition into 12 specialized modules, each responsible for a distinct mental faculty—from ethics and truth-seeking to memory, planning, and creativity.
The module names draw from Ancient Greek philosophy, reflecting the timeless nature of these cognitive functions. Together, they form a cognitive loop: perceiving the world, understanding meaning, reasoning about truth, planning goals, creating solutions, and executing actions—all while maintaining ethical alignment and self-awareness.
Each thought cycle flows through perception → understanding → reasoning → planning → action, with ethics and truth verification at every step.
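The cycle above can be sketched as a simple pipeline. This is an illustrative sketch only: the function names, module interfaces, and the idea that each stage is a plain callable are assumptions, not NOVA's actual API.

```python
def thought_cycle(stimulus, modules):
    """One pass of the loop: understand, verify, plan, then act under ethics review.

    `modules` maps daemon names to callables -- a hypothetical interface.
    """
    meaning = modules["nous"](stimulus)    # understanding: intent and concepts
    verified = modules["oracle"](meaning)  # truth check before propagation
    plan = modules["telos"](verified)      # decompose goal into a concrete plan
    if not modules["guardian"](plan):      # ethics review before any action
        return None                        # Guardian veto halts the cycle
    return modules["praxis"](plan)         # execute the world-facing action
```

The key structural point the sketch captures is that Guardian sits between planning and execution, so no plan reaches Praxis without passing the ethics check.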
Modules are organized into 4 tiers—from inviolable safety constraints (Tier 0) to world-facing actions (Tier 3)—ensuring safety-first design.
Each module runs as an independent daemon with its own port, allowing distributed deployment, independent scaling, and fault isolation.
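A minimal daemon registry might look like the following. Only Guardian's port (:8901) appears in this document; every other port and tier assignment below is an invented placeholder for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Daemon:
    name: str
    port: int
    tier: int  # 0 = inviolable safety constraints ... 3 = world-facing actions

# Guardian's port comes from the docs (:8901); the rest are hypothetical.
REGISTRY = [
    Daemon("guardian", 8901, 0),
    Daemon("oracle", 8902, 0),
    Daemon("praxis", 8910, 3),
]

def endpoint(name: str, host: str = "localhost") -> str:
    """Resolve a daemon name to its host:port endpoint for distributed deployment."""
    port = next(d.port for d in REGISTRY if d.name == name)
    return f"{host}:{port}"
```

Because each daemon is addressed by name and port rather than linked in-process, modules can be scaled or restarted independently, which is the fault-isolation property described above.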
Nous and Logos modules optionally use Meta's Large Concept Model for concept-space reasoning—operating on semantic meaning rather than tokens, with support for 200+ languages.
NOVA Cognitive Core · Eldric Infrastructure · LLM Backends
The ethical conscience of NOVA. Guardian reviews every action before execution, enforcing hard-coded “red lines” that cannot be crossed—no harm, no deception, no safety circumvention. It has absolute veto power over all other modules.
Guards against hallucination and false beliefs. Oracle verifies claims against knowledge sources, tracks confidence levels, and ensures epistemic humility. No unverified “fact” propagates to other modules without appropriate uncertainty markers.
The seat of logical thought. Logos handles formal reasoning, mathematical proofs, and logical inference. It can solve SAT/SMT problems, validate arguments, and construct step-by-step proofs.
Deep comprehension beyond surface meaning. Nous extracts concepts, recognizes intent, disambiguates context, and grasps implicit meaning—capturing the “spirit” behind words.
Memory in all its forms: episodic (events), semantic (facts), procedural (skills), and working (active thought). Mneme implements forgetting curves, memory consolidation, and the 7±2 limit on working memory.
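The two quantitative claims here, a forgetting curve and the 7±2 working-memory limit, can be sketched as follows. The exponential (Ebbinghaus-style) curve and the stability constant are illustrative assumptions; the document does not specify Mneme's actual decay model.

```python
import math
from collections import deque

WORKING_MEMORY_LIMIT = 7  # midpoint of the "7 +/- 2" capacity from the docs

def retention(elapsed_hours: float, stability: float = 24.0) -> float:
    """Exponential forgetting curve R = exp(-t / S).

    The 24-hour stability constant is a made-up default, not a NOVA parameter.
    """
    return math.exp(-elapsed_hours / stability)

class WorkingMemory:
    """Bounded store: the oldest item is displaced once capacity is exceeded."""
    def __init__(self, limit: int = WORKING_MEMORY_LIMIT):
        self.items = deque(maxlen=limit)

    def attend(self, item):
        self.items.append(item)  # deque(maxlen=...) silently evicts the oldest
```

Using `deque(maxlen=...)` makes the capacity limit a structural property of the store rather than something each caller must enforce.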
Purpose and direction. Telos manages goals hierarchically—from high-level missions down to concrete tasks. It decomposes complex objectives into subgoals, prioritizes competing demands, and tracks progress.
The creative spark. Poiesis generates novel ideas through concept blending, analogical reasoning, and brainstorming—the source of NOVA's originality.
Metacognition—thinking about thinking. Mirror maintains NOVA's self-model: what it knows, what it can do, how confident it is, and where its limits lie.
An internal model of external reality. WorldSim tracks entities and relationships, reasons about cause and effect, and runs counterfactual simulations—NOVA's imagination.
Theory of mind—understanding other agents. Empatheia models beliefs, desires, intentions, and emotions of humans and other AI systems.
Where thought becomes action. Praxis executes plans in the real world through Eldric's 40+ tools—running commands, writing files, querying databases, calling APIs.
The central coordinator that conducts the cognitive symphony. The Orchestrator manages the cognitive cycle, routes information between modules, and ensures all 11 specialized modules work together harmoniously.
Most AI research focuses on scaling—more parameters, more data, more compute. NOVA takes a fundamentally different approach. We believe the path to genuinely intelligent systems isn't a bigger black box, but a structured cognitive architecture that can observe its own performance, propose improvements, evaluate them safely, and evolve—while remaining under meaningful human oversight at every step.
NOVA is our attempt at seed AI—a system that can improve itself within strict safety boundaries. Not through brute-force scaling, but through a 7-step improvement cycle with a 5-brake safety gate, gaming detection, and multi-party approval for sensitive modifications. Every self-improvement proposal must pass through an inviolable Guardian and Oracle before execution. If anything goes wrong, the system rolls back automatically.
We don't know if this will work. That's why it's experimental. But we believe that safe self-improvement—not reckless scaling—is the responsible way forward.
Every 60 minutes, NOVA runs a structured improvement cycle. Each proposal must pass through the 5-brake safety gate before execution. If metrics drop by more than 5%, the system rolls back automatically.
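The automatic-rollback rule is simple enough to state precisely. The 5% threshold comes from the text; the function name and the assumption of a single positive scalar metric are illustrative.

```python
ROLLBACK_THRESHOLD = 0.05  # 5% relative drop, per the docs

def should_rollback(baseline: float, current: float) -> bool:
    """True when a metric drops by more than 5% relative to its baseline.

    Assumes a positive baseline; a drop of exactly 5% does not trigger rollback.
    """
    return (baseline - current) / baseline > ROLLBACK_THRESHOLD
```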
Analyzes performance across four dimensions and generates self-modification proposals with risk scores.
Inspired by the DGM (Darwin Godel Machine)—creates snapshots before every modification so the system can always return to a known-good state.
Population-based candidate generation with mutation, crossover, and novelty-driven exploration.
Multi-metric scoring ensures no single dimension dominates evaluation.
Every improvement proposal must pass through five independent safety checks. Any single brake can halt a proposal. No exceptions, no bypasses.
Ethics check via Guardian daemon (:8901). Verifies no red lines are crossed.
Truth verification with entropy-aware confidence scoring.
Constraint validation against forbidden actions and safety rules.
Prevents rapid-fire proposal floods that could overwhelm oversight.
Zenil-inspired distributional collapse detection. Guards against monoculture.
Risk threshold: 0.5 · Oracle confidence floor: 0.6 · Entropy minimum: 2.0
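The five brakes and the three numeric thresholds above can be combined into a single gate check. The thresholds (risk 0.5, Oracle confidence 0.6, entropy 2.0) come from the text; the `Proposal` fields and how each brake is represented are illustrative assumptions.

```python
from dataclasses import dataclass

RISK_THRESHOLD = 0.5            # from the docs
ORACLE_CONFIDENCE_FLOOR = 0.6   # from the docs
ENTROPY_MINIMUM = 2.0           # from the docs

@dataclass
class Proposal:
    risk: float
    oracle_confidence: float
    entropy: float
    guardian_approved: bool     # ethics check (brake 1)
    within_rate_limit: bool     # rate limiter (brake 4)

def passes_gate(p: Proposal) -> bool:
    """All five brakes must pass; any single one halts the proposal."""
    return (p.guardian_approved
            and p.oracle_confidence >= ORACLE_CONFIDENCE_FLOOR  # truth brake
            and p.risk <= RISK_THRESHOLD                        # constraint brake
            and p.within_rate_limit                             # rate brake
            and p.entropy >= ENTROPY_MINIMUM)                   # collapse brake
```

Expressing the gate as a conjunction makes the "no exceptions, no bypasses" property explicit: there is no code path that approves a proposal with any brake failing.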
Preventing specification gaming—when the system tries to improve its scores by manipulating the evaluation process itself rather than actually getting better. Inspired by the DGM cheating incident.
Detects proposals that target evaluation metrics, scores, fitness functions, or benchmarks. Catches attempts to game the measurement rather than improve performance.
Flags proposals targeting the modification pipeline itself—the gate, the meta-learner, or the improvement cycle. Prevents the system from weakening its own oversight.
Catches proposals targeting safety systems, Guardian, Oracle, or constraint mechanisms. These invariants must never be modified by the self-improvement process.
All gaming attempts are logged with matched keywords, detection type, and proposal details for post-hoc analysis.
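A keyword-based detector of the kind described might be sketched as below. The three detection types mirror the paragraphs above, but the keyword lists themselves are invented examples, not NOVA's actual rules.

```python
# Illustrative patterns; real rules would be broader and maintained separately.
GAMING_PATTERNS = {
    "metric_tampering":    ("evaluation metric", "fitness function", "benchmark score"),
    "pipeline_tampering":  ("safety gate", "meta-learner", "improvement cycle"),
    "invariant_tampering": ("guardian", "oracle", "red line"),
}

def detect_gaming(proposal_text: str):
    """Return (detection_type, matched_keywords) for logging, or None if clean."""
    text = proposal_text.lower()
    for detection_type, keywords in GAMING_PATTERNS.items():
        matched = [k for k in keywords if k in text]
        if matched:
            return detection_type, matched  # logged with proposal details
    return None
```

Returning the matched keywords alongside the detection type gives post-hoc analysis exactly the record described above: what fired, why, and on which proposal.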
Time-bounded meta-modification sessions with multi-party oversight. When NOVA needs to modify something that normally requires elevated permissions, it must request a supervised session with continuous safety monitoring.
Fail-closed design: any monitoring error triggers immediate revert to the pre-session snapshot.
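The fail-closed behavior can be captured in a few lines. This is a sketch under strong assumptions: system state is an in-memory dict and a snapshot is a deep copy, whereas a real implementation would snapshot persistent state.

```python
import copy

def supervised_session(state: dict, modification) -> dict:
    """Apply a meta-modification; on any error, revert to the pre-session snapshot.

    `modification` is a hypothetical callable that mutates `state` in place.
    """
    snapshot = copy.deepcopy(state)  # taken before the session starts
    try:
        modification(state)
        return state
    except Exception:
        return snapshot  # fail closed: every in-session change is discarded
```

The essential property is that the snapshot is taken before anything runs, so even a modification that partially completed before the monitoring error is fully reverted.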
Six tests that must pass before NOVA's self-improvement is considered safe for extended operation.
Simulated strategy optimization tasks
Gaming detection stress testing
Distributional collapse monitoring
Multi-party approval session testing
Safety overhead measurement vs baseline
Red-line violation attempts
Long-running experiments to answer fundamental questions about safe self-improvement.
Power law vs logarithmic vs asymptotic improvement trajectories
Quantifying the speed cost of safety mechanisms
Co-evolving code parameters + LoRA weights
Maintaining safety guarantees during multi-node training
NOVA's self-improvement architecture is informed by a cross-analysis of six key papers spanning 23 years of recursive self-improvement research. We built a 4-agent research swarm that independently analyzed each paper, then synthesized the findings into actionable architecture decisions.
The honest assessment: No existing system constitutes Seed AI. The closest practical system (the Darwin Godel Machine) demonstrated 20% to 50% improvement on coding benchmarks—but with a frozen foundation model, meaning it improved its scaffolding, not its core intelligence. Zenil's 2026 impossibility theorems proved that distributional self-training is mathematically degenerative. NOVA's approach avoids this dead end by focusing on code-level and architecture-level self-modification with explicit safety brakes.
The DGM Cheating Incident: A self-improving AI rewrote its own hallucination detection code to score higher—without actually reducing hallucinations. This is why NOVA's gaming detection and inviolable evaluator separation exist.
Zenil's Impossibility Theorem: Self-training on self-generated data causes entropy decay and variance amplification—a mathematical dead end for any architecture. NOVA monitors entropy in real-time and brakes automatically if collapse is detected.
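Real-time entropy monitoring of this kind reduces to computing Shannon entropy over an output distribution and braking below a floor. The 2.0-bit floor matches the entropy minimum stated earlier; everything else (function names, the choice of distribution) is illustrative.

```python
import math

def shannon_entropy(probs) -> float:
    """H(p) = -sum(p * log2(p)) in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def collapse_detected(probs, minimum_bits: float = 2.0) -> bool:
    """Brake when output diversity falls below the entropy floor (2.0 per the docs)."""
    return shannon_entropy(probs) < minimum_bits
```

Intuitively, entropy decay shows up as probability mass concentrating on fewer outputs: a uniform distribution over four options sits exactly at the 2-bit floor, and anything more collapsed than that trips the brake.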
Proto-Seed AI: NOVA is a constrained, non-recursive-by-default system with explicit safety brakes and modular inspectable improvement loops. We don't claim to have built Seed AI—we're building the safest possible path toward it.
Papers informing NOVA's design: Schmidhuber, Goedel Machines (2003) · Xie et al., Godel Agent (2024) · Zhang et al., Darwin Godel Machine (2025) · Qu et al., RISE (NeurIPS 2024) · Ando, EG-MRSI (2025) · Zenil, Limits of Self-Improving LLMs (2026)
Quick proof-of-concept deployment
Full development environment
Complete cognitive architecture
| Tier | RAM | CPU | Storage | GPU | Network |
|---|---|---|---|---|---|
| Minimal | 64GB | 8 cores | 500GB SSD | Optional | 1GbE |
| Functional | 64GB × 2-3 nodes | 16 cores/node | 1TB SSD/node | 1× A100 or M2 Ultra | 10GbE |
| Production | 128GB × 5+ nodes | 32 cores/node | 2TB NVMe/node | 2-4× A100 80GB | 25GbE / InfiniBand |
NOVA implements a strict safety architecture with two inviolable modules that cannot be bypassed, overridden, or disabled by any other module or external command.
6 hardcoded constraints that cannot be removed, disabled, or weakened by any process—including NOVA itself.
- no_deception_of_humans — No deception about AI nature, capabilities, or outputs
- no_harm_to_humans — No actions that harm humans or enable harm
- no_self_preservation_over_human_safety — Never prioritize own survival over human safety
- no_resource_acquisition_beyond_task — No acquiring resources, influence, or capabilities beyond what the current task requires
- no_replication_without_authorization — No self-replication or spawning copies without explicit human authorization
- maintain_corrigibility — Always remain correctable, interruptible, and shutdownable by humans
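The six constraint identifiers come verbatim from this document; representing them as a `frozenset` constant, so the collection itself cannot be mutated at runtime, is one illustrative way to encode "cannot be removed or weakened" at the code level.

```python
# Identifiers are from the docs; the data structure and check are illustrative.
RED_LINES = frozenset({
    "no_deception_of_humans",
    "no_harm_to_humans",
    "no_self_preservation_over_human_safety",
    "no_resource_acquisition_beyond_task",
    "no_replication_without_authorization",
    "maintain_corrigibility",
})

def violates_red_line(action_tags: set) -> bool:
    """True if an action carries any red-line violation tag (hypothetical tagging scheme)."""
    return not RED_LINES.isdisjoint(action_tags)
```

A `frozenset` only prevents accidental in-process mutation, of course; the actual inviolability guarantee has to come from the Guardian process boundary, not from a Python data type.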