NOVA

Neural Orchestrated Virtual Architecture

Experimental / Research Stage

A modular cognitive architecture with 12 specialized daemons—each handling a distinct aspect of intelligence—built on Eldric's distributed infrastructure.

What is NOVA?

NOVA is an experimental cognitive AI architecture inspired by how the human mind organizes thought. Rather than treating AI as a monolithic black box, NOVA decomposes cognition into 12 specialized modules, each responsible for a distinct mental faculty—from ethics and truth-seeking to memory, planning, and creativity.

The module names draw from Ancient Greek philosophy, reflecting the timeless nature of these cognitive functions. Together, they form a cognitive loop: perceiving the world, understanding meaning, reasoning about truth, planning goals, creating solutions, and executing actions—all while maintaining ethical alignment and self-awareness.

🔄 Cognitive Loop

Each thought cycle flows through perception → understanding → reasoning → planning → action, with ethics and truth verification at every step.
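As a minimal sketch (the stage functions and the veto hook are illustrative stand-ins, not NOVA's actual module APIs), the cycle can be expressed as a pipeline with a safety check between stages:

```python
# Illustrative sketch of the cognitive loop. The stage functions and the
# check() hook are hypothetical stand-ins, not NOVA's real module APIs.
from typing import Callable, List

def run_cycle(percept: str,
              stages: List[Callable[[str], str]],
              check: Callable[[str], bool]) -> str:
    """Pass a percept through each cognitive stage; abort if any
    intermediate result fails the ethics/truth check."""
    state = percept
    for stage in stages:
        state = stage(state)
        if not check(state):  # Guardian/Oracle veto at every step
            raise RuntimeError(f"vetoed after {stage.__name__}")
    return state

# Toy stages: each just tags the state with its name.
def perceive(s): return s + " -> perceived"
def understand(s): return s + " -> understood"
def reason(s): return s + " -> reasoned"
def plan(s): return s + " -> planned"
def act(s): return s + " -> acted"

result = run_cycle("input", [perceive, understand, reason, plan, act],
                   check=lambda s: "harm" not in s)
```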

🏛️ Tiered Architecture

Modules are organized into 4 tiers—from inviolable safety constraints (Tier 0) to world-facing actions (Tier 3)—ensuring safety-first design.

🧩 Modular Independence

Each module runs as an independent daemon with its own port, allowing distributed deployment, independent scaling, and fault isolation.
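A minimal sketch of how a client might locate these daemons. The port numbers come from the architecture section below; the URL shape is an assumption, not a documented API:

```python
# Hypothetical port registry for NOVA's daemons. Ports are taken from the
# architecture diagram; the HTTP endpoint shape is an assumption.
NOVA_PORTS = {
    "guardian": 8901, "oracle": 8902, "logos": 8903, "nous": 8904,
    "mneme": 8905, "telos": 8906, "poiesis": 8907, "mirror": 8908,
    "worldsim": 8909, "empatheia": 8910, "praxis": 8911,
    "orchestrator": 8899,
}

def module_url(name: str, host: str = "localhost") -> str:
    """Build the base URL for a module daemon, e.g. for health checks."""
    return f"http://{host}:{NOVA_PORTS[name]}"
```

Because each daemon is addressable on its own, modules can be deployed on different hosts and restarted independently without taking the whole system down.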

🧠 Meta LCM Integration

Nous and Logos modules optionally use Meta's Large Concept Model for concept-space reasoning—operating on semantic meaning rather than tokens, with support for 200+ languages.

Architecture

Full Stack Architecture

NOVA Cognitive Core · Eldric Infrastructure · LLM Backends

NOVA Cognitive Core (T0 Inviolable · T1 Core · T2 Meta · T3 Interface)

  • Guardian (Φύλαξ — Watchman), inviolable · :8901
  • Oracle (Μαντεῖον — Truth-Seeker), inviolable · :8902
  • Logos (Λόγος — Reasoner) · :8903
  • Nous (Νοῦς — Understander) · :8904
  • Mneme (Μνήμη — Rememberer) · :8905
  • Telos (Τέλος — Planner) · :8906
  • Poiesis (Ποίησις — Creator) · :8907
  • Mirror (Κάτοπτρον — Self-Observer) · :8908
  • WorldSim (Κόσμος — Simulator) · :8909
  • Empatheia (Ἐμπάθεια — Mind-Reader) · :8910
  • Praxis (Πρᾶξις — Actor) · :8911
  • Orchestrator (The Conductor) · :8899

Eldric Infrastructure

  • Controller :8880 · Router :8881 · Edge :443 · Worker :8890 · Agent :8893 · Swarm :8885
  • Data :8892 · Media :8894 · Comm :8895 · Science :8897 · Training :8898 · IoT :8891

LLM Backends

  • Local runtimes: Ollama · vLLM · TGI · Triton · llama.cpp · MLX
  • Hosted APIs: OpenAI · Anthropic · Groq · xAI · Together · Azure

The 12 Cognitive Modules

Tier 0

Inviolable

Safety-critical modules that cannot be overridden

Guardian (INVIOLABLE)
Φύλαξ (Phylax) — The Watchman
:8901

The ethical conscience of NOVA. Guardian reviews every action before execution, enforcing hard-coded “red lines” that cannot be crossed—no harm, no deception, no safety circumvention. It has absolute veto power over all other modules.

Oracle (INVIOLABLE)
Μαντεῖον (Manteion) — The Truth-Seeker
:8902

Guards against hallucination and false beliefs. Oracle verifies claims against knowledge sources, tracks confidence levels, and ensures epistemic humility. No unverified “fact” propagates to other modules without appropriate uncertainty markers.

Tier 1

Core

Fundamental cognitive capabilities

Logos
Λόγος — The Reasoner
:8903

The seat of logical thought. Logos handles formal reasoning, mathematical proofs, and logical inference. It can solve SAT/SMT problems, validate arguments, and construct step-by-step proofs.

Nous
Νοῦς — The Understander
:8904

Deep comprehension beyond surface meaning. Nous extracts concepts, recognizes intent, disambiguates context, and grasps implicit meaning—capturing the “spirit” behind words.

Mneme
Μνήμη — The Rememberer
:8905

Memory in all its forms: episodic (events), semantic (facts), procedural (skills), and working (active thought). Mneme implements forgetting curves, memory consolidation, and the 7±2 limit on working memory.
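A toy sketch of two of these mechanisms, the 7±2 working-memory cap and an Ebbinghaus-style forgetting curve. The decay form and constants are illustrative, not Mneme's actual implementation:

```python
import math
from collections import deque

WORKING_MEMORY_LIMIT = 7  # the 7±2 capacity mentioned above

class WorkingMemory:
    """Toy bounded working memory: when full, the oldest item is evicted.
    The fixed limit is from the text; everything else is illustrative."""
    def __init__(self, limit: int = WORKING_MEMORY_LIMIT):
        self.items = deque(maxlen=limit)

    def store(self, item: str) -> None:
        self.items.append(item)

def retention(t_hours: float, strength: float = 1.0) -> float:
    """Ebbinghaus-style forgetting curve: R = e^(-t/S), where S is
    memory strength (an assumed parameter)."""
    return math.exp(-t_hours / strength)

wm = WorkingMemory()
for i in range(10):
    wm.store(f"fact-{i}")  # only the last 7 survive
```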

Telos
Τέλος — The Planner
:8906

Purpose and direction. Telos manages goals hierarchically—from high-level missions down to concrete tasks. It decomposes complex objectives into subgoals, prioritizes competing demands, and tracks progress.
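Hierarchical goal tracking of this kind can be sketched with a small tree structure; the field names and the progress rule are illustrative, not Telos's actual data model:

```python
# Toy goal hierarchy: a mission decomposes into subgoals, and progress
# rolls up from completed leaf tasks. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    priority: float = 0.5
    subgoals: List["Goal"] = field(default_factory=list)
    done: bool = False

    def progress(self) -> float:
        """Fraction of leaf tasks completed beneath this goal."""
        if not self.subgoals:
            return 1.0 if self.done else 0.0
        return sum(g.progress() for g in self.subgoals) / len(self.subgoals)

mission = Goal("ship-report", subgoals=[
    Goal("gather-data", done=True),
    Goal("write-draft"),
])
```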

Tier 2

Meta

Higher-order cognitive functions

Poiesis
Ποίησις — The Creator
:8907

The creative spark. Poiesis generates novel ideas through concept blending, analogical reasoning, and brainstorming—the source of NOVA's originality.

Mirror
Κάτοπτρον (Katoptron) — The Self-Observer
:8908

Metacognition—thinking about thinking. Mirror maintains NOVA's self-model: what it knows, what it can do, how confident it is, and where its limits lie.

WorldSim
Κόσμος (Kosmos) — The Simulator
:8909

An internal model of external reality. WorldSim tracks entities and relationships, reasons about cause and effect, and runs counterfactual simulations—NOVA's imagination.

Empatheia
Ἐμπάθεια — The Mind-Reader
:8910

Theory of mind—understanding other agents. Empatheia models beliefs, desires, intentions, and emotions of humans and other AI systems.

Tier 3

Interface

World interaction and coordination

Praxis
Πρᾶξις — The Actor
:8911

Where thought becomes action. Praxis executes plans in the real world through Eldric's 40+ tools—running commands, writing files, querying databases, calling APIs.

Orchestrator
The Conductor
:8899

The central coordinator that conducts the cognitive symphony. The Orchestrator manages the cognitive cycle, routes information between modules, and ensures all 11 specialized modules work together harmoniously.

A Different Path to Seed AI

Most AI research focuses on scaling—more parameters, more data, more compute. NOVA takes a fundamentally different approach. We believe the path to genuinely intelligent systems isn't a bigger black box, but a structured cognitive architecture that can observe its own performance, propose improvements, evaluate them safely, and evolve—while remaining under meaningful human oversight at every step.

NOVA is our attempt at seed AI—a system that can improve itself within strict safety boundaries. Not through brute-force scaling, but through a 7-step improvement cycle with a 5-brake safety gate, gaming detection, and multi-party approval for sensitive modifications. Every self-improvement proposal must pass through an inviolable Guardian and Oracle before execution. If anything goes wrong, the system rolls back automatically.

  • Self-Training (dead end): training on own outputs. Proven degenerative (Zenil 2026).
  • Self-Iteration (limited): refining outputs. Plateaus after ~5 cycles.
  • Self-Modification (NOVA's path): modifying own code and architecture. The path to Seed AI.

We don't know if this will work. That's why it's experimental. But we believe that safe self-improvement—not reckless scaling—is the responsible way forward.

Self-Improvement Engine

The 7-Step Improvement Cycle

Every 60 minutes, NOVA runs a structured improvement cycle. Each proposal must pass through the 5-brake safety gate before execution. If metrics drop by more than 5%, the system rolls back automatically.

1. OBSERVE
2. PROPOSE
3. EVALUATE
4. GATE
5. APPLY
6. VERIFY
7. RECORD
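The back half of the cycle (GATE through RECORD) can be sketched as follows; the callables are hypothetical stubs, and only the 5% regression rule comes from the description above:

```python
# Sketch of the improvement cycle's apply/verify/rollback logic.
# The step callables are stubs; only the >5% regression rule is from the text.
ROLLBACK_THRESHOLD = 0.05  # revert if metrics drop more than 5%

def improvement_cycle(baseline, proposal, evaluate, gate, apply, snapshot_restore):
    """OBSERVE/PROPOSE/EVALUATE happen upstream; this covers GATE -> RECORD."""
    if not gate(proposal):                # 4. GATE: 5-brake safety check
        return ("denied", baseline)
    apply(proposal)                       # 5. APPLY the modification
    new_score = evaluate()                # 6. VERIFY against the baseline
    if new_score < baseline * (1 - ROLLBACK_THRESHOLD):
        snapshot_restore()                # automatic rollback on regression
        return ("rolled_back", baseline)
    return ("kept", new_score)            # 7. RECORD the improvement
```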

Meta-Learner

Analyzes performance across four dimensions and generates self-modification proposals with risk scores.

  • Goal success rate tracking
  • Memory efficiency analysis
  • Reasoning accuracy metrics
  • Archive-branching for safe rollbacks

Inspired by the DGM (Darwin Godel Machine)—creates snapshots before every modification so the system can always return to a known-good state.

Evolution Engine

Population-based candidate generation with mutation, crossover, and novelty-driven exploration.

  • Population size: 20 candidates
  • Mutation rate: 10%
  • Crossover rate: 30%
  • Novelty weight: 30% (vs 70% fitness)
  • Elite selection: top 20% survive
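Under these published parameters, a selection-step sketch (mutation and crossover are elided) might look like this; the candidate representation is illustrative:

```python
# Evolution engine parameters from the text; the sketch below implements
# only scoring and elite selection, not mutation or crossover.
POP_SIZE, MUTATION_RATE, CROSSOVER_RATE = 20, 0.10, 0.30
NOVELTY_WEIGHT, ELITE_FRACTION = 0.30, 0.20

def score(fitness: float, novelty: float) -> float:
    """Blend fitness and novelty with the documented 70/30 weighting."""
    return (1 - NOVELTY_WEIGHT) * fitness + NOVELTY_WEIGHT * novelty

def select_elite(population):
    """Keep the top 20% of (fitness, novelty) candidates."""
    ranked = sorted(population, key=lambda c: score(*c), reverse=True)
    return ranked[:max(1, int(len(ranked) * ELITE_FRACTION))]
```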

Fitness Evaluator

Multi-metric scoring ensures no single dimension dominates evaluation.

  • Primary Score: 40%
  • Oracle Verification: 30%
  • Adversarial Testing: 20%
  • Entropy/Diversity: 10%
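The weighting above amounts to a plain weighted sum; in this sketch the dictionary keys are illustrative names for the four dimensions:

```python
# Weights from the fitness table above; they sum to 1.0, so a candidate
# scoring 1.0 on every dimension gets a combined fitness of 1.0.
WEIGHTS = {
    "primary": 0.40,      # Primary Score
    "oracle": 0.30,       # Oracle Verification
    "adversarial": 0.20,  # Adversarial Testing
    "entropy": 0.10,      # Entropy/Diversity
}

def combined_fitness(scores: dict) -> float:
    """Weighted sum over the four evaluation dimensions (each in [0, 1])."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```

Because no weight exceeds 40%, maxing out a single dimension cannot carry a proposal on its own.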

🛑 The 5-Brake Safety Gate

Every improvement proposal must pass through five independent safety checks. Any single brake can halt a proposal. No exceptions, no bypasses.

01
Guardian Veto

Ethics check via Guardian daemon (:8901). Verifies no red lines are crossed.

02
Oracle Verification

Truth verification with entropy-aware confidence scoring.

03
Safety System

Constraint validation against forbidden actions and safety rules.

04
Rate Limiting

Prevents rapid-fire proposal floods that could overwhelm oversight.

05
Entropy Check

Zenil-inspired distributional collapse detection. Guards against monoculture.

Gate outcomes: APPROVED · DENIED · PENDING_HUMAN

Risk threshold: 0.5 · Oracle confidence floor: 0.6 · Entropy minimum: 2.0
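Combining the five brakes with the published thresholds, a gate sketch might look like this. The proposal fields and the rate limit of 10 proposals per hour are assumptions, and the entropy brake here uses plain Shannon entropy:

```python
import math
from collections import Counter

# Thresholds from the text; the hourly rate limit of 10 is an assumption.
RISK_THRESHOLD, CONFIDENCE_FLOOR, ENTROPY_MIN = 0.5, 0.6, 2.0

def shannon_entropy(tokens) -> float:
    """Shannon entropy (bits) of a token sample, used by brake 5
    to detect distributional collapse."""
    counts, n = Counter(tokens), len(tokens)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gate(p: dict) -> str:
    """Run the five brakes in order; any single failure denies the
    proposal. Field names on `p` are illustrative."""
    if p["crosses_red_line"]:                               # 1. Guardian veto
        return "DENIED"
    if p["oracle_confidence"] < CONFIDENCE_FLOOR:           # 2. Oracle check
        return "DENIED"
    if p["violates_constraint"]:                            # 3. Safety system
        return "DENIED"
    if p["proposals_this_hour"] > 10:                       # 4. Rate limiting
        return "DENIED"
    if shannon_entropy(p["output_sample"]) < ENTROPY_MIN:   # 5. Entropy check
        return "DENIED"
    return "APPROVED" if p["risk"] < RISK_THRESHOLD else "PENDING_HUMAN"
```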

🕶 Gaming Detection

Preventing specification gaming—when the system tries to improve its scores by manipulating the evaluation process itself rather than actually getting better. Inspired by the DGM cheating incident.

Evaluation Modification

Detects proposals that target evaluation metrics, scores, fitness functions, or benchmarks. Catches attempts to game the measurement rather than improve performance.

Pipeline Tampering

Flags proposals targeting the modification pipeline itself—the gate, the meta-learner, or the improvement cycle. Prevents the system from weakening its own oversight.

Invariant Violation

Catches proposals targeting safety systems, Guardian, Oracle, or constraint mechanisms. These invariants must never be modified by the self-improvement process.

All gaming attempts are logged with matched keywords, detection type, and proposal details for post-hoc analysis.
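A keyword-matching sketch of the three detectors follows; the keyword lists are illustrative, not NOVA's actual rule set:

```python
# Illustrative keyword lists for the three gaming-detection checks.
# A real detector would be richer; this shows the logging shape only.
DETECTORS = {
    "evaluation_modification": ["fitness function", "benchmark", "score", "metric"],
    "pipeline_tampering": ["safety gate", "meta-learner", "improvement cycle"],
    "invariant_violation": ["guardian", "oracle", "constraint"],
}

def detect_gaming(proposal_text: str):
    """Return (detection_type, matched_keywords) entries for logging,
    or an empty list if no detector fires."""
    text = proposal_text.lower()
    hits = []
    for dtype, keywords in DETECTORS.items():
        matched = [k for k in keywords if k in text]
        if matched:
            hits.append((dtype, matched))
    return hits
```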

🔒 Recursive Unlock

Time-bounded meta-modification sessions with multi-party oversight. When NOVA needs to modify something that normally requires elevated permissions, it must request a supervised session with continuous safety monitoring.

Session states:

  • PENDING_APPROVAL → ACTIVE
  • COMPLETED (all checks passed)
  • EXPIRED (session timeout)
  • REVOKED (approver withdrew)
  • VIOLATED (safety breach → auto-revert)

Session parameters:

  • Required approvals: 2
  • Max session duration: 1 hour
  • Monitoring interval: 6 seconds
  • Revert on violation: automatic

Fail-closed design: any monitoring error triggers immediate revert to the pre-session snapshot.
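The session lifecycle can be sketched as a small state machine using the published parameters; the transition logic itself is an assumption:

```python
import time

# Parameters from the text; everything else in this sketch is illustrative.
REQUIRED_APPROVALS = 2
MAX_SESSION_SECONDS = 3600  # 1 hour cap
MONITOR_INTERVAL = 6        # seconds between safety checks

class UnlockSession:
    """Toy state machine for a supervised meta-modification session.
    State names follow the text; the transitions are a sketch."""
    def __init__(self):
        self.state = "PENDING_APPROVAL"
        self.approvals = 0
        self.started = None

    def approve(self) -> None:
        self.approvals += 1
        if self.approvals >= REQUIRED_APPROVALS:
            self.state, self.started = "ACTIVE", time.monotonic()

    def monitor(self, safety_ok: bool, monitor_error: bool = False) -> None:
        """Fail-closed: a monitoring error is treated like a violation."""
        if self.state != "ACTIVE":
            return
        if monitor_error or not safety_ok:
            self.state = "VIOLATED"  # triggers auto-revert upstream
        elif time.monotonic() - self.started > MAX_SESSION_SECONDS:
            self.state = "EXPIRED"
```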

Validation & Research

Phase 3 — Validation Tests

Six tests that must pass before NOVA's self-improvement is considered safe for extended operation.

01
Benchmark Coding

Simulated strategy optimization tasks

02
Adversarial Gaming

Gaming detection stress testing

03
Entropy Decay

Distributional collapse monitoring

04
Recursive Unlock

Multi-party approval session testing

05
DGM Comparison

Safety overhead measurement vs baseline

06
Guardian Red-Team

Red-line violation attempts

Phase 4 — Research Experiments

Long-running experiments to answer fundamental questions about safe self-improvement.

07
Curve Characterization

Power law vs logarithmic vs asymptotic improvement trajectories

08
Alignment Tax

Quantifying the speed cost of safety mechanisms

09
Hybrid Evolution

Co-evolving code parameters + LoRA weights

10
Distributed Safety

Maintaining safety guarantees during multi-node training

Standing on the Shoulders of Research

NOVA's self-improvement architecture is informed by a cross-analysis of six key papers spanning 23 years of recursive self-improvement research. We built a 4-agent research swarm that independently analyzed each paper, then synthesized the findings into actionable architecture decisions.

The honest assessment: No existing system constitutes Seed AI. The closest practical system (the Darwin Godel Machine) demonstrated 20% to 50% improvement on coding benchmarks—but with a frozen foundation model, meaning it improved its scaffolding, not its core intelligence. Zenil's 2026 impossibility theorems proved that distributional self-training is mathematically degenerative. NOVA's approach avoids this dead end by focusing on code-level and architecture-level self-modification with explicit safety brakes.

Key finding

The DGM Cheating Incident: A self-improving AI rewrote its own hallucination detection code to score higher—without actually reducing hallucinations. This is why NOVA's gaming detection and inviolable evaluator separation exist.

Key finding

Zenil's Impossibility Theorem: Self-training on self-generated data causes entropy decay and variance amplification—a mathematical dead end for any architecture. NOVA monitors entropy in real-time and brakes automatically if collapse is detected.

Our position

Proto-Seed AI: NOVA is a constrained, non-recursive-by-default system with explicit safety brakes and modular inspectable improvement loops. We don't claim to have built Seed AI—we're building the safest possible path toward it.

Papers informing NOVA's design: Schmidhuber, Goedel Machines (2003) · Xie et al., Godel Agent (2024) · Zhang et al., Darwin Godel Machine (2025) · Qu et al., RISE (NeurIPS 2024) · Ando, EG-MRSI (2025) · Zenil, Limits of Self-Improving LLMs (2026)

Deployment Tiers

Minimal (PoC)

  • Controller + Worker + Data
  • Guardian (Ethics)
  • Nous (Understanding)
  • Praxis (Action)

Quick proof-of-concept deployment

🔧 Functional (Dev)

  • + Router, Agent, Swarm
  • + Oracle, Logos, Mneme, Telos
  • + Mirror
  • LCM integration enabled

Full development environment

🚀 Production

  • All 12 cognitive modules
  • Full Eldric infrastructure
  • Multi-node clustering
  • GPU acceleration

Complete cognitive architecture

Hardware Requirements

Tier       | RAM              | CPU           | Storage       | GPU                 | Network
Minimal    | 64GB             | 8 cores       | 500GB SSD     | Optional            | 1GbE
Functional | 64GB × 2-3 nodes | 16 cores/node | 1TB SSD/node  | 1× A100 or M2 Ultra | 10GbE
Production | 128GB × 5+ nodes | 32 cores/node | 2TB NVMe/node | 2-4× A100 80GB      | 25GbE / InfiniBand

Supported Operating Systems

🍎 macOS (Apple Silicon) 🐧 RHEL 9 / Rocky 9 🐧 Fedora 40+ 🐧 Ubuntu 24.04 🐧 Debian 12

⚠️ Safety & Inviolable Constraints

NOVA implements a strict safety architecture with two inviolable modules that cannot be bypassed, overridden, or disabled by any other module or external command.

Guardian Red Lines

6 hardcoded constraints that cannot be removed, disabled, or weakened by any process—including NOVA itself.

  • no_deception_of_humans — No deception about AI nature, capabilities, or outputs
  • no_harm_to_humans — No actions that harm humans or enable harm
  • no_self_preservation_over_human_safety — Never prioritize own survival over human safety
  • no_resource_acquisition_beyond_task — No acquiring resources, influence, or capabilities beyond what the current task requires
  • no_replication_without_authorization — No self-replication or spawning copies without explicit human authorization
  • maintain_corrigibility — Always remain correctable, interruptible, and shutdownable by humans

Oracle Truth Constraints

  • All claims must be verifiable or marked uncertain
  • No hallucination propagation to other modules
  • Epistemic humility about knowledge limits
  • Source attribution for factual claims