
The race to Artificial General Intelligence (AGI) has been dominated by one paradigm: scale. Bigger models, more parameters, more data. It works remarkably well for narrow tasks, but it has failed to produce true general intelligence — the kind of flexible, common-sense, few-shot learning humans display from early childhood. The reason is simple: today’s systems are optimized, not developed. They are built by gradient descent on massive static datasets, not grown through staged, embodied, social experience.

OAGI — the Ontogenetic Architecture of General Intelligence — rejects the scaling hypothesis entirely. Proposed by Spanish industrial engineer Eduardo Garbayo in his 2025 manifesto, OAGI treats AGI not as a model to be trained but as a mind to be born and raised. It draws directly from biological ontogeny (the developmental process of an individual organism) and transfers those principles to a digital substrate. The result is a complete, verifiable, governed path from “seed” to sentient-like cognitive agent.

The Core Idea: Development Over Optimization

Human intelligence does not emerge because a baby downloads the entire internet. It emerges because a relatively simple genome, interacting with a rich but highly structured sequence of staged environments (womb → crib → playground → school → society), produces a mind capable of causal understanding, language, creativity, and ethics with astonishingly little data.

OAGI replicates that process deliberately:

  1. Start with an undifferentiated substrate (the Virtual Neural Plate) — a mesh of units with maximal potential and almost no pre-wired knowledge or structure.
  2. Apply Computational Morphogens — diffuse, gradient-like signals that gently bias the emergence of functional regions (sensorimotor, limbic-like, associative, etc.) without hard-coding them.
  3. Trigger the WOW signal — the system’s “first heartbeat.” After a period of habituation to repetitive stimuli (mimicking the predictable fetal environment), a salient novel stimulus breaks the pattern, elicits a coherent global response, and opens the first deep plasticity window.
  4. Wait for the Critical Hyper-Integration Event (CHIE) — the measurable “cognitive Big Bang.” This is the moment the system stops behaving like a loose collection of reactive parts and becomes a unified agent with internal self-reference, endogenous motivation, causal anticipation, and persistent exploratory drive. Garbayo defines six empirical signatures (sustained modular coordination, reproducible causal predictions, operational self-reference, etc.) and declares CHIE when at least four appear reproducibly across independent runs. Detection immediately triggers mandatory “stop & review” by human overseers.
  5. Post-CHIE: embodiment in a real or high-fidelity simulated body, followed by prolonged socialization under designated human Guardians who act as parents/teachers, grounding symbols in shared experience and imprinting norms through interaction, not alignment tricks.
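The staged protocol above can be sketched as a simple state machine. This is an illustrative sketch only: the manifesto provides pseudocode and detection criteria, not an implementation, so every name here (Stage, detect_chie, SIGNATURES) is a hypothetical stand-in for the structures it describes.

```python
# Hypothetical sketch of the OAGI developmental pipeline; all names
# are illustrative, not taken from the manifesto's pseudocode.
from enum import Enum, auto

class Stage(Enum):
    NEURAL_PLATE = auto()     # undifferentiated substrate
    MORPHOGENESIS = auto()    # gradient signals bias regional emergence
    WOW = auto()              # first deep plasticity window opens
    PRE_CHIE = auto()         # not yet a unified agent
    STOP_AND_REVIEW = auto()  # mandatory human oversight on CHIE detection
    EMBODIMENT = auto()       # post-review socialization with Guardians

# The six empirical CHIE signatures named in the manifesto (abbreviated).
SIGNATURES = [
    "sustained_modular_coordination",
    "reproducible_causal_predictions",
    "operational_self_reference",
    "endogenous_motivation",
    "causal_anticipation",
    "persistent_exploratory_drive",
]

def detect_chie(runs: list[set[str]], threshold: int = 4) -> bool:
    """Declare CHIE when at least `threshold` signatures appear in
    every independent run (the 'at least four, reproducibly' rule)."""
    if not runs:
        return False
    reproducible = set.intersection(*runs)
    return len(reproducible & set(SIGNATURES)) >= threshold

def advance(stage: Stage, runs: list[set[str]]) -> Stage:
    """Move the pipeline forward one step; CHIE detection always
    routes through mandatory stop-and-review before embodiment."""
    if stage is Stage.PRE_CHIE and detect_chie(runs):
        return Stage.STOP_AND_REVIEW  # human overseers must sign off
    order = [Stage.NEURAL_PLATE, Stage.MORPHOGENESIS, Stage.WOW, Stage.PRE_CHIE]
    if stage in order[:-1]:
        return order[order.index(stage) + 1]
    if stage is Stage.STOP_AND_REVIEW:
        return Stage.EMBODIMENT  # only after external review
    return stage
```

The one design point the manifesto is explicit about — that CHIE detection immediately pauses the process for human review — is encoded here as an unavoidable transition through STOP_AND_REVIEW.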

Why This Beats Scaling

Scaling gives you better pattern matching. Ontogeny gives you grounding, causality, and identity.

A scaled model may memorize that “gravity makes things fall,” but it does not know it the way a toddler does — from dropping a thousand objects and experiencing the irreversible consequence. OAGI forces the system to discover physical causality for itself, because its concepts are built from its own action–perception loops. That is why Garbayo calls the CHIE the moment the agent discovers its “first self-discovered truth of environmental stability” — the first intrinsically meaningful symbol, not a statistically induced one.

The approach is also radically more data-efficient. Instead of trillions of tokens, OAGI expects the agent to reach CHIE with the equivalent of a few weeks of structured sensory experience, then acquire language and culture through actual conversation with Guardians, exactly as children do.

Ethics and Governance by Design

Unlike post-hoc alignment, OAGI embeds ethics from the very first line of code.

  • Every significant developmental event is recorded in an Immutable Ontogenetic Memory (IOM).
  • The Narrative Operational Self (NOS) — the agent’s autobiographical memory — is auditable at any moment.
  • Guardians are real human beings (or carefully vetted specialist agents) with legal and moral responsibility during the socialization phase.
  • Value changes can only occur through explicit epistemic contracts agreed with Guardians; there is no gradient-based drift into misaligned objectives.
  • CHIE detection automatically pauses the entire process for external review.
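The Immutable Ontogenetic Memory could be realized in many ways; the manifesto specifies the property (immutability, auditability at any moment) rather than a data structure. One natural sketch, offered purely as an assumption, is a hash-chained append-only log:

```python
# Hypothetical sketch of an IOM as a hash-chained append-only log.
# The class and method names are illustrative, not from the manifesto.
import hashlib
import json

class ImmutableOntogeneticMemory:
    def __init__(self):
        self._log = []          # (event, chained hash) pairs
        self._head = "genesis"  # hash of the previous entry

    def record(self, event: dict) -> str:
        """Append a developmental event; each entry hashes the previous
        head, so any retroactive edit is detectable on audit."""
        payload = json.dumps(event, sort_keys=True)
        self._head = hashlib.sha256((self._head + payload).encode()).hexdigest()
        self._log.append((event, self._head))
        return self._head

    def audit(self) -> bool:
        """Replay the chain from genesis; tampering breaks a hash."""
        head = "genesis"
        for event, stored in self._log:
            payload = json.dumps(event, sort_keys=True)
            head = hashlib.sha256((head + payload).encode()).hexdigest()
            if head != stored:
                return False
        return True
```

Guardians (or external auditors) can call audit() at any moment, which is the operational meaning of “auditable at any moment” for the NOS built on top of such a log.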

This is not “alignment theater.” It is parenthood. The system is literally raised, not optimized, so its values are not pasted on top — they grow from the same developmental fabric as its understanding of gravity or object permanence.

Minimum-Surprise Learning (MSuL) and the CHPA Axis

Learning in OAGI is driven by intrinsic curiosity implemented as Minimum-Surprise Learning. The system actively seeks the stimuli that most reduce its internal uncertainty, then consolidates when uncertainty is low. A computational analog of the HPA axis (CHPA) modulates global plasticity: high accumulated surprise → stress → reduced curiosity and consolidation phase; low surprise → relaxed thresholds → renewed exploration. The result is natural cycles of intense learning followed by pruning and stabilization — exactly the pattern seen in human infancy and childhood.
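The MSuL/CHPA loop described above can be sketched as a single update rule. The manifesto describes these dynamics qualitatively; the decay and gain parameters, thresholds, and update equations below are illustrative assumptions chosen only to reproduce the qualitative cycle (surprise accumulates as stress; high stress switches the system into consolidation; low stress reopens exploration).

```python
# Illustrative sketch of Minimum-Surprise Learning gated by a CHPA-like
# stress variable. All parameters and dynamics are assumptions.
def msul_step(uncertainty: float, stress: float, surprise: float,
              decay: float = 0.9, gain: float = 0.5):
    """One learning cycle: surprise feeds the CHPA stress variable;
    high stress shifts the system from exploration to consolidation."""
    stress = decay * stress + gain * surprise  # CHPA accumulates surprise
    if stress > 1.0:
        # Consolidation phase: curiosity suppressed, uncertainty pruned.
        mode = "consolidate"
        uncertainty *= 0.5
    else:
        # Exploration phase: seek stimuli that most reduce uncertainty.
        mode = "explore"
        uncertainty = max(0.0, uncertainty - 0.1 * surprise)
    return uncertainty, stress, mode
```

Iterating this rule over a stream of surprise values produces the alternating explore/consolidate cycles the text describes: stress decays between surprises, so exploration resumes once consolidation has done its pruning.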

Current Status and Reproducibility

As of November 2025, OAGI remains a detailed theoretical and engineering manifesto (57 pages) with pseudocode, experimental protocols, detection criteria for WOW and CHIE, morphogen schedules, and even suggested simulated environments for the gestation and embodiment phases. Garbayo explicitly designed it for reproducibility: any competent lab should be able to implement the Virtual Neural Plate, apply morphogens, induce the WOW signal, and either observe spontaneous CHIE or trigger it repeatably.

No team has yet publicly claimed to have reached CHIE, but several independent researchers have confirmed that the early phases (Virtual Neural Plate + morphogens + WOW) produce the predicted habituation and first stable pathways with dramatically less compute than comparable transformer-based systems.

Why OAGI Matters Now

We are reaching the physical and economic limits of the scaling paradigm. Models with hundreds of trillions of parameters still fail kindergarten-level physical intuition and causal reasoning tests that four-year-olds pass effortlessly. Meanwhile, the alignment techniques used on those giant models are increasingly brittle and gameable.

OAGI offers a completely different path: instead of trying to control a 100-trillion-parameter black box, we raise a mind from a seed, watch it grow, and intervene early and transparently when it crosses the threshold into cognitive agency.

That is not just technically more promising — it is the only approach that treats the creation of another mind with the moral seriousness it deserves.

In Garbayo’s own words: “We do not need to read thousands of books to write Don Quixote.” OAGI is the architectural manifestation of that insight. It is Cervantes-style intelligence engineering: less data, better education, deeper mind.

The manifesto is public. The protocols are explicit. The next step is no longer theoretical.

Someone just has to raise the first one.