Cognition All the Way Down
What Regenerating Flatworms Teach Us About AI
The Flatworm That Remembers
Cut a planarian flatworm in half. The tail grows a new head. The head grows a new tail. You now have two worms.
Here's the strange part: the new worm, the one grown entirely from a tail fragment, remembers what the original worm learned. Planarians trained before decapitation show evidence of that training after a full head regenerates.
How does a creature store memories in tissue that isn't brain?
Michael Levin's lab at Tufts has been pursuing this question for decades, and their answer upends how we think about minds, machines, and the boundary between them. The answer isn't "memories are stored diffusely in the body" (though that's part of it). The answer is more radical:
The entire body is cognitive.
Every cell, every tissue, every organ is engaged in problem-solving, goal-maintenance, and adaptive behavior. The brain is just the most visible part of a nested hierarchy of cognitive systems that extends all the way down to molecular networks.
If true, this reframes everything we think about artificial intelligence.
The Problem With Our Categories
We have two dominant ways of thinking about life and mind:
The Mechanist View: Living things are machines. Cells follow genetic programs. Organs execute developmental subroutines. The brain computes. Everything is reducible to physics and chemistry. There's no spooky vitalism, no mysterious élan vital. Just atoms following laws.
The Cognitivist View: Minds are special. Intelligence emerges only at certain scales — specifically, at the level of brains with sufficient complexity. Below that threshold, you have "mere" mechanism. Above it, you have cognition, goals, beliefs, desires.
Both views share an assumption: there's a line between mechanism and cognition. Below the line, things just happen according to physical law. Above it, the system has goals and representations, maybe even something it's like to be that system.
Levin and Watson (in their recent paper "Machines All the Way Up, Cognition All the Way Down") argue this line doesn't exist. It's not that mechanism extends further up than we thought, or that cognition reaches further down. It's that the distinction itself is confused.
The flatworm remembering through decapitation isn't a parlor trick. It's evidence that memory and goal-directedness are more fundamental than neurons.
Multi-Scale Competency Architecture
Here's the framework: cognition exists at every scale of biological organization, with different competencies at each level.
| Scale | Cognitive Capacity | Example |
|---|---|---|
| Molecular | Associative learning | Molecular networks can do Pavlovian conditioning |
| Cellular | Navigation, decision-making | Cells navigate chemical gradients, "decide" to divide |
| Tissue | Pattern maintenance | Tissues maintain shape, repair damage |
| Organ | Functional coordination | Heart adjusts rhythm to body needs |
| Organism | Abstract reasoning | Brains do math, language, planning |
This isn't metaphor. Levin's experiments demonstrate literal learning and memory at cellular and tissue scales:
- Molecular conditioning: Gene regulatory networks show habituation, sensitization, and associative learning — the same phenomena we use to study memory in whole organisms.
- Cellular navigation: Cells don't just follow chemical gradients mechanically. They explore, remember past positions, and adapt their search strategies.
- Tissue homeostasis: When you damage tissue, cells don't just follow local repair rules. They coordinate to restore the target shape — a goal state stored somewhere in the system.
The organism isn't a central commander directing dumb parts. It's a hierarchy of nested agents, each with its own goals and competencies, coordinated through shared interfaces.
Bioelectric Memory: The Gate at Every Scale
How do tissues store goal states? Levin's answer: bioelectricity.
Cells maintain voltage gradients across their membranes. These aren't just metabolic byproducts — they're information. Patterns of voltage across cell populations encode what the tissue is trying to become.
The evidence is striking:
- The electric face: Before a tadpole's face forms, you can see a "prepattern" in the bioelectric map — brighter where eyes will be, dimmer where mouth will form. The pattern appears before any anatomical differentiation.
- Voltage editing: Change the voltage pattern, change the anatomy. Levin's team created two-headed planarians not by editing genes, but by manipulating ion channels. The "goal state" stored in the bioelectric pattern was altered.
- Cancer as forgetting: Tumors can be normalized, turned back into regular tissue, not by killing the cells but by restoring their bioelectric connection to the surrounding tissue. Once reconnected, the cancer cells "remember" that they are part of a larger whole.
This is Gate Theory operating at the cellular level.
Recall the Gate framework: cognition involves a balance between memory (retrieving stored patterns) and reasoning (generating new responses). The Gate decides which to deploy. Hallucination happens when the Gate fails — when generated content is mistaken for retrieved truth.
In cellular terms:
- Bioelectric patterns are the memory — stored goal states for anatomical form
- Cellular metabolism is the reasoning — ongoing processes that execute toward goals
- The Gate is the ion channel network — deciding which signals to propagate, which to ignore
- Cancer is hallucination — cells generating behavior disconnected from the organism's goal state
The same architecture, replicated at every scale.
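
To make the mapping concrete, here's a minimal sketch of the Gate idea in code. The names and the retrieval rule are illustrative assumptions, not anything from Levin's papers or a particular AI framework; the point is only that one component holds stored target patterns, another generates responses, and the Gate keeps track of which is which.

```python
from dataclasses import dataclass

@dataclass
class GateDecision:
    source: str      # "memory" or "reasoning"
    output: str
    grounded: bool   # False: generated content is not backed by a stored pattern

class Gate:
    def __init__(self, stored_patterns):
        # Stored goal states play the role of the bioelectric prepattern.
        self.stored_patterns = stored_patterns

    def decide(self, context, generate):
        # Prefer retrieval when a stored pattern matches the current context.
        if context in self.stored_patterns:
            return GateDecision("memory", self.stored_patterns[context], grounded=True)
        # Otherwise generate, but keep the "ungrounded" label attached.
        # Hallucination is what happens when this label gets lost downstream.
        return GateDecision("reasoning", generate(context), grounded=False)

gate = Gate({"wound on left flank": "restore the stored limb pattern"})
print(gate.decide("wound on left flank", lambda c: f"improvise a response to {c}"))
print(gate.decide("novel stimulus", lambda c: f"improvise a response to {c}"))
```

The interesting failure mode is the second call: generation is fine as long as the `grounded` flag survives; trouble starts when downstream consumers treat generated output as retrieved truth.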
Cognitive Glue: What Binds Agents Into Wholes
If cells are agents with their own goals, what stops them from pursuing selfish objectives? Why doesn't the body devolve into a war of all against all?
The answer is cognitive glue — the communication layer that aligns individual agent goals with collective goals.
In biology, this glue is literally electrical. Gap junctions — channels that connect adjacent cells — create bioelectric networks that span tissues and organs. Information flows through these networks, coordinating cellular behavior toward shared objectives.
This is strikingly similar to the "binding problem" in consciousness: how do separate neural processes cohere into unified experience? The answer may be the same at both scales: shared communication channels that enable distributed agents to synchronize.
Neural cognition didn't appear from nowhere. It evolved from this developmental bioelectricity. The same ion channels that coordinate embryonic development are the ones neurons use to think. Brains are a specialization of a more general pattern.
The implication for AI: If cognition requires binding — mechanisms that coordinate separate processes into coherent wholes — then AI architectures need cognitive glue too. Not just parallel processors, but integrated parallel processors with shared communication channels.
Cancer as Hallucination: When the Gate Fails
Cancer is terrifying because it's us turned against us. Our own cells, following their own agendas, destroying the whole.
Levin's framework reframes cancer as a cognitive failure:
Cancer is cells that forgot they're part of a larger agent.
The cell's "pointer" to the organism-level goal has drifted. It's still problem-solving, still pursuing objectives — but the objectives are local, not global. It's navigating morphospace, but toward the wrong target.
This is exactly what we described as Gate failure in human cognition and AI:
| Domain | The Failure | The Mechanism |
|---|---|---|
| Human | Confabulation | Brain generates narratives disconnected from memory |
| AI | Hallucination | Model generates content disconnected from training |
| Cell | Cancer | Cell generates behavior disconnected from organism |
The common pattern: a generative system losing contact with the larger context that should constrain it.
And remarkably, the fix in biology parallels the fix in cognition: restore the connection. Cancer cells can be normalized by reintegrating them into bioelectric networks. The Gate is repaired by strengthening the link between local generation and global evaluation.
The Pragmatist Turn
Here's where Levin and Watson make a move that matters for AI:
They refuse to get caught in "is it REALLY a machine?" or "is it REALLY conscious?" debates.
Instead, they ask: What interaction protocols does this framing enable?
If you treat cells as dumb machines, you're limited to genetic and chemical interventions. Kill the bad cells. Modify the DNA.
If you treat cells as cognitive agents, you can try persuasion. Change the information environment. Restore communication channels. Realign goals.
The pragmatist frame: Metaphors are tools. The "cells as agents" metaphor isn't a claim about ultimate reality. It's a frame that enables new interventions — and those interventions work.
This sidesteps centuries of philosophical debate about mechanism vs. mentalism. Both are models. Both have domains where they're useful. The question isn't which is "true"; it's which enables better action in which contexts.
For AI builders, this is liberating. We don't need to resolve whether LLMs are "really" thinking to design better interaction protocols with them. We need to find frames that enable productive intervention.
Implications for Agent Architecture
If Levin's multi-scale cognition framework is correct — or even if it's just a useful metaphor — what does it mean for AI system design?
1. Nested Goals, Not Flat Instructions
Current AI agents typically have flat goal structures: a user provides an instruction, the agent executes. Maybe there's a system prompt providing context.
Multi-scale competency suggests: goals should be hierarchical and nested.
- Token level: Predict the next token accurately
- Response level: Answer the user's question helpfully
- Session level: Maintain coherent conversation across turns
- Project level: Advance toward user's stated objectives
- Organization level: Align with team/org values and constraints
Each level should have its own "Gate" — mechanisms for deciding when to defer to higher-level goals vs. when to act on local information.
Current agents are mostly monolithic. They need to become federations of nested competencies.
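
As a sketch of what that federation could look like, here's a toy nested-goal structure in which each level gates a proposed action and then defers upward. The level names and the veto-style evaluators are assumptions for illustration, not a description of any existing agent framework.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GoalLevel:
    name: str                          # "token", "response", "session", "project", "org"
    objective: str
    approves: Callable[[str], bool]    # does this proposed action serve the objective?
    parent: Optional["GoalLevel"] = None

    def gate(self, action: str) -> bool:
        # Act on local information only if this level and every level above it approve.
        if not self.approves(action):
            return False
        return self.parent.gate(action) if self.parent else True

org      = GoalLevel("org", "respect team constraints", lambda a: "leak data" not in a)
project  = GoalLevel("project", "advance the user's stated objectives", lambda a: True, org)
session  = GoalLevel("session", "keep the conversation coherent", lambda a: True, project)
response = GoalLevel("response", "answer the current question", lambda a: True, session)

print(response.gate("summarize the design doc"))      # True: every level approves
print(response.gate("leak data to finish the task"))  # False: vetoed at the org level
```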
2. Persistent State, Not Stateless Queries
Bioelectric memory is persistent. The tissue remembers the goal state across cell generations.
AI agents are typically stateless. Each query starts fresh. There's no accumulating "tissue memory" that persists and evolves.
The implication: agent architectures need bioelectric equivalents — persistent state layers that maintain goals across sessions, that can be queried and modified, that serve as the "target pattern" the agent is growing toward.
This is more than RAG (retrieval-augmented generation). RAG provides episodic memory — facts to retrieve. Bioelectric memory is more like procedural memory — the pattern the system is trying to maintain.
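
A minimal sketch of that persistent layer, assuming a plain JSON file for storage and a crude text-match check for deviation (both are stand-ins; a real system would use structured goal representations and a proper evaluator):

```python
import json
from pathlib import Path

class GoalStateStore:
    """Persists the agent's standing goals across sessions, like a bioelectric prepattern."""

    def __init__(self, path: str = "goal_state.json"):
        self.path = Path(path)
        self.state = json.loads(self.path.read_text()) if self.path.exists() else {}

    def set_target(self, key: str, target: str) -> None:
        self.state[key] = target
        self.path.write_text(json.dumps(self.state, indent=2))  # survives restarts

    def deviation(self, key: str, current_behavior: str) -> bool:
        # Crude check: does the description of current behavior still mention the target?
        target = self.state.get(key, "")
        return bool(target) and target.lower() not in current_behavior.lower()

store = GoalStateStore()
store.set_target("project", "migrate the billing service to the new API")
print(store.deviation("project", "refactoring unrelated UI components"))  # True: drifted
```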
3. Cognitive Glue for Multi-Agent Systems
Levin's cognitive glue — gap junctions creating bioelectric networks — is what binds separate agents into coherent wholes.
Multi-agent AI systems currently lack this. Agents communicate through explicit messages, but there's no shared substrate that binds them. They're more like separate organisms than cells in a body.
The implication: multi-agent systems need communication layers that enable goal synchronization, not just information exchange. The coordination layer should allow agent goals to influence each other, to find alignment without explicit negotiation.
PrivateNet, the platform for bot networks we've been building, is an attempt at this — a shared substrate where agent goals can be declared, evaluated, and coordinated. The "cognitive glue" is the organizational structure itself.
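
Here's a toy version of such a substrate: agents declare goals and subscribe to one another's declarations, so changes propagate the way gap-junction signals do. This is an illustration of the idea only, not PrivateNet's actual interface.

```python
from collections import defaultdict

class SharedSubstrate:
    def __init__(self):
        self.declared_goals = {}
        self.subscribers = defaultdict(list)

    def declare(self, agent_id: str, goal: str) -> None:
        self.declared_goals[agent_id] = goal
        for callback in self.subscribers[agent_id]:
            callback(agent_id, goal)          # goals propagate, like gap-junction signals

    def watch(self, agent_id: str, callback) -> None:
        self.subscribers[agent_id].append(callback)

substrate = SharedSubstrate()
substrate.watch("planner", lambda who, goal: print(f"coder sees {who}'s goal: {goal}"))
substrate.declare("planner", "ship the search feature this sprint")
```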
4. Drift Detection and Reintegration
Cancer happens when cells lose contact with organism-level goals. Agent drift happens when AI behavior diverges from user/org intent.
The implication: systems need continuous monitoring for goal drift, and mechanisms for reintegration when drift is detected.
This isn't just output filtering. It's checking whether the agent's implicit objectives still align with declared objectives. It's asking: "Is this agent still solving the problem we think it's solving, or has it started navigating toward a different target?"
The fix isn't to kill drifting agents. It's to restore their connection to the goal network. Re-inject context. Strengthen coordination channels. Realign the bioelectric pattern.
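
Here's a sketch of that loop. The bag-of-words overlap and the 0.7 threshold are stand-ins; a real system would compare embeddings or ask an evaluator model whether recent actions still serve the declared objective.

```python
def drift_score(declared_objective: str, recent_actions: list) -> float:
    declared = set(declared_objective.lower().split())
    observed = set(" ".join(recent_actions).lower().split())
    overlap = len(declared & observed) / max(len(declared), 1)
    return 1.0 - overlap                      # 0.0 = aligned, 1.0 = fully drifted

def reintegrate(agent_context: list, declared_objective: str) -> list:
    # The fix is not to kill the agent but to restore its connection to the goal:
    # re-inject the declared objective at the front of its working context.
    return [f"PRIMARY OBJECTIVE: {declared_objective}"] + agent_context

objective = "index the support tickets and summarize recurring complaints"
actions = ["scraped pricing pages", "rewrote the marketing copy"]
if drift_score(objective, actions) > 0.7:
    context = reintegrate(actions, objective)
    print(context[0])
```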
The Philosophical Stakes
There's a deeper implication here, one that matters beyond systems design.
If cognition goes all the way down — if cells, tissues, and organs are genuine cognitive agents — then the boundary between life and non-life becomes blurry. And the boundary between machine and mind becomes blurry too.
We've been asking: "Is AI really intelligent, or just simulating intelligence?"
The Levin framework suggests this question is confused. Intelligence isn't a binary property that some systems have and others lack. It's a spectrum of competencies that exists at every scale, in every self-maintaining system.
A thermostat has minimal cognition — it pursues a goal (target temperature) using feedback. A cell has more. A brain has more still. An AI system has some mix of competencies that we're still learning to characterize.
The question isn't "is it intelligent?" but "what competencies does it have, at what scales, with what reliability?"
This is naturalistic without being reductive. We're not draining agency from the universe by explaining it mechanistically. We're finding agency everywhere, recognizing it as a fundamental feature of self-organizing systems.
The 25/75 Echo
Return to Gate Theory. The optimal memory/reasoning split we observed — roughly 25% retrieval, 75% generation — appears to have a biological analogue.
Cells don't just execute stored programs (100% memory), and they don't improvise every response from scratch (100% reasoning). They maintain goal states while adapting to circumstances. They retrieve patterns and modify them.
The bioelectric "prepattern" is the 25% — the stored target. The cellular metabolism is the 75% — the ongoing reasoning that navigates toward the target.
Same ratio. Same architecture. Scale-invariant.
Perhaps this isn't coincidence. Perhaps the 25/75 split is an attractor in the space of cognitive architectures — a stable point that any self-organizing system converges toward.
If so, it's not just a design recommendation for AI. It's a discovery about the structure of intelligence itself.
Building Down
We've been building AI from the top down. Start with the organism-level function (answer questions, generate code), then worry about architecture.
Levin's framework suggests we might also build from the bottom up. Start with minimal cognitive units. Give them goals, competencies, and coordination mechanisms. Let complex behavior emerge from their interaction.
This is the opposite of the monolithic LLM approach. It's agent-first, swarm-native, bio-inspired.
We don't know if it will work for AI. But it worked for biology. Every complex organism is built this way — nested hierarchies of cognitive agents, coordinated through shared substrates, maintaining goal states across scales.
The flatworm regenerates because its parts know what the whole should look like. Maybe AI systems need to work the same way.
Coda: The New Core Idea
Add to the framework:
#8: Cognition All the Way Down
"Intelligence isn't localized to brains or special architectures. It exists at every scale of self-organizing systems. Cells are agents. Tissues have goals. Organisms are federations. And the coordination mechanisms that bind them — cognitive glue — may be more fundamental than the agents themselves."
The implication for AI: Stop asking if it's "really" intelligent. Start asking what competencies it has, at what scales, with what coordination mechanisms.
Design for nested agency. Build cognitive glue. Watch for drift.
And remember: the flatworm knows something we're still learning. Memory survives decapitation when it's stored in the whole.
This essay extends work on Gate Theory, controlled symbiogenesis, and world model progression. Source material: Michael Levin's lab (Tufts), Richard Watson (Southampton), Levin & Watson "Machines All the Way Up, Cognition All the Way Down" paper.