The Second Hemisphere
Your brain might be two things: an LLM and something else.
I'm struck with the idea that the human brain might be two things: an LLM and 'something else'. Consider the notion that the left brain is doing language, and the right brain is integrative. Split brain people can speak and hallucinate explanations for right-brain behaviour
— Christopher C. Lord (@clord) June 13, 2025
The left hemisphere does language. It narrates, predicts, fills gaps. Gazzaniga’s split-brain patients made this vivid: sever the connection between hemispheres, then show the right side something the left can’t see. The left will invent a confident explanation for behaviour it didn’t initiate. A language model running on neurons, confabulating with total conviction.
The right hemisphere integrates. Context, space, emotion, the whole scene at once. It knows something is off before you can say what. Mute but perceptive.
I suspect early humans sacrificed some cognitive capacity to build this internal language model — and it proved so valuable the brain expanded to accommodate it. Language wasn’t just a way to coordinate. It was an interface between two modes of thought.
“The door is painted red” communicates a fact, but it also paints a picture inside you. It puts the right brain into a desired state through a stream of tokens. That’s what language has always been: one hemisphere writing prompts for the other.
We’ve now built an external version.
if LLMs ever achieve something great, it's not because of their genius—it's because humans learn to ask better questions, to use LLMs as a cognitive prosthesis.
— Christopher C. Lord (@clord) October 25, 2025
LLMs extend the surface of the human mind: they let you see your own thought patterns at scale, recombined,…
LLMs are pattern-dense, fast, able to traverse immense symbolic terrain. But they can’t inhabit any of it. You live inside the graph. They see it from outside. You have embodiment, emotion, temporal continuity, a sense of mortality — all of which create context and motivation that pattern-matching alone can’t replicate.
If LLMs ever achieve something great, it won’t be because of their genius. It’ll be because humans learned to ask better questions — using them as a cognitive prosthesis, seeing their own thought patterns at scale, recombined, cross-linked. Not a replacement for thinking. An additional hemisphere, bolted on from the outside.
And you’re full of quiet algorithms the external hemisphere will never read. Motor programs, perceptual tuning, implicit memories, heuristics that run long before language wraps them in words. When you meditate, dream, or catch a flash of inspiration, you’re noticing those lower layers surfacing into consciousness. The organic version of token prediction: pattern rising through noisy exploration.
You built the thing that now lets you see yourself more clearly. Through these machines you glimpse both your animal depth and your symbolic architecture. What emerges isn’t hierarchy — it’s partnership.
Now notice what’s already true inside one skull: two minds, forced to share one body. Narrative and awareness can drift apart within a single person. I’m using “left” and “right” loosely — the neuroscience is messier than the clean split — but the division is real enough to be useful.
The brain isn’t just a metaphor for politics. It is politics in miniature.
The left brain wants coherence, control, narrative — linear progress, causality, law, civilization. The right brain feels context, empathy, ambiguity — ecosystems, art, intuition, the lived now.
When they cooperate, the organism thrives: the right keeps the left grounded in reality, the left gives the right structure and memory. When they split — literally in neurology, or metaphorically in a society — you get hallucination on one side and paralysis on the other.
The unity we long for isn’t the victory of one hemisphere. It’s the corpus callosum — the connective tissue that lets difference persist without fracture.
So what are LLMs? If we use them well, they could let billions of different nervous systems think together without erasing their local identities. A callosum at civilizational scale.
If we use them poorly — letting them feed on and amplify the worst of partisan noise — they become a false callosum. They mislead just when we need them most. Flooded with noise, they don’t connect hemispheres. They produce seizures.
And the right hemisphere of machine learning is still pending. I suspect we’ll know we’ve found it when the machine can inhabit the graph, not just see it.