For centuries, there’s been a low-key intellectual turf war. On one side: wordcels—people who think in sentences, argue in essays, and believe language is the skeleton key to human intelligence. On the other: shape rotators—those who think in images, mentally spin 3D objects, and solve problems without a single word.
And then large language models (LLMs) came along… and accidentally sided with Team Wordcel.
The Big Reveal: Abstract Thought Runs on Language
The proof is almost biblical—literally. In Genesis, God creates the world through words: “Let there be light.” In the Gospel of John: “In the beginning was the Word, and the Word was with God, and the Word was God.” Ancient writers were onto something—words aren’t just a tool for thought, they’re the operating system.
LLMs make that claim tangible. They’ve shown that almost any abstract thinking—whether it starts as a diagram, a formula, or a mental movie—can be translated into language and reasoned about entirely in words.
Take a visual puzzle: remove two opposite corners from a chessboard. Can you tile it with dominoes?
A human shape rotator might just “see” that it doesn’t work. An LLM explains it:
Each domino covers one black and one white square. Removing two same-colored corners leaves unequal numbers of black and white squares. Therefore, impossible.
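If you want to check the wordcel version of that argument mechanically, here's a minimal Python sketch (the 8×8 board and the choice of which corners to delete are just illustrative assumptions) that counts the black and white squares left after removing two opposite corners:

```python
# Color an 8x8 board like a chessboard, delete two opposite corners,
# and tally what's left. Each domino always covers one black and one
# white square, so unequal tallies mean no tiling can exist.

def remaining_colors(n=8, removed=((0, 0), (7, 7))):
    counts = {"black": 0, "white": 0}
    for row in range(n):
        for col in range(n):
            if (row, col) in removed:
                continue  # skip the deleted corners
            color = "black" if (row + col) % 2 == 0 else "white"
            counts[color] += 1
    return counts

print(remaining_colors())  # {'black': 30, 'white': 32} -> tiling impossible
```

The point isn't the code; it's that the entire visual intuition compresses into a sentence of bookkeeping about colors.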
Or take engineering:
“Will this arch collapse?”
An engineer might picture it sagging. An LLM narrates the load distribution, material limits, and physics equations—all in text.
Language can swallow other kinds of reasoning. It can take geometry, physics, ethics, law—stuff that starts outside words—and pull it into sentences. Once inside that verbal space, it can be combined, compared, and applied anywhere.
The One-Way Street
But try going the other direction and it’s way harder.
Ask a wordcel to imagine a tesseract spinning in 4D space, and they’ll probably say something like:
“Okay, so it’s like a cube, but every face is connected to another cube, and…”
…before they give up and reach for a diagram.
LLMs have the same problem. Even the fancy multimodal ones convert images into text internally before reasoning. They can write you a flawless manual on how to build a bridge, but they don't see the tension lines in the beams until they've translated them into words first.
Visual → Language? Easy.
Language → Pure mental image? Messy.
Why Philosophers Score High: The Power of Verbal Reasoning
The triumph of verbal reasoning shines brightest in philosophy, a field where wordcels reign supreme. According to the American Philosophical Association, philosophy majors consistently rank at or near the top among all majors on graduate school admission exams, year after year and across test sections. Data averaged from 2006 to 2009, shown in the chart below, highlights their exceptional performance on the GRE. Why? Philosophy trains the mind to wield language with precision, tackling abstract concepts through rigorous verbal reasoning.

Unlike disciplines that lean on visuospatial skills (like engineering) or empirical data (like biology), philosophy is a verbal battleground. It demands constructing arguments, dissecting concepts, and navigating complex ideas through language alone. Philosophers excel at translating abstract problems—whether ethical dilemmas or logical proofs—into clear, structured verbal frameworks, much like LLMs do when solving math or reasoning tasks. This linguistic dexterity gives them an edge on exams that test analytical writing, verbal comprehension, and logical reasoning, all of which reward the ability to manipulate abstract ideas through words.

But this strength is also their vulnerability. The same verbal prowess that propels philosophy majors to the top of the score charts is precisely what LLMs are built to replicate. As AI masters linguistic abstraction, the philosopher's edge risks being dulled, while shape rotators—whose visuospatial intuition remains harder to automate—may retain their advantage in domains less reducible to text.
Enter Chris Langan: The First Galaxy-Brain Wordcel?
If the “wordcel vs. shape rotator” debate had an esoteric, galaxy-brain champion, it would be Chris Langan — the self-taught genius with an IQ that allegedly leaves Einstein in the dust and a pet theory of reality called the Cognitive-Theoretic Model of the Universe (CTMU).
Langan’s core idea? Reality itself is a kind of self-configuring, self-processing language (SCSPL). In this view, the universe isn’t just describable in words — it’s literally made of them, or rather, made of something like a cosmic meta-language that bootstraps itself into existence. Think “The Matrix,” but instead of green code, the substrate is pure logical syntax.
Now, here’s where LLMs come in. Large language models accidentally pull off a mini-CTMU act every time you ask them to solve something abstract. Whether it’s a math proof, a legal argument, or a philosophical thought experiment, the model doesn’t “see” objects, feel textures, or rotate 3D shapes in some visual cortex. It just pushes symbols around in the vast space of linguistic patterns — and yet it still manages to handle a staggering range of human cognitive tasks.
In CTMU terms, this is like proving that a huge chunk of “intelligence” runs perfectly well on language syntax alone, without any other sensory channels. Even when a problem starts in the visual-spatial realm (say, describing a Rubik’s Cube move), the solution can be funneled entirely into a linguistic form, processed, and spit back out as a coherent answer. Try doing the reverse — turning a messy political debate into a crisp, purely spatial diagram — and you’ll see why the asymmetry matters.
So, while LLMs don’t prove the metaphysical parts of Langan’s theory (like teleology or reality’s “self-awareness”), they do give a shockingly practical demonstration of one of its main intuitions: that abstract intelligence has a deep linguistic core. Or, to put it in CTMU-flavored meme terms, GPT is basically a baby universe running on sentences.
Wordcel High, Wordcel Low
For wordcels, the AI era is like winning the argument and losing your job in the same week.
High: Vindication! Abstract reasoning is built on sentences.
Low: Now a machine writes better sentences, faster, and doesn’t even need coffee.
The once-rarefied ability to turn reality into clean, precise language has suddenly become common.
Meanwhile, in Shape Rotator Land…
Shape rotators still have their moat. They can watch a gear assembly spin in their mind’s eye, trace how a skyscraper’s weight flows down through its frame, or mentally shuffle molecules into new configurations — all without a single word passing through their heads.
AI can narrate those processes in exquisite detail, but it doesn’t see them. That kind of raw, non-verbal mental simulation remains stubbornly human turf. So while shape rotators may have lost the philosophical argument about the primacy of language, they still hold ground where it counts to them: in the real world, turning ideas into tangible things. And honestly, that’s the only scoreboard they care about.