Compelling Science Fiction - Intelligence Is a Landscape, Not a Ladder

joe@compellingsciencefiction.com

January 3, 6:21 pm

Intelligence Is a Landscape, Not a Ladder
Over the holidays, many friends and family members asked me how I think about AI, given my background in science fiction and tech. The question usually boils down to: "when will AI be as smart as a human?" But I kept having to back up before I could answer, because the question itself contains a misconception.
The common mental model goes something like this: intelligence is a single dimension, a line from "dumb" to "smart." Rocks are at the bottom, insects are a bit higher, dogs are higher still, humans are at the top, and somewhere above us lurks the specter of superintelligent AI. This model is so deeply embedded in our thinking that we rarely question it. We ask "how intelligent is this?" as if it were a meaningful question with a single answer.
But here's a better way to think about it: intelligence isn't a ladder leading up to some AGI peak. It's a landscape, a vast terrain with countless dimensions, and there's no summit.
What does that actually mean?
Imagine a vast landscape with way more than just three spatial dimensions. Each dimension represents a different cognitive capability: reasoning (inductive, deductive, spatial, etc.), learning, adaptation, abstraction, problem-solving, transfer of knowledge to novel domains, language comprehension, pattern recognition, social modeling, memory retrieval, motor coordination, emotional processing, and many more labeled and unlabeled dimensions. Many of these can themselves be broken down into finer dimensions. Every thinking entity occupies a point (or, more accurately, a small region) on this high-dimensional surface.
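To make the picture concrete, here is a minimal sketch in Python of what "a point on a high-dimensional surface" means in practice: a mind is just a profile of scores over named capability dimensions, and profiles only admit a partial ordering. The dimension names and numbers below are invented purely for illustration; they aren't measurements of anything.

    # Toy model: a "mind" is a profile over named capability dimensions.
    # All dimension names and scores are invented for illustration only.
    from typing import Dict

    Profile = Dict[str, float]  # dimension name -> capability score, 0.0 to 1.0

    honeybee: Profile = {
        "spatial_navigation": 0.9,
        "foraging_optimization": 0.95,
        "language": 0.05,
        "abstract_reasoning": 0.05,
    }

    chess_engine: Profile = {
        "chess_calculation": 1.0,
        "spatial_navigation": 0.0,
        "language": 0.0,
        "abstract_reasoning": 0.0,
    }

    def dominates(a: Profile, b: Profile) -> bool:
        """True only if `a` scores at least as high as `b` on every dimension."""
        dims = set(a) | set(b)
        return all(a.get(d, 0.0) >= b.get(d, 0.0) for d in dims)

    # Neither toy profile dominates the other, so "which one is smarter?"
    # has no single answer until you pick a dimension to compare along.
    print(dominates(honeybee, chess_engine))   # False
    print(dominates(chess_engine, honeybee))   # False

Neither profile dominates the other, which is the point of the landscape model: "smarter than" is only well-defined once you choose a dimension (or a weighting over dimensions) to compare along.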
Now here's where it gets interesting: entities that seem similar to us cluster together, but not in ways that form a neat hierarchy.
Some examples
A honeybee has a brain with about a million neurons, whereas humans have on the order of a hundred billion. By the linear model, it should be barely intelligent at all. But bees can navigate using the sun's position, communicate the location of food sources through elaborate dances, recognize human faces, and perform basic arithmetic. They occupy a point on the surface that's extremely high in certain dimensions (spatial navigation, efficient foraging optimization) and essentially zero in others (abstract reasoning, language).
An octopus is a genuinely alien intelligence. It can solve puzzles, use tools, remember individual humans, and escape from supposedly secure tanks with Houdini-like skills. But two-thirds of its neurons are in its arms, which can act semi-autonomously. It's not "smarter" or "dumber" than a crow or a dog. It's intelligent in a completely different shape.
Chess engines are off the charts in one particular dimension while being essentially zero in most others. Stockfish can calculate positions at a level no human has ever achieved or ever will, but it can't recognize that the board is on fire or that its opponent has fallen asleep. It's a needle-thin spike jutting up from the surface of the intelligence landscape.
And then there's us. Humans form a cluster on this surface. We're all pretty similar, relatively speaking, though of course there's variation (I'm looking at you, John von Neumann). Some people are better at spatial reasoning, others at verbal processing, others at social modeling. We all share roughly the same shape, though: high in certain dimensions (language, social cognition, tool use, abstract planning) and lower in others (we have very little ability to do rapid arithmetic calculations, our memory is lossy and unreliable, and our sensory processing is narrowband).
Where do LLMs fit?
Large language models are simply another cluster of points on this surface. They're remarkably high in some dimensions: language fluency (albeit with only tenuous world models), broad but spotty knowledge retrieval, pattern recognition across text, certain types of reasoning. They're essentially zero in others: they have no persistent memory across conversations, no embodiment, no sensory experience, no continuous learning ability.
This is why debates about whether LLMs are "really intelligent" or whether they've achieved "AGI" tend to generate more heat than light. Those questions assume the linear model. They assume there's a threshold you cross, a finish line where you become generally intelligent.
Since intelligence is a multi-dimensional surface, the question "is this AGI?" depends entirely on which dimensions you're measuring and what you're comparing against. An LLM is already superintelligent at some tasks and completely incapable at others. So is a calculator. Human brains themselves look incapable next to a migratory bird's magnetic navigation or a dog's ability to parse small molecules by smell.
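One way to see why, continuing the toy sketch from earlier (again, the dimensions, scores, and weights are invented for illustration): any single "intelligence score" has to choose a weighting over dimensions, and different weightings rank the same two profiles differently.

    # Collapsing a multi-dimensional profile into one number requires choosing
    # weights over dimensions, and the ranking flips with the choice of weights.
    from typing import Dict

    Profile = Dict[str, float]

    human: Profile = {"language": 0.9, "social_modeling": 0.9, "rapid_arithmetic": 0.1}
    llm: Profile   = {"language": 0.8, "social_modeling": 0.3, "rapid_arithmetic": 0.7}

    def score(profile: Profile, weights: Dict[str, float]) -> float:
        """Weighted sum of a profile's dimensions: one arbitrary 'intelligence score'."""
        return sum(weights.get(d, 0.0) * v for d, v in profile.items())

    spreadsheet_weights = {"language": 0.3, "social_modeling": 0.2, "rapid_arithmetic": 0.5}
    dinner_party_weights = {"language": 0.4, "social_modeling": 0.6, "rapid_arithmetic": 0.0}

    print(score(human, spreadsheet_weights), score(llm, spreadsheet_weights))    # llm profile scores higher
    print(score(human, dinner_party_weights), score(llm, dinner_party_weights))  # human profile scores higher

Under one weighting the hypothetical LLM profile "wins"; under the other, the human profile does. The ranking is a property of the weights you chose, not of the minds themselves.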
Why this matters
The surface model changes how we should think about AI development and risk. Instead of imagining a smooth ascent toward superintelligence, we should expect AI systems to have spiky, irregular profiles that expand unevenly across different dimensions. Some of those spikes will be useful. Some will be dangerous. Most will be weird in ways we haven't anticipated.
It also changes how we should think about ourselves. Humans aren't at some global optimum; our peak is a decidedly local one. We're a cluster on a vast surface, one that evolution happened to carve out because it was useful for social primates on the African savanna. We're very good at some things and oblivious to entire dimensions of cognition that other minds navigate effortlessly.
A tool, not a stance
I want to be clear about what I'm not saying. I'm not making any particular ideological claim about AI development, AI ethics, AI safety, or AI policy. People with very different views on those questions can all accept the surface model of intelligence.
What I am saying is that we'll think more clearly about all of these questions if we stop imagining intelligence as a ladder and start imagining it as a landscape. The linear model is a cognitive shortcut that made some sense when we were mostly comparing humans to each other. It's increasingly misleading in a world where we're building intelligences that are shaped nothing like ours.

If you enjoyed this post, you might also enjoy my new book, Think Weirder: The Year's Best Science Fiction Ideas.