The Copernican View of Intelligence: We May Not Be the Center After All
A software engineer’s reflection on the paper Mathematical Methods and Human Thought in the Age of AI, the mechanization of thought, and the shrinking gap in human uniqueness.
I’ve spent years honing the craft of software engineering. You develop a specific kind of intimacy with a codebase: a muscle memory for syntax, a gut feeling for where the bug is hiding before you even run the debugger. It’s a skill that felt, if not irreplaceable, at least solid.
Then I watched an AI write a feature in thirty seconds that would have taken me an afternoon.
It wasn’t just faster. It was cleaner. It was better.
Today I read Tanya Klowden and Terence Tao’s paper, Mathematical Methods and Human Thought in the Age of AI. It crystallized something I’ve been struggling to articulate as a software engineer. The paper describes the changing challenges mathematicians face, and that description maps almost perfectly onto software engineering: AI is making the tedious, grinding work easier, while checking and verifying AI-generated proofs is becoming the new, harder frontier.
In software, we’re drowning in that exact frontier right now.
The Flood of the “Almost Right”
Any maintainer of an open-source project will tell you we’ve entered a strange new era of code review hell. The bottleneck used to be writing the code. Now the bottleneck is reading it.
We are flooded with AI-generated pull requests. They look good. They pass the linting tests. They pass AI-powered code review tools. But here’s the recursive trap: the generator learns to write code that passes the reviewer, not code that solves the human problem. The AI writes to the rubric, and the rubric is also written by AI. We are building software in a hall of mirrors where no human has verified that the reflection is real.
This feels like the software version of the replication crisis in science. We have more “significant” results, more working features, more merged PRs than ever before, but the underlying truth is eroding. The process has inverted. AI made the easy part instantaneous and made the hard part, knowing whether any of it matters, even harder. This is the software version of the mathematician’s new burden. It’s not about doing the work. It’s about knowing when the work is a lie.
The Dwarfing
I want to be honest about the ego death involved here. The shock isn’t that a machine can type faster. The shock is the dwarfing. It’s the realization that the cognitive ladder I’ve been climbing for years, the syntax wizardry, the algorithmic recall, is now a service provided by a commodity API.
A skeptic might say this is just Luddite whining. Every generation of tooling dwarfed the one before it. The compiler dwarfed the assembly programmer. The framework dwarfed the vanilla JavaScript developer. Why is this time different?
The difference is the line between automation and obsolescence. A compiler made you faster, but you still had to know what a loop was. You still held the mental model. An LLM often writes the loop without you ever needing to understand why that specific loop was chosen. **The dwarfing isn’t about speed. It’s about the outsourcing of cognition.** The machine is now running the inner dialogue, “hmm, that’s strange, let me try something else,” that used to be our private, value-generating thought process.
I wrote previously about The Loop We Share, that moment when you read an LLM’s reasoning trace and it says, “this is strange… let’s try another approach.” It feels familiar not because the machine has a soul, but because you’ve lived inside that loop your entire life. You form a hypothesis. You test. You fail. You revise. The LLM’s trace doesn’t mimic human thought. It mimics the external shape of human problem-solving, the shape we’ve been projecting onto paper and terminals for centuries.
The difference now is that the machine moves through that loop at the speed of electricity. It iterates on failure while we’re still typing the command. That’s the dwarfing part.
The Copernican View of Intelligence
This is where the paper gave me a moment of unexpected peace. It refers to a “Copernican view” of human intelligence.
Copernicus didn’t destroy Earth. He just moved it out of the center of the solar system. For centuries, we’ve placed human cognition at the center of the intellectual universe. It was the ultimate benchmark, the unique, ineffable spark.
What if we’re not losing that spark? What if we’re just discovering that we were never the center of the intelligence universe to begin with?
That was the real turning point for me. It’s not about asking, “Are we being replaced?” It’s about realizing, “We were never as special as we thought.” Intelligence is not a binary state reserved for Homo sapiens. It’s a spectrum emerging in the universe, and we just happened to be the first, on this planet, to build the next node on that spectrum.
It’s a humbling, almost spiritual shift. Instead of fighting to protect our unique throne, we can look outward. We are part of a larger system of computation and cognition. This view doesn’t erase humans. It just corrects our astronomical coordinates. It’s reassuring in the way a clear night sky is reassuring: you feel small, but you also feel connected to something vast.
But I want to be honest about the limits of the Copernican metaphor. The Copernican view is the long-term map. It helps orient us when the ground shakes. But the map doesn’t stop the earthquake. It’s a navigational tool, not an escape hatch.
The historical parallel that haunts me is textile mechanization. The power loom dismantled a skilled artisan class, replacing weavers first with machine tenders and then with the machines alone. The impact was violent and generational. This time, the loom is targeting both physical and intellectual labor at once. There is no refuge in knowledge work. The loom has learned to weave thought.
I know this echoes the fears that greeted the steam engine. And yes, work changed rather than ended. But that transition took half a century of chaos. The difference now is not just the kind of change, but the speed and scale.
The Industrial Revolution succeeded because demand for physical goods is effectively infinite. Demand for cognitive goods may be too, but the bottleneck is human attention. You can generate 10,000 lines of perfect code in seconds, but a human brain can only hold a handful of things at once. We’ve hit a biological ceiling. The weaver could pick up the extra cloth. The engineer cannot hold more complexity in their head. The loom spins faster than we can grasp.
And speed compounds this. The first loom took generations to perfect. AI iterates on its own design every six months. We’re losing our place not just at the center of the intellectual universe, but in the feedback loop itself. We’re becoming observers of a process we used to power. That’s the vertigo we have to face without flinching.
Finding Our New Position
I’m still trying to figure out the new role for the software engineer. I don’t have the answer yet. I know it lies in complement, not competition. It lies in the loop we share, the space where human desire, messy real-world constraints, and AI’s relentless execution meet.
The paper hints at this future. Mathematicians may become curators of truth rather than just calculators of steps. Engineers may become architects of intent and auditors of complexity. Our value shifts from running the loop to defining its boundaries. We are the ones who know why the loop matters in the first place.
But for now, we’re sitting in the shockwave. We are watching the gap between us and the machine shrink faster than we can redefine the role. We have to accept the Copernican reality, that we are not the only players in the game anymore, and then, with clear eyes, do the terrifying, exhilarating work of building a society where humans and this new intelligence can coexist.
We can’t do it alone. And maybe, for the first time in history, that sentence isn’t just a metaphor about community. It’s a technical specification for our relationship with a new mind.