The Loop We Share: What Reasoning Traces Reveal About How Humans Actually Think

I was reading through an LLM’s reasoning trace while debugging code. The pattern was instantly familiar. It tried something. It failed. It adjusted. It tried again, then again, then again, until it worked.

At some point, I caught myself thinking: this is exactly how humans solve problems. And then a second thought followed: or maybe this is just what we think thinking looks like.

The Loop We Recognize

If you’ve debugged a system, you know the pattern. You form a hypothesis, test it, hit an error, revise, and try again. It’s the rhythm of problem solving, and it doesn’t live only in code. It shows up when you’re troubleshooting, tuning a model, or just trying to figure out why something feels off.

So when an LLM’s trace reads “this didn’t work… let’s try another approach,” it doesn’t register as mechanical. It registers as familiar. You’ve thought that sentence yourself.

That familiarity is worth examining. Not to decide whether machines are “really” thinking, but because of what the resemblance reveals about us.

The Part That Feels Too Real

Sometimes the model goes further. The trace includes phrases like:

  • “this is strange”
  • “that’s unexpected”
  • “hmm… something is off”

These phrases aren’t just describing steps. They’re mirroring the language humans use to narrate uncertainty out loud, to themselves, or to a colleague looking over their shoulder.

That’s where the illusion starts to form. Not because the model is feeling anything. But because it reproduces the language we associate with thinking, at precisely the moments where thinking looks most human.

Not Feeling, But Not Random Either

The simplest explanation is that it’s mimicking language patterns. That’s true. But it’s not the whole story.

Recent research from Anthropic suggests that large language models develop internal representations of emotion-related concepts, such as “uncertainty” or “error,” that correlate with how they respond in different situations. These aren’t emotions in any human sense. But they may function as signals associated with certain behaviors, especially when the model is navigating ambiguous or failing paths.

The operative word is correlate, not cause. The model isn’t distressed when it says “this is strange.” But it isn’t producing that phrase at random, either. Something in its internal state is marking this moment as different, and the trace reflects that.

What This Reveals About Problem Solving

Watching these traces doesn’t prove that models think like humans. But it does surface something worth sitting with: a large portion of practical problem solving, especially in engineering, follows a structured loop.

Propose. Test. Evaluate. Adjust. Repeat.

Humans experience this as: “why isn’t this working?”

Models express it as: “let’s try another approach.”

Different mechanisms, similar observable pattern. That raises an uncomfortable question: how much of what we call thinking is actually just this loop, dressed up in subjective experience?
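The loop above is concrete enough to sketch. Here is a minimal, hypothetical version in Python; the function names (`propose`, `test`) are illustrative placeholders for whatever generates and checks candidates in a given domain, not part of any real system:

```python
def solve(problem, propose, test, max_attempts=10):
    """Iteratively refine candidates: propose, test, evaluate, adjust, repeat."""
    feedback = None
    for _ in range(max_attempts):
        candidate = propose(problem, feedback)  # propose (adjusted by feedback)
        ok, feedback = test(candidate)          # test and evaluate
        if ok:
            return candidate                    # the loop found a fix
    return None  # gave up; a human might reframe the problem instead
```

Notice what the sketch cannot do: it explores only the space that `propose` and `test` define. Stepping outside that space is exactly the move discussed below.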

Where the Comparison Breaks

It’s tempting to stop there and conclude that thinking is iteration. But that only captures part of it.

Humans don’t just move inside the loop. We move around it.

Sometimes the most important step isn’t solving the problem. It’s questioning whether the problem is worth solving at all. Not “how do I fix this,” but “why am I fixing this.”

Iteration explores a space. Humans can change the space itself.

There are also moments in thinking that don’t look like iteration at all:

  • stepping back and reframing entirely
  • deciding when something is good enough
  • recognizing when a path isn’t worth pursuing
  • making a non-linear leap: “What if we approached this completely differently?”

These aren’t better iterations. They’re exits from the loop.

Two solutions can be equally correct. But humans still ask: which one fits what we’re actually trying to do? That layer of judgment, meaning, and context doesn’t come from iteration alone.

A More Grounded Reading

So what are we actually seeing in those reasoning traces?

Not AI crossing over into human thought, but something real nonetheless: LLMs are genuinely capable of exploring a problem space through iterative refinement, especially when guided by feedback. What looks like persistence is often continuation within a prompt, or loops created by surrounding systems. What looks like emotion is context-appropriate language, possibly tied to internal representations we don’t fully understand yet.
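The phrase “loops created by surrounding systems” is worth making concrete. In a typical agent harness, the retry behavior lives outside the model: the harness runs the model’s output, catches the failure, and feeds the error back in. A hypothetical sketch, with `call_model` and `run_code` standing in for whatever model API and execution sandbox a real system would use:

```python
def run_agent(call_model, run_code, task, max_rounds=5):
    """Outer harness: the system, not the model, decides to retry."""
    history = [task]
    for _ in range(max_rounds):
        code = call_model(history)   # model proposes a solution
        ok, output = run_code(code)  # environment executes it
        if ok:
            return output
        # the "persistence" we read into the trace happens here
        history.append(f"That failed with: {output}. Try another approach.")
    return None
```

Each round, the model only ever sees a longer transcript. The felt continuity of “it kept trying” is a property of the harness as much as of the model.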

That’s the more grounded reading. And it’s still significant.

The Shift of the Human Role

What’s changing isn’t that AI has developed something like a mind. It’s that we can now externalize parts of the problem-solving loop, specifically the parts that are most mechanical: iteration, variation, retrying without fatigue.

That shifts the human role. Not toward less thinking, but toward a different kind of thinking: framing the problem, selecting among solutions, judging what fits.

The loop has always been there. We just never had anything to hand it off to before.

What You’re Actually Seeing

Go back to that trace:

  • “this is strange”
  • “let’s try another approach”
  • “that didn’t work”

It reads like a mind at work. But what you’re really watching is a system navigating a problem space using learned patterns of language and feedback. That’s enough to feel familiar, because the pattern it’s following is one you run yourself, constantly, often without noticing.

It doesn’t mean AI is thinking like us.

But it does suggest something harder to dismiss: some of what we call thinking, the part we use every day, the part that feels effortful and distinctly human, may be more structured, more repeatable, and less mysterious than we’ve assumed.

And the parts that remain genuinely ours (reframing, judgment, meaning) are becoming easier to see, precisely because something else is handling the rest.

If this resonated, I’d be curious what you’ve noticed in your own reasoning traces, or in your own thinking. The comparison tends to surface different things for different people.

Originally published on Medium.