Is chain-of-thought AI reasoning a mirage?

68 points by ingve | 56 comments | 8/14/2025, 1:48:24 PM | seangoedecke.com


hungmung · 1m ago
Chain of thought is just a way of trying to squeeze more juice out of the lemon of LLMs; I suspect we're at the stage of running up against diminishing returns and we'll have to move to different foundational models to see any serious improvement.
stonemetal12 · 1h ago
When using AI they say "Context is King". "Reasoning" models are using the AI to generate context. They are not reasoning in the sense of logic or philosophy. Mirage, or whatever you want to call it, it is rather unlike what people mean when they use the term reasoning. Calling it reasoning is up there with calling output people don't like "hallucinations".
adastra22 · 1h ago
You are making the same mistake OP is calling out. As far as I can tell “generating context” is exactly what human reasoning is too. Consider the phrase “let’s reason this out” where you then explore all options in detail, before pronouncing your judgement. Feels exactly like what the AI reasoner is doing.
stonemetal12 · 49m ago
"let's reason this out" is about gathering all the facts you need, not just noting down random words that are related. The map is not the terrain, words are not facts.
energy123 · 31m ago
Performance is proportional to the number of reasoning tokens. How do you reconcile that with your opinion that they are "random words"?
kelipso · 26m ago
Technically, random outcomes can have probabilities associated with them. In casual speech, random means equal probabilities, or that we don't know the probabilities. But for LLM token output, the model does estimate the probabilities.
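A minimal sketch of that distinction (plain Python, made-up numbers): "random" in the casual sense picks every token equally, while an LLM samples from the distribution it estimates.

    import random

    # Toy vocabulary and made-up model-estimated probabilities for the next token.
    vocab = ["cat", "dog", "the", "quantum"]
    estimated_probs = [0.05, 0.05, 0.85, 0.05]  # hypothetical softmax output

    # "Random" in the casual sense: every token equally likely.
    uniform_pick = random.choice(vocab)

    # What an LLM does: sample according to its estimated distribution.
    weighted_pick = random.choices(vocab, weights=estimated_probs, k=1)[0]

    print(uniform_pick, weighted_pick)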
kelipso · 29m ago
No, people make logical connections, make inferences, make sure all of it fits together without logical errors, etc.
pixl97 · 8m ago
These people you're talking about must be rare online, as human communication is pretty rife with logical errors.
phailhaus · 31m ago
Feels like, but isn't. When you are reasoning things out, there is a brain with state that is actively modeling the problem. AI does no such thing: it produces text and then uses that text to condition the next text. If it isn't written, it does not exist.

Put another way, LLMs are good at talking like they are thinking. That can get you pretty far, but it is not reasoning.

double0jimb0 · 29m ago
So exactly what language/paradigm is this brain modeling the problem within?
phailhaus · 28m ago
We literally don't know. We don't understand how the brain stores concepts. It's not necessarily language: there are people that do not have an internal monologue, and yet they are still capable of higher level thinking.
chrisweekly · 20m ago
Rilke: "There is a depth of thought untouched by words, and deeper still a depth of formless feeling untouched by thought."
slashdave · 37m ago
Perhaps we can find some objective means to decide, rather than go with what "feels" correct.
mdp2021 · 39m ago
But a big point here becomes whether the generated "context" then receives proper processing.
bongodongobob · 51m ago
And yet it improves their problem solving ability.
modeless · 51m ago
"The question [whether computers can think] is just as relevant and just as meaningful as the question whether submarines can swim." -- Edsger W. Dijkstra, 24 November 1983
griffzhowl · 10m ago
I don't agree with the parallel. Submarines can move through water - whether you call that swimming or not isn't an interesting question, and doesn't illuminate the function of a submarine.

With thinking or reasoning, there's not really a precise definition of what it is, but we nevertheless know that currently LLMs and machines more generally can't reproduce many of the human behaviours that we refer to as thinking.

The question of what tasks machines can currently accomplish is certainly meaningful, if not urgent, and the reason LLMs are getting so much attention now is that they're accomplishing tasks that machines previously couldn't do.

To some extent there might always remain a question about whether we call what the machine is doing "thinking" - but that's the uninteresting verbal question. To get at the meaningful questions we might need a more precise or higher resolution map of what we mean by thinking, but the crucial element is what functions a machine can perform, what tasks it can accomplish, and whether we call that "thinking" or not doesn't seem important.

Maybe that was even Dijkstra's point, but it's hard to tell without context...

mdp2021 · 44m ago
But the topic here is whether some techniques are progressive or not

(with a curious parallel about whether some paths in thought are dead-ends - the unproductive focus mentioned in the article).

NitpickLawyer · 2h ago
Finally! A good take on that paper. I saw that Ars Technica article posted everywhere, and most of the comments are full of confirmation bias, and almost all of them miss the fine print: it was tested on a 4-layer-deep toy model. It's nice to read a post that actually digs deeper and offers perspectives on what might be a good finding vs. just warranting more research.
stonemetal12 · 1h ago
> it was tested on a 4 layer deep toy model

How do you see that impacting the results? It is the same algorithm, just on a smaller scale. I would assume a 4-layer model would not be very good, but does reasoning improve it? Is there a reason scale would impact the use of reasoning?

azrazalea_debt · 50m ago
A lot of current LLM work is basically emergent behavior. They use a really simple core algorithm and scale it up, and interesting things happen. You can read some of Anthropic's recent papers to see some of this: for example, they didn't expect LLMs could look ahead when writing poetry. However, when they actually went in and watched what was happening (there are details on how this "watching" works on their blog/in their studies), they found the LLM actually was planning ahead! That's emergent behavior: they didn't design it to do that, it just started doing it due to the complexity of the model.

If (BIG if) we ever do see actual AGI, it is likely to work like this. It's unlikely we're going to make AGI by designing some grand cathedral of perfect software; it is more likely we are going to find the right simple principles to scale big enough for AGI to emerge. This is similar.


NitpickLawyer · 48m ago
There's prior research that finds a connection between model depth and "reasoning" ability - https://arxiv.org/abs/2503.03961

A depth of 4 is very small. It is very much a toy model. It's ok to research this, and maybe someone will try it out on larger models, but it's totally not ok to lead with the conclusion, based on this toy model, IMO.

okasaki · 55m ago
Human babies are the same algorithm as adults.
sixdimensional · 32m ago
I feel like the fundamental concept of symbolic logic[1] as a means of reasoning fits within the capabilities of LLMs.

Whether it's a mirage or not, the ability to produce a symbolically logical result that has valuable meaning seems real enough to me.

Especially since most meaning is assigned by humans onto the world... so too can we choose to assign meaning (or not) to the output of a chain of symbolic logic processing?

Edit: maybe it is not so much that an LLM calculates/evaluates the result of symbolic logic as it is that it "follows" the pattern of logic encoded into the model.

[1] https://en.wikipedia.org/wiki/Logic

slashdave · 38m ago
> reasoning probably requires language use

The author has a curious idea of what "reasoning" entails.

mentalgear · 1h ago
> Whether AI reasoning is “real” reasoning or just a mirage can be an interesting question, but it is primarily a philosophical question. It depends on having a clear definition of what “real” reasoning is, exactly.

It's pretty easy: causal reasoning. Causal, not just the statistical correlation that LLMs do, with or without "CoT".

glial · 1h ago
Correct me if I'm wrong, but I'm not sure it's so simple. LLMs are called causal models in the sense that earlier tokens "cause" later tokens, that is, later tokens are causally dependent on what the earlier tokens are.

If you mean deterministic rather than probabilistic, even Pearl-style causal models are probabilistic.

I think the author is circling around the idea that their idea of reasoning is to produce statements in a formal system: to have a set of axioms, a set of production rules, and to generate new strings/sentences/theorems using those rules. This approach is how math is formalized. It allows us to extrapolate - make new "theorems" or constructions that weren't in the "training set".
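To make "axioms plus production rules" concrete, here is a toy string-rewriting sketch (plain Python, loosely in the spirit of Hofstadter's MIU puzzle; nothing to do with the paper under discussion): new "theorems" get generated that were never explicitly listed.

    # Toy formal system: one axiom plus two production rules.
    # Rule 1: a string ending in "I" may have "U" appended.
    # Rule 2: "Mx" may be rewritten as "Mxx" (double everything after "M").
    axiom = "MI"

    def apply_rules(s):
        results = set()
        if s.endswith("I"):
            results.add(s + "U")          # Rule 1
        if s.startswith("M"):
            results.add("M" + s[1:] * 2)  # Rule 2
        return results

    # Generate "theorems" by repeatedly applying the rules to what we have so far.
    theorems = {axiom}
    for _ in range(3):
        new = set()
        for t in theorems:
            new |= apply_rules(t)
        theorems |= new

    print(sorted(theorems))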

stonemetal12 · 54m ago
Smoking increases the risk of getting cancer significantly. We say Smoking causes Cancer. Causal reasoning can be probabilistic.

LLMs are not doing causal reasoning because there are no facts, only tokens. For the most part you can't ask an LLM how it came to an answer, because it doesn't know.
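A minimal sketch of "causal but probabilistic" (invented numbers, a hypothetical two-variable model): the causal link is real even though the outcome is only a shift in probability.

    import random

    # Hypothetical structural model: smoking raises the probability of cancer
    # without determining it. Numbers are invented for illustration only.
    P_CANCER_GIVEN_SMOKING = 0.15
    P_CANCER_GIVEN_NO_SMOKING = 0.01

    def sample_outcome(smokes: bool) -> bool:
        p = P_CANCER_GIVEN_SMOKING if smokes else P_CANCER_GIVEN_NO_SMOKING
        return random.random() < p

    # Intervening on the cause shifts the distribution of the effect,
    # which is what "smoking causes cancer" asserts probabilistically.
    trials = 100_000
    rate_smokers = sum(sample_outcome(True) for _ in range(trials)) / trials
    rate_nonsmokers = sum(sample_outcome(False) for _ in range(trials)) / trials
    print(rate_smokers, rate_nonsmokers)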

jayd16 · 1h ago
By this definition a bag of answers is causal reasoning because we previously filled the bag, which caused what we pulled. State causing a result is not causal reasoning.

You need to actually have something that deduces a result from a set of principles that form a logical conclusion, or the understanding that more data is needed to make a conclusion. That is clearly different from finding a likely next token on statistics alone, despite the fact that the statistical answer can be correct.

apples_oranges · 1h ago
But let's say you change a mathematical expression by reducing or expanding it somehow. Then, unless it's trivial, there are infinitely many ways to do it, and the "cause" here is the answer to the question "why did you do that and not something else?" Brute force excluded, the cause is probably some idea, some model of the problem, or a gut feeling (or desperation...).
lordnacho · 1h ago
What's stopping us from building an LLM that can build causal trees, rejecting some trees and accepting others based on whatever evidence it is fed?

Or even a causal tool for an LLM agent, working the way it does when you ask it about math and it forwards the request to Wolfram.

suddenlybananas · 52m ago
>What's stopping us from building an LLM that can build causal trees, rejecting some trees and accepting others based on whatever evidence it is fed?

Exponential time complexity.

mdp2021 · 52m ago
> causal reasoning

You have missed the foundation: before dynamics, being. Before causal reasoning you need a deep definition of concepts. Causality is "below" that.

naasking · 1h ago
Define causal reasoning?
skybrian · 51m ago
Mathematical reasoning does sometimes require correct calculations, and if you get them wrong your answers will be wrong. I wouldn’t want someone doing my taxes to be bad at calculation or bad at finding mistakes in calculation.

It would be interesting to see if this study’s results can be reproduced in a more realistic setting.

robviren · 1h ago
I feel it is interesting but not what would be ideal. I really think if the models could be less linear and process over time in latent space you'd get something much more akin to thought. I've messed around with attaching reservoirs at each layer using hooks, with interesting results (mainly overfitting), but it feels like such a limitation to have all model context/memory stuck as tokens when latent space is where the richer interaction lives. Would love to see more done where thought over time mattered and the model could almost mull over the question a bit before being obligated to crank out tokens. Not an easy problem, but interesting.
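For readers unfamiliar with the hook trick being described, a minimal sketch (assuming PyTorch, with a toy stack of linear layers standing in for a transformer; not the commenter's actual setup) of capturing per-layer latent states into "reservoirs" via forward hooks:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a transformer: a stack of layers we can hook into.
    model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(4)])

    # "Reservoir": one buffer per layer that accumulates hidden states over time.
    reservoirs = {i: [] for i in range(len(model))}

    def make_hook(layer_idx):
        def hook(module, inputs, output):
            # Stash a detached copy of this layer's latent state.
            reservoirs[layer_idx].append(output.detach())
        return hook

    handles = [layer.register_forward_hook(make_hook(i)) for i, layer in enumerate(model)]

    # Run a few "timesteps"; the reservoirs now hold latent trajectories that
    # later components could read from, instead of everything living in tokens.
    for _ in range(3):
        model(torch.randn(1, 64))

    for h in handles:
        h.remove()
    print({i: len(v) for i, v in reservoirs.items()})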
CuriouslyC · 48m ago
They're already implementing branching thought and taking the best one, eventually the entire response will be branched, with branches being spawned and culled by some metric over the lifetime of the completion. It's just not feasible now for performance reasons.
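A rough sketch of the spawn-and-cull idea (generate() and score() are hypothetical stand-ins for a real model and metric, not any vendor's actual implementation):

    import random

    def generate(prefix: str) -> str:
        """Hypothetical stand-in for one more chunk of model output."""
        return prefix + random.choice([" ...step A", " ...step B", " ...step C"])

    def score(text: str) -> float:
        """Hypothetical quality metric (e.g. a verifier or reward model)."""
        return random.random()

    def branching_completion(prompt: str, width: int = 4, keep: int = 2, steps: int = 5) -> str:
        branches = [prompt]
        for _ in range(steps):
            # Spawn: extend every surviving branch several ways.
            candidates = [generate(b) for b in branches for _ in range(width)]
            # Cull: keep only the best-scoring branches.
            candidates.sort(key=score, reverse=True)
            branches = candidates[:keep]
        return branches[0]

    print(branching_completion("Let's think this through."))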
dkersten · 1h ago
Agree! I'm not an AI engineer or researcher, but it always struck me as odd that we would serialise the 100B or whatever parameters of latent space down to a maximum of 1M tokens and back for every step.
vonneumannstan · 1h ago
>I feel it is interesting but not what would be ideal. I really think if the models could be less linear and process over time in latent space you'd get something much more akin to thought.

Please stop, this is how you get AI takeovers.

adastra22 · 1h ago
Citation seriously needed.
lawrence1 · 15m ago
We should be asking if reasoning while speaking is even possible for humans. This is why we have the scientific method, and that's why LLMs write and run unit tests on their reasoning. But yeah, intelligence is probably in the ear of the believer.
moc_was_wronged · 54m ago
Mostly. It gives language models a way to dynamically allocate computation time, but the models are still fundamentally imitative.
mucho_mojo · 1h ago
This paper, which I found here, has an interesting mathematical model for reasoning based on cognitive science: https://arxiv.org/abs/2506.21734 (there is also code at https://github.com/sapientinc/HRM). I think we will see dramatic performance increases on "reasoning" problems when this is worked into existing AI architectures.
naasking · 1h ago
> Because reasoning tasks require choosing between several different options. “A B C D [M1] -> B C D E” isn’t reasoning, it’s computation, because it has no mechanism for thinking “oh, I went down the wrong track, let me try something else”. That’s why the most important token in AI reasoning models is “Wait”. In fact, you can control how long a reasoning model thinks by arbitrarily appending “Wait” to the chain-of-thought. Actual reasoning models change direction all the time, but this paper’s toy example is structurally incapable of it.

I think this is the most important critique that undercuts the paper's claims. I'm less convinced by the other point. I think backtracking and/or parallel search is something future papers should definitely look at in smaller models.

The article is definitely also correct about the overreaching, broad philosophical claims that seem common when discussing AI and reasoning.
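To make the quoted "Wait" trick concrete, a minimal sketch (llm_continue() is a hypothetical stand-in for a real model call, not an actual API): each time the model would stop, "Wait" is appended so it keeps reasoning and gets a chance to change direction.

    def llm_continue(chain_of_thought: str) -> str:
        """Hypothetical stand-in for a reasoning model extending its chain of thought."""
        return chain_of_thought + " ... reconsidering the last step ..."

    def think_longer(chain_of_thought: str, extra_rounds: int = 2) -> str:
        # Appending "Wait" forces the model to keep generating reasoning tokens,
        # which is how thinking length can be controlled from the outside.
        for _ in range(extra_rounds):
            chain_of_thought += "\nWait,"
            chain_of_thought = llm_continue(chain_of_thought)
        return chain_of_thought

    print(think_longer("First, assume the mapping shifts each letter by one."))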

empath75 · 1h ago
One thing that LLMs have exposed is how much of a house of cards all of our definitions of "human mind"-adjacent concepts are. We have a single example in all of reality of a being that thinks like we do, and so all of our definitions of thinking are inextricably tied with "how humans think", and now we have an entity that does things which seem to be very like how we think, but not _exactly like it_, and a lot of our definitions don't seem to work any more:

Reasoning, thinking, knowing, feeling, understanding, etc.

Or at the very least, our rubrics and heuristics for determining if someone (thing) thinks, feels, knows, etc, no longer work. And in particular, people create tests for those things thinking that they understand what they are testing for, when _most human beings_ would also fail those tests.

I think a _lot_ of really foundational work needs to be done on clearly defining a lot of these terms and putting them on a sounder basis before we can really move forward on saying whether machines can do those things.

gdbsjjdn · 1h ago
Congratulations, you've invented philosophy.
meindnoch · 1h ago
We need to reinvent philosophy. With JSON this time.
empath75 · 1h ago
This is an obnoxious response. Of course I recognize that philosophy is the solution to this. What I am pointing out is that philosophy has not as of yet resolved these relatively new problems. The idea that non-human intelligences might exist is of course an old one, but that is different from having an actual (potentially) existing one to reckon with.
gdbsjjdn · 7m ago
> Writings on metacognition date back at least as far as two works by the Greek philosopher Aristotle (384–322 BC): On the Soul and the Parva Naturalia

We built a box that spits out natural language and tricks humans into believing it's conscious. The box itself actually isn't that interesting, but the human side of the equation is.

adastra22 · 58m ago
These are not new problems though.
deadbabe · 51m ago
Non-human intelligences have always existed in the form of animals.

Animals do not have spoken language the way humans do, so their thoughts aren’t really composed of sentences. Yet, they have intelligence and can reason about their world.

How could we build an AGI that doesn’t use language to think at all? We have no fucking clue and won’t for a while because everyone is chasing the mirage created by LLMs. AI winter will come and we’ll sit around waiting for the next big innovation. Probably some universal GOAP with deeply recurrent neural nets.

mdp2021 · 58m ago
> which seem to be very like how we think

I would like to reassure you that we - we here - see LLMs as very much unlike us.

empath75 · 49m ago
Yes I very much understand that most people do not think that LLMs think or understand like we do, but it is _very difficult_ to prove that that is the case, using any test which does not also exclude a great deal of people. And that is because "thinking like we do" is not at all a well-defined concept.
mdp2021 · 41m ago
> exclude a great deal of people

And why should you not exclude them? Where does this idea come from, taking random elements as models? Where do you see pedestals of free access? Is the Nobel Prize a raffle now?

mwkaufma · 1h ago
Betteridge's law applies to editors adding question marks to cover the ass of articles with weak claims, not to bloggers begging questions.