Apple’s recent paper on the limits of AI reasoning is an uncomfortable but important read.
Instead of relying on standard benchmarks, the authors designed controlled environments—like Tower of Hanoi and River Crossing puzzles—to test how models handle increasing compositional complexity. The results: performance doesn’t taper off, it collapses. And even when the models fail, they continue to produce fluent, structured reasoning traces that sound convincing but fall apart logically.
If you’re building on top of LLMs or reasoning-augmented models, it’s well worth a look.
ForHackernews · 9h ago
Curious about the use of the word "uncomfortable" -- for people working on AI who thought that LLMs or L"R"Ms were a path to AGI?
To me, that paper was reassuring that I wasn't taking crazy pills. I've worked with these tools to produce code, and they routinely make mistakes that no thinking entity (yes, I've worked with some dimwitted junior devs) ever would. Yes, they are powerful and useful tools, but they're not "thinking" in any meaningful sense (defined here as rigorously determining an algorithm and applying it correctly).
salviati · 9h ago
If you ask me to solve increasingly difficult Tower of Hanoi problems, I don't expect to be good at it. Neither would I expect a fellow human to be. So, based on this, should we question our intelligence?
I heard about that paper through an "AI explained" video [0], so I might be biased, but I agree with that video that the Apple paper is "meh" at best: it points out LLM limitations that are hardly a surprise.
[0] https://www.youtube.com/watch?v=wPBD6wTap7g
Probably the difference between you and the AI is that you would acknowledge that it's too difficult for you, rather than bullshitting your way through.
saithound · 8h ago
That's _exactly_ what the LLM did: the article's authors decided to count that as a failure.
vincnetas · 8h ago
Hm, I was reading only TFA, not the research paper. But TFA mentions this:
Perhaps the most unsettling finding is what failure looks like. Even when models are completely wrong, they sound persuasive. The reasoning is fluent, the explanations are structured, and the conclusions are confidently delivered. But the logic doesn’t hold.
rcarmo · 7h ago
That sounds a lot like a salesperson. And yes, there is a human tendency to twist reasoning to make the written word look polished, and I don’t think LLM training has fixed that bias.
jsnell · 9h ago
This paper, rebuttals, and rebuttals to rebuttals have been on HN repeatedly over the last couple of weeks (including literally now). At this point a summary of the original paper doesn't seem like it's adding much.
E.g.
https://news.ycombinator.com/item?id=44203562
https://news.ycombinator.com/item?id=44221900
https://news.ycombinator.com/item?id=44234626
https://news.ycombinator.com/item?id=44278403
https://news.ycombinator.com/item?id=44286086
The so-called "reasoning" of LLM programs is really a sham, and the authors of those programs sometimes expose this themselves. For example, in Anthropic's article about Claude's "reasoning", when they get to the math section they ask the model to add two numbers and then to write out, step by step, how it did so. The LLM generates a human-style procedure, because that's what it copied from its training data, while the process by which the model actually adds the numbers is vastly different.
Basically, so-called "reasoning" is just the generation of additional intermediary output that resembles real reasoning without being it.
https://transformer-circuits.pub/2025/attribution-graphs/bio...
The chain of thought is not where a model's reasoning capabilities live: those capabilities are part of next-token inference. What CoT does is search/sample the model's space of representations and notions in order to "ground" the final reply, putting into the context window, explicitly, all the related knowledge and ideas the model possesses about the question.
It is absolutely obvious that algorithmic problems like the Tower of Hanoi can't benefit from this kind of sampling. Algorithmic puzzles are also a convenient choice for the paper's authors, because they form a verifiable domain, but they are very far from what we want models to do and from what models are good at. A model would solve such a problem by implementing the algorithm in Python and calling a tool to execute it; that is the easy way for models to solve such problems.
Moreover, in most benchmarks CoT improves LLM performance a lot, because sampling helps immensely in producing a better reply. So this paper's negative result runs against a very large body of experience of CoT being a powerful tool for LLMs, simply because most benchmarks operate in domains where sampling is very useful.
In short, the Apple paper mostly says things that were already obvious; it is as if the authors set out to reach a negative result. It was already widely understood that CoT can't perform algorithmic work by concatenating tokens, except in the most trivial ways. Yet it helps a lot when the task is to combine knowledge and ideas that already exist inside the model into a better reply.
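The tool-use route mentioned in that comment (write the algorithm, run it, read off the answer) is short enough to sketch. A minimal illustration of the classic recursive solution, not taken from the paper:

```python
def hanoi(n, src, dst, aux, moves):
    """Append the moves that transfer n disks from peg src to peg dst."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)   # clear the top n-1 disks out of the way
    moves.append((src, dst))             # move the largest remaining disk
    hanoi(n - 1, aux, dst, src, moves)   # restack the n-1 disks on top of it

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 2**8 - 1 = 255 moves
```

A model that emits this and calls a code-execution tool sidesteps unrolling hundreds of moves token by token, which is the failure mode the paper measures.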
pyman · 8h ago
What they're saying is that pattern-matching isn't the path to AGI. Humans and AI can both solve the Tower of Hanoi, but once the number of disks goes up, we both struggle.
Apple's point is that if we want to build something smarter than us, we need to look at intelligence and reasoning from a different angle.
rcarmo · 6h ago
Exploring how to consistently arrive at a negative result is still a valid research goal. I don't think we've had enough of that kind of research regarding LLMs; everything is so positive that it defies basic statistics…
archon1410 · 9h ago
The blog itself reads as if it was written by an LLM. (e.g. "This isn't about X, it's about Y." "... is timely ..." "X isn't Y".)
This might be a dumb question, and will inevitably showcase my ignorance in this field to others, but I'll risk it:
Why can't AI at a certain level execute algorithms with solutions that have been proved to work for a very long time?
What I mean is, the solution to the Tower of Hanoi problem is known. It does not take a lot of computational power to achieve the result. What is stopping an AI like the ones examined in the paper from executing such algorithms and gathering the solutions, as a human programmer would? Do they get sidetracked in the process because of the amount of tokens?
(edit: typo)
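One commonly offered answer to the "amount of tokens" question (my framing, not the paper's): even when the algorithm is fully known, the minimal Tower of Hanoi solution takes 2^n - 1 moves, so spelling the answer out move by move grows exponentially with the number of disks:

```python
# Minimal Tower of Hanoi solution length is 2**n - 1 moves,
# so the transcript a model must emit grows exponentially with n.
for n in (7, 10, 15, 20):
    print(f"{n} disks -> {2**n - 1} moves")
```

At 20 disks that is over a million moves to write out without a single slip, regardless of whether the underlying recipe is understood.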
pyman · 8h ago
If humanity moves to Mars one day and leaves behind all the AI servers running on solar power, then comes back a billion years later, the AI would still be saying the same things. Why? Because no matter how powerful it is, AI doesn't evolve or grow on its own.
crowie · 8h ago
Gotcha, but I didn't mean it that way. What I meant is that problems like the case-study ones don't need a revolutionary or original answer that would require growth; they can be solved with old solutions, which I'd assume are embedded in some way in these models' training datasets. Yes, the scope of the problem is bigger, but the correct answer should still come down to a correct implementation of the known algorithm. What I'm asking is: what causes the hindrance that prevents these AIs from performing appropriately on old problems with old solutions?
ryandvm · 6h ago
I like your thought experiment and I think you're correct, but that's because we never gave it the physical possibility of a feedback loop (a.k.a. evolution).
I think if you added a step where the LLMs tweak their own build process and redeploy, your experiment would have wildly different results.
Weird.
And it has been discussed to death already:
Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems) [https://www.lesswrong.com/posts/5uw26uDdFbFQgKzih/beware-gen...]
Seven replies to the viral Apple reasoning paper and why they fall short [https://news.ycombinator.com/item?id=44278403]