AI "reasoning" models don't reason at all

3 points by doener · 2 comments · 6/8/2025, 2:05:13 PM · twitter.com ↗

Comments (2)

aurizon · 6h ago
AI 'reasoning' is akin to password cracking, where every combination of the allowed symbol set is tried in turn until the locked door opens. Analogously, the AI generates and tests code until one version works, and similarly hacks code with a blizzard of variants until an entry point (a bug) is found, then records it and continues. The speed and parallelism are not a reasoned path; they are an exhaustive path that, hopefully, finds the hole. It also produces 'hallucinations', which a reasoning mind recognises but these early-stage AIs often do not.
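
A minimal sketch of the exhaustive-search analogy the comment draws: enumerate every combination of an allowed symbol set until a check passes. The `is_correct` predicate is a hypothetical stand-in for "the locked door opens" (or "the generated code passes its tests"); this illustrates brute force, not how an LLM actually works.

```python
from itertools import product

SYMBOLS = "abcdefghijklmnopqrstuvwxyz0123456789"

def brute_force(is_correct, max_len=4):
    """Try every string over SYMBOLS up to max_len; return the first hit."""
    for length in range(1, max_len + 1):
        for combo in product(SYMBOLS, repeat=length):
            candidate = "".join(combo)
            if is_correct(candidate):
                return candidate  # "record and continue": stop at the first hit
    return None

# Example: "crack" a known 3-character secret.
print(brute_force(lambda s: s == "ab3"))  # -> "ab3"
```

Note the cost: the search space grows as |SYMBOLS|^length, which is why this pattern only works with massive speed and parallelism, never with insight.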
JPLeRouzic · 6h ago
LLMs are not based on brute-force algorithms. They are based on finding the best string of tokens (for text-based LLMs) that corresponds to a prompt.

To illustrate this, you can watch how DeepSeek "thinks": its reasoning traces are full of phrases like "Given the instructions, if we...", "Alternatively, we can...", "Also, if...", "wait", "but", etc.
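
A minimal sketch of the distinction being drawn, under toy assumptions: an LLM does not enumerate all possible strings; at each step it scores every token in its vocabulary and extends the sequence with a high-probability choice. The `score` function and `VOCAB` here are hypothetical placeholders; a real model computes these scores (logits) with a neural network conditioned on the prompt and the tokens generated so far.

```python
import math
import random

VOCAB = ["Given", "the", "instructions", "Alternatively", "wait", "but", "<eos>"]

def softmax(logits):
    """Convert raw scores into a probability distribution over the vocabulary."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt_tokens, score, max_steps=10):
    """Autoregressive decoding: pick one scored token per step, not all strings."""
    tokens = list(prompt_tokens)
    for _ in range(max_steps):
        probs = softmax([score(tokens, t) for t in VOCAB])
        # Sample the next token from the distribution -- no exhaustive search.
        next_token = random.choices(VOCAB, weights=probs, k=1)[0]
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return tokens

# Toy scorer, purely illustrative: mildly prefers shorter tokens.
print(generate(["Given"], lambda ctx, t: -len(t) / 3))
```

The design point: each step is a single weighted choice over the vocabulary, so generating n tokens costs n forward passes, not an exponential sweep over every candidate string.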