Why Cannot Large Language Models Ever Make True Correct Reasoning?

3 points by Mallowram | 1 comment | 8/26/2025, 1:00:40 PM | arxiv.org ↗

Comments (1)

Mallowram · 5h ago
If we're going to be scientific about LLMs, the first thing engineering has to grasp is that words aren't the sum of knowledge, not by a long shot. Words are arbitrary stand-ins for specific neural syntax that is nonetheless paradoxically idiosyncratic. Nothing arbitrary can be automated without the underlying neural syntax. LLMs discard the basis of knowledge and run on empty. AI is essentially vaporware as a general tool. Time to set off in new directions.

“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” — Ev Fedorenko, Language Lab, MIT, 2024