I definitely would be okay if we hit an AI winter; our culture and world cannot adapt fast enough for the change we are experiencing. In the meantime, the current level of AI is just good enough to make us more productive, but not so good as to make us irrelevant.
bitmasher9 · 31m ago
I think negative feedback loops of AIs trained on AI generated data might lead to a position where AI quality peaks and slides backwards.
sgt101 · 12m ago
Thank goodness we have version control systems then.
extr · 3h ago
I find Gary's arguments increasingly semantic and unconvincing. He lists several examples of how LLMs "fail to build a world model", but his definition of "world model" is an informal hand-wave ("a computational framework that a system (a machine, or a person or other animal) uses to track what is happening in the world"). His examples are lifted from a variety of unclear or obsolete models - what is his opinion of o3? Why doesn't he create or propose a benchmark that researchers could use to measure progress on "world model creation"?
What's more, his actual point is unclear. Even if you simply grant, "okay, even SOTA LLMs don't have world models", why do I as a user of these models care? Because the models could be wrong? Yes, I'm aware. Nevertheless, I'm still deriving substantial personal and professional value from the models as they stand today.
squirrel · 2h ago
He cites o3 and o4-mini as examples of LLMs that play illegal chess moves.
Lerc · 2h ago
I don't understand the reasoning behind concluding that if something fails a task that requires reasoning, it therefore cannot reason.
To use chess as an example: humans sometimes play illegal moves. That does not mean humans cannot reason. It is an instance of failing to demonstrate reasoning, not proof of an inability to reason.
voidhorse · 1h ago
I don't think that's a fair representation of the argument.
The argument is not "here's one failure case, therefore they don't reason". The argument is that if you give an LLM problem instances outside its training set, in domains with clear structural rules, it will systematically fail to solve them. The argument then goes that LLMs must not have an actual model or understanding of the rules, as they seem to be capable of solving only problems in the training set. That is, they have failed to figure out how to solve novel instances of general problem structures using logical reasoning.
Their strict dependence on having seen exact or extremely similar concrete instances suggests that they don't actually generalize—they just compute a probability based on known instances—which everyone knew already. The problem is we just have a lot of people claiming they are capable of more than this because they want to make a quick buck in an insane market.
Lerc · 1h ago
That still seems unfalsifiable. If it fails an instance, the claim is that the failure is representative of things outside the training set. If it succeeds, the claim is that the instance is in the training set. Without a definitive way to say something is not in the training set (a likely impossible task), the measure of success or failure is the only indicator of the purported reason for the success or failure.
Given that models can get things wrong even when the training data contains the answer, failure cannot show absence.
voidhorse · 1h ago
I do think there are cases in which, in controlled environments, there is some degree of knowledge as to what is in the training set. I also don't think it's as impossible as you assume.
If you really wanted to ensure this with certainty, just use the natural numbers to parameterize an aspect of a general problem. Assume there are N foo problems in the training set; then there is always an (N+1)th parameterization not in the training set, and you can use this as an indicative case. Go ahead and generate an insane number of these, and eventually the probability that the Mth instance is not in the set is effectively 1.
Edit: Of course, it would not be perfect certainty, but it is probabilistically effectively certain. The number of problem instances in the set is necessarily finite, so if you go large enough you get what you need. Sure, you wouldn't be able to say that a specific problem instance is not in the set, but the aggregate results would evidence whether or not the LLM deals with all cases or (on assumption) just known ones.
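To make the parameterization concrete, here's a minimal sketch in Python. The model call itself is deliberately left out; `make_novel_addition` is an illustrative name, and the operand width is chosen so that collisions with any finite training corpus are vanishingly unlikely:

```python
import random

def make_novel_addition(n_digits, seed=None):
    """Generate an n-digit addition problem. With 50-digit operands there
    are ~10^100 possible pairs, so the chance that any particular pair
    appears verbatim in a finite training corpus is effectively zero."""
    rng = random.Random(seed)
    a = rng.randrange(10 ** (n_digits - 1), 10 ** n_digits)
    b = rng.randrange(10 ** (n_digits - 1), 10 ** n_digits)
    return f"What is {a} + {b}?", a + b

prompt, answer = make_novel_addition(50, seed=1)
# Feeding `prompt` to a model and scoring against `answer` is left as an
# exercise; the aggregate pass rate over many such prompts, not any
# single instance, is the statistic the argument above relies on.
```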
Lerc · 1h ago
Well, there are models that can sum two many-digit numbers. They certainly have not been trained on every pair of integers up to that length. That either makes the claim that they can't do things they haven't seen trivially false, or it means the criteria for counting something as being in the training data include a degree of inference.
What happens when someone claims they have gotten a model to do something not in the training data, and another person claims it must be encoded in the training data in some form? It seems like an impasse.
energy123 · 1h ago
The lack of rigor and evidence behind the argument is the problem.
voidhorse · 3h ago
I think the point is that category errors or misinterpreting what a tool does can be dangerous.
Both statistical data generators and actual reasoning are useful in many circumstances, but there are also circumstances in which thinking that you are doing the latter when you are only doing the former can have severe consequences (example: building a bridge).
If nothing else, his perspective is a counterbalance to what is clearly an extreme hype machine that is doing its utmost to force adoption through overpromising, false advertising, etc. These are bad things even if the tech does actually have some useful applications.
As for benchmarks, if you fundamentally don't believe that stochastic data generation leads to reason as an emergent property, developing a benchmark is pointless. Also, not everyone has to be on the same side. It's clear that Marcus is not a fan of the current wave. Asking him to produce a substantive contribution that would help them continue to achieve their goals is preposterous. This game is highly political too. If you think the people pushing this stuff are less than estimable or morally sound, you wouldn't really want to empower them or give them more ideas.
NitpickLawyer · 2h ago
> If nothing else, his perspective is a counterbalance to what is clearly an extreme hype machine that is doing its utmost to force adoption through overpromising, false advertising, etc. These are bad things even if the tech does actually have some useful applications.
In other words, overhyped in the short term, underhyped in the long term. Where short and long term are extremely volatile.
Take programming as an example. 2.5 years ago, gpt3.5 was seen as "cute" in the programming world. Oh, look, it does poems and e-mails, and the code looks like Python but it's wrong 9 times out of 10. But now a 24B model can handle end-to-end SWE tasks 0-shot much of the time.
nmadden · 1h ago
The improvements in programming are largely due to the adoption of "agentic" architectures. This is really a hybrid neural-symbolic approach, the symbolic part being the interpreter/compiler. Effectively the LLM still produces an almost-correct-but-wrong program, the compiler "fact-checks" it, and the LLM basically local-searches its way from there to something that passes the compiler. (If you want to be disabused of the idea that LLMs on their own are good at programming, just review the "reasoning" log of one trying to fix a simple `string | undefined` error in TypeScript.)
It seems clear to me therefore that further improvements in programming ability will not come from better LLM models (which have not really improved much), but from better integration of more advanced compilers. That is, the more types of errors the compiler can catch, the better the chance of the AI fuzzing its way to a good overall solution. Interestingly, I hear anecdotally that current LLMs are not great at writing Rust, which does have an advanced type system able to capture more kinds of errors. That's where I'd focus if I were working on this. But we should be clear that the improvements are already largely coming via symbolic means, not better LLMs.
I wrote some notes about a year ago on the irony of LLMs being considered a refutation of GOFAI when they are actually now firmly recapitulating that paradigm: https://neilmadden.blog/2024/06/30/machine-learning-and-the-...
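A minimal sketch of the loop described above, using Python's own `compile` built-in as a stand-in for the symbolic checker (a real setup would shell out to `tsc`, `rustc`, etc.); `generate` is a hypothetical LLM callable, not a real API:

```python
def compile_check(source):
    """Stand-in symbolic checker: Python's own compiler. Returns the
    diagnostic string, or None if the program is accepted."""
    try:
        compile(source, "<candidate>", "exec")
        return None
    except SyntaxError as e:
        return str(e)

def agentic_loop(generate, task, max_rounds=5):
    """Hybrid neural-symbolic loop: the LLM (`generate`, a hypothetical
    callable taking the task and the last diagnostic) proposes code, the
    compiler fact-checks it, and the error is fed back until it passes."""
    feedback = ""
    for _ in range(max_rounds):
        source = generate(task, feedback)
        error = compile_check(source)
        if error is None:
            return source
        feedback = error  # the "local search" step: retry with the diagnostic
    return None
```

The stronger the checker (type checker vs. mere parser), the more of the search the symbolic side does for the model.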
> The improvements in programming are largely due to the adoption of “agentic” architectures.
Yes, I agree. But it's not just the cradles, it's cradles + training on traces produced with those cradles. You can test this very easily by running old models with new cradles: they don't perform well at all. (One of the first things I did when guidance, a guided-generation framework, launched ~2 years ago was to test code-compile-edit loops. There were signs of it working, but nothing compared to what we see today. That had to be trained into the models.)
> will not come from better LLM models (which have not really improved much), but from better integration of more advanced compilers.
Strong disagree. They have to work together. This is basically why RL is gaining a lot of traction in this space.
Also disagree on LLMs not improving much. Whatever they did with Gemini 2.5 feels like the GPT-3-to-4 jump to me. The context updates are huge: this is the first model that can take 100k tokens and still work after that. They're doing something right to be able to support such large contexts with such good performance. I'd be surprised if Gemini 2.5 is just Gemini 1 + more data. Extremely surprised. There have to be architecture changes and improvements somewhere in there.
energy123 · 4h ago
Why was Anthropic's interpretability work not discussed? Inconvenient for the conclusion?
https://www.anthropic.com/news/tracing-thoughts-language-mod...
Speaking of chess, a fun experiment is setting up a few positions on, say, Lichess, taking a screenshot, and asking a state-of-the-art VLM to count the number of pieces on the board. In my experience, it had a much higher error rate in unlikely or impossible board situations (three kings on the board, etc.).
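This kind of experiment is easy to script around. A minimal sketch, assuming positions are described as FEN strings (the `plausible` checks are illustrative, not exhaustive):

```python
from collections import Counter

def count_pieces(fen):
    """Count pieces from the board field of a FEN string
    (letters are pieces; digits and '/' are layout)."""
    board = fen.split()[0]
    return Counter(c for c in board if c.isalpha())

def plausible(fen):
    """Cheap sanity checks: exactly one king per side,
    at most eight pawns each."""
    n = count_pieces(fen)
    return n["K"] == 1 and n["k"] == 1 and n["P"] <= 8 and n["p"] <= 8

# Position with three kings -- trivially detected as impossible here,
# yet (per the experiment above) it degrades a VLM's piece counting.
weird = "4k3/8/8/3K1K2/8/8/8/8 w - - 0 1"
```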
Animats · 51m ago
That LLMs are a black box and that LLMs lack an underlying model are both true, but orthogonal. It's possible to have a black box system which has an underlying model. That's true of many statistical prediction methods. Early attempts at machine learning were a white box with no underlying model. This is true of most curve-fitting.
The AI version was where you're trying to divide a high-dimensional space with a cutting plane to create a classifier. You can tell where the separating plane is, but not why.
The lack of a world model is a very real limitation in some problem spaces, starting with arithmetic. But this argument is unconvincing.
comp_throw7 · 49m ago
> LLMs lack an underlying model
Obviously false for any useful sense by which you might operationalize "world model". But agree re: being a black box and having a world model being orthogonal.
Animats · 1h ago
Note that this is the same problem engineers have talking to managers. The manager may lack a mental model of the task, but tries to direct it anyway.
But some words are redacted. So I've uploaded the picture to Gemini and asked it what the redacted words are, and it told me. Not sure if they are correct, and some are way too long to fit in the redacted black box, but it didn't refuse the request.
sdenton4 · 4h ago
"A wandering ant, for example, tracks where it is through the process of dead reckoning. An ant uses variables (in the algebraic/computer science sense) to maintain a readout of its location, even as as it wanders, constantly updated, so that it can directly return to its home."
Hm.
Dead reckoning is a terrible way to navigate, and famously left many ships wrecked on the coast of France before good clocks allowed tracking longitude accurately.
Ants lay down pheromone trails and use smell to find their way home... There's likely some additional tracking going on, but I would be surprised if it looked anything like symbolic GOFAI.
deadbabe · 4h ago
Even if you find a pheromone trail, it doesn’t tell you what direction is home, or what path to take at branching paths. You need dead reckoning. The trail just helps you reduce the complexity of what you have to remember.
The trail also leads the other ants to food; it's hard for them to use your own dead reckoning.
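Path integration of this kind is easy to sketch: sum each step's displacement vector, and the home vector is just the negation of the running sum. A toy model, not a claim about ant neurology:

```python
import math

class DeadReckoner:
    """Toy path integrator: accumulate each step's displacement;
    the home vector is the negated running sum."""
    def __init__(self):
        self.x = 0.0
        self.y = 0.0

    def step(self, heading_rad, distance):
        self.x += distance * math.cos(heading_rad)
        self.y += distance * math.sin(heading_rad)

    def home_vector(self):
        """Direction and distance straight back to the nest --
        exactly what a pheromone trail alone cannot give you."""
        return math.atan2(-self.y, -self.x), math.hypot(self.x, self.y)

ant = DeadReckoner()
ant.step(0.0, 3.0)                 # 3 units east
ant.step(math.pi / 2, 4.0)         # 4 units north
heading, dist = ant.home_vector()  # dist == 5.0 (3-4-5 triangle)
```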
voidhorse · 3h ago
The whole thing is silly. Look, we know that LLMs are just really good word predictors. Any argument that they are thinking is essentially predicated on marketing materials that embrace anthropomorphic metaphors to an extreme degree.
Is it possible that reason could emerge as the byproduct of being really good at predicting words? Maybe, but this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic. It's not obvious to me that this is the case. Many people think in images as direct sense datum, and it's not clear that a digital representation of this is equivalent to the thing in itself.
To use an example another HN'er suggested: we don't claim that submarines are swimming. Why are we so quick to claim that LLMs are "reasoning"?
SubiculumCode · 2h ago
I was having a discussion with Gemini. It claimed that because Gemini, as a large language model, cannot experience emotion, that the output of Gemini is less likely to be emotionally motivated. I countered that the experience of emotion is irrelevant. Gemini was trained on data written by humans who do experience emotion, who often wrote to express that emotion, and thus Gemini's output can be emotionally motivated, by proxy.
Velorivox · 2h ago
> Is it possible that reason could emerge as the byproduct of being really good at predicting words?
Imagine we had such marketing behind wheels — they move, so they must be like legs on the inside. Then we run around imagining what the blood vessels and bones must look like inside the wheel. Nevermind that neither the structure nor the procedure has anything to do with legs whatsoever.
Sadly, whoever named it artificial intelligence and neural networks likely knew exactly what they were doing.
cageface · 48m ago
> but this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic.
Most of these newer models are multi-modal, so tokens aren't necessarily linguistic.
comp_throw7 · 43m ago
What use of the word "reasoning" are you trying to claim that current language models knowably fail to qualify for, except that it wasn't done by a human?
rented_mule · 2h ago
> this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic. It's not obvious to me that this is the case
I'm with you on this. Software engineers talk about being in the flow when they are at their most productive. For me, the telltale sign of being in the flow is that I'm no longer thinking in English, but I'm somehow navigating the problem / solution space more intuitively. The same thing happens in many other domains. We learn to walk long before we have the language for all the cognitive processes required. I don't think we deeply understand what's going on in these situations, so how are we going to build something to emulate it? I certainly don't consciously predict the next token, especially when I'm in the flow.
And why would we try to emulate how we do it? I'd much rather have technology that complements. I want different failure modes and different abilities so that we can achieve more with these tools than we could by just adding subservient humans. The good news is that everything we've built so far is succeeding at this!
We'll know that society is finally starting to understand these technologies and how to apply them when we are able to get away from using science fiction tropes to talk about them. The people I know who develop LLMs for a living, and the others I know that are creating the most interesting applications of them, already talk about them as tools without any need to anthropomorphize. It's sad to watch their frustration as they are slowed down every time a person in power shows up with a vision based on assumptions of human-like qualities rather than a vision informed by the actual qualities of the technology.
Maybe I'm being too harsh or impatient? I suppose we had to slowly come to understand the unique qualities of a "car" before we could stop limiting our thinking by referring to it as a "horseless carriage".
voidhorse · 2h ago
Couldn't agree more. I look forward to the other side of this current craze where we actually have reasonable language around what these machines are best for.
On a more general level, I also never understood this urge to build machines that are "just like us". Like you, I want machines that, arguably, are best characterized by the ways in which they are not like us—more reliable, more precise, serving a specific function. It's telling that critiques of the failures of LLMs are often met with "humans have the same problems"—why are humans the bar? We have plenty of humans. We don't need more humans. If we're investing so much time and energy, shouldn't the bar be better than humans? And if it isn't, why isn't it? Oh, right, it's because human error is actually good enough, and the real benefit of these tools is that they are humans that can work without break, don't have autonomy, and that you don't need to listen to or pay. The main beneficiaries of this path are capital owners who just want free labor. That's literally all this is. People who actually want to build stuff want precision machines tailored for the task at hand, not a grab bag of sort-of-works-sometimes stochastic doohickeys.
etaioinshrdlu · 2h ago
I don't think it's accurate anymore to say LLMs are just really good word predictors. Especially in the last year, they are trained with reinforcement learning to solve specific problems. They are functions that predict next tokens, but the function they are trained to approximate doesn't have to be just plain internet text.
voidhorse · 2h ago
Yeah, that's fair. It's probably more accurate to call them sequence predictors or general data predictors than to limit it to words (unless we mean words in the broad, mathematical sense): they are free monoid emulators.
https://arxiv.org/abs/2506.01622
> Are world models a necessary ingredient for flexible, goal-directed behaviour, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of generalizing to multi-step goal-directed tasks must have learned a predictive model of its environment. We show that this model can be extracted from the agent's policy, and that increasing the agent's performance or the complexity of the goals it can achieve requires learning increasingly accurate world models. This has a number of consequences: from developing safe and general agents, to bounding agent capabilities in complex environments, and providing new algorithms for eliciting world models from agents.
voidhorse · 1h ago
I only skimmed it so far, but this seems to only argue against the functional import of the OP, not its philosophical import.
On my reading, the philosophical claim is that these models do not develop an actual logical, internal representation of domains.
The functional import is whether or not they are able to realize specific behaviors within a domain. The paper argues that a Markov process can realize the functional equivalent of the initial goal-oriented picture of its domain—that is, it can solve goals within an error bound—but not that it develops an actual representation of the domain.
Lack of an actual representation prevents such a machine from doing other things. For example, iiuc, it would be unable to solve problems in domains that are homomorphic to the original, while an explicit representation does enable this.