He is smart. He senses that the tide is turning. So he starts changing the messaging.
AGI was always just a vehicle to collect more money. AI people will have to find a new way now.
moi2388 · 12h ago
Didn’t OpenAI sign a deal with Microsoft that Microsoft gets full access to all their IP until OpenAI claims they have established AGI?
So it would be in OpenAI's best interest to at least work toward it, and to claim they are
tiberious726 · 6h ago
Didn't the terms of that deal define AGI as "an AI that generates at least 1 billion in annual revenue"?
aspenmayer · 10h ago
These articles answer a lot of the questions you’re raising. It’s in OpenAI’s interest to claim AGI hasn’t been achieved yet as long as it benefits them. They may be getting sweetheart deals on compute from Microsoft under the current arrangement. It’s possibly also beneficial for Microsoft, but I think they are in a somewhat different market as a compute provider rather than consumer.
https://www.wired.com/story/microsoft-and-openais-agi-fight-... | https://archive.is/yvpfl
OpenAI has been adding other compute providers, so this hurts Microsoft too, because OpenAI can use its already low pricing to underbid other compute providers who want the volume and access that being a provider at OpenAI’s scale would bring.
https://www.cnbc.com/2025/07/16/openai-googles-cloud-chatgpt... | https://archive.is/HGgWf
Microsoft is already seeking to plan for the eventuality where OpenAI exercises the option and effectively declares AGI.
https://www.bloomberg.com/news/articles/2025-07-29/microsoft... | https://archive.is/mLEmC
Some of us have seen these kinds of fads many, many times.
XML, Corba, Java, Startups, etc, etc.
Pump and dump.
Smart people collect the money from idealists.
d4rkn0d3z · 8h ago
It is remarkable how often this happens. We have a collection of separate but related technologies leading to the conception of a more general technology that does it all. We then proceed to build a towering inferno of complexity that is no doubt more general but less useful in specific instances. At this point, we conclude that what is needed are specialized tools for the separate use cases, so we promptly break up the general technology into many parts. Lather, rinse, repeat.
It's like living in an Escher painting.
a_bonobo · 10h ago
This feels like a motte-and-bailey:
>The motte-and-bailey fallacy (named after the motte-and-bailey castle) is a form of argument and an informal fallacy where an arguer conflates two positions that share similarities: one modest and easy to defend (the "motte") and one much more controversial and harder to defend (the "bailey")
The bailey: "We can build AGI, give us millions of dollars."
The motte: "I think the point of all of this is it doesn’t really matter and it’s just this continuing exponential of model capability that we’ll rely on for more and more things."
thrown-0825 · 12h ago
Going with the Musk pattern of over-commit, under-deliver, then gaslight the rubes.
iamleppert · 3h ago
It's not a super profitable term, either. I'm already running Qwen3 coder locally on my laptop and I don't need any AI service. Just like that, the financial ambitions of AI have been snuffed out.
Overpower0416 · 12h ago
Like always, people like him only say the things that help them reach their current goal. It doesn’t matter whether there is any truth to what they say. Moving goalposts, hyperbolic rhetoric, and manipulative marketing that reaches a large audience on an emotional level: that's the name of the game.
Elon Musk made this way of doing business popular, and now every hotshot tech CEO does it. But I guess it works, so people will continue doing it, since there are no repercussions.
jokoon · 9h ago
I wish they would use a fraction of that money to produce an interesting definition of intelligence, or to fund research in neurology, cognition, or psychology that could yield insights toward defining intelligence.
I wonder how they test their product, but I bet they don't use scientists from other fields, like psychology or neuroscience.
RA_Fisher · 8h ago
Doesn’t the term have contractual obligations for OpenAI and Microsoft?
wolvesechoes · 7h ago
Time for a different marketing strategy
itsalotoffun · 12h ago
I am shocked, shocked to hear that Sam is backpedalling on this.
nsonha · 11h ago
Any tech person could have made that statement from the beginning; only the clueless tech reporters/VCs bought into it, only to now feel betrayed. Can't sympathize with them, sorry.
ildon · 12h ago
I’m a bit surprised by some of the comments I’m reading, which tend to frame Altman’s words as nothing more than corporate self-interest.
Of course, it’s true that in his position he has to speak in ways that align with his company’s goals. That’s perfectly natural, and in fact it would be odd if he didn’t.
But that doesn’t mean there’s no truth in what he says. A company like his doesn’t choose its direction on a whim; these decisions are the product of intense internal debate, strategic analysis, and careful weighing of trade-offs. If there’s a shift in course, it’s unlikely to be just a passing fancy or a PR move detached from reality.
Personally, I’ve always thought that the pursuit of AGI as the goal was misguided. Human intelligence is extraordinary, but it is constrained by the physical and biological limitations of the "host machine" (not just the human brain). These are limits we cannot change. Artificial intelligence, on the other hand, has no such inherent ceiling. It can develop far beyond the capabilities of our own minds, and perhaps that’s where our focus should be.
TheOtherHobbes · 10h ago
His whole shtick for nine years has been touting impending AGI.
January this year:
"We are now confident we know how to build AGI as we have traditionally understood it."
https://blog.samaltman.com/reflections
But no! The goal is now ASI. Even though AGI hasn't been achieved - in the sense of being able to match the best of human intelligence at abstraction, formalisation, and basic letter counting - the plan is to leapfrog far beyond genius.
"We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word."
Meanwhile what we have is an idiot savant product that's sometimes useful but always easily confused, is somewhat dishonest and manipulative, lacks genuine empathy and insight - although it can fake a passable version of them - and even with all of those flaws is being sold as the perfect replacement for all those superfluous and annoying human employees no CEO wants to have to deal with.
Not a bicycle - or a sports car - for the mind, but an autonomous navigation system that handles most short journeys without major damage, but otherwise crashes a lot and runs people over.
Maxious · 6h ago
The compromising of the initial "Open" in OpenAI was also justified because of... ding ding ding AGI
> We spent a lot of time trying to envision a plausible path to AGI. In early 2017, we came to the realization that building AGI will require vast quantities of compute. We began calculating how much compute an AGI might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission—billions of dollars per year, which was far more than any of us, especially Elon, thought we’d be able to raise as the non-profit.
https://openai.com/index/openai-elon-musk/
Being alive in a fallible body isn't a limitation, it's a huge, huge advantage. It's the whole secret.
Why pretend that Nature itself is stupid and incompetent? The evidence is very much stacked against you and all the others who think evolution is some kind of hack job whose shoddy work we'll outdo in literally a few years of computation...
exe34 · 1h ago
> Artificial intelligence, on the other hand, has no such inherent ceiling. It can develop far beyond the capabilities of our own minds, and perhaps that’s where our focus should be.
Basically, bicycles are limited by muscles, so we should move straight to jet engines. In practice, we often have to go through the intermediate steps to learn how to do the bigger thing.