On the one hand, that isn't necessarily a problem. It can be just a useful algorithm for tool calling or whatever.
On the other hand, if you're telling your investors that AGI is about two years away, then you can only do that for a few years. Rumor has it that such claims were made. Hopefully no big investors actually believed them.
The real question to ask is whether, based on current applications of LLMs, one can pay for the hardware to sustain them. The comparison to smartphones is apt; by the time we got to the "Samsung Galaxy" phase, where only incremental improvements were coming, the industry was making a profit on each phone sold. Are any of the big LLMs actually profitable yet? And if they are, do they have any way to keep the DeepSeeks of the world from taking it away?
What happens if you built your business on a service that turns out to be hugely expensive to run and not profitable?
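As a back-of-envelope sketch of that question, here's the arithmetic involved; every number below is invented purely for illustration:

    # Break-even per GPU, all numbers hypothetical.
    gpu_cost = 30_000.0        # USD per GPU, invented
    lifetime_years = 4.0       # assumed useful life
    tokens_per_second = 1_000  # assumed sustained throughput per GPU
    price_per_m_tokens = 2.0   # USD charged per million tokens, invented

    seconds = lifetime_years * 365 * 24 * 3600
    revenue = tokens_per_second * seconds / 1e6 * price_per_m_tokens
    print(f"lifetime revenue per GPU: ${revenue:,.0f} vs hardware cost ${gpu_cost:,.0f}")
    # This omits power, cooling, datacenter, staff, and (crucially) model
    # training costs -- which is exactly why "profitable" is the open question.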
Salgat · 3m ago
>On the other hand, if you're telling your investors that AGI is about two years away, then you can only do that for a few years.
Musk has been doing this with autonomous driving since 2015. Machine learning has enough hype surrounding it that you have to embellish to keep up with every other company's ridiculous claims.
seanalltogether · 1h ago
Do we have a reasonable definition of what intelligence is? Or is it like defining porn: you just know it when you see it?
erikerikson · 43m ago
One of my favorites is efficient cross-domain maximization.
mmphosis · 1h ago
mayhaps a prediction by an Artificial General Intelligence that is already here
maxhille · 1h ago
I mean, there are different definitions of what counts as AGI. Most of the time people don't specify which one they use.
For me, AGI would mean truly at least human-level, as in "this clearly has a consciousness paired with knowledge", a.k.a. a person. In that case, what do the investors expect? Some sort of slave market of virtual people to exploit?
rationalpath · 1h ago
Feels like we’re all just betting on the biggest “what if” in history.
bpodgursky · 1h ago
It is critical to remember that there is a market for people who say "AGI is not coming".
It doesn't matter whether they are lying. People want to hear it. It's comforting. So the market fills the void, and people get views and money for saying it.
Don't use the fact that people are saying it, as evidence that it is true.
TheOtherHobbes · 1h ago
You can remove the "not" and everything you wrote is just as true. If not more so.
It's not the AGI sceptics who are getting $500bn valuations.
toasterlovin · 53m ago
Right, it’s exactly the opposite. What is the AI skeptic version of MIRI, for instance?
drdeca · 14m ago
I guess the AI-skeptic version of MIRI would be an organization that tries to anticipate possible large future problems arising from people anticipating an AGI that never arrives, but which they might believe has arrived, and proposes methods to prevent or mitigate those potential problems?
d4rkn0d3z · 47m ago
Can you say non sequitur?
d4rkn0d3z · 1h ago
Oddly, in a bubble the highest valuations come just before the burst. This is an obvious mathematical certainty, plain to anyone looking at an exponential growth curve.
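To make the (admittedly trivial) math concrete, here is a minimal sketch; the growth rate and burst time are invented for illustration:

    import numpy as np

    # Hypothetical valuation V(t) = V0 * exp(r * t), growing until a burst at t = T.
    # For r > 0 the exponential is strictly increasing, so the running maximum
    # is always the most recent point: the peak necessarily sits right before
    # the crash.
    V0, r, T = 1.0, 0.5, 10.0  # invented numbers
    t = np.linspace(0.0, T, 101)
    v = V0 * np.exp(r * t)

    assert np.argmax(v) == len(v) - 1  # the peak is the last sample before the burst
    print(f"peak valuation {v[-1]:.0f}x at t = {t[-1]:.0f}, immediately before the burst")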
gls2ro · 31m ago
Usually the burden of proof is on the one making a positive claim like "AGI is here" or even "AGI is coming".
The default position, which does not need any further justification, is to be skeptical or even agnostic about the claim until proof is shown.
So when talking about evidence as a way to prove a claim: the side saying AGI is coming is the one that needs to provide the evidence. Someone saying AGI is not coming can add as many arguments or opinions as they like, but that position does not usually invite the same high scrutiny or the same demand for evidence.
lostmsu · 24m ago
Using the very basic definition of AGI where "general" just means cross-domain, as in e.g. chemistry and law, the very first ChatGPT was already it. Not a very smart one, though.
"Modern" definitions that include non-intelligence related stuff like agency sound like goalpost moving, so it's unclear why would you want them.
good_stuffs · 1h ago
Nobody even knows what AGI is. It will most likely be defined by a corporation, not science, due to obvious incentives.
righthand · 1h ago
Waiting for Agi-dot…
The inverse can be true too: just because people ARE saying that AGI is coming isn't evidence that it is true.
bpodgursky · 1h ago
OK, but your null hypothesis should always be a first- or second-degree polynomial projection.
"AI is getting better rapidly" is the current state of affairs. Arguing "AI is about to stop getting better" is the argument that requires strong evidence.
righthand · 1h ago
“AI is getting better rapidly” is a false premise, because AI is a large domain and there is no way to quantify “better” across the entire domain. “LLMs are improving rapidly during a short window in which they have gained popularity” is more accurate.
LLMs getting better != a path to AGI.
backpackviolet · 1h ago
> "AI is getting better rapidly"
… is it? I hear people saying that. I see “improvement”: the art generally has the right number of fingers more often, the text looks like text, the code agents don’t write stuff that even the linter says is wrong.
But I still see the wrong number of fingers sometimes. I still see the chat bots count the wrong number of letters in a word. I still see agents invent libraries that don’t exist.
I don’t know what “rapid” is supposed to mean here. It feels like Achilles and the Tortoise and also has the energy costs of a nation-state.
righthand · 1h ago
Agreed, there really aren't any metrics that indicate this is true, considering many models are still too complex to run locally. LLMs are getting better for the corporations that sell access to them, not necessarily for the people that use them.
camillomiller · 1h ago
Compare Altman's outlandish claims about GPT-5 with the reality of this update. Do you think they square in any reasonable way?
bpodgursky · 1h ago
Please, please seriously think back to your 2020 self, and think about whether your 2020 self would be surprised by what AI can do today.
You've frog-boiled yourself into timelines where "No WORLD SHAKING AI launches in the past 4 months" means "AI is frozen". In 4 months, you will be shocked if AI doesn't have a major improvement every 2 months. In 6 months, you will be shocked if it doesn't have a major update every 1 month.
It's hard to see an exponential curve while you're on it; I'm not trying to fault you here. But it's really important to stretch yourself to try.
jononor · 9m ago
I for one am quite surprised. Sometimes impressed. But also often frustrated. And occasionally disappointed. Sometimes worried about the negative follow-on effects. Working with current LLMs spans the whole gamut...
But for coding we are at the point where even the current level is quite useful. And as the tools/systems get better, the usefulness is going to increase quite a bit, even if models improve slowly from this point on. It will impact the whole industry over the next few years, and since software is eating the world, it will impact many other industries as well.
Exponential? Perhaps in the same way computers and the Internet have been exponential: cost per X (say, tokens) will probably go down exponentially over the next years and decades, the same way cost per FLOP or per megabyte transferred went down. But those exponential gains did not result in exponential growth in productivity, or if they did, the exponent is much, much lower. And I suspect it will likely be the same for artificial intelligence.
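As a minimal sketch of what that cost exponential would look like (the halving period and starting price are assumptions, not measurements):

    # Hypothetical: cost per million tokens halving every 12 months.
    cost_today = 10.0      # USD per million tokens, invented
    halving_months = 12.0  # assumed halving period

    for year in range(1, 6):
        cost = cost_today / 2 ** (year * 12.0 / halving_months)
        print(f"year {year}: ~${cost:.2f} per million tokens")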
backpackviolet · 1h ago
I’m still surprised by what AI can do. It’s amazing. … but I still have to double-check when it’s important that I get the right answer, I still have to review the code it writes, and I’m still not sure there is enough business to cover what it will actually cost to run when it needs to pay for itself.
righthand · 1h ago
What if I’ve not been impressed by giving a bunch of people a spam bot tuned on educational materials? Am I frog-boiled? Who cares about the actual advancement of this singular component if I was never impressed?
You assume everyone is “impressed”.
th0ma5 · 1h ago
To be honest, I had the right idea back then... This technology has fundamental qualities that force it to produce token predictions that are only statistically probable, not guaranteed to be accurate. Nobody is seriously trying to change this situation, beyond finding more data to train on, insisting you have to keep stacking more layers, or saying it is the user's responsibility.
There's been the obvious notion that digitizing the world's information is not enough and that hasn't changed.
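A toy sketch of what "only statistically probable" means in practice; the tokens and probabilities below are invented:

    import numpy as np

    # Toy next-token distribution: the model holds probabilities, not facts.
    tokens = ["Paris", "Lyon", "Berlin"]
    probs = np.array([0.7, 0.2, 0.1])  # invented numbers

    # Sampling is statistical: most draws land on the likely answer,
    # but a fraction are confidently wrong, no matter how much you sample.
    rng = np.random.default_rng(0)
    draws = rng.choice(tokens, size=1000, p=probs)
    print({t: int(np.sum(draws == t)) for t in tokens})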
politelemon · 44m ago
What products are people building on not-AGI?
camillomiller · 1h ago
By this measure, considering the current capex across the board, there is a lot more incentive to push the “AGI IS NEAR AND WE AINT READY” narrative than the opposite.
If AGI doesn't come, as is highly probable, these companies are bust for billions and billions…
SalmoShalazar · 1h ago
One could flip your post to say “AGI is coming” and claim the opposite, and it would be equally lacking in insight. This is not “critical” to remember.
There are interesting and well-thought-out arguments for why AGI is not coming with the current state of technology; dismissing those arguments as propaganda/clickbait is not warranted. Yannic is also an AI professional and expert, not one to be offhandedly dismissed because you don’t like the messaging.
TheCraiggers · 1h ago
I don't think that's fair to the person you replied to. At no time did they say they liked or disliked the message, merely that there's a market for it and thus people may be biased.
Telling us all to remember that there's potential for bias isn't so bad. It's a hot button issue.
d4rkn0d3z · 52m ago
To share my experience: 25 years ago I looked into AI and was inclined to see what scaling compute would do. It took no time to find advisors who told me the whole program I had in mind could not gain ethics approval and was mathematically limited. The former roadblock has now been lifted, since nobody cares about ethics any more; the latter seems to be the remaining hurdle.
vlan121 · 1h ago
The goal of the economy is not to reach AGI. Actually reaching AGI would solve the problems the current market makes its money on, and would therefore make less money than just "chasing" AGI forever. The Shirky principle in a nutshell.