I'm a little skeptical of a full-on, 2008-style 'burst'. I imagine it'll be closer to a slow deflation, as these companies need to turn a profit.
Fundamentally, serving a model via API is profitable (per Dario and OpenAI), and inference costs come down drastically over time.
The main expense is twofold:
1. Training a new model is extremely expensive: GPUs, yolo runs, data.
2. Newer models tend to churn through more tokens and are more expensive to serve early on, before optimizations are made.
(not including payroll)
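The unit economics in that second point can be sketched with toy numbers (all prices, costs, and token counts below are made-up assumptions for illustration, not any provider's real rates):

```python
def serving_margin(price_per_mtok_out, cost_per_mtok_out, tokens_out):
    """Revenue minus compute cost for generating `tokens_out` output tokens,
    given per-million-token price and cost."""
    revenue = price_per_mtok_out * tokens_out / 1_000_000
    cost = cost_per_mtok_out * tokens_out / 1_000_000
    return revenue - cost

# A hypothetical newer model that emits 5x the tokens at a higher unit cost
# can still be profitable per request, just at a much thinner margin rate
# until serving optimizations land.
old = serving_margin(price_per_mtok_out=10.0, cost_per_mtok_out=2.0, tokens_out=1_000)
new = serving_margin(price_per_mtok_out=10.0, cost_per_mtok_out=6.0, tokens_out=5_000)
print(round(old, 4), round(new, 4))  # 0.008 0.02
```

Under these assumed numbers, per-request profit actually rises, but the margin rate drops from 80% to 40%, which is the "more expensive to serve in the beginning" effect.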
OpenAI and Anthropic can become money printers once they downgrade the free tiers, add ads or other attention-monetizing methods, and lean on usage-based pricing as people and businesses become more and more integrated with LLMs, which are undoubtedly useful.
In my uninformed opinion, though, companies that spent excessively on bad AI initiatives will begin to introspect as the fiscal year comes to an end. By summer 2026, I think a lot of execs will be getting antsy if they can't defend their investments.
tartoran · 57m ago
When the bubble bursts, what kind of effects are we going to see? What are your thoughts on this?
ProllyInfamous · 30m ago
Pre ChatGPT:
•The largest publicly traded company in the world was ~$2T (Saudi Aramco, not even top ten anymore).
•Nvidia (current largest @ $4.3T) was "only" ~$0.6T [$600 billion].
•The top 7 public tech companies are where the predominant gains have accrued / held.
•On March 16, 2020, all publicly traded companies were worth ~$78T; at present, ~$129T.
•Gold has doubled over the same period.
>what kind of effects are we going to see
•Starvation and theft like you've probably barely witnessed in your 1st- or 3rd-world lifetime. Not from former stockholders, but from former underling employees, out of simple desperation. Everywhere, indiscriminately, from the majority.
•UBI & conscription, if only to lessen the previous bullet point.
¢¢, hoping I'm wrong. But if I'm not, maybe we can focus on domestic matters instead of endless struggles abroad (reimplement the Civilian Conservation Corps?).
warkdarrior · 56m ago
Massive layoffs from BigTech and lots of startups going under.
jihadjihad · 54m ago
When AI is on the rise, layoffs are "because AI", and then when the AI bubble pops the layoffs are also conveniently "because AI".
iamgopal · 56m ago
What I think is: the team that pulled such a large LLM off is not stupid.
No comments yet
mortsnort · 17m ago
It's a race to see which runs out of steam first: AI investment or Ed Zitron's schtick.
(Although I think the utility of server farms will not be high after the bubble bursts: even if cheap, they will quickly become outdated. In that respect, things are different from railway tracks.)
bryanlarsen · 1h ago
The Internet bubble left physical artifacts behind, like thousands of miles of unlit fiber. However, that pales in comparison to the value of virtual artifacts like Apache et al. Similarly, the AI bubble's artifacts will primarily be virtual.
baal80spam · 53m ago
So many hot takes for the AI bubble bursting ANY DAY NOW, yet we keep chugging on.
heathrow83829 · 51m ago
they said there are 6 more quarters of funding left, so it should be busted by early to mid 2027
mountainriver · 50m ago
Lots of AI apps are creating a lot of value; that somehow gets overlooked in these convos.
great_psy · 36m ago
Can you provide some names of AI apps whose revenue > cost?
ricericerice · 21m ago
I mean, ChatGPT could easily be profitable today if they wanted to, but they're prioritizing growth
VohuMana · 37m ago
A lot of value is being created with some of these AI apps, but are the people funding the development of these apps seeing a return on investment? (Honest question, I don't really know.)
The article mentions
> This is a bubble driven by vibes not returns ...
I think this indicates some investors are seeing a return. I know AI is expensive to train and somewhat expensive to run though so I am not really sure what the reality is.
heathrow83829 · 47m ago
Meta already has a hiring freeze in AI.
toss1 · 1h ago
There is no question that LLMs are truly useful in some areas, and the LLM bubble will inevitably burst. Both can be simultaneously true, and we're just running up the first big slope of the hype curve [0].
As we learn more about the capabilities and limits of LLMs, I see no serious argument that scaling up LLMs with increasingly massive data centers and training runs will actually reach anything like a breakthrough to AGI, or even anything beyond the magnitude of usefulness already available. Quite the opposite: most experts argue that fundamental breakthroughs in different areas will be needed to yield orders-of-magnitude greater utility, never mind AGI (not that further refinement won't yield useful results, only that it won't break out).
So one question is timing: when will the crash come?
The next is: how can we collect, in an open and preferably independent/distributed/locally-usable way, the best usable models, to retain access to the tech when the VC-funded data centers shut down?
We even have prior art: Web 1.0 and e-commerce were truly useful, and that bubble also burst. On further thought, railroads and radio are also good examples!
[0] https://en.wikipedia.org/wiki/Gartner_hype_cycle
shishy · 59m ago
Yes, well, bubbles are a core part of the innovation process (new tech being useful doesn't imply a lack of bubbles); see e.g. "Technological Revolutions and Financial Capital" by Carlota Perez: https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
heathrow83829 · 1h ago
Unlike that time, some money is actually being made. I heard some figures thrown around yesterday: total combined investments of over $500 billion, and revenues of about $30 billion, $10 billion of which was payments to cloud providers, so really $20 billion in revenue. That's not nothing.
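Taking those hearsay figures at face value, the back-of-envelope math is simple (a sketch only; the inputs are the unverified numbers quoted above):

```python
# Rough figures from the comment above, in billions of USD (hearsay, unverified).
investment_b = 500        # total combined investment
revenue_b = 30            # reported revenue
cloud_passthrough_b = 10  # portion of revenue that was payments to cloud providers

# Net out the pass-through to cloud providers, then compare to investment.
net_revenue_b = revenue_b - cloud_passthrough_b
ratio = investment_b / net_revenue_b
print(f"${net_revenue_b}B net revenue vs ${investment_b}B invested: {ratio:.0f}x gap")
# $20B net revenue vs $500B invested: 25x gap
```

A 25x gap between cumulative investment and annual net revenue isn't fatal on its own, but it shows how much growth the investments are pricing in.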
fred_is_fred · 10m ago
Plenty of e-commerce places had revenue; they just didn't have profit, and they usually spent on crazy stuff, like Super Bowl ads.
jbreckmckye · 1h ago
It might not be a paradox: Bubbles are most likely to occur when something is plausibly valuable.
If GenAI really was just a "glorified autocorrect", a "stochastic parrot", etc, it would be much easier to deflate AI Booster claims and contextualise what it is and isn't good at.
Instead, LLMs exist in a blurry space where they are sometimes genuinely decent, occasionally completely broken, and often subtly wrong in ways not obvious to their users. That uncertainty is what breeds FOMO and hype in the investor class.
wood_spirit · 37m ago
I use LLMs all the time and do ML and stuff. But at the same time, they are, approximately, literally averaging the internet. I think the terms "glorified autocomplete" and "stochastic parrot" describe how they work under the hood really well.
bubblelicious · 33m ago
It's really hard to believe articles like this, and even harder to believe this is the hive mind of Hacker News today.
I work for a major research lab. So much headroom, so much left on the table with every project, so many obvious directions to go to tackle major problems. These last 3 years have been chaotic sprints. Transfusion, better compressed latent representations, better curation signals, better synthetic data, more flywheel data: insane progress in these last 3 years that somehow just gets continually denigrated by this community.
There is hype and bullshit and stupid money and annoying influencers and hyperbolic executives, but “it’s a bubble” is absurd to me.
It would be colossally stupid for these companies not to pour the money they are pouring into infrastructure buildouts and R&D. They know there's going to be a ton of waste; nobody in these articles is surprising anyone. These articles are just not very insightful. The only silver lining to reading the comments and these articles is the hope that all of you are investing optimally for your beliefs.
dehrmann · 19m ago
Upvoted for a different perspective.
The thing to remember about the HN crowd is that it can be a bit cynical. At the same time, realize that everyone is judging AI progress not on headroom and synthetic data usage, but on how well it feels like it's doing: external benchmarks, hallucinations, and how much value it's really delivering. The concern is that for all the enthusiasm, generative AI's hard problems still seem unsolved, output quality is seeing diminishing returns, and actually applying it outside language settings has been challenging.
michaeldoron · 6m ago
I agree completely.
I work as an ML researcher at a small startup, researching, developing, and training large models on a daily basis. I see the improvements made in my field every day, in academia and in industry, and newer models come out constantly that continue to improve the product's performance. It feels as if the people who talk about AI being a bubble are not familiar with the AI that is not LLMs, and the amazing advances it has already made in drug discovery, ASR, media generation, etc.
If foundation model development stopped right now and ChatGPT never got any better, there would be at least five if not ten years of new technological development just in building off the models we have trained so far.
AndrewKemendo · 36m ago
Having been through at least two AI hype cycles professionally, this is just another one.
Each cycle filters out the people who are not actually interested in AI; they are grifters and shysters trying to make money.
I have a private list of these starting from 2006 to today.
LLMs ≠ AI, and if you don't know this then you should be worried, because you are going to get left behind: you don't actually understand the world of AI.
Those of us that are “forever AI” people are the cockroaches of the tech world and eventually we’ll be all that is left.
Every former "expert systems scientist", "Bayesian probability engineer", "computer vision expert", "big data analyst", and "LSTM guru" is having no trouble implementing LLMs.
We’ll be fine
jsnell · 1h ago
Paywalled.
jasonjmcghee · 56m ago
Not for me? Never heard of this site but had no issues.
great_psy · 31m ago
The introduction to the article is not paywalled, but the actual 2027 AI story is.
jasonjmcghee · 21m ago
Ah.
jasonjmcghee · 34m ago
The author labels LLMs as "empty hype".
LLMs are inappropriately hyped, and surrounded by shady practices that made them a reality. I understand why so many people are anti-LLM.
But empty hype? I just can't disagree more.
They are generalized approximation functions that can approximate all manner of modalities, surprisingly quickly.
That's incredibly powerful.
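A toy illustration of that "approximation function" framing: a character-level bigram model that predicts the next character purely from observed frequencies. LLMs are incomparably more capable, but the shape is the same: context in, a guess at the next token out.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it in the text."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """Return the most frequent successor of `ch` in the training text."""
    return counts[ch].most_common(1)[0][0]

model = train_bigram("the theory then thereafter")
print(predict_next(model, "t"))  # 'h': in this text, 't' is usually followed by 'h'
```

The bigram table is a (crude) learned approximation of the training distribution; scaling the context window and the function class from a lookup table to a transformer is, very loosely, the jump to an LLM.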
They can be horribly abused, their failure modes are unintuitive, using them can open entirely new classes of security vulnerabilities, and we don't have proper observability tooling to deeply understand what's going on under the hood.
But empty hype?
Maybe we'll move away from them and adopt something closer to world models, or use RL / something more like Sutton's OaK architecture, or replace backprop with something like forward-forward, but it's hard to believe HAL-style AI is going anywhere.
They are just too useful.
Programming and the internet were overhyped too and had many of the same classes of problems.
We have a rough draft of AI we've only seen in sci-fi. Pandora's box is open and I don't see us closing it.
politelemon · 21m ago
I would love to reach a point where competent language models become commodities that anyone can run on modest hardware. Having one at your disposal can open up some gorgeous applications and workflows from the community. As it stands at present, though, the moats are insurmountable, or very expensive.