The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic. The article follows on with:
>In the last two years, about 60 percent of the stock market’s growth has come from AI-related companies, such as Microsoft, Nvidia, and Meta.
Which is a statement that's been broadly true since 2020, long before ChatGPT started the current boom. We had the Magnificent Seven, and before that the FAANG group. The US stock market has been tightly concentrated around a few small groups for over a decade now.
>You see it in the business data. According to Stripe, firms that self-describe as “AI companies” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.
The current Venn diagram of "startups" and "AI companies" is two mostly concentric circles. Again, you could have written the following statement at any time in the last four decades:
> According to [datasource], firms that self-describe as “startups” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.
onlyrealcuzzo · 35m ago
1. People aren't going to take on risk and deploy capital if they can't get a return.
2. If people think they can get an abnormally high return, they will invest more than otherwise.
3. Whatever other money would've been invested would've gone wherever it could've gotten the highest returns, which is unlikely to mirror the allocation of US AI investment.
So while it's unlikely the US would've had $0 investment if not for AI, it's probably even less likely we would've had just as much investment.
thiago_fm · 1m ago
I agree; at any time in US history there have always been those 5-10 companies leading economic progress.
This is very common, and this happens in literally every country.
But their CAPEX would be much smaller: if you look at current CAPEX from Big Tech, most of it is Nvidia GPUs.
If a bubble is happening, then when it pops, the depreciation applied to all that Nvidia hardware will absolutely melt the balance sheets and earnings of all the cloud companies, and of companies building their own data centers like Meta and X.ai.
biophysboy · 49m ago
It's also not fair to compare AI firms with others using growth, because AI is a novel technology. Why would there be explosive growth in rideshare apps when it's a mature niche with established incumbents?
dragontamer · 37m ago
I think the explosive growth that people want is in manufacturing, e.g. US screws, bolts, rivets, dies, PCBs, assembly, and such.
The dollars are being diverted elsewhere.
Intel, a chipmaker that could directly serve the AI boom, has failed to deploy its 2nm and 1.8nm fabs and has instead written them off. The next-generation fabs are failing. So even as AI attracts a lot of dollars, the money doesn't seem to be going to the right places.
biophysboy · 1m ago
They're not going to get it. The political economy of East Asia is simply better suited for advanced manufacturing. The US wants the manufacturing of East Asia without its politics. Sometimes for good reason - being an export economy has its downsides!
thrance · 37m ago
> > Without AI, US economic growth would be meager.
> The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic.
That's the really damning thing about all of this: maybe all this capital could have been invested in actually growing the economy, instead of fueling a speculative bubble that will burst sooner or later, taking any illusion of growth down with it.
ryandrake · 31m ago
Or all that money might have been churning around chasing other speculative technologies. Or it might have been sitting in US Treasuries making 5% waiting for something promising. Who knows what is happening in the parallel alternate universe? Right now, it feels like everyone is just spamming dollars and hoping that AI actually becomes a big industry, to justify all of this economic activity. I'm reminded of Danny DeVito's character's speech in the movie Other People's Money, after the company's president made an impassioned speech about why its investors should keep investing:
"Amen. And amen. And amen. You have to forgive me. I'm not familiar with the local custom. Where I come from, you always say "Amen" after you hear a prayer. Because that's what you just heard - a prayer."
At this point, everyone is just praying that AI ends up a net positive, rather than bursting and plunging the world into a 5+ year recession.
hnhg · 1h ago
I found this the most interesting part of the whole essay - "the ten largest companies in the S&P 500 have so dominated net income growth in the last six years that it’s becoming more useful to think about an S&P 10 vs an S&P 490" - which then took me here:
https://insight-public.sgmarkets.com/quant-motion-pictures/o...
Can anyone shed light on what is going on between these two groups? I wasn't convinced by the rest of the argument in the article, and I would like an explanation that doesn't just rely on "AI".
nowayno583 · 53m ago
It is a very complex phenomenon, with no single driving force. The usual culprit is uncertainty, which itself can have a ton of root causes (say, tariffs changing every few weeks, or higher inflation due to government subsidies).
In more uncertain scenarios, small companies can't take risks as well as big companies can. The last two years have seen AI, a large risk these big companies invested in, pay off. But due to uncertainty, smaller companies couldn't capitalize.
But that's only one possible explanation!
automatic6131 · 49m ago
> The last 2 years have seen AI, which is a large risk these big companies invested in, pay off
LOL. It's paying off right now, because There Is No Alternative. But at some point, the companies and investors are going to want to make back these hundreds of billions. And the only people making money are Nvidia, and sort-of Microsoft through selling more Azure.
Once it becomes clear that there's no trillion-dollar industry in cheating-at-homework-for-schoolkids, and Nvidia stops selling more in year X than in year X-1, people will very quickly realize that the last two years have been a massive bubble.
nowayno583 · 44m ago
That's a very out-of-the-money view! If you're right, you could make some very good money!
foolswisdom · 49m ago
The primary goal of big companies is (or has become) maintaining market dominance, but this doesn't always translate to a well-run business with great profits; it depends on internal and external factors. Maybe profits should actually have gone down due to tariffs and uncertainty, but the big companies have kept profits stable.
andsoitis · 8m ago
> Maybe profits should actually have gone down due to tariffs and uncertainty, but the big companies have kept profits stable.
If you’re referencing Trump’s tariffs, they have only come into effect now, so the economic effects will be felt in the months and years ahead.
whitej125 · 42m ago
Something that might be of additional interest: look at how the top 10 of the S&P 500 has changed over the decades [1].
At any point in time the world thinks that those top 10 are unstoppable. In the 90's and early 00's... GE was unstoppable and the executive world was filled with acolytes of Jack Welch. Yet here we are.
Five years ago I think a lot of us saw Apple and Google and Microsoft as unstoppable. But 5-10 years from now, I bet we'll see new logos in the top 10. NVDA is already there. Will Apple continue its dominance or go the way of Sony? Is the business model of the internet changing faster than Google can react? Will OpenAI (or any foundation-model player) go public?
I don't know what the future will be but I'm pretty sure it will be different.
[1] https://www.visualcapitalist.com/ranked-the-largest-sp-500-c...
There was always some subset of the S&P that mattered way more than the rest, just like the S&P matters way more than the Russell.
Typically, you probably need to go down to the S&P 25 rather than the S&P 10.
rogerkirkness · 1h ago
Winner takes most is now true at the global economy level.
moi2388 · 1h ago
They are 40% of the S&P 500, so it makes sense that they are primary drivers of its growth.
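A quick back-of-the-envelope sketch of why weight matters so much (all numbers invented for illustration, not actual returns):

    # Hypothetical: top-10 group at 40% index weight returning 30%,
    # the other 490 names returning 5%.
    w_top, w_rest = 0.40, 0.60
    r_top, r_rest = 0.30, 0.05

    index_return = w_top * r_top + w_rest * r_rest   # 0.15
    top_share = (w_top * r_top) / index_return       # 0.80

    print(f"index return: {index_return:.0%}")       # 15%
    print(f"growth from top 10: {top_share:.0%}")    # 80%

With made-up numbers like these, a 40% weight already accounts for 80% of the index's growth.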
They are also all tech companies, which had a really amazing run during Covid.
They also resemble companies with growth potential, whereas other companies such as P&G or Walmart might’ve saturated their market already
andsoitis · 6m ago
> They are also all tech companies, which had a really amazing run during Covid.
Only 8 out of the 10 are. Berkshire and JP Morgan are not. It is also arguable whether Tesla is a tech company or a car company.
freetonik · 1h ago
Interesting that the profits of the bottom 490 companies of the S&P 500 do not rise with the help of AI technology, which is supposedly sold to them at a reduced rate as AI vendors bleed money.
roncesvalles · 1h ago
Other than NVIDIA, the profits of the S&P 10 haven't risen either. It's just that the market is pricing them very optimistically.
IMO this is an extremely scary situation in the stock market. The AI bubble burst is going to be more painful than the Dotcom bubble burst. Note that an "AI bubble burst" doesn't necessitate a belief that AI is "useless" -- the Internet wasn't useless and the Dotcom burst still happened. The market can crash when it froths up too early even though the optimistic hypotheses driving the froth actually do come true eventually.
andsoitis · 3m ago
> Other than NVIDIA, the profits of the S&P 10 haven't risen either.
That’s not correct. Did you mean something else?
Workaccount2 · 57m ago
We are still in the "land grab" phase where companies are offering generous AI plans to capture users.
Once users get hooked on AI and it becomes an indispensable companion for doing whatever, these companies will start charging the true cost of using these models.
It would not be surprising if the $20 plans of today are actually just introductory rate $70 plans.
esafak · 40m ago
I'd be surprised, because free open-source models are continually closing the gap, exerting downward pressure on prices.
Workaccount2 · 32m ago
I don't think it will be much of an issue for the large providers, any more than open-source software has ever been a concern for Microsoft. The AI market is the entire population, not just the small sliver who know what "VRAM" means and are willing to spend thousands on hardware.
esafak · 31m ago
You can get open-source models hosted cheaply too, e.g. through OpenRouter, AWS Bedrock, etc. You don't have to run them yourself.
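As a minimal sketch of how little ceremony that takes (OpenRouter exposes an OpenAI-compatible API; the model ID below is a placeholder, check the provider's current catalog):

    import os
    from openai import OpenAI

    # Same client library, different base URL; only the endpoint and
    # model ID change when switching to a hosted open-weight model.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )
    resp = client.chat.completions.create(
        model="meta-llama/llama-3.1-70b-instruct",  # placeholder model ID
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)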
onlyrealcuzzo · 29m ago
We'll never know what would've happened without AI.
1. Their profits could otherwise be down.
2. The plan might be to invest a bunch up front in severance and AI integration that is supposed to pay off in the future.
3. In the future that may or may not happen, and it'll be hard to tell, because it may pay off at the same time a recession would otherwise be hitting, which smooths it out.
It's almost as if it's not that simple.
biophysboy · 1h ago
>“The top 100 AI companies on Stripe achieved annualized revenues of $1 million in a median period of just 11.5 months—four months ahead of the fastest-growing SaaS companies.”
This chart is extremely sparse and very confusing. Why not just plot a random sample of firms from both industries?
I'd be curious to see the shape of the annualized-revenue distribution after a fixed time duration for SaaS and AI firms. Then I could judge whether it's fair to filter by the top 100. Maybe AI has a rapid decay rate at low annualized-revenue values but a slower decay rate at higher values, compared to SaaS. Considering that AI has higher marginal costs and thus a higher price of entry, this seems plausible to me. If that's the case, the chart is cherry-picking.
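A toy simulation of that cherry-picking concern (purely synthetic numbers, nothing to do with Stripe's actual data): a heavier-tailed distribution can lose on the median yet still win on the top 100.

    import numpy as np

    rng = np.random.default_rng(0)
    ai = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)    # heavy tail
    saas = rng.lognormal(mean=0.5, sigma=1.0, size=10_000)  # higher median

    top100 = lambda x: np.sort(x)[-100:]
    print(np.median(ai) < np.median(saas))          # True: SaaS "wins" overall
    print(top100(ai).mean() > top100(saas).mean())  # True: AI "wins" in the top 100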
stackbutterflow · 2h ago
Predicting the future is always hard.
But the only thing I've seen in my life that resembles what is happening with AI (the hype, the usefulness beyond the hype, the vapid projects, the solid projects) is the rise of the internet.
Based on this, I would say we're in the 1999-2000 era. If that's true, what does it mean for the future?
baxtr · 1h ago
"It is difficult to make predictions, especially about the future" - Yogi Berra (?)
But let’s assume we can for a moment.
If we’re living in a 1999 moment, then we might be on a curve like the Gartner Hype Cycle. And I assume we’re at the first peak.
Which means that the "trough of disillusionment" will follow.
This is the phase in the Hype Cycle, following the initial peak of inflated expectations, where interest in a technology wanes as it fails to deliver on early promises.
keiferski · 1h ago
Well, there’s a fundamental difference: the Internet blew up because it enabled people to connect with each other more easily: culturally, economically, politically.
AI is more-or-less replacing people, not connecting them. In many cases this is economically valuable, but in others I think it just pushes the human connection into another venue. I wouldn’t be surprised if in-person meetup groups really make a comeback, for example.
So if a prediction about AI involves it replacing human cultural activities (say, the idea that YouTube will just be replaced by AI videos and real people will be left out of a job), then I’m quite bearish. People will find other ways to connect with each other instead.
dfedbeef · 48m ago
There's also the difference that the internet worked.
baggachipz · 1h ago
Classic repeat of the Gartner Hype Cycle. This bubble pop will dwarf the dot-bomb era. There's also no guarantee that the "slope of enlightenment" phase will amount to much beyond coding assistants. GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives.
This bubble also seems to combine the worst of the two huge previous bubbles: the hype of the dot-com bubble, plus the housing bubble's pattern of massive buildout (data centers this time) financed with massive debt and security bundling.
ben_w · 1h ago
Mm. Partial agree, partial disagree.
These things, as they are right now, are essentially at the performance level of an intern or recent graduate in approximately all academic topics (but not necessarily practical topics), and they can run on high-end consumer hardware. The learning curves suggest to me limited opportunities for further quality improvements in the foreseeable future… though "foreseeable future" here means "18 months".
I definitely agree it's a bubble. Many of these companies are priced with the assumption that they get most of the market; they obviously can't all get most of the market. And because these models are accessible to the upper end of consumer hardware, there's a reasonable chance none of them will capture much of the market at all: open models will be zero-cost, and the inference hardware is something you had anyway, so it all runs locally.
Other than that, to the extent that I agree with you that:
> GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives
I do so only in that not everyone wants (or would even benefit from) a book-smart, no-practical-experience intern, and not all economic tasks are ones where book-smarts count for much anyway. This set of AI advancements didn't suddenly cause all car manufacturers to agree that it was the one weird trick holding back level-5 self-driving, for example.
But for those of us who can make use of them, these models are already useful (and, like all power tools, dangerous when used incautiously) beyond merely being coding assistants.
thecupisblue · 1h ago
> GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives
No, but GenAI in its current form is insanely useful and is already shifting productivity into a higher gear. Even without 100% reliable "agentic" task execution and AGI, this is already some next-level stuff, especially for non-technical people.
lm28469 · 1h ago
> especially for non-technical people.
The people who use LLMs to write reports for other people who use LLMs to read said reports? It may alleviate a few pain points, but it generates an insane amount of useless noise.
thecupisblue · 45m ago
Considering they were already creating useless noise, they can create it faster now.
But once you get out of tech circles and bullshit jobs, there is a lot of quality usage, as much as there is shit usage. I've met everyone from lawyers and doctors to architects and accountants who are actively using some form of GenAI in their work.
Yes, it makes mistakes, yes, it hallucinates, but it gets a lot of fluff work out of the way, letting people deal with actual problems.
ducktective · 1h ago
Very simple question:
How do people trust the output of LLMs? In the fields I know, the answers are sometimes impressive, sometimes totally wrong (hallucinations). When the answer is correct, I always feel like I could have simply googled the issue, and that some variation of the answer lies deep in the pages of some forum or Stack Exchange or Reddit.
However, in the fields I'm not familiar with, I have no idea how much I can trust the answer.
simianwords · 8m ago
The internal verifier model in your head is actually good enough, and not random. It knows how the world works and subconsciously applies a lot of sniff tests it has learned over the years.
Sure, a lot of answers from LLMs may be inaccurate - but you mostly identify them as such because your ability to verify (using various heuristics) is good too.
Do you learn from asking people advice? Do you learn from reading comments on Reddit? You still do without trusting them fully because you have sniff tests.
threetonesun · 1h ago
There's a few cases:
1. For coding - and this is the reason coders are so excited about GenAI - it can often be 90% right while doing all of the writing and researching for me. If I can shift my effort from typing/writing to reviewing/editing, that's a huge improvement day to day. And the other 10% can be covered by tests or human-written code to verify correctness.
2. There are cases where 90% right is better than the current state. Go look at Amazon product descriptions, especially things sold from Asia in the United States. They're probably closer to 50% or 70% right. An LLM being "less wrong" is actually an improvement, and while you might argue a product description should simply be correct, the market already disagrees with you.
3. For something like a medical question, the magic is really just taking plain language questions and giving concise results. As you said, you can find this in Google / other search engines, but they dropped the ball so badly on summaries and aggregating content in favor of serving ads that people immediately saw the value of AI chat interfaces. Should you trust what it tells you? Absolutely not! But in terms of "give me a concise answer to the question as I asked it" it is a step above traditional searches. Is the information wrong? Maybe! But I'd argue that if you wanted to ask your doctor about something that quick LLM response might be better than what you'd find on Internet forums.
dsign · 1h ago
This is true.
But I've seen some harnesses (i.e., whatever Gemini Pro uses) do impressive things. The way I model it is like this: an LLM, like a person, has a chance of producing wrong output. A quorum of people, plus some experiments/study, usually arrives at a "less wrong" answer. The same can be done with an LLM, and to an extent is being done by things like Gemini Pro and o3 with their agentic "eyes" and "arms". As the price of hardware and compute goes down (if it does, which is a big "if"), harnesses will get better by deploying more computation, even if the LLM models themselves remain at their current level.
Here's an example: there is a certain kind of work we haven't quite figured out how to have LLMs do: creating frameworks and sticking to them, e.g. creating and structuring a codebase in a consistent way. But, in theory, if one could have 10 instances of an LLM "discuss" whether a function in code conforms to an agreed convention, that would solve the problem - something like the sketch below.
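A sketch of that quorum idea, with ask_llm as a placeholder for whatever model client you'd actually use (everything here is hypothetical):

    from collections import Counter

    def ask_llm(prompt: str) -> str:
        """Placeholder for one independent model call; returns 'yes' or 'no'."""
        raise NotImplementedError

    def conforms_by_quorum(code: str, convention: str, n: int = 10) -> bool:
        prompt = (
            f"Convention: {convention}\n\n"
            f"Code:\n{code}\n\n"
            "Does the code follow the convention? Answer yes or no."
        )
        # Majority vote across n independent samples reduces the chance
        # that any single wrong answer decides the outcome.
        votes = Counter(ask_llm(prompt).strip().lower() for _ in range(n))
        return votes["yes"] > n // 2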
There are also avenues of improvement that open up with more computation. Namely, today we use "one-shot" models: you train them, then you use them many times. But the weights of the model aren't retrained on the outcomes of its actions. Doing that on a per-model-instance basis is also a matter of having sufficient computation at an affordable price. Doing it on a per-model basis is practical already today; the only limitations are legal terms, NDAs, and regulation.
I say all of this objectively. I don't like where this is going; I think this is going to take us to a wild world where most things are gonna be way tougher for us humans. But I don't want to (be forced to) enter that world wearing rosy lenses.
keiferski · 1h ago
I get around this by not valuing the AI for its output, but for its process.
Treat it like a brilliant but clumsy assistant that does tasks for you without complaint – but whose work needs to be double checked.
svara · 1h ago
This is really strange to me...
Of course you don't trust the answer.
That doesn't mean you can't work with it.
One of the key use cases for me other than coding is as a much better search engine.
You can ask a really detailed and specific question that would be really hard to Google, and o3 or whatever high end model will know a lot about exactly this question.
It's up to you as a thinking human to decide what to do with that. You can use it as a starting point for in-depth literature research, think through the arguments it makes from first principles, follow up with Google searches for key terms it surfaces...
There's a whole class of searches I would never have done on Google because they would have taken half a day to do properly that you can do in fifteen minutes like this.
dfedbeef · 46m ago
Such as?
svara · 40m ago
I went through my ChatGPT history to pick a few examples that I'm both comfortable sharing and that illustrate the use-case well:
> There are some classic supply chain challenges such as the bullwhip effect. How come modern supply chains seem so resilient? Such effects don't really seem to occur anymore, at least not in big volume products.
> When the US used nuclear weapons against Japan, did Japan know what it was? That is, did they understood the possibility in principle of a weapon based on a nuclear chain reaction?
> As of July 2025, equities have shown a remarkable resilience since the great financial crisis. Even COVID was only a temporary issue in equity prices. What are the main macroeconomic reasons behind this strength of equities.
> If I have two consecutive legs of my air trip booked on separate tickets, but it's the same airline (also answer this for same alliance), will they allow me to check my baggage to the final destination across the two tickets?
> what would be the primary naics code for the business with website at [redacted]
I probably wouldn't have bothered to search any of these on Google because it would just have been too tedious.
With the airline one, for example, the goal was to get a number of relevant links directly to various airlines' official regulations, which o3 did successfully (along with some IATA regulations).
For something like the first or second, the goal is to surface the names of the relevant people / theories involved, so that you know where to dig if you wish.
likium · 1h ago
We place plenty of trust in strangers to do their jobs to keep society going. What's their error rate?
It all comes down to the track record, perception, and experience of the LLMs. Kinda like self-driving cars.
rwmj · 46m ago
When it really matters, professionals have insurance that pays out when they screw up.
morpheos137 · 54m ago
Strangers have an economic incentive to perform. AI does not. What AI program is currently able to modify its behavior autonomously to increase its own profitability? Most if not all current public models are simply chat bots trained on old data scraped off the web. Wow, we have created an economy based on cultivated Wikipedia and Reddit content from the 2010s, linked together by bots that can produce grammatical sentences and cogent-sounding paragraphs. Isn't that great? I don't know; about 10 years ago, before Google broke itself, I could find information on any topic easily and judge its truth with my grounded human intelligence better than any AI today.
For one thing, AI cannot even count. Ask Google's AI to draw a woman wearing a straw hat. More often than not the woman is wearing a well-drawn hat while holding another in her hand. Why? Frequently she has three arms. Why? Tesla's self-driving vision couldn't differentiate between the sky and a light-colored tractor trailer turning across traffic, resulting in a fatality in Florida.
For something to be intelligent, it needs to be able to think and evaluate the correctness of its own thinking. Not just regurgitate old web scrapings.
It is pathetic, really.
Show me one application where black-box LLM AI is generating a profit that an effectively trained human or a rules-based system couldn't do better.
Even if AI is able to replace humans in some tasks, this is not a good thing for a consumption-based economy with an already low labor-force participation rate.
During the first industrial revolution, human labor was scarce, so machines could economically replace and augment labor and raise standards of living. In the present day, labor is not scarce, so automation is a solution in search of a problem - and a problem itself if it increasingly leads to unemployment without universal basic income to support consumption. If your economy produces too much with nobody to buy it, economic contraction follows. Young people today already struggle to buy a house. Instead of investing in chat bots, maybe our economy should be employing more people in building trades and production occupations, where they can earn an income to support consumption, including of durable items like a house or a car. Instead, because of the FOMO and hype around AI, investors are chasing greater returns by directing money toward sci-fi fantasy, and when that doesn't materialize, an economic contraction will result.
thecupisblue · 42m ago
If you are a subject matter expert, as is expected to be of the person working on the task, then you will recognise the issue.
Otherwise: common sense, a quick Google search, or letting another LLM evaluate it.
jcranmer · 1h ago
One of the most amusing things to me is the amount of AI testimonials that basically go "once I help the AI over the things I know that it struggles with, when it gets to the things I don't know, wow, it's amazing at how much it knows and can do!" It's not so much Gell-Mann amnesia as it is Gell-Mann whiplash.
brookst · 1h ago
The Internet in its 1999 form was never going to be fast enough or secure enough to support commerce, banking, or business operations.
falcor84 · 1h ago
Exactly - it took an evolution, but there was no discontinuity. At some point things had evolved enough for people like Tim O'Reilly to say that we now have "Web 2.0", but it was all just small steps by people like those of us here on this thread, gradually making things better and more reliable.
api · 1h ago
I too lived through the dot-com bubble, and AI feels identical in so many ways.
AI is real just like the net was real, but the current environment is very bubbly and will probably crash.
krunck · 12m ago
Please stop using stacked bar charts where individual lines (plus a total line) would help the poor reader comprehend the data better.
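For anyone building these charts, a minimal sketch of the alternative (assuming the data sits in a pandas DataFrame with one column per category):

    import matplotlib.pyplot as plt
    import pandas as pd

    def plot_lines_with_total(df: pd.DataFrame) -> None:
        # One line per category, plus an explicit total, instead of
        # stacking bars and making the reader do the arithmetic.
        ax = df.plot()
        df.sum(axis=1).plot(ax=ax, color="black", linewidth=2, label="Total")
        ax.legend()
        plt.show()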
vannevar · 1h ago
>Nobody can say for sure whether the AI boom is evidence of the next Industrial Revolution or the next big bubble.
Like the Internet boom, it's both. The rosy predictions of the dotcom era eventually came true. But they did not come true fast enough to avoid the dotcom bust. And so it will be with AI.
dsign · 1h ago
I don't think AI is having much impact on the parts of the economy that have to do with labor and consumption. Folks who are getting displaced by AI are, for now, probably being re-hired to fix AI's mess-ups later.
But if, or when, AI gets a little better, then we will start to see a much more pronounced impact. The thing competent AIs will do is super-charge the rate at which profits go to neither labor nor social security, and this time they will have a legit reason: "you really didn't use any humans to pave the roads that my autonomous trucks use. Why should I pay for medical expenses for the humans, and generally for the well-being of their pesky flesh? You want to shut down our digital CEO? You first need to break through our lines of (digital) lawyers and ChatGPT-dependent bought politicians."
snitzr · 1h ago
Billion-dollar Clippy.
PessimalDecimal · 1h ago
Trillions, right?
hackable_sand · 1h ago
What about food and housing? Why can't America invest in food and housing instead?
margalabargala · 55m ago
America has spent a century investing in food. We invested in food so hard we now have to pay farmers not to grow things, because otherwise the price crash would cause problems. Food in America is very cheap.
righthand · 14m ago
Is anyone starving in America? Why would there need to be a focus on food production? We have huge food commodities.
andsoitis · 1h ago
> Artificial intelligence has a few simple ingredients: computer chips, racks of servers in data centers, huge amounts of electricity, and networking and cooling systems that keep everything running without overheating.
What about the software? What about the data? What about the models?
amunozo · 2h ago
This is going to end badly, I am afraid.
m_ke · 2h ago
Could all pop today if GPT-5 doesn't benchmark-hack hard on some new made-up task.
falcor84 · 59m ago
I don't see how it would "all pop" - same as with the internet bubble, even if the massive valuations disappear, it seems clear to me that the technology is already massively disruptive and will continue growing its impact on the economy even if we never reach AGI.
m_ke · 32m ago
Exactly like the internet bubble. I've been working in Deep Learning since 2014 and am very bullish on the technology but the trillions of dollars required for the next round of scaling will not be there if GPT-5 is not on the exponential growth curve that sama has been painting for the last few years.
Just like the dot com bubble we'll need to wash out a ton of "unicorn" companies selling $1s for $0.50 before we see the long term gains.
mewpmewp2 · 1h ago
I don't expect GPT-5 to be anything special; it seems OpenAI hasn't been able to keep its lead. But even the current level of LLMs justifies the market valuations to me. Of course, I might eat my words about OpenAI being behind - we'll see.
apwell23 · 1h ago
> I don't expect GPT-5 to be anything special
because ?
Workaccount2 · 51m ago
Well, word on the street is that the OSS models released this week were Meta-style benchmaxxed, and their real-world performance is incredibly underwhelming.
input_sh · 1h ago
Because everything past GPT-3.5 has been pretty unremarkable? I doubt anyone in the world could tell the difference in a blind test between 4.0, 4o, 4.5, and 4.1.
falcor84 · 44m ago
I would absolutely take you up on a blind test between 4.0 and 4.5 - the improvement is significant.
And while I do want your money, we can just look at LMArena, which does blind testing to arrive at an Elo-based score: it shows 4.0 with a score of 1318 and 4.5 with 1438. That makes 4.5 roughly twice as likely to be judged better on an arbitrary prompt, and the difference is larger on coding and reasoning tasks.
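Under the standard Elo model, a 120-point gap works out to roughly 2:1 odds; a quick check:

    def elo_win_prob(r_a: float, r_b: float) -> float:
        """Expected probability that A is preferred over B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    p = elo_win_prob(1438, 1318)
    print(f"P(4.5 preferred) ~= {p:.3f}")  # ~0.666, i.e. about 2:1 odds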
apwell23 · 1h ago
> Doubt anyone in the world would be able to tell a difference in a blind test between 4.0, 4o, 4.5 and 4.1.
But this isn't 4.6, it's 5.
I can tell the difference between 3 and 4.
dwater · 40m ago
That's a very Spinal Tap argument for why it will be more than just an incremental improvement.
Shouldn’t the customers‘ revenue also rise if AI fulfills its productivity promises?
Seems like the only ones getting rich in this gold rush are the shovel sellers. Business as usual.
mewpmewp2 · 2h ago
If it's automation, it could also reduce the customers' costs. But that is a very complex question. It could be that there isn't enough competition in AI, so customers get only marginal gains while the AI company captures the most. It could also be that customers' revenue/profit gains will be delayed, since implementation takes time and may require upfront investment.
sofixa · 1h ago
> Shouldn’t the customers‘ revenue also rise if AI fulfills its productivity promises
Not necessarily, see the Jevons paradox.
thecupisblue · 1h ago
The biggest problem is the inability of corporate middle management to actually leverage GenAI.
bravetraveler · 1h ago
They mention the rate of adoption compared to the internet. Consider the barriers to entry: before we all got sick of receiving AOL CDs, the prospect of 'going online' was incredibly expensive and sometimes laborious.
More people subscribe to/play with a $20/m service than own/admin state-of-the-art machines?! Say it ain't so /s