For those around for the .com bust it does feel very similar. In both cases the tech is amazing and isn’t going away, but the business models of many/most companies “innovating” with the tech are simply unsustainable. A lot of “AI” currently looks like a dry forest waiting for lightning to strike and burn it to the ground. The latest round of PR puff from CEOs saying they’re doing layoffs because of AI (vs their poor performance or prior bad business decisions) is fueling the perception that the hype is a mile wide and a millimeter thick, just waiting for the moment when it all comes crashing down.
This is a longstanding, predictable pattern in tech. Most of these “AI companies” will go bust or become shells of their former selves and be sold off for parts. The tech will be commoditized and become pretty ubiquitous across the board, but not a profit center in its own right.
windexh8er · 9h ago
In most implementations of generative AI, as of today, the use case is a feature. People don't buy features, they buy products. If your company is built solely on this hot button, you'd better be sure you either have some IP backing it up, are building and own it (the models), or are the best mousetrap for your target market. Because I'm watching an entire industry segment all show up with slight iterations of the exact same thing, and those things are not great. They're unreliable and mostly mediocre.
nancyminusone · 8h ago
"Thank god <insert every service> added an AI chatbot to their site! It makes it much faster and easier to use!" - things no living soul has ever said
dinfinity · 7h ago
Actually, compared to the non-AI chatbots they are incredibly good.
Sharlin · 2h ago
Compared to being tortured to death, merely being waterboarded is incredibly good.
Yizahi · 6h ago
Considering that the only purpose of chat-bots is to bullshit customers and filter out as much as they can before redirecting to human support, then yeah - LLM chat-bots are good. Just not good for the customers. I recently had to call my telco and they had the temerity to demand that I have(!) to speak to the fucking bot. And then the bot declared it didn't understand me and hung up the call. I have no idea if that bot was neural network powered, but the only improvement an LLM bot may have achieved is to keep me in the hamster loop longer and delay transferring me to support even more.
sheepscreek · 7h ago
Yes. But those feature companies are not IPO’d and listed on the exchanges. That is the key distinction with the markets this time. Most public companies in this space are actually growing their bottom lines. I’m not talking about just Nvidia here. You can see this effect in the entire food-chain. Everyone is benefitting.
Sharlin · 2h ago
So… where is the money coming from? Where is the value added?
rented_mule · 3h ago
I have inside knowledge (that I'm not trading on) of multiple publicly listed companies with "every feature is AI powered or it doesn't ship" policies that are being applied to their entire product or major parts of their product. Those policies come from execs who are primarily informed by science fiction and FOMO, not by an understanding of the available tools.
AI is great when it's the right tool. But the rate at which features are shipping at these companies is plummeting because so many features would be cheaper, easier, and better without AI. It feels like self-gaslighting on a scale that I haven't seen since the dot com days. And I see engineers and their managers struggling to maintain their sanity as they try to improve their products within the constraints of these policies.
Workaccount2 · 9h ago
CEOs are doing layoffs because Elon dumped 80% of Twitter staff and it didn't collapse.
Those layoffs were a make or break moment for tech.
sealeck · 7h ago
> Elon dumped 80% of Twitter staff and it didn't collapse.
Have you looked at a graph of Twitter revenue and profit recently (hint: it is very low). Nobody has ever claimed that you cannot fire all the employees and keep Twitter online with a skeleton crew; the claim has always been that you can't do that _and remain a viable business_.
Workaccount2 · 7h ago
Twitter is now a private company and doesn't share financial numbers. The only insight we have is that the banks that were stuck with the debt from the purchase (the banks who loaned Elon money) were recently able to find buyers for that debt. That is not a sign of bad financial health or diminishing value.
sealeck · 7h ago
Have you considered why investors were bidding something like 60 cents on the dollar right until Trump's election? (Hint: because Elon has been using the US government to press companies into buying his ad product, despite the fact that it is materially worse than Google or Facebook's).
> The only insight we have
This is untrue. There's fairly credible reporting (e.g. in the FT) that Twitter's revenue numbers have bottomed and they are not making money. Of course, a lot of these problems are being disguised by the fact that Elon Musk has merged in xAI (and investors are very happy to pile money into AI without even the slightest due diligence).
ekianjo · 5h ago
credible reporting based on what exactly?
sealeck · 3h ago
Credible reporting based on what institutions the banks have tried to offload the Twitter debt to have said.
furyofantares · 7h ago
> Nobody has ever claimed that you cannot fire all the employees and keep Twitter online with a skeleton crew
Lots of people claimed that, it was a common claim, even on HN, which was odd to me because I felt it had been common for 10 years on HN to wonder what the heck these companies are doing with 80% of their employees.
It didn't "collapse" but it lost a ton of users and stopped being a public service where anyone could freely read most tweets. The load on Twitter's servers and the scope of its services have gone down dramatically, and it's childish to conclude "Twitter was overstaffed." Musk made Twitter into a much smaller service.
nancyminusone · 8h ago
I think you'll find Twitter to be completely dead to many more people than it was in 2021.
ponector · 1h ago
You can say the same about Facebook, but they only had small layoffs.
dreamcompiler · 9h ago
It didn't collapse in the sense of ceasing to exist. It did collapse as a place where decent human beings cared to gather, and it lost over 75% of its market value.
lenkite · 7h ago
> it lost over 75% of its market value.
No longer true. In March 2025, X Corp. was acquired by xAI, Musk's artificial intelligence company. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt.
Investor sentiment through secondary share trades still prices the company at $44 billion.
disgruntledphd2 · 7h ago
Well sure, but that was an inter-party transaction so basically valuations are irrelevant (unless they get sued, which probably won't happen unless the AI stuff collapses).
To be fair, the banks did offload the debt at par (two years later) so some people think X/Twitter is worth some money.
graemep · 8h ago
> Elon dumped 80% of Twitter staff and it didn't collapse.
That was because Twitter was massively overstaffed. Are all these other businesses also overstaffed? The result of cheap money? Are they all atu
bentt · 8h ago
They are all overstaffed because they were financially incentivized to overstaff, both for valuation and defensive talent hoarding reasons. This was a positive feedback loop which unwound in tech and will unwind in AI too.
c16 · 8h ago
Definitely feels like since AI there's been a shift and it's gone from total head count as a measure of success to revenue per employee as a measure of success. The fewer employees the better.
surgical_fire · 4h ago
> The fewer employees the better.
This has always been true for any mature companies.
The conundrum is that tech companies in particular are really averse to being perceived as mature companies, because then their valuations would have to be more grounded in reality.
bentt · 5h ago
Probably because they need to burn all their investment money on compute.
moomin · 8h ago
The thing is, you need those staff to implement expansion plans. Twitter, famously, didn’t make the kind of money people expected a site with that many loyal users to make. Give up on your ambitions and you can lose people. Meta did the same thing.
FirmwareBurner · 8h ago
>That was because Twitter was massively overstaffed.
What's the proof it was the only tech company being overstaffed?
For example Meta doubled its headcount during the pandemic without any increase in market share or new products. How do you explain that not being overstaffed?
xnx · 8h ago
> For example Meta doubled its headcount during the pandemic without any increase in market share or new products.
Meta probably overhired, but if the overall market grows immensely due to even more commerce moving online because of the pandemic, it would make sense to hire a lot more people even without a change in market share or new products.
graemep · 8h ago
I think it very likely a lot of big tech companies are overstaffed. However a lot of companies in many industries are cutting back.
FirmwareBurner · 8h ago
That's what I said.
gatinsama · 8h ago
Exactly this. Everyone else is doing what everyone else does. There's no direct relationship between AI adoption and layoffs, except for inflated board/CEO expectations.
moomin · 8h ago
He definitely started a trend, but the takeaway was a mess. The company makes less money, the technology/UX is objectively getting worse. Yes, some parts of the audience have proven sticky, but I don’t think that was ever a huge revelation. It’s always been possible to take some seeds off the bun.
apercu · 8h ago
I don’t think most (non social companies) were as overstaffed as twitter because most companies have valuations based on revenue/profit rather than “it’s internet/social so 1000x”. Many “traditional” companies have been running leaner since the 90s and can’t function at all with even a 25% headcount reduction.
surgical_fire · 4h ago
> CEOs are doing layoffs because Elon dumped 80% of Twitter staff and it didn't collapse.
Anyone that ever worked in a sufficiently large tech company knows that it can keep the lights on with a skeleton crew. It doesn't mean the results will be good or desirable, but it can be done.
Over time cracks will start to show. New features are seldom delivered, and when they are, they're half-assed. Bugs will take a long time to fix or just become part of the landscape. That sort of thing.
Twitter is a special case in that Musk did not buy it to make it into a profitable company, or to make it more valuable, or to improve it in any way. He just bought it because he wanted to rid it of the "woke mind virus" (and in fact tried to get out of the deal and was sued into actually buying it).
CEOs are doing layoffs for a variety of reasons, but mostly to improve profits by reducing costs. These cycles happen, and what it actually points to is that those companies are not seeing a viable path for growth at the moment that justifies their valuation. Layoffs with some vague gestures towards AI is a good smokescreen for the time being.
Traubenfuchs · 8h ago
What were all those people doing all day long?
nyarlathotep_ · 4h ago
I'm the first to attack "useless busywork" employees, but my experience at a large, uh, software firm at this time was there were loads of "promo doc"/pet projects going on.
There were plenty of people working very hard on things that had no path to profitability/utility (beyond the attempt to get higher-ups promoted for "showing impact" or "ownership") or whatever. This was paired with incentives at this time for managers to have more underlings.
I have no idea of the ratio of this to "day in the life" TikToks, but there were lots of people working on things that had no utility or time horizon to be "valuable" in the business sense.
kalleboo · 7h ago
Twitter was a whole media company. They had staff in every country writing news summaries for trending hash tags, etc.
FirmwareBurner · 8h ago
Search on youtube "A day in the life of a Twitter/$FAANG employee".
junga · 7h ago
Did so. According to Josh and Katie most of the day was spent eating.
carlosjobim · 4h ago
Planning their holidays, their kids' birthday parties and who's going to pick up the ice cream for the barbecue with the aunts and uncles, planning the kitchen renovation or house renovation, negotiating with the realtor for their investment property they are buying to rent out on AirBnB, buying cool stuff on Amazon and buying some cute clothes on Temu, planning a fishing trip or skiing trip, raging on social media against fascists who are oppressing them, arguing with lawyers over inheritance, complaining about their boss, thinking about what they're going to have for lunch.
Only complete idiots actually work for a living in this day and age. A job is first and foremost a tool to get approved for a nice/nicer mortgage so that you can focus on your real career which is getting rich from real estate appreciation. Secondly a job is a tool for getting exposed to the right people so you can get an even better job and thus an even nicer mortgage. You can't waste your time at the office by actually working.
nyarlathotep_ · 3h ago
This is a juvenile take that seems brash but ironically I can't disagree with at this point in my life. Well played.
mock-possum · 8h ago
Are you still using twitter?
exe34 · 9h ago
Twitter produced mechahitler and didn't collapse. I don't like this timeline.
weinzierl · 9h ago
"For those around for the .com bust it does feel very similar."
I was around for the .com boom and it feels very different.
I experienced the boom as exuberant, without limits; the current situation is much more nuanced.
apercu · 8h ago
I was around as well, and while tech financing is more sophisticated and mainstream, it feels like a similar cliff in regard to valuations - what are some of the nuances you see that separate these two time periods?
noosphr · 8h ago
My grandmother wasn't using altavista in 2000. She is using chatgpt in 2025.
arevno · 8h ago
I was also around, and I concur.
NVDA has a P/E of 55, which is definitely elevated, but nowhere near the 230+ that CSCO had at that time. To say nothing of SUNW.
The big AI labs are definitely losing money, but they're doing it on the back of tens of (rapidly growing) billions of dollars in ARR, versus the dot com e-commerce and portal flameouts who would go public on (maybe) a million in revenue, at best.
We also have large AI teams at FAANG who are being funded directly by the fat margins of these companies, whose funding is not dependent on the whims of VC, PE or public markets.
These times are not really comparable.
potatototoo99 · 7h ago
Tesla's P/E is at 177.04; a lot of other AI companies are private, so we can't really say.
kasey_junk · 7h ago
Tesla's P/E imbalance long predates the AI cycle, so it is not relevant.
AIPedant · 7h ago
I don't think Mark Zuckerberg salivating about data centers bigger than Manhattan is "nuanced." People gleefully predicting a 30% increase in national energy consumption strikes me as pretty darn exuberant.
pjmlp · 8h ago
I did as well, and then we had a few layoff rounds after having "positive" results when the VC money dried out, and those that stayed, like myself, had several months of delayed salaries.
Applejinx · 8h ago
Maybe you were in the handbasket for the .com boom, and more of an outsider this time around?
weinzierl · 8h ago
I am certainly much older now;-)
So my question to the youngsters in the handbasket:
Do you feel pure and completely untroubled for being part of something big that is certainly not going away anymore? Do you look into your future and see bright skies without the slightest hint of a cloud?
chrisweekly · 7h ago
The .com bubble casualties were companies acting like "the internet means we don't need a viable business model". I see echoes of that in today's "AI means we don't need (m)any human experts anymore, now everyone is a 10x engineer".
Unless it's leveraged by skilled experts, AI-generated code is the payday loan / high-interest credit card of tech debt.
Sharlin · 2h ago
"The AI means we don't need a viable business model" seems to describe the current reality quite closely.
pfisherman · 7h ago
I was around for the dot com boom and bust, and this does not feel similar. The issue with the internet was that much of the value came from network effects that were not there in the late 90s and early aughts when personal computing was desktop boxes with 56k dial up connections. Very much, “if you build it, they will come.” It was the mass rollout of cable modems and then smart phones that changed the math.
There is no cart before the horse here. AI is coming for you, not the other way around. The pessimistic takes are underestimating the impact by at least a couple orders of magnitude. Think smart phones as a lower bound.
I have no idea what capacity people here work with AI, but given my view and experience the pessimistic takes I commonly see on here do not seem realistic.
ZephyrBlu · 7h ago
Smartphones as a lower bound is crazy
pfisherman · 6h ago
Karpathy’s talk about computing 3.0 was spot on. Look at what is going on with pydantic and langchain. “LLM programming” is about to be a thing.
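For anyone wondering what that looks like in practice, here is a minimal sketch (my illustration, not anything from the talk): you declare the shape of the answer as a pydantic v2 model and validate whatever text the model hands back. The Invoice schema and the JSON string below are made up for the example; nothing here calls a real model.

    from pydantic import BaseModel, ValidationError  # pydantic v2

    class Invoice(BaseModel):
        vendor: str
        total: float
        currency: str

    # Stand-in for text an LLM returned after being asked to read an invoice email.
    llm_output = '{"vendor": "Acme Corp", "total": 1249.99, "currency": "USD"}'

    try:
        invoice = Invoice.model_validate_json(llm_output)
        print(invoice.vendor, invoice.total)  # typed fields, safe to pass downstream
    except ValidationError as err:
        # Malformed or hallucinated output fails loudly instead of flowing into the pipeline.
        print("LLM output rejected:", err)

The point isn't the parsing itself; it's that the schema becomes the contract between classic code and the model, which is roughly what the pydantic/langchain ecosystems are building around.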
Yizahi · 5h ago
Wait, there was computing 2.0? Damn, I missed whole revolution again...
danaris · 7h ago
I would say exactly the opposite, frankly.
With the internet, there was a clear value proposition for the vast majority of use cases. Even if some of the specific businesses were poorly-conceived or overly optimistic, the underlying technology was very obviously a) growing organically, b) going to be something everyone used & wanted, and c) a commodity.
All three of those parts are vital for a massive boom like that.
Generative AI is growing some, yes, but a lot of the growth is being pushed by the companies creating or otherwise massively invested in gen-AI. And yes, many people try out ChatGPT's webapp, but that's mostly a gimmick—and frankly, many of the cases where people are attempting to use it for more are fairly awful cautionary tales (eg, the people trying to use it as a therapist, and instead getting a cheerleader that confirms their worst impulses).
Gen-AI may be useful to some people, but it's not going to be a central feature of most people's lives—at least not in the forms it exists in today, or what can be clearly extrapolated from them. Yes, it can help some with coding—with mixed results—but not everyone's a programmer. Not everyone's even an office worker. The internet has obvious useful applications for a plumber or a lawyer; if I hired one of those and they said they were using generative AI to help them in their work, I'd fire them instantly. There are already a bunch of (both amusing and harrowing) stories of lawyers getting reamed out in court for using gen-AI to help them write legal filings.
OpenAI may or may not have a robust moat—I've seen people arguing both ways; personally I lean slightly toward the "not" side—but generative AI as a whole is not something that's an interchangeable commodity the way internet access, or even hosting, is. First of all, in order to use the models that are touted as being advanced enough to actually look like more than spicy autocorrect, you need a serious GPU farm. Second of all, AFAIK, those models are being kept private by the big players like Google and OpenAI. That means that if you build your business on generative AI, unless you're able to both fork out for a massive hardware investment and do your own training to match what the big boys are already doing, you're going to be 100% dependent on another specific for-profit company for your entire business model. That's not a sound business decision, especially during this time where both the technology and the legal aspect of generative AI are still so much in flux.
Generative AI may be here to stay, but it's not going to take over the world the way the internet did.
arbitrary_name · 2h ago
I know lawyers using it, plumbers using it. My sweet little ol grandma uses it. I depend on it as an office worker.
My belief is that it is as transformative as the Internet. The bubble will burst, some use cases will never materialize, others will emerge as costs come down and as value chains adapt.
Your belief is opposite to mine. Time will tell who is right.
bwfan123 · 6h ago
- internet & ecommerce & online changed the way we shopped.
- smartphones in every pocket changed behaviors around entertainment, communication, and commerce
what behaviors of people will gen-ai change? Perhaps the way we learn (instead of google, we head over to a chatbot), perhaps coding... all up in the air, and unclear at the moment.
pfisherman · 7h ago
Hindsight is 20/20. The company I worked at went under because people questioned whether enough people would ever buy stuff over the internet to make the business viable. It was very much not obvious then.
> Gen-AI may be useful to some people, but it's not going to be a central feature of most people's lives—at least not in the forms it exists in today, or what can be clearly extrapolated from them…
The problem is that you are going to have to compete with people who are using AI. There is a learning curve, and some people are better at using it than others. Some people know how to use it really well.
surgical_fire · 4h ago
> The problem is that you are going to have to compete with people who are using AI.
This doesn't sound as much of a game changer as you seem to think.
Generative AI is a productivity tool that can help (to an extent) in certain professional settings. Its usage will never be as ubiquitous as the internet (unless you want to build an economy out of users generating memes using AI).
While I find it somewhat useful (although pretty far off from how hyped it is), the economics of it are still super unclear. Right now companies are willing to dump money into this because it is fashionable with the investor class, but I don't know how long it will take for it to lose steam.
dgfitz · 4h ago
> There is a learning curve, and some people are better at using it than others. Some people know how to use it really well.
It's just google-fu 2.0. It isn't hard at all. There isn't really a trick to it. I daresay learning google-fu was harder.
npalli · 7h ago
>For those around for the .com bust it does feel very similar.
No, it doesn't. I was around and we didn't have an entire GenX gang warning about the .com crash everywhere, every time. Compared to nonsense metrics like eyeballs, this time we have real revenue and the biggest companies are tech. It might end in some crash but nothing like the .com one.
zcw100 · 4h ago
Some of the most valuable companies in the world right now are remnants of the dot com crash: Facebook (Meta), Google, Microsoft, Oracle, etc. By this logic the "AI companies" will go bust and coalesce into a few winners who will go on to become the world's first multi-trillion-dollar companies and dominate the economic landscape for the next couple of decades.
xnx · 8h ago
It's interesting how both of those periods have their tech stock flagship. Dotcom: Cisco. AI: Nvidia.
chii · 8h ago
On the other hand, it's easier to "copy" telecommunications equipment than state of the art chips. Not saying there won't be competition to Nvidia's dominance, but so far, not a peep from anyone (realistically, that is).
xnx · 7h ago
> Not saying there won't be competition to Nvidia's dominance, but so far, not a peep from anyone (realistically, that is).
It's so surprising. So much money at stake and there is zero competition for hardware purchase. Google's Tensor chips are excellent, but can only be rented.
esafak · 8h ago
I think Cisco imploded because people moved to the cloud. But even cloud providers are stuck with nvidia; they have a software moat.
Curiously, nvidia's P/E ratio is lower than it was two years ago!
high_na_euv · 7h ago
Cloud is a 2010+ thing.
4b11b4 · 6h ago
I keep coming back to this thought that the availability of computation itself sets the stage for speculation... or rather, widely available and cheap computation does.
ParanoidShroom · 8h ago
Do you think FANG companies inflate AI on purpose in order to create a scenario where a bust happens? They can survive it, given their vast war chests of cash.
Spooky23 · 7h ago
It’s one of those scenarios where the high level value prop is obvious and compelling, just like the dotcom bubble.
80% of the hype is about 20% of the bullshit. And the bullshit attracts 80% of the dollars. The current cohort of leaders are Jedi at separating sovereign wealth and markets from their treasure.
jstummbillig · 8h ago
Disagree, because the very few very big AI players are (in contrast to the 90s) very solid. Yes, there is a breadth of absolute bullshit built on top of current AI, but even if it were only ChatGPT-alikes and LLMs for coding from here on out, that in itself is enough real value. It requires very little imagination and a lot of implementation, and there's more demand than can easily be satisfied right now for both.
dontlaugh · 8h ago
You can’t be serious, surely?
Most of the LLM applications are either entirely useless or trivially reproduced with much simpler free models (or even entirely non-“AI” methods).
petesergeant · 8h ago
I'm an AI enthusiast, and it's not clear to me that selling inference on a proprietary model is a winning business model, which is what Anthropic and OpenAI are doing. The open models are good enough today for many things, and are likely to only get better. Feels like inference is a commodity, and it's not clear how much money there is in it.
I would love to know how much of the inference I pay for is being paid for by VC cash: I suspect a lot of it.
jstummbillig · 8h ago
To be fair, though, the "open" models are fairly sketchy as well, from a business standpoint, because somebody is paying for a lot of GPUs and expensive talent. It's probably the least obviously sustainable open source product of all time, and it's not at all clear to me why that would change going forward.
Right now there seem to be roughly two paths, when it comes to frontier-level LLMs: Meta just not giving a fuck, spending instagram money and pretending it's a business, and whatever Deepseek is doing that might make it both good and also super cost-effective (and there it's even less clear, how much of it is real and what the actual costs are).
xnx · 8h ago
> I would love to know how much of the inference I pay for is being paid for by VC cash: I suspect a lot of it.
Definitely a lot of VC subsidies for OpenAI and Anthropic, none for Google.
petesergeant · 6h ago
Sure, that doesn’t mean we’re not getting subsidised tokens from Google though
optimlayer · 7h ago
yeah, most of it
digitcatphd · 8h ago
If I’m not mistaken they are using high valuations of top companies to conclude AI is overhyped?
Sorry, but weren’t these valuations escalated because of low interest rates and quantitative easing? Perhaps combined with increased concentration in Top 10 by investors navigating uncertainty?
Typical BS coming from a mega fund supported only by management fees. Not saying AI isn't hyped, but this is laughable.
thunky · 8h ago
Exactly. Their chart even shows that these companies were more overvalued in 2020, before the AI "bubble" even started.
sigmoid10 · 8h ago
Good rule of thumb: When everybody talks about bubbles while rates are going down, it's a good time to invest. When everybody's talking about investing and rates are going up, it's a good time to drop out. Right now we are in the former timeframe. As long as cash remains cheap, there is no good reason from a financial market perspective for this to not go on. Is it sustainable indefinitely? No. But almost nothing in our current economy is. AI nowadays just generates easy clicks for opinion pieces like this looking at a single data point. That doesn't mean there is any reason to act on it or even just to read too much into it.
jimbokun · 8h ago
Better rule of thumb: have an automated investment strategy that takes a set percentage of your income every paycheck and invests it, regardless of current rates or what anyone's saying.
sigmoid10 · 8h ago
Note that this applies to vanilla investing, like index funds. You can easily automate that if you want. If you're really just looking for modest stable yields, you may as well invest in bonds right now. The US GOV 12 month is at >4%. With inflation significantly below, that's like free money (if you've been an adult in the 2010s you'll know what I mean). But don't expect to make a lot of money in less than a generational timeframe either way.
jimbokun · 8h ago
To make A LOT of money you probably need to start a successful business.
Looking to take on more risk in equity investments is just as likely to end up with you going broke as it is to get outsize returns.
sigmoid10 · 8h ago
>Looking to take on more risk in equity investments is just as likely to end up with you going broke as it is to get outsize returns.
You should look at the chances of your business becoming that successful. They are equally slim. And you have a lot more personal exposure if your business fails vs. if one fails that you only invested your money in and not your time and health.
bdangubic · 8h ago
this can (will, given enough years) get you rich but it won't get you wealthy :)
BenGosub · 8h ago
I think that at the moment the One Big Beautiful Bill ensures that the spending spree will continue and the world will stay afloat with cheap money so I would assume that we are about to see the last part of the bubble. But, I wouldn't bet on my assumptions.
lucidone · 8h ago
I find AI useful, I use it most days to write snippets of code or to rubber duck with. It hasn't changed my workflows that much, just replaced Stackoverflow with ChatGPT. Feels like the sweet spot for me, everything else is noise.
variadix · 1h ago
For the majority of the questions asked and answered on StackOverflow, LLMs are undeniably better. I’m thinking of things like ‘How do I do X in Python’, ‘How do I do X on Linux’, etc., questions that are small in scope, not open ended, and can be easily verified. For everything else LLMs range from rarely useful to outright misleading and counterproductive.
LLMs also don’t really enable coincidental discovery like search engines do. Having to RTFM or read a spec or a book or a blog post to figure out the answer to your question sometimes also teaches you about related and important concepts that you wouldn’t have come across otherwise, and usually there will be suggestions for further reading or a side bar with other interesting topics etc. Completely replacing search feels a bit like a trap, where what you get in immediate answers you lose in an unseen opportunity cost.
barbazoo · 7h ago
Chat is the obvious application, but the real value imho is using an LLM to non-deterministically bridge a gap you couldn’t bridge deterministically before. Entity extraction, for example, allows us to connect two workflows that often required a human in the loop. Not anymore. I see this everywhere in our SaaS product.
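To make that concrete, here's a minimal sketch of entity extraction as workflow glue (an illustration under assumed names, not our actual product code): free text goes in, a structured record comes out, and the next system consumes it without a human re-keying anything. The call_llm function is a stub standing in for a real model call so the example runs as-is.

    import json

    PROMPT = (
        "Extract the customer name, order id, and requested action from the message "
        'below. Reply with JSON only, using the keys "customer", "order_id", "action".'
        "\n\nMessage:\n{message}"
    )

    def call_llm(prompt: str) -> str:
        # Stub standing in for a request to whatever model provider you use.
        return '{"customer": "Jane Doe", "order_id": "A-1042", "action": "refund"}'

    def extract_entities(message: str) -> dict:
        raw = call_llm(PROMPT.format(message=message))
        return json.loads(raw)  # in real use: validate, retry, or route to a human on failure

    ticket = extract_entities("Hi, Jane Doe here. Please refund order A-1042.")
    print(ticket["order_id"], ticket["action"])  # feeds the next workflow step directly

Before, a person read the message and typed those fields into the other system; now the handoff is a function call, with a human only needed when validation fails.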
jimbokun · 8h ago
Yep, replaces Google, Stackoverflow and autocomplete for coding with a much superior experience.
But anyone taking a vibe coded project with no human understanding of the code produced and puts it straight into production is going to have a bad time.
bwfan123 · 6h ago
same,
- a better summarizing google for some queries
- snippet generator
but has not changed any workflows.
EZ-E · 9h ago
The LLM/AI tech has clear use cases and benefits. However, no, I do not need a shoehorned, dedicated AI in every single product and service I use. That is where the bubble is, in my opinion: everywhere AI is built or applied in cases where it does not work or does not make sense.
vincefutr23 · 9h ago
A single chart can be found to support just about any conclusion
gww · 9h ago
Reminds me of one of my favorite Simpsons lines: "Aw, you can come up with statistics to prove anything, Kent. Forty percent of all people know that"
jrmg · 8h ago
I’m partial to Homer’s “Facts are meaningless. You can use facts to prove anything that's even remotely true!”
mvcalder · 8h ago
“They say sixty-five percent of all statistics Are made up right there on the spot”
Also, the chart doesn't take into account that the biggest companies have more power and are bigger right now, and that this isn't inherent to their use of AI. If not AI, it would be something else. Shares and revenue are growing, and people are getting fired. They will not collapse.
jvanderbot · 9h ago
The conclusion I drew was "The level of value inequality in the S&P 500 is higher than before".
From that, any number of conclusions are possible, including perhaps:
* The level of innovation at those companies is high. Certainly the 90s tech booms were actually very innovative and profitable.
HPsquared · 9h ago
It's a monetary phenomenon. The economy as a whole is very bubbly and frothy.
sjw987 · 8h ago
We've been in a bubble ever since people started believing "data is the new oil".
Data has only driven advertising, and it's done it in such a botched way that it's tearing down the whole discipline of advertising. These companies know all the little tidbits of information about all of us that they need to put the right products directly in front of our eyes multiple times per day, and they still get it wrong.
Advert engagement goes down, people who use advertising realise their budgets are being wasted on the wrong audience and the whole thing will pop. It was naïve to ever believe that data really means anything. At a certain scale it just becomes loads of noise.
ghc · 7h ago
> Data has only driven advertising
This is not remotely true. I mean it's so incredibly not true I wonder how you came to believe this.
Haven't you ever heard of how hedge funds pay for cellular data to understand retail store traffic, or how satellite photos help them estimate the fullness of gas tanks at ports to predict pricing?
Or how data about predicted electrical pricing based on usage helps factories schedule energy-intensive production during times of low pricing?
Or how aircraft maintenance companies like AAR rely on "big data" to position replacement parts in a globally distributed system of warehouses to reduce the time it takes to repair aircraft (their contracts are based on airline uptime), thereby reducing passenger delays due to mechanical issues?
Or how farms use weather and satellite data to deal with droughts, identify areas to spray, and estimate competitor yields for the purposes of planning?
Or how governments now conduct surveillance of pathogens and drug use through sewer water data?
Or how semiconductor companies use massive amounts of data collected from production line sensors to massively increase yields and reduce chip prices, despite the complexity of chip production having increased massively?
You benefit directly or indirectly from companies using data all the time.
sjw987 · 7h ago
Those are good points.
I got a bit carried away in my original statement and undersold data a little bit. I think the point of the statement at the time ("data is the new oil" as an article in The Economist) was mostly hinging on data for use in digital advertising, but I didn't provide any of that context in my original post, and I was mostly considering user data.
bwfan123 · 6h ago
Similar to dot-com, part of the reason is the multiplier effect of all the AI investments. If these investments prove to be uneconomic, which I strongly suspect, the backend of this investment cycle is going to be brutal.
amelius · 8h ago
Yes, we need the economic equivalent of anti-foaming agents.
khurs · 7h ago
It's the usual HypeCycle[1] and most of the players playing know this.
Not even close. Dotcom bubble was massive. Mom and pops were leveraging into tech stocks. I don’t see anything like that today. Is your 75-yo aunt bragging about how she bought Nvidia options? People who lost everything in dotcom and lost it all again during the financial crisis have become PERMANENTLY risk-averse. These are a majority of retired boomers which makes them even more risk-averse because they’re now retired. Dotcom equivalent would be if S&P more than doubles from here.
ghc · 7h ago
I think it's unlikely the next bubble will involve the stock market. I mean the last bubble (real estate) didn't either. It can still be a bubble even if it's mostly VC money going into it, because more companies, endowments, pension funds, and ETFs than ever are exposed through VC. I don't know what the "total VC money invested" graph looks like right now, but even if investment stays constant, the lack of exits would still cumulatively result in a bubble-like inflation over time.
Jensson · 5h ago
Bubbles happen because they haven't happened before; people know their history and don't repeat the same bubble. So just because there hasn't been a catastrophic stock market bubble in the USA before doesn't mean it can't happen; it has happened in other countries and those stocks didn't recover.
Bubbles look very impressive until they pop, and most fall for them; that is why they are bubbles.
lvl155 · 4h ago
These “players” (but I like to think of them as scammers since they add zero economic value) have been playing this game between tech stocks and crypto for 5+ years now. It’s so blatantly obvious but we have no regulators and banks don’t give a damn since they’re making money.
torginus · 8h ago
I do wonder if investors have a game-plan of what happens (to their investments, not society), based on future AI trajectories - where it becomes superintelligent, where it tapers off at the level where it's useful but still needs to be babied, or where it can genuinely replace some people, but it's clearly not superhuman.
In all these cases, it's very likely no AI shop is going to have a monopoly on the tech, and cartels are not very likely, considering China (and maybe Europe) is in the game as well.
In a gold rush, sell shovels, and the company with a monopoly on shovels is Nvidia.
brookst · 8h ago
Investors are just people. They don’t know what superintelligence would mean any more than you or I do. Some will guess right, some will guess wrong.
ghc · 7h ago
I wish more people would understand this. VC is just professional guessing. But you're not guessing impacts, you're guessing future value of company stock compared to its value at time of offer. Some people are really, really, really good at it. That doesn't make them any more qualified to assess the future impact of a new technology than a university PR office.
aiforecastthway · 8h ago
A lot of the comments here are talking about startups. But the chart in the article is the forward P/E of the top 10 companies in the S&P.
For reference, those 10 companies are: Nvidia, Microsoft, Apple, Amazon, Meta, Broadcom, Alphabet (Class A), Alphabet again (Class C), Tesla, Berkshire.
This isn't a pets.com situation.
These companies are ENORMOUS cash engines with incredibly well-proven moats operating in an extremely monopoly-friendly political climate. Nothing like this existed in the 90s. Microsoft came closest, but back then anti-trust still had some teeth.
The author makes a comparison between these companies and the rest of corporate America, arguing (implicitly) that the forward P/E of these ten symbols is too high relative to the rest of the S&P 500 index.
So let's look at the flip side. Many of the other companies in the S&P are vulnerable to these exact players' moats and pricing power. It's a zero-sum game and the winner is clear, so of course the winner's P/E looks really high compared to the expected loser.
Every single one of them has an AWS bill. Every single one of them has a big Windows/Office install base. Every single one of them probably has a huge apple install base. Every single one of them needs to pay to play in the App Store.
And many of them are also in the unenviable position of being on the losing side of an unfair competition in their actual core business. Walmart/HD/Coca-Cola vs Amazon. IBM/Oracle vs AWS. Or other complicated market dynamics that pose only upside to the big guys and potential downside to the rest (Biotechs vs Amazon Pharmacy).
The remainders are competing margins away from one another and are vulnerable to disruption by mid-market non-S&P players (or similarly sized companies that just aren't on the public markets -- see the huge size of private capital relative to the 90s). Some also face significant tariff risk. Think banks, consumer goods.
What percent of the difference in P/Es between the best and the rest is justifiable on the thesis that we are entering a multi-decade period of (1) tech feudalism and (2) unpredictable populist fits that wreak havoc on everyone except the tippy top of the echelon, who can blow enough cash to control the narrative?
orangebread · 9h ago
I don't think there's just one bubble. There's a meta-bubble and the normie-bubble.
If you're a CEO of a giant AI corp you're currently racing for superintelligence (meta-bubble).
The rest of us apes are flinging AI slop at each other until we've saturated each other in AI slop.
I don't really know what will happen, just offering my observation (ape noises)
gatinsama · 8h ago
CEOs know (or don't care) that they won't reach superintelligence. The reason they are where they are is that they are good at saying what they need to get the next round of funding.
NoGravitas · 4h ago
CEOs know that superintelligence is the only way their unbelievable investments will pay off, and they probably also know that they won't (if they don't know, they certainly don't care). But what's important is convincing investors that a mature industry (IT) is still in an exponential growth period, and they are absolutely willing to hype anything that can do this, up until the point it can't. Thus blockchain and all its applications, now generative AI.
klabb3 · 7h ago
Yes. As well as other hype-men and useful idiots. People are extremely naive when it comes to vested interest talking points. There’s a very natural reason why these guys want to keep talking about AGI and ASI: ”soon” is the magic word that makes investors feel fomo and make rash decisions.
During peak crypto madness vagueposting was an extremely effective market manipulation tool. I know people who made a lot of money on unconfirmed rumors in hours but of course it was just zero-sum gambling - the ”early adopters” made their money at the expense of the latecomers. No value was generated.
People don’t even need to be convinced that AGI/ASI is near, just ”but what if there’s a chance?”. It’s similar psychological tricks as selling lottery tickets.
HPsquared · 9h ago
There's also the monetary "everything bubble".
dcchambers · 9h ago
With the amount of money being tossed around I am convinced this is going to be 10x worse than the dotcom bubble when it pops. And it will pop. You simply can't have pre-product companies valued at 10s of billions of dollars and expect a good outcome.
gooseus · 9h ago
Everyone knows it's mostly bullshit, but that _someone_ is going to end up coming out as the Amazon-level winner.
Every single one of these "fake it till you make it" AI CEO/Founders are betting they are the Amazon.com and not the Pets.com... but if they are the Pets.com then what is the downside?
The CEO of pets.com certainly didn't end up out on the streets for being the biggest disaster of one of the largest bubbles and effectively burning billions of investor dollars (including institutions investing pensions and retirement funds).
jimbokun · 8h ago
Looks like she ended up with a net worth over $20 million so I guess you're right:
The “talent wars” and VCs with a really nasty case of FOMO clouding investment fundamentals are throwing more dry kindling on the pile. For anyone that’s been around a while, we’ve seen this movie before.
roflcopter69 · 9h ago
I have no problem with the amount of money that is dumped into AI, but I'm annoyed by the false promises. People tell me that Claude Code has no problem implementing clearly defined little feature requests, but when I let it tackle this one here https://github.com/JaneySprings/DotRush/issues/89 (add inlay hints for a C# VSCode extension) it kept on failing and never made it work. Even with me guiding it as well as I can. And I tried for a good 4 hours. So yeah, there's still a way to go for AI. Right now, it's not as good as the amount of money dumped in would make you believe, but I'm willing to believe that this can change.
tines · 8h ago
Cue the "you're doing it wrong" crowd.
roflcopter69 · 8h ago
Yup. The frustrating thing is that I already read tons of material on how to "hold it right", for example [Agentic Engineering in Action with Mitchell Hashimoto](https://www.youtube.com/watch?v=XyQ4ZTS5dGw) and other stuff, but in my personal experience it just does not work. Maybe the things I want to work on are too niche? But to be fair, that example from Mitchell Hashimoto is working with zig, which is for LLM standards very niche so I dunno man.
Really, someone, just show me how you vibecode that seemingly simple feature https://github.com/JaneySprings/DotRush/issues/89 without having some deep knowledge of the codebase. As of now, I don't believe this works.
ghc · 7h ago
I think it really, really depends on the language. I haven't been able to make it work at all for Haskell (it's more likely to generate bullshit tests or remove features than actually solve a problem), but for Python I've been able to have it make a whole (working!) graph database backup program just by giving it an api spec and some instructions like, "only use built in python libraries".
The weirdest part about that is Haskell should be way easier due to the compiler feedback and strong static typing.
What I fear most is that it will have a chilling effect on language diversity: instead of choosing the best language for the job, companies might mandate languages that are known to work well with LLMs. That might mean typescript and python become even more dominant :(.
roflcopter69 · 6h ago
(user name checks out, nice)
I share similar feelings. I don't want to shit on Python and JS/TS. Those are languages that get stuff done, but they are a local optimum at best. I don't want the whole field to get stuck with what we have today. There surely is a place for a new programming language that will be so much better that we will scratch our heads over why we ever stuck with what we have today. But when LLMs work "good enough", why even invent a new programming language? And even if that awesome language exists today, why adopt it then? It's frustrating to think about. Even language tooling like static analyzers and linters might get less love now. Although I'm cautiously optimistic, as these tools can feed into LLMs and thus improve how they work. So at least there is an incentive.
s-lambert · 7h ago
>that example from Mitchell Hashimoto is working with zig
While Ghostty is mostly in Zig, the example Mitchell Hashimoto is using there is the Swift code in Ghostty. He has said on Twitter that he's had good success with Swift for LLMs but it's not as good with Zig.
I think it doesn't work as well with Zig because there's more recent breaking changes not in the training dataset, it still sort of works but you need to clean up after it.
roflcopter69 · 6h ago
Thanks for pointing that out. And yeah, with how Zig is evolving over time, it's a tough task for LLMs. But one would imagine that it should be no problem giving the LLM access to the Zig docs and it will figure out things on its own. But I'm not seeing such stories, maybe I have to keep looking.
bwfan123 · 6h ago
> Cue the "you're doing it wrong" crowd.
or the "humans make mistakes too" crowd
or the "just wait, we are at an inflection point in the sigmoid curve" crowd
jimbokun · 8h ago
Even conceding the "doing it wrong" point, it demonstrates that these tools will require a massive amount of training or retraining to get the desired results. Which means don't lay off your current coders anytime soon.
rayiner · 8h ago
I don’t think AI is overhyped—am I missing something? I remember being skeptical of Dropbox and SpaceX but LLMs seem genuinely revolutionary. Yeah, it’s not “AI” as we understand it from the movies. But it can write papers better than a college freshman. That’s amazing.
Velorivox · 6h ago
In this very thread people are discussing “superintelligence” being around the corner. So yes, it is overhyped. Like if I took the invention of a steam engine and said teleportation is coming tomorrow.
Of course the steam engine was revolutionary. That doesn’t excuse or legitimize the nonsense.
Jensson · 5h ago
> I remember being skeptical of Dropbox and SpaceX but LLMs seem genuinely revolutionary
Dropbox or SpaceX wasn't valued at many trillions of dollars, though. Just because it's very useful doesn't mean it lives up to the biggest hype ever in human history in terms of monetary investment.
DamnInteresting · 6h ago
Well, most college freshmen aren't plagiarizing and inserting random falsehoods, while consuming an excess of electricity all at an artificially low cost.
ffsm8 · 5h ago
It's overhyped insofar as you're comparing the current valuations of the AI companies with the value they actually produce at the end of the day.
AI is here to stay, and long term, it's likely going to revolutionize almost all parts of the job market.
But to get there... I'm really not sure what a reasonable time estimate would be. I can see it taking something like 3 years, which would make current valuations plausible, but I wouldn't bet on it.
I'd bet on it taking a tad longer, but I strongly suspect that within 10-20 years we'll get there.
Under that time horizon, it feels overvalued and hyped, because the winners of this revolution might not even have been founded yet.
Havoc · 7h ago
A large part of the ecosystem around it is certainly going to implode in a pets.com fashion. But the underlying tech seems valid to me so think a handful will come out of this stronger than before
cadamsdotcom · 33m ago
I mean yeah. There’s more money and people now.
infecto · 8h ago
I am having a hard time drawing the same conclusions. Half of the companies in '99 were not tech related, compared to 90% today.
itomato · 7h ago
First mover advantage then meant access to a tier one ISP.
Today is it just a matter of cash and DC capacity?
nikolayasdf123 · 8h ago
AI is like a rocket engine, it just keeps on exploding
xyst · 8h ago
This is why many of the companies are trying to get sold to big tech. "Windsurf" is an example here. They want to exit, get paid, pay off investors, and let big tech hold the bag.
Another example is "Devin" or whatever the parent company is. Recently acquired some unknown company and they are cooking their books for the next acquisition
jasonjmcghee · 7h ago
You're thinking of Cognition (makers of Devin) who acquired a _known_ company, Windsurf, right after Google "acquired" hand picked staff including the CEO for a total of $2.4b
kypro · 8h ago
I think the term "bubble" is far too presumptuous. You can only know if something is a bubble with hindsight.
There have been examples where things look like a bubble to some market participants, but turn out to be more or less a good reflection of that thing's future value.
AI is uniquely hard to value too because there's so many exponentials which may or may not occur, with those exponentials both having the potential to make products exponentially more valuable or redundant.
There's also different parts of the AI stack and again it's really hard to see which part of the AI stack holds secure value, perhaps with the exception of the hardware providers.
Anyway, I suspect in a few years those calling AI a bubble will mostly be proven wrong, but that's just my sense of things.
catlifeonmars · 8h ago
ITT: “it’s not possible to tell…”
Also ITT: “it’s not a bubble”
This is a longstanding predictable pattern in tech. Most of these “AI companies” will go bust or become a shell of their former self and sold off for parts. The tech will be commoditized and become pretty ubiquitous across the board but not a profit center in its own right.
AI is great when it's the right tool. But the rate at which features are shipping at these companies is plummeting because so many features would be cheaper, easier, and better without AI. It feels like self-gaslighting on a scale that I haven't seen since the dot com days. And I see engineers and their managers struggling to maintain their sanity as they try to improve their products within the constraints of these policies.
Those layoffs were a make or break moment for tech.
Have you looked at a graph of Twitter revenue and profit recently (hint: it is very low). Nobody has ever claimed that you cannot fire all the employees and keep Twitter online with a skeleton crew; the claim has always been that you can't do that _and remain a viable business_.
> The only insight we have
This is untrue. There's fairly credible reporting (e.g. in the FT) that Twitter's reveue numbers have bottomed and they are not making money. Of course, a lot of these problems are being disguised by the fact that Elon Musk has merged in xAI (and investors are very happy to pile money into AI without even the slightest due diligence).
Lots of people claimed that, it was a common claim, even on HN, which was odd to me because I felt it had been common for 10 years on HN to wonder what the heck these companies are doing with 80% of their employees.
No longer true. In March 2025, X Corp. was acquired by xAI, Musk's artificial intelligence company. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt.
Investor sentiment through secondary share trades still prices the company at $44 billion.
To be fair, the banks did offload the debt at par (two years later) so some people think X/Twitter is worth some money.
That was because Twitter was massively overstaffed. Are all these other businesses also overstaffed? The result of cheap money? Are they all atu
This has always been true for any mature companies.
The conundrum is that tech companies in particular are really averse to be perceived as mature companies, because their valuations have to be more grounded in reality.
What's the proof it was the only tech company being overstaffed?
For example Meta doubled its headcount during the pandemic without any increase in market share or new products. How do you explain that not being overstaffed?
Meta probably overhired, but if the overall market grows immensely due to even more commerce moving online because of the pandemic, it would make sense to hire a lot more people even without a change in market share or new products.
Anyone that ever worked in a sufficiently large tech company knows that it can keep the lights on with a skeleton crew. It doesn't mean the results will be good or desirable, but it can be done.
Over time cracks will start to show. New featurs are seldom delivered and half-assed. Bugs will take a long time to fix or just become part of the landscape. That sort of thing.
Twitter is a special case that Musk did not buy it to make it into a profitable company, or to make it more valuable, or to improve it in any way. He just bought it because he wanted to rid it of the "woke mind virus" (and factually tried to get out of the deal and was sued into actually buying it).
CEOs are doing layoffs for a variety of reasons, but mostly to improve profits by reducing costs. These cycles happen, and what it actually points to is that those companies are not seeing a viable path for growth at the moment that justifies their valuation. Layoffs with some vague gestures towards AI is a good smokescreen for the time being.
There were plenty of people working very hard on things that had no path to profitability/utility (beyond the attempt to get higher-ups promoted for "showing impact" or "ownership") or whatever. This was paired with incentives at this time for managers to have more underlings.
I have no idea of the ratio of this to "day in the life" TikToks, but there were lots of people working on things that had no utility or time horizon to be "valuable" in the business sense.
Only complete idiots actually work for a living in this day and age. A job is first and foremost a tool to get approved for a nice/nicer mortgage so that you can focus on your real career which is getting rich from real estate appreciation. Secondly a job is a tool for getting exposed to the right people so you can get an even better job and thus an even nicer mortgage. You can't waste your time at the office by actually working.
I was around for the .com boom and it feels very different. I experienced the boom as exuberant without limits, the current situation is much more nuanced.
NVDA has a P/E of 55, which is definitely elevated, but nowhere near the 230+ that CSCO had at that time. To say nothing of SUNW.
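The gap between those multiples is easier to feel as an earnings yield (the inverse of P/E). A minimal sketch, treating the figures cited above as illustrative inputs rather than verified quotes:

    # Earnings yield is the inverse of the P/E multiple.
    # P/E figures are the ones cited above (illustrative, not verified).
    def earnings_yield(pe: float) -> float:
        return 1.0 / pe

    for name, pe in [("NVDA today", 55.0), ("CSCO at the 2000 peak", 230.0)]:
        print(f"{name}: P/E {pe:.0f} -> earnings yield {earnings_yield(pe):.1%}")
    # NVDA today: P/E 55 -> earnings yield 1.8%
    # CSCO at the 2000 peak: P/E 230 -> earnings yield 0.4%

A P/E of 55 prices in a lot of growth, but a P/E of 230 priced in roughly four times more.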
The big AI labs are definitely losing money, but they're doing it on the back of tens of (rapidly growing) billions of dollars in ARR, versus the dot com e-commerce and portal flameouts who would go public on (maybe) a million in revenue, at best.
We also have large AI teams at FAANG who are being funded directly by the fat margins of these companies, whose funding is not dependent on the whims of VC, PE or public markets.
These times are not really comparable.
So my question to the youngsters in the handbasket:
Do you feel pure and completely untroubled for being part of something big that is certainly not going away anymore? Do you look into your future and see bright skies without the slightest hint of a cloud?
Unless it's leveraged by skilled experts, AI-generated code is the payday loan / high-interest credit card of tech debt.
There is no cart before the horse here. AI is coming for you, not the other way around. The pessimistic takes are underestimating the impact by at least a couple orders of magnitude. Think smart phones as a lower bound.
I have no idea in what capacity people here work with AI, but given my own view and experience, the pessimistic takes I commonly see on here do not seem realistic.
With the internet, there was a clear value proposition for the vast majority of use cases. Even if some of the specific businesses were poorly-conceived or overly optimistic, the underlying technology was very obviously a) growing organically, b) going to be something everyone used & wanted, and c) a commodity.
All three of those parts are vital for a massive boom like that.
Generative AI is growing some, yes, but a lot of the growth is being pushed by the companies creating or otherwise massively invested in gen-AI. And yes, many people try out ChatGPT's webapp, but that's mostly a gimmick—and frankly, many of the cases where people are attempting to use it for more are fairly awful cautionary tales (eg, the people trying to use it as a therapist, and instead getting a cheerleader that confirms their worst impulses).
Gen-AI may be useful to some people, but it's not going to be a central feature of most people's lives—at least not in the forms it exists in today, or what can be clearly extrapolated from them. Yes, it can help some with coding—with mixed results—but not everyone's a programmer. Not everyone's even an office worker. The internet has obvious useful applications for a plumber or a lawyer; if I hired one of those and they said they were using generative AI to help them in their work, I'd fire them instantly. There are already a bunch of (both amusing and harrowing) stories of lawyers getting reamed out in court for using gen-AI to help them write legal filings.
OpenAI may or may not have a robust moat—I've seen people arguing both ways; personally I lean slightly toward the "not" side—but generative AI as a whole is not something that's an interchangeable commodity the way internet access, or even hosting, is. First of all, in order to use the models that are touted as being advanced enough to actually look like more than spicy autocorrect, you need a serious GPU farm. Second of all, AFAIK, those models are being kept private by the big players like Google and OpenAI. That means that if you build your business on generative AI, unless you're able to both fork out for a massive hardware investment and do your own training to match what the big boys are already doing, you're going to be 100% dependent on another specific for-profit company for your entire business model. That's not a sound business decision, especially at a time when both the technology and the legal aspects of generative AI are still so much in flux.
Generative AI may be here to stay, but it's not going to take over the world the way the internet did.
My belief is that it is similarly transformative as the Internet. The bubble will burst, some use cases will never materialize, others will emerge as costs come down, and value chains will adapt.
Your belief is opposite to mine. Time will tell who is right.
- smartphones in every pocket changed behaviors around entertainment, communication, and commerce
What behaviors will gen-AI change? Perhaps the way we learn (instead of Google, we head over to a chatbot), perhaps coding... all up in the air, and unclear at the moment.
> Gen-AI may be useful to some people, but it's not going to be a central feature of most people's lives—at least not in the forms it exists in today, or what can be clearly extrapolated from them…
The problem is that you are going to have to compete with people who are using AI. There is a learning curve, and some people are better at using it than others. Some people know how to use it really well.
This doesn't sound as much of a game changer as you seem to think.
Generative AI is a productivity tool that can help (to an extent) in certain professional settings. Its usage will never be as ubiquitous as the internet's (unless you want to build an economy out of users generating memes using AI).
While I find it somewhat useful (although it falls well short of the hype), the economics of it are still super unclear. Right now companies are willing to dump money into this because it is fashionable with the investor class, but I don't know how long it will take for it to lose steam.
It's just google-fu 2.0. It isn't hard at all. There isn't really a trick to it. I daresay learning google-fu was harder.
No it doesn't. I was around, and we didn't have an entire gang of GenXers warning about the .com crash everywhere, every time. Compared to nonsense metrics like eyeballs, this time we have real revenue, and the biggest companies are tech. It might end in some crash, but nothing like the .com one.
It's so surprising: so much money at stake, and there is zero competition on the hardware-purchase side. Google's Tensor chips are excellent, but they can only be rented.
Curiously, nvidia's P/E ratio is lower than it was two years ago!
80% of the hype is about 20% of the bullshit. And the bullshit attracts 80% of the dollars. The current cohort of leaders are Jedi at separating sovereign wealth and markets from their treasure.
Most of the LLM applications are either entirely useless or trivially reproduced with much simpler free models (or even entirely non-“AI” methods).
I would love to know how much of the inference I pay for is being paid for by VC cash: I suspect a lot of it.
Right now there seem to be roughly two paths, when it comes to frontier-level LLMs: Meta just not giving a fuck, spending instagram money and pretending it's a business, and whatever Deepseek is doing that might make it both good and also super cost-effective (and there it's even less clear, how much of it is real and what the actual costs are).
Definitely a lot of VC subsidies for OpenAI and Anthropic, none for Google.
Sorry, but weren’t these valuations escalated because of low interest rates and quantitative easing? Perhaps combined with increased concentration in Top 10 by investors navigating uncertainty?
Typical BS coming from a mega fund supported only by management fees. Not saying AI isn't hyped, but this is laughable.
Looking to take on more risk in equity investments is just as likely to end up with you going broke as it is to get outsize returns.
You should look at the chances of your business becoming that successful. They are equally slim. And you have a lot more personal exposure if your business fails vs. if one fails that you only invested your money in and not your time and health.
LLMs also don’t really enable coincidental discovery like search engines do. Having to RTFM or read a spec or a book or a blog post to figure out the answer to your question sometimes also teaches you about related and important concepts that you wouldn’t have come across otherwise, and usually there will be suggestions for further reading or a side bar with other interesting topics etc. Completely replacing search feels a bit like a trap, where what you get in immediate answers you lose in an unseen opportunity cost.
But anyone who takes a vibe-coded project, with no human understanding of the code produced, and puts it straight into production is going to have a bad time.
- a better summarizing Google for some queries
- a snippet generator
but it has not changed any workflows.
https://www.youtube.com/watch?v=IUK6zjtUj00
From that, any number of conclusions are possible, including perhaps:
* The level of innovation at those companies is high. Certainly the 90s tech booms were actually very innovative and profitable.
Data has only driven advertising, and it's done it in such a botched way that it's tearing down the whole discipline of advertising. These companies know all the little tidbits of information about all of us that they need to put the right products directly in front of our eyes multiple times per day, and they still get it wrong.
Advert engagement goes down, people who use advertising realise their budgets are being wasted on the wrong audience and the whole thing will pop. It was naïve to ever believe that data really means anything. At a certain scale it just becomes loads of noise.
This is not remotely true. I mean it's so incredibly not true I wonder how you came to believe this.
Haven't you ever heard of how hedge funds pay for cellular data to understand retail store traffic, or how satellite photos help them estimate the fullness of gas tanks at ports to predict pricing?
Or how data about predicted electrical pricing based on usage helps factories schedule energy-intensive production during times of low pricing?
Or how aircraft maintenance companies like AAR rely on "big data" to position replacement parts in a globally distributed system of warehouses to reduce the time it takes to repair aircraft (their contracts are based on airline uptime), thereby reducing passenger delays due to mechanical issues?
Or how farms use weather and satellite data to deal with droughts, identify areas to spray, and estimate competitor yields for the purposes of planning?
Or how governments now conduct surveillance of pathogens and drug use through sewer water data?
Or how semiconductor companies use massive amounts of data collected from production line sensors to massively increase yields and reduce chip prices, despite the complexity of chip production having increased massively?
You benefit directly or indirectly from companies using data all the time.
I got a bit carried away in my original statement and undersold data a little bit. I think the point of the statement at the time ("data is the new oil" as an article in The Economist) was mostly hinging on data for use in digital advertising, but I didn't provide any of that context in my original post, and I was mostly considering user data.
[1]https://commons.wikimedia.org/wiki/File:Gartner_Hype_Cycle.s...
Bubbles look very impressive until they pop; most people fall for them, and that is why they are bubbles.
In all these cases, it's very likely no AI shop is going to have a monopoly on the tech, and cartels are not very likely, considering China (and maybe Europe) is in the game as well.
In a gold rush, sell shovels, and the company with a monopoly on shovels is Nvidia.
For reference, those 10 companies are: Nvidia, Microsoft, Apple, Amazon, Meta, Broadcom, Alphabet (Class A), Alphabet again (Class C), Tesla, Berkshire.
This isn't a pets.com situation.
These companies are ENORMOUS cash engines with incredibly well-proven moats operating in an extremely monopoly-friendly political climate. Nothing like this existed in the 90s. Microsoft, maybe, but antitrust still had some teeth back then.
The author makes a comparison between these companies and the rest of corporate America, arguing (implicitly) that the forward P/E of these ten symbols is too high relative to the rest of the S&P 500 index.
So let's look at the flip side. Many of the other companies in the S&P are vulnerable to these exact players' moats and pricing power. It's a zero-sum game and the winner is clear, so of course the winner's P/E looks really high compared to the expected loser.
Every single one of them has an AWS bill. Every single one of them has a big Windows/Office install base. Every single one of them probably has a huge Apple install base. Every single one of them needs to pay to play in the App Store.
And many of them are also in the unenviable position of being on the losing side of an unfair competition in their actual core business. Walmart/HD/Coca-Cola vs Amazon. IBM/Oracle vs AWS. Or other complicated market dynamics that pose only upside to the big guys and potential downside to the rest (Biotechs vs Amazon Pharmacy).
The remainders are competing margins away from one another and are vulnerable to disruption by mid-market non-S&P players (or similarly sized companies that just aren't on the public markets -- see the huge size of private capital relative to the 90s). Some also face significant tariff risk. Think banks, consumer goods.
What percent of the difference in P/Es between the best and the rest is justifiable on the thesis that we are entering a multi-decade period of (1) tech feudalism and (2) unpredictable populist fits that wreak havoc on everyone except the tippy top of the echelon, who can blow enough cash to control the narrative?
If you're a CEO of a giant AI corp you're currently racing for superintelligence (meta-bubble).
The rest of us apes are flinging AI slop at each other until we've saturated each other in AI slop.
I don't really know what will happen, just offering my observation (ape noises)
During peak crypto madness vagueposting was an extremely effective market manipulation tool. I know people who made a lot of money on unconfirmed rumors in hours but of course it was just zero-sum gambling - the ”early adopters” made their money at the expense of the latecomers. No value was generated.
People don’t even need to be convinced that AGI/ASI is near, just ”but what if there’s a chance?”. It’s similar psychological tricks as selling lottery tickets.
Every single one of these "fake it till you make it" AI CEOs/founders is betting they are the Amazon.com and not the Pets.com... but if they are the Pets.com, then what is the downside?
The CEO of pets.com certainly didn't end up out on the streets for being the biggest disaster of one of the largest bubbles and effectively burning billions of investor dollars (including institutions investing pensions and retirements funds).
https://www.quiverquant.com/insiders/1780458/Julie%20Wainwri...
You can buy a hell of a lot of dogs and socks for that much.
https://en.wikipedia.org/wiki/Pets.com#Sock_puppet
Really, someone, just show me how you vibecode that seemingly simple feature https://github.com/JaneySprings/DotRush/issues/89 without having some deep knowledge of the codebase. As of now, I don't believe this works.
The weirdest part about that is Haskell should be way easier due to the compiler feedback and strong static typing.
What I fear most is that it will have a chilling effect on language diversity: instead of choosing the best language for the job, companies might mandate languages that are known to work well with LLMs. That might mean typescript and python become even more dominant :(.
I share similar feelings. I don't want to shit on Python and JS/TS. Those are languages that get stuff done, but they are a local optimum at best. I don't want the whole field to get stuck with what we have today. There surely is a place for a new programming language that is so much better that we will scratch our heads wondering why we ever stuck with what we have today. But when LLMs work "good enough", why even invent a new programming language? And even if that awesome language exists today, why adopt it? It's frustrating to think about. Even language tooling like static analyzers and linters might get less love now. Although I'm cautiously optimistic, as these tools can feed into LLMs and thus improve how they work. So at least there is an incentive.
While Ghostty is mostly in Zig, the example Mitchell Hashimoto is using there is the Swift code in Ghostty. He has said on Twitter that he's had good success with Swift for LLMs but it's not as good with Zig.
I think it doesn't work as well with Zig because there are more recent breaking changes that aren't in the training dataset; it still sort of works, but you need to clean up after it.
or the "humans make mistakes too" crowd
or the "just wait, we are at an inflection point in the sigmoid curve" crowd
Of course the steam engine was revolutionary. That doesn’t excuse or legitimize the nonsense.
Dropbox or SpaceX wasn't valued at many trillions of dollars though. Just because it's very useful doesn't mean it lives up to the biggest hype ever in human history in terms of monetary investment.
AI is here to stay, and long term, it's likely going to revolutionize almost all parts of the job market. But to get there... I'm really not sure what a reasonable time estimate would be. I can see it taking something like 3 years, which would make the current valuations plausible, but I wouldn't bet on it.
I'd bet on it taking a tad longer, but I strongly suspect that within 10-20 years we'll get there.
Under that time horizon, it feels overvalued and hyped, because the winners of this revolution might not even have been founded yet.
Today, is it just a matter of cash and data-center capacity?
Another example is "Devin", or whatever the parent company is called. It recently acquired some unknown company, and they are cooking their books for the next acquisition.
There have been examples where things look like a bubble to some market participants, but turn out to be more or less a good reflection of that thing's future value.
AI is uniquely hard to value too because there's so many exponentials which may or may not occur, with those exponentials both having the potential to make products exponentially more valuable or redundant.
There's also different parts of the AI stack and again it's really hard to see which part of the AI stack holds secure value, perhaps with the exception of the hardware providers.
Anyway, I suspect in a few years those calling AI a bubble will mostly be proven wrong, but that's just my sense of things.
Which is it? :)