The AI vibe shift is upon us

64 points · by lelele · 74 comments · 8/24/2025, 10:31:09 AM · cnn.com ↗

Comments (74)

tra3 · 4h ago
I can’t help but feel like the hype generators don’t use the tech themselves. I can contrast this with crypto as a recent example. Sure there were some interesting tidbits there, but I just didn’t see the appeal.

With llms the changes are transformative. I’m trying to learn 3d modeling and chatgpt just gave me a passable sketch for what I had in my mind. So much better than googling for 2 hours. Is the cooling off because industry leadership promised agi last year and it’s not here yet?

Gigachad · 4h ago
The old way was not really that you would spend 2 hours googling for the exact procedure you need, but that you would spend time working through getting-started tutorials, following along with example projects, and generally learning the entire program. Then when you went to make something specific, you'd be able to come up with the required steps yourself because you know the program.

I'm not discounting the value of having ChatGPT just hand you the answer straight up. If you just want to get the task done as fast as possible, that's a pretty cool option that didn't use to exist. But the old way wasn't really worse.

mnky9800n · 4h ago
The problem is that stacking up these solutions creates technical debt. That technical debt has to be paid down by understanding the code base and fixing the issues. Can an AI coding agent do that? Sure, maybe. But what I find is that AI coding agents need your help to do pretty much everything, so you need a good understanding of the code base in order to be helpful. So eventually you sit and wonder if vibe coding a stack of tools was actually helpful. Don’t get me wrong, I think Claude Code is amazing, but it’s not “and therefore there no longer need to be jobs” amazing.
Hizonner · 2h ago
> following along with example projects

What the LLM gives you is essentially an example project, and you can ask for the specific examples you need. You can compare and contrast alternative ways of doing it. You can give it an example in terms of what you already know, and ask it to show you how that translates into whatever you're trying to learn. You don't have to just blindly take what it produces and use it unread.

pragmatic · 1h ago
This is why EXAMPLES in getting started pages are so important.

LLMs are making up for the lack of this.

It’s the Backus-Naur approach vs the Human approach.

Humans learn by example. IMHO this is why math education (and most software documentation) fails so hard - starting with axioms instead of examples.

nchmy · 2h ago
But then it wouldn't be vibe coding! /s

It's endlessly mind-boggling to me how there are so many people who can't grasp the idea of just using LLMs as a tool in your engineering toolkit, and that you should still be responsible, thoughtful, and do code review - as you would if you delegated to a junior dev (or anyone!).

They see complete fools just accepting the output wholesale, and somehow extrapolate that everyone works that way.

torginus · 3h ago
I really f*king hate this new brand of tech hype. How it used to be:

Here's the iPhone 13: it takes better pictures, lasts longer on battery, and plays games faster than the iPhone 12. Buy it for $699.

Now it has become:

Here's the iPhone 13, the greatest breakthrough in the history of civilization. But enough about that, let's talk about the iPhone 14. We've released a whitepaper showing the iPhone 14 will almost certainly take your job, and the iPhone 15 will kill us all, provided no further steps are taken. It's so powerful that we decided to instill powerful moral safeguards into it, so it will steer you towards goodness and prevent it from being used for evil (such as looking at saucy pictures). We also find it necessary to keep a permanent and comprehensive log of every interaction you have with it.

You also can't have it, but you can hold it in your hand, provided you pay us $20/month and we deem you morally worthy of accessing this powerful technology. (Do not doubt any of this; we are intellectually superior to you and have humanity's best interests at heart. You don't want to look like a fool, do you?)

rsynnott · 4h ago
“A new way to learn Blender” is not a multi-trillion dollar industry, is the thing. “Oh, this is kinda neat, and occasionally useful” just won’t cut it at current levels of investment.
torginus · 4h ago
I remember reading about a company which set out to solve the issue of open-source financing - paying the volunteer maintainers of projects used by billion-dollar companies.

The company at some point crossed a billion-dollar valuation, yet handed out only single-digit millions in pay to the maintainers.

lostmsu · 1h ago
The education market is certainly multi-trillion.
girvo · 4h ago
> Is the cooling off because industry leadership promised agi last year and it’s not here yet?

Effectively, yes: the promises are so huge that even the impressive usefulness and value it brings today is dwarfed in comparison.

sauercrowd · 4h ago
It's "the rest". More complex projects, existing context (like infrastructure, code, business context, ...).

Building a small script is easy for ChatGPT, but getting that leverage consistently across a real workforce turns out to be a lot harder than the hype promised.

uncircle · 3h ago
Then there’s me, spending my August learning Blender the manual way like we did before 2022 to escape the dreadscape of AI-infected software engineering, and discovering that, wow, I really enjoy 3D modeling to the point I might at least make it my hobby if not pivot my career to it.

I wonder if I should have listened to the hype generators (you sound like one) and just created ‘passable’ models with the help of an LLM, instead of exercising my brain, learning something new, and getting out of my comfort zone.

At the risk of sounding controversial, I’ll add that I also have a diametrically opposed view of crypto’s utility versus LLMs’ compared to yours, especially in the long term: one will allow us to free ourselves from the shackles of government policy as censorship expands; the other is a very fancy and very expensive nonsense regurgitator that will pretty much go on to destroy the Internet and any sort of credibility and sense of truth, making people dumber at large scale while lining the pockets of a lucky few.

not_the_fda · 2h ago
AI gets pulled out every couple of decades and over-hyped. It never matches the hype, then the term AI gets a bad rap, goes away, and the useful stuff gets a new name, like machine learning, to shake off the bad connotations of AI. We saw this in the '60s, the '80s, the early aughts, and now today.

I remember in college during the late '90s the hype was that CASE tools (Computer-Aided Software Engineering) were going to make software engineers irrelevant: you just tell the system your requirements and it spits out working code. It never panned out.

Today, the only way this amount of investment returns a profit is if it replaces a whole bunch of workers. And if it replaces a whole bunch of workers, well, there will be a whole lot fewer people to buy stuff. So either the bubble bursts or a lot of people lose their jobs. Either way we are in for a rough ride.

antonvs · 4h ago
> Is the cooling off because industry leadership promised agi last year and it’s not here yet?

Exactly. The business world isn't remotely close to being rational. The technology is incredible, but that doesn't mean it's going to translate to massive business value in the next quarter.

The market reaction to this is driven by hype and by people who don't understand the technology well enough to see through the hype.

sails · 4h ago
> “Some large companies’ pilots and younger startups are really excelling with generative AI,” … “It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools.”

Everyone victory lapping this as a grand failure should pay attention to the above snippet.

samrus · 4h ago
I think that's a bit too defensive. The reasonable take has been that AI is definitely a game changer, like the internet was, but that it's still in a bubble, because people extrapolated the S-curve as though it were an exponential explosion - just like they did with the internet.

So yeah, well-targeted, well-thought-out use cases that LLMs handle well will deliver value, but it won't replace developers or anything like that, which is what people with barely an understanding of the tech's limitations have been claiming.

OpenAI hasn't "internally achieved" AGI. That's what people are calling bullshit on.

aDyslecticCrow · 3h ago
There is also the cost of inference, which has been made artificially cheap. A lot of "game-changing workflows" may end up being too expensive to maintain if the true cost of that compute snaps back.
mnky9800n · 4h ago
It’s like everyone started thinking being an influencer is the actual job as opposed to solving problems via automation. Like what is software if it’s not that?
binary132 · 3h ago
I genuinely think it’s because they are invested in, or otherwise making money off, the ecosystem, but it really only pans out if they succeed at selling it. Kind of like the Rust drones.
0xDEAFBEAD · 4h ago
Did we read the same article? I don't see that passage anywhere.
falcor84 · 4h ago
It's from the linked article about the MIT report:

https://fortune.com/2025/08/18/mit-report-95-percent-generat...

risyachka · 4h ago
Why? Doesn’t sound different from any framework e.g. React.

Fixes one pain point good. Can’t really be applied to everything.

So just another tool, not a magic bullet like it is being marketed.

fcdradio · 23m ago
Definitely agree that it’s not a magic bullet; the hype is huge and a bubble burst is quite possible.

On the other hand, its ability to eliminate toilsome work in a variety of areas (it can generate a basic legal contract as well as a basic Rails app) is pretty astounding. There are many other industries besides software dev where tools that can understand and communicate in human language and context could be totally transformative, and they have barely begun to look into it. I think this is where startups should be focused.

rsynnott · 4h ago
I mean, “it works occasionally, in extremely restrictive circumstances” could be said of nearly any previous tech bubble (crypto may be the one example that just never really delivered anything much at all); this even works for the _previous_ AI bubbles. Expert systems are still, slightly, a thing, say.

LLMs are receiving a level of investment that appears to be based on them being world-changing, and that just doesn’t seem to be working out.

aDyslecticCrow · 3h ago
They're world-changing beyond doubt for one industry in particular: scams, fake news, propaganda, forum bots. That industry has evolved beyond recognition.

We just received a call at work using the voice of the head of accounting.

I really hope the good of all the other uses offset the harm done.

Etheryte · 4h ago
> Researchers at MIT published a report showing that 95% of the generative AI programs launched by companies failed to do the main thing they were intended for

I think everyone had a gut feel for something along those lines, but those numbers are even starker than I would've imagined. Granted, many (most?) people trying to vibe code full apps don't know much about building software, so they're bound to struggle to get it to do what they want. But this quote is about companies and code they've actually put into production. Don't get me wrong, I've vibe coded a bunch of utilities that I now use daily, but 95% is way higher than I would've expected.

redbluered · 4h ago
I think one issue is time.

We're a few years in. It takes time to figure things out and see returns.

The web and the dot-com boom and bust still led to several trillion-dollar companies, eventually.

AI will transform my industry, but not overnight. My employer is within that 95%... but won't be forever.

lomase · 4h ago
Mobile phones were what changed society, not the web or the dot-com era.
chriskanan · 3h ago
Read the paper. The media coverage is leaving out a lot of context. The paper points out problems like leadership failures for those efforts, lack of employee buy-in (potentially because employees use their personal LLM), etc.

A huge fraction of people at my work use LLMs, but only a small fraction use the LLM the company provided. Almost everyone is using a personal license.

kace91 · 4h ago
I’ve heard the story that SQL was originally sold as a language that non-tech people could use to query databases. It’s mostly been an utter failure at that, yet it’s still immensely popular.

I’m expecting a similar future for AI: it will not deliver the “deprecating devs” part, but it will still be a useful and ubiquitous tool.

rsynnott · 4h ago
Yeah, it was a 4GL. Roughly since the 1950s, every ten years or so someone has come along with “this will allow unskilled people to write programs, and destroy those awful programmers forever” (_COBOL_ was originally basically marketed as this!). SQL is by _far_ the most successful thing to actually come out of this recurrent trend, in that it is actually useful, and unskilled people can actually use it to an extent. Most of the rest of it - 4GLs and 5GLs and drag-and-drop programming and no-code and so on - was just kinda useless; at most it made for a good demo, but attempts to make actual workable, maintainable software with it broke down fast.
antonvs · 4h ago
It's interesting that there's been that constant drumbeat - for at least about half a century at this point - about eliminating software developers, yet somehow we don't get the same messaging about eliminating, say, civil engineers.
Disposal8433 · 3h ago
> eliminating software developers

I must write a "me too" here because I have seen this a lot recently on various sites. Whether it comes from managers or non-coders (I guess astroturfing managers), it's always about those awful developers gate-keeping software development with their complicated compiled languages. I know it's all fake, but it's exhausting, and it's nice to see it acknowledged here on HN.

rsynnott · 2h ago
Don’t give them ideas. Whatever about vibe-coding, we really, really do not want vibe-railways.
lomase · 4h ago
https://en.wikipedia.org/wiki/Fourth-generation_programming_...

They were all the hype at the time, like LLMs are now. Most of them died because they were a bad idea.

And the reason we still use some of them, like SQL, is not the syntax.

badgersnake · 4h ago
The hype cycle has priced in a paradigm shift, not gradual efficiency improvements. From that perspective a large correction is on the way.
lomase · 4h ago
What I don't really understand is how having 10 juniors on a team, which is what all the agentic coding stuff amounts to, makes a programmer more productive.
ricardobeat · 3h ago
You don’t understand how having an extra ten programmers in your team can be productive?

Junior developers require guidance but are still producing value. And with good guidance, they will do amazing work.

linker3000 · 2h ago
Here's one possibility.

With AI we need fewer programmers, and the juniors will possibly be the first to go, but they might be retrained for other careers (which might eventually get cancelled too because of AI), or end up out of work.

The software they produced did something - it might have been a CRM or a game - but out-of-work people might have to cut back on their gaming spend. As for the CRM app business, its customers and potential customers are also cutting back on staff, and the CRM apps will be able to conduct direct B2B negotiations with client CRMs, so there are no job opportunities there either, and more people are out of work. Perhaps the businesses that used the AI-based B2B and B2C CRM and ERP systems won't be needed any more, or won't have a viable customer base, either.

Other industries are replacing folks with 'AI', so the unemployment pool is getting larger. This means the luxury and non-vital goods manufacturers will have less revenue and they are laying off staff so there's some compensation there, but eventually not enough for survival - which is 'fine' because AI is replacing all this stuff.

This snowballs into other industries, leaving just those jobs that can be done more easily by a human, but those jobs will also shrink as AI and the surrounding robotics improve. So what do all these unemployed people do all day? Some will embrace leisure activities that don't break the bank. Some may volunteer for community work or projects to improve the world, but they still need to eat and pay bills - who's going to help with that?

One solution might be a 'Star Trek' economy not based on work for reward, but that's a big cultural shift that people and governments will struggle massively to get their heads around conceptually.

There will also be powerful resistance to such a radical rebasing of the planet-wide financial model, especially by those people and organisations that have amassed wealth and don't want to give it up. They'll even fight back with lobbying and arguments against change while they're getting replaced with AI.

Or...?

forgotoldacc · 4h ago
5% saying it's helping their company is roughly in line with Lizardman's Constant. [1] There will always be people who will never admit a thing didn't work out as planned, and people who just like to answer sarcastically. So it's not unreasonable to assume that, if this report is even remotely accurate, pretty much 100% of people are finding AI fairly disappointing.

[1] https://en.m.wiktionary.org/wiki/Lizardman%27s_Constant

rsynnott · 4h ago
I mean, I assume the blogspam industry is thrilled with it…

For the time being, and the foreseeable future, LLMs’ sweet spot seems to be low-grade translation and ultra-low-grade, bottom-of-the-barrel ‘content generation’. Which is… not nothing, but also not what you’d call world-changing. As a number of people have said, there probably is an industry here; it’s just that it’s worth on the order of tens of billions, not the trillions the markets currently appear to believe.

(Some people will claim it’s a great programming tool. Personally sceptical, but even if it’s the greatest, most amazingest programming tool ever, well, “we might be even more important than Borland and Jetbrains were” is not going to thrill the markets too much. Current valuations are built on mass-market applicability, and if that doesn’t show up soon there will be trouble.)

Ekaros · 3h ago
Also image generation, if you just need to fill space on web pages. Generating a few images for free or very cheap beats paying for stock images. And at that price, do you really care that you don't have the copyright? Especially if everything else is LLM-generated slop as well.
rsynnott · 2h ago
Okay, so, “we’re as important as Borland, Jetbrains, AND Getty Images! That’ll be a trillion dollars of investment money, pls”.

Like, that’s still just silly.

On the developer tools thing in particular, I’d note that it is historically extremely difficult to make a sustainable business, nevermind a wildly profitable business, in that space. Borland and Jetbrains probably are the closest that anyone has come.

luckylion · 4h ago
> Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. [1]

"We wanted to make money with it, but we didn't immediately make a lot of money" feels very different from "the project failed to deliver what it set out to".

[1] https://fortune.com/2025/08/18/mit-report-95-percent-generat...

antonvs · 4h ago
It's roughly in line with Sturgeon's Law: 90% of everything is crap.

Except in this case, where AI can enable people with absolutely no experience in some area to produce something that at least superficially can seem plausibly viable, it's no surprise that the percentage of crap is even higher.

uncircle · 3h ago
This is an interesting point. As LLMs are not intelligent and are trained on what already exists, their output is necessarily mediocre if not bad. We have simply found a way to increase the amount of crap in the digital world, to the point that Sturgeon’s 90% will become a very low estimate.
Traubenfuchs · 4h ago
You make “utilities”; those initiatives are about replacing complex processes, business-people-engineer communication, and the engineers themselves with AI.

e.g. I waste a lot of time converting business requirements into a proprietary rule language. These should be simple tasks, but the requirements are freaky, the language is limited, and I often need to look up the internals of the systems that produce the data the rules act upon.

My boss’s boss currently wants me to replace my work with AI. It cannot work. It’s set up for failure.

kh_hk · 4h ago
This just feels again like the powers that be trying to bring down wages to "correct" after the recent astroturfing.
mrkramer · 4h ago
>Researchers at MIT published a report showing that 95% of the generative AI programs launched by companies failed to do the main thing they were intended for — ginning up more revenue.

AI startups were meant to solve problems in novel ways, not to amass revenue.

sulandor · 4h ago
The 'novelty of the ways' is usually not nearly as much of a factor in adoption as whether said change would contribute to the amassing of revenue.
torginus · 2h ago
I smile with glee when these people fail. The fundamental issue of modern capitalism is that it's a coercive and exploitative system; the true believers (who, unfortunately, are in charge) ignore this and think money and value are the same thing.

Let me show you what I mean: say someone runs a grocery store, and they want to make it more profitable. After looking at the value chain, they conclude the person growing the lettuces makes 10% of the profit, logistics makes 40%, and retail 50%.

So they conclude that the best way to improve the business is to optimize the retail side.

Then you walk into the store and see the tiny withered lettuce on the gleaming fancy shelves.

If they had decided to focus on where the value is created and helped the farmer grow better produce, everybody would've been happy.

mrkramer · 1h ago
In capitalism everybody specializes in something: retailers in trading, logistics in storage and transport, and producers in producing. The most efficient way to improve all that is to have a vertically integrated business where you do all of the above. In my country we had one big retailer like that, but it became so huge that in the end it imploded. And yes, I agree that capitalism is exploitative, and I think that instead of working for a salary people should work for equity. I would certainly be more motivated if I owned a piece of the business instead of merely getting a salary.
fhd2 · 4h ago
If you're alluding to the fact that a lot of startups run at a loss to capture as much of the market as they can, that is true. But I don't think that's the point here.

Revenue is probably the wrong measure; it should be profit. And a startup that doesn't somehow translate into profit for its _customers_ usually doesn't see much traction.

They can either increase revenue (there are a lot of AI sales tools that promise just that) or, more commonly, reduce costs, which also increases profits. If it saves time or money, it reduces costs. If it doesn't do either of these things, you'd have to really enjoy the product to still pay for it.

jgilias · 4h ago
lol
GaggiX · 4h ago
>technology that has never proven its worth outside of specious hype

Reading stuff like this makes me question the entirety of the article.

omnicognate · 4h ago
That's a quote from Ed Zitron, whose entire schtick is that AI is a scam. It's independent of the article itself, and in particular the list of bearish observations near the top, all of which are independently verifiable.
laincide · 3h ago
I find it increasingly annoying to watch how the AI bubble hasn't burst and is currently shifting the entire economic paradigm, while we get a constant stream of articles about how "actually AI flopped and it's all over guys, the show is finished, pack up". Jesus, it's almost worse than if just one or the other were happening.
Traubenfuchs · 4h ago
What will we do with all the datacenters once the subsidies for AI usage disappear?
moi2388 · 4h ago
Ship gigabytes worth of SPA JavaScript for your shitty company homepage, probably
politelemon · 4h ago
We already have react.
theandrewbailey · 3h ago
I work at an e-waste recycling company. I expect we'll see some high-end Nvidia Tesla GPUs coming through, just like the Antminers (Bitcoin ASICs) a few weeks ago.
uncircle · 3h ago
Good place as any to ask: can you give a rough percentage of how much material can be recycled from high-end electronics like GPUs? 5%? 20%?

Is it mostly rarer and more expensive materials like gold/lithium, or is it mainly bulk plastic and aluminium?

theandrewbailey · 2h ago
No idea, aside from the steel chassis and copper/aluminum heatsinks. My company collects devices, sorts and disassembles them, and sells the scrap. Since we collect a lot of stuff that still works (like decommissioned corporate IT equipment), I work in the refurb division[0], so I don't have much insight into how much material can get broken down and recycled.

[0] https://www.ebay.com/str/evolutionecycling

pram · 4h ago
They will be used to create new kinds of advertising. Commercials and banner ads that are indistinguishable from magic.
shit_game · 3h ago
It makes me feel so uneasy to think that the most probable answer is the one that will result in the most human misery.
bcardarella · 4h ago
Epic paintball venues
mrkramer · 4h ago
Maybe they will go back to mining crypto coins.
zarzavat · 4h ago
I'm going to be contrarian-contrarian. I don't buy the crash talk. This is just journalists needing to justify their own existence.

HN is full of articles about coding agents in a way it wasn't a few months ago.

What is overhyped is OpenAI. They don't have any moat. Why use an OpenAI model when you could use Claude or Qwen?

ml-anon · 4h ago
And yet Google is absolutely killing it and Meta is posting record numbers, both attributing a massive amount of current and projected revenue to AI.

Meta just spent billions to get a B team of AI researchers. The cream of the crop couldn’t be persuaded with 8-10 figure comp packages.

This article is absolute garbage.

ekidd · 4h ago
Vibe shifts are real. The internet was always valuable, but oh my, did the vibes shift quickly in 2001. We went from "infinite money for painfully stupid ideas" to a near-total freeze on even the best ideas for a few years.

The thing about "vibe shifts" is that a big part of the shift occurs among people who have no idea what's going on. They've played with ChatGPT twice, talked about it at parties, and then invested $50,000 in NVIDIA stock. Or they're a corporate VP who doesn't understand this stuff but knows it's trendy and that it impresses the C-suite. When those people bail, the market retrenches hard, trading irrational enthusiasm for equally irrational panic and gloom.

My guess is that the highly-visible switch from the sycophantic GPT 4o to the underwhelming GPT 5 is what made this concrete in the minds of the least informed investors and customers.

fcdradio · 41m ago
Yep, and the thing is that after the dot-com crash we’ve seen another shift where the web has totally transformed society and brought huge value to both everyday people and companies.

The value of it wasn’t ColdFusion or Flash; it was the novel ways that people used the foundational tech.

So yeah, the AI bubble may burst and one model or another (or a company like OpenAI) may fail, but I don’t think we have even scratched the surface on the novel things this tech can do.

meindnoch · 4h ago
Shorting NVDA with 50x leverage.
aDyslecticCrow · 3h ago
Nvidia saw it coming. They did not over-leverage themselves during the boom and will come out of a crash with a market lead, technical superiority, and a massive pile of R&D cash.

And that's even if you could time it correctly.

Better to wait for a crash, watch people panic-sell thinking Nvidia has any skin in the game, and buy the dip.