Very anecdotally: having just gone through the interview loop for the last 3 months (and finally landing something last week, yay), there was barely any talk about AI in any of my interviews, from both startups and larger companies. There were one or two startups that had some trite title like "AI Native Engineer" for the role, but when I asked my interviewers what that even meant, they basically told me they had no idea and it was something management was pushing to attract people interested in building AI features.
I've done somewhere around 60 or 70 interviews over the last 3 months, and in every single one I asked "What role do you see LLMs serving in the day-to-day work at $COMPANY, and in the products you're building? And what are your personal thoughts on LLMs and how useful you've found them?". I was pleasantly surprised that nearly everyone had pretty level-headed views about the topic, mostly along the lines of "There's definite potential, it's very useful in some specific tasks, but it's not an all-intelligent panacea like it's being sold to everyone". This included the VP of Engineering at a very large, influential and successful company in the Netherlands who was extremely wary of LLMs. If I had to put a very non-scientific number on the views I encountered, I'd say roughly 80% of companies/teams I talked to were very neutral and balanced on AI, around 10% were fanatics about AI, and the remaining 10% were extremely anti-AI and didn't want anyone on their teams touching it for any of the work.
The usual caveats: this is entirely anecdotal, based on my own recent interviews, and it was all with companies in the Netherlands (both remote roles & local). But I think the tide is starting to turn slowly and people are sobering up a bit from all the incessant, endless hype around LLMs (AI is too broad a term, covering too many genuinely useful things, and it's a shame it's been conquered by the recent LLM hype). You wouldn't think so reading through HN, but then again, if you look through recent YC batches, something like 99% of them mention AI/LLMs in some capacity, even when it makes no sense.
utyop22 · 3h ago
The biggest side cost (the one that'll ultimately kill AI investment for a while once this bubble pops) is the fact that for many people LLM = AI.
It’s going to be difficult to justify AI investment in private financial markets - causing more consolidation and control of future technology to fall into the hands of the large tech firms.
It’s a gamble that Sam Altman and others have taken - hoping and praying it won’t blow up in their face.
nasmorn · 1h ago
I think Sam Altman will be just fine. Also, since he has no access to a revolutionary new architecture anyway, this was his shot. Why should he not take it?
Shank · 7h ago
Almost all of the Enterprise/Corporate AI offerings are a significant step up in cost that needs to bear actual fruit in order to be worthwhile, not to mention the compliance and security requirements most places have to clear in order to get these things approved. We know there are use cases where AI makes sense, but we also know that there are many things it can't do (at least right now). It makes sense that people aren't plunking down large amounts of money on this stuff, especially since the state of the art so often changes. What if you buy Claude and something new comes along, or ChatGPT gets better? It's difficult to make these purchasing decisions when products are static, and much more so when everything changes on a bi-weekly cadence.
benterix · 5h ago
Yeah. I worked in places where using LLMs actually made sense, but only in very limited scenarios (the results were then checked by humans anyway). But there are many places where using LLMs is actually hurting the business, especially if it's a business-to-customer offering and end users are seeing GenAI content. After the initial fascination, I guess many businesses realized that.
dude250711 · 5h ago
And they also try to minimize your use. Why do I need to stick "ultrathink" into my queries? That should just be the default and only mode!
einpoklum · 5h ago
> that needs to bear actual fruit
It may be sufficient to obscure reality enough that it becomes difficult to disprove that it's bearing significant fruit.
Sammi · 3h ago
"250 or more" employees is at top. "1-4" and "100-249" are next, while all the medium sized companies are lagging at the bottom. This is signal I think. The heavy adopters are either large enterprise or solo/micro teams.
Now my interpretation: enterprises are mostly riding the hype and not getting that much real benefit - hence the recent steep decline they've seen. Solo devs and micro teams are reaping most of the actual benefits of generative AI. Anecdotally, I've seen this in practice: individuals or tiny teams have the most flexibility with using AI and can play around with it the most in order to get it to do useful stuff. Larger organizations are limited by communication overhead and the need to follow protocols and procedures and best practices and what have you. Whatever benefit AI brings is drowned out by this overhead.
My prediction: solo devs and tiny teams using AI will be able to do more work faster than enterprises. I've yet to see many real-world results of this, so I think the effect is kinda small, but I do believe it is tangible. I think we're seeing a silent wave of micro-SaaS businesses that generative AI has made more viable. These aren't large or sexy, so they're not making headlines. Where can I find data on this?
The bigger question is whether it's going to disappear completely for the next 30 years, or if it's going to limp along, simmering on the back burner, just trying to optimize mattress sales.
benterix · 5h ago
The truth is we have no idea. Personally I'm 100% sure the future will look nothing like what Altman is saying, but I'm a logical person, so I have to admit there exists a minuscule probability that these GenAI CEOs' vision will come true. More rationally, I'd expect something like Meta with the Metaverse - enormously missing the mark but still useful for many people. The actual usage will be a function of utility and price.
bjacobel · 33m ago
Who has "the Metaverse" been useful for?
pj_mukh · 5h ago
Or more likely it'll look like the e-commerce adoption plots, with a giant pop in the late 90s and then a 20-year slog of consistent growth [1]. Pretty much exactly what the S-curve looks like.
[1]: https://www.marketplacepulse.com/stats/us-e-commerce-growth-...
Possibly. With one caveat: to do e-commerce I just need a VPS with WooCommerce or Prestashop on it. To do GenAI, I either need a server that is an order of magnitude (or more) more expensive, or use an API which, depending on usage, might get terribly expensive for SOTA models.
mrheosuper · 5h ago
Personally I believe it will stay with us for a long time. The benefit is real.
alangibson · 5h ago
The benefit is real, but the profits are not. Companies' ability to eventually make money off of this is what will decide whether it stays around or not.
utyop22 · 3h ago
Yeah, and I don't think Zuckerberg et al. will be willing to risk a steep drop in their wealth for a long period to carry on reinvesting - yes, they have control, but investors can still reflect their sentiment in the stock price. And the wealth of senior management at these firms is tied up in the price of the stock.
This is what happened with the metaverse debacle, when the share price fell below 100 USD.
dude250711 · 5h ago
Just as a decent working life improvement, not a new industrial revolution.
pandorobo · 6h ago
Does it mean the number of companies newly adopting AI is dropping? That could mean it's just saturated, so of course it would drop. Unless I am reading this graph wrong and it's actually the same companies that are now no longer adopting AI?
singron · 5h ago
I believe it's the second case. Otherwise far over 100% of firms would have "adopted" AI by now, which doesn't make sense unless they keep un-adopting and re-adopting.
The question from BTOS is
> Between MMM DD – MMM DD, did this business use Artificial Intelligence (AI) in producing goods or services? (Examples of AI: machine learning, natural language processing, virtual agents, voice recognition, etc.)
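A toy way to see the difference, with completely made-up readings (not the real BTOS numbers): if you misread each biweekly percentage as a fresh cohort of newly adopting firms, the running total blows past 100% of firms within months, so the series has to be "share of firms using AI in this window", which can fall as well as rise.

    # Made-up biweekly readings, roughly in the range the chart shows.
    readings = [0.09, 0.10, 0.11, 0.12, 0.11, 0.10, 0.10, 0.09, 0.08, 0.08, 0.07, 0.07]

    naive_cumulative = 0.0
    for i, share in enumerate(readings, start=1):
        naive_cumulative += share  # pretend every reading is a brand-new cohort of adopters
        print(f"window {i:2d}: reading {share:4.0%}, naive 'adopters so far' {naive_cumulative:5.0%}")

    # After only about half a year of biweekly windows the naive total already
    # exceeds 100% of firms, which is impossible -- so the readings must be
    # "share using AI in this window", and the same firm can appear or drop
    # out again and again.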
rightbyte · 5h ago
No, I think the measure is "are using". I am actually quite flabbergasted that what seems like such a useful tool is not nearly as useful as you would believe.
"one question is whether a business has used AI tools such as machine learning, natural language processing, virtual agents or voice recognition to help produce goods or services in the past two weeks."
fishstamp82 · 6h ago
The tick labels for the months are not obvious to me, and since it's a 6-week moving average and not a point-in-time measure, the numbers are a bit hard to grasp intuitively.
To me it looks like the drop is sharper than it appears, since the averaging smooths out the points: at the end of July 2025 the adoption is not exactly 12%, but probably more like 8%, closer to where it was at the end of 2023.
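A minimal sketch of that smoothing effect, with made-up numbers and assuming the 6-week average spans roughly three biweekly readings: the trailing average still prints ~12% even after the latest underlying reading has already fallen to ~8%.

    def trailing_average(values, window):
        """Average of the last `window` readings up to and including each point."""
        out = []
        for i in range(len(values)):
            chunk = values[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    # Hypothetical biweekly "using AI" shares, in percent -- NOT the real BTOS data.
    raw = [13.0, 14.0, 15.0, 13.0, 8.0]

    # A 6-week window covers roughly three biweekly readings.
    smoothed = trailing_average(raw, window=3)

    for r, s in zip(raw, smoothed):
        print(f"raw {r:5.1f}%   6-week avg {s:5.1f}%")
    # The last line reads: raw 8.0% vs 6-week avg 12.0% -- the smoothed curve
    # still shows ~12% even though the latest underlying reading has dropped,
    # which is why the true end-of-July figure is probably lower than plotted.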
It seems big tech is putting a big brake on AI tooling, for now.
ares623 · 6h ago
Is this a self reported survey? Why would companies admit they’re not using AI?
singron · 4h ago
The results are aggregated and not sent to investors. There is no incentive to lie, and this survey is voluntary, so if you respond at all, it's out of some civic duty.
rsynnott · 3h ago
What were you expecting the negative consequences of admitting the dire crime of not using AI to the _census bureau_ to be? Straight to Census Jail?
ares623 · 1h ago
Worse, a lower stock price.
000ooo000 · 5h ago
Whoever fills out the survey isn't a CEO
yieldcrv · 3h ago
Meanwhile, generative video doesn't need to be adopted by large companies, and demand for it alone needs 4-5x as many GPUs and as much compute power as exists today, since nobody is offering 4K video at 60fps yet.
That's one use case alone.
So although this may be indicative of how much text inference you'll need, or what you'll hear about it on the job, it doesn't have much to do with the actual AI sector or semiconductor sector yet.
senectus1 · 6h ago
The company I work for just got 5k licenses for 2 years.
I reckon about 80% of the use, AT LEAST, is just mundane, search-engine-like use.
Maybe a bit of document analysis.