Why is AI so slow to spread?

44 points · 1vuio0pswjnm7 · 7/18/2025, 5:38:55 AM · economist.com ↗ · 108 comments

orionblastar · 4h ago
Mirror without paywall: https://archive.is/OQWcg
sbt · 2h ago
I have been using it for coding for some time, but I don't think I'm getting much value out of it. It's useful for some boilerplate generation, but for more complex stuff I find that it's more tedious to explain to the AI what I'm trying to do. The issue, I think, is lack of big picture context in a large codebase. It's not useless, but I wouldn't trade it for say access to StackOverflow.

My non-technical friends are essentially using ChatGPT as a search engine. They like the interface, but in the end it's used to find information. I personally just still use a search engine, and I almost always go straight to Wikipedia, where I think the real value is. Wikipedia has added much more value to the world than AI, but you don't see it reflected in stock market valuations.

My conclusion is that the technology is currently very overhyped, but I'm also excited for where the general AI space may go in the medium term. For chat bots (including voice) in particular, I think it could already offer some very clear improvements.

oezi · 1h ago
One of the issues certainly is that Stackoverflow is absolutely over. Within the last twelve months the number of users just fell off a cliff.
danbruc · 40s ago
That might be a good thing after all, at least in a certain sense. Stack Overflow has been dying for the last ten years or so. In the first years there were a lot of good questions that were interesting to answer, but that changed with popularity and it became an endless sea of low-effort do-my-homework duplicates that were not interesting to answer and annoying to moderate. If those now get handled by large language models, it could maybe become like the beginning again: only the questions that are not easily answerable by looking into the documentation or asking a chat bot will end up on Stack Overflow, and it could be fun again to answer questions there. On the other hand, if nobody looks things up on Stack Overflow, it will be hard to sustain the business, maybe even a downscaled one.
logicchains · 1h ago
>I have been using it for coding for some time, but I don't think I'm getting much value out of it.

I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did. This saves so much time, especially since multiple such LLM tasks can be run simultaneously. But maybe it's because I'm not working on giant, monolithic code bases.

sandworm101 · 1h ago
Fun test: ask ChatGPT to find where Wikipedia is wrong about a subject. It does not go well, proving that it is far less trustworthy than Wikipedia alone.

(Most AI will simply find where Twitter disagrees with Wikipedia and spout ridiculous conspiracy junk.)

fauigerzigerk · 2h ago
I used to donate to Wikipedia, but it has been completely overrun by activists pushing their preferred narrative. I don't trust it any more.

I guess it had to happen at some point. If a site is used as ground truth by everyone while being open to contributions, it has to become a magnet and a battleground for groups trying to influence other people.

LLMs don't fix that of course. But at least they are not as much a single point of failure as a specific site can be.

notarobot123 · 1h ago
> at least they are not as much a single point of failure

Yes, network effects and hyper scale produce perverse incentives. It sucks that Wikipedia can be gamed. Saying that, you'd need to be actively colluding with other contributors to maintain control.

Imagining that AI is somehow more neutral or resistant to influence is incredibly naive. Isn't it obvious that they can be "aligned" to favor the interests of whoever trains them?

fauigerzigerk · 42m ago
>Imagining that AI is somehow more neutral or resistant to influence is incredibly naive

The point is well taken. I just feel that at this point in time the reliance on Wikipedia as a source of objective truth is disproportionate and increasingly undeserved.

As I said, I don't think AI is a panacea at all. But the way in which LLMs can be influenced is different. It's more like bias in Google search. But I'm not naive enough to believe that this couldn't turn into a huge problem eventually.

ramon156 · 1h ago
Can I ask for some examples? I'm not this active on Wikipedia, so I'm curious where a narrative is being spread
kristjank · 1h ago
Franklin Community Credit Union scandal is a good example, well outlined in this youtuber's (admittedly dramatized) video: https://www.youtube.com/watch?v=F0yIGG-taFI
fauigerzigerk · 1h ago
I thought about giving examples because I understand why people would ask for them, but I decided very deliberately not to give any. It would inevitably turn into a flame war about the politics/ethics of the specific examples and distract from the reasons why I no longer trust Wikipedia.

I understand that this is unsatisfactory, but the only way to "prove" that the motivations of the people contributing to Wikipedia have shifted would be to run a systematic study for which I have neither the time nor the skills nor indeed the motivation.

Perhaps I should say that am a politically centrist person whose main interests are outside of politics.

junek · 2m ago
Let me guess: you hold some crank views that aren't shared by the people who maintain Wikipedia, and you find that upsetting? That's not a conspiracy, it's just people not agreeing with you.
Panzer04 · 1h ago
Single point of failure?

Yeah, you can download the entirety of Wikipedia if you want to. What's the single point of failure?

fauigerzigerk · 52m ago
Not in a technical sense. What I mean is that Wikipedia is very widely used as an authoritative source of objective truth. Manipulating this single source regarding some subject would have an outsize influence on what is considered to be true.
crinkly · 3h ago
Among non-technical friends, after the initial wow factor, even limited expectations were not met. Then it was tainted by the fact that everyone is promoting it as a human replacement technology which is then a tangible threat to their existence. That leads to not just lack of adoption but active sabotage.

And then there’s the large body of people who just haven’t noticed it at all because they don’t give a shit. Stuff just gets done how it always has.

On top of that, it's worth considering that growth is a function of user count and retention. The AI companies only promote count, which suggests that the retention numbers are not good or they'd be promoting them. YMMV, but people probably aren't adopting it and keeping it.

csa · 2h ago
> Among non-technical friends, after the initial wow factor, even limited expectations were not met.

Indeed. I think that current AI tech needs quite a bit of scaffolding in order for the full benefits to be felt by non-tech people.

> Then it was tainted by the fact that everyone is promoting it as a human replacement technology

Yeah. This is a bad move. AI is a human force multiplier (exponentializer?).

> which is then a tangible threat to their existence

This will almost certainly be a very real threat to AI adoption in various orgs over the next few years.

All it takes is a neo-Luddite in a gatekeeper position, and high-value AI use cases will get booted to the curb.

specproc · 1h ago
I wouldn't underestimate the anxiety it's causing.

Most of my social circle are non-technicial. A lot of people have had a difficult time with work recently, for various reasons.

The global economic climate feels very precarious, politics is ugly, people feel powerless and afraid. AI tends to come up in the "state of the world" conversation.

It's destroying friends' decade-old businesses in translation, copywriting and editing. It's completely upturned a lot of other jobs; I know a lot of teachers and academics, for example.

Corporate enthusiasm for AI is seen for what it actually is, a chance to cut head count.

I'm into AI, I get value out of it, but decision makers need to read the room a bit better. The vibe in 2025 is angry and scared.

crinkly · 2h ago
Anything that is realistically a force multiplier is a person divider. At that point I would expect people to resist it.

That is assuming that it is really a force multiplier which is not totally evident at this point.

darkwater · 1h ago
> Yeah. This is a bad move. AI is a human force multiplier (exponentializer?).

If it's a multiplier, you need to either increase the requested work to keep the same humans or reduce the humans needed if you keep the same workload. It's not straightforward which path each business will follow.

the_duke · 2h ago
Just on the coding side, tools like Claude Code/Codex can be incredibly powerful, but a lot of things need to be in place for it to work well:

* A "best practices" repository: clean code architecture and separation of concerns, well tested, very well-documented

* You need to know the code base very well to efficiently judge if what the AI wrote is sensible

* You need to take the time to write a thorough task description, like you would for a junior dev, with hints for what code files to look at, the goals, implementation hints, different parts of the code to analyse first, etc

* You need to clean up code and correct bad results manually to keep the code maintainable

This amounts to a very different workflow that is a lot less fun and engaging for most developers. (write tasks, review, correct mistakes)

In domains like CRUD apps / frontend, where the complexity of changes is usually low, and there are great patterns to learn from for the LLM, they can provide a massive productivity boost if used right.

But this results in a style of work that is a lot less engaging for most developers.

easyThrowaway · 2h ago
But it also feels way more like a modernized version of those "UML to code" generators from the early 2000s rather than the "magic AI" that MS and Google are trying to market.
the_duke · 1m ago
I'd disagree with that: given enough compute, LLMs can have impressive capabilities in finding bugs and implementing new logic, if guided right.

It's hit or miss at the moment, but it's definitely way more than "UML code generators".

benterix · 1h ago
> This amounts to a very different workflow that is a lot less fun and engaging for most developers.

That's my experience exactly. Instead of actually building stuff, I write tickets, review code, manage and micromanage - basically I do all the non-fun stuff whereas the fun stuff is being done by someone (well, something) else.

rckt · 3h ago
Slow?? AI is literally being shoved into everything. It took only several years to see AI being advertised as a magic pill everywhere.

It's not meeting expectations, probably because of this aggressive advertising. But I would in no way say that it's spreading slowly. It is fast.

kaoD · 3h ago
They mean slow to actually be used by people. Doesn't matter that it's shoved everywhere if in the end there are 0 users clicking the magic wand icon.
skeletal88 · 1h ago
This is it. I don't want to use some kind of AI search in fb messenger, I want to find someone to send them a message. I don't need AI search in google, I don't need it in gmail. What I need is search that actually works, there is no need to replace existing search functionality with "AI".

It is shoved everywhere but nobody really needs it.

mathw · 49m ago
That one is fairly easy, isn't it? It's because it's not very good. Or at least, it doesn't live up to its own hype.

The thrashing wails of people who've spent billions on something that they should have done more due diligence on and don't want to write off their investment.

palata · 1h ago
I came to say this. How could anyone consider this "slow"?
benterix · 1h ago
Right, but are people actually using it? Do they even want to use it? In my circles, AI is a synonym for slop: low value, something annoying that you need to work around, like ads.
JimDabell · 51m ago
Yes, people are using it. ChatGPT alone had 800M WAU back in April; that's basically 10% of the human race acquired as users in a few short years. This has taken off like a rocket, so it's utterly bizarre to see people asking why it's slow to spread.
rckt · 1h ago
I completely agree. It's an enforced spread, but still.
markbao · 3h ago
There was an article on HN a few days back on how it’s very hard to convey all the context in your head about a codebase to solve a problem, and that’s partly why it’s actually hard to use AI for non-trivial implementations. That’s not just limited to code.

I don’t use AI for most of my product work because it doesn’t know any of the nuances of our product, and just like doing code review for AI is boring and tedious, it’s also boring and tedious to exhaustively explain that stuff in a doc, if it can even be fully conveyed, because it’s a combination of strategy, hearsay from customers, long-standing convos with coworkers…

I’d rather just do the product work. Also, I’ve self-selected by survivorship bias to be someone who likes doing the product work too, which means I have even less desire to give it up.

Smarter LLMs could solve this maybe. But the difficulty of conveying information seems like a hard thing to solve.

jmtulloss · 3h ago
It's likely that the models don't need to get much smarter, but rather the UX for providing needed context needs to improve drastically. This is a problem that we've worked on for decades with humans, but only single-digit years for AI. It will get better and that tediousness will lessen (but isn't "alignment" always tedious?)
sbt · 2h ago
> ...context needs to improve drastically.

Yes, drastically. This means I'll have to wear Zuck's glasses I think, because the AI currently doesn't know what was discussed at the coffee machine or what management is planning to do with new features. It's like a speed typing goblin living in an isolated basement, always out of the loop.

stillpointlab · 3h ago
> such as datasets that are not properly integrated into the cloud

I believe this is a core issue that needs to be addressed. I believe companies will need tools to make their data "AI ready" beyond things like RAG. I believe there needs to be a bridge between companies data-lakes and the LLM (or GenAI) systems. Instead of cutting people out of the loop (which a lot of systems seem to be attempting) I believe we need ways to expose the data in ways that allow rank-and-file employees to deploy the data effectively. Instead of threatening to replace the employees, which leads them to be intransigent in adoption, we should focus on empowering employees to use and shape the data.
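
A toy sketch of what such a bridge might look like - all names, records, and the keyword scoring here are illustrative assumptions, not any real system:

    # Toy sketch: pull the most relevant internal records into a prompt so an
    # employee can ask questions against company data. All names, records and
    # the keyword scoring are illustrative stand-ins.
    from dataclasses import dataclass

    @dataclass
    class Record:
        source: str  # which data-lake table or document the text came from
        text: str

    RECORDS = [
        Record("sales_notes", "Q3 churn rose 4% in the SMB segment after the price change."),
        Record("support_faq", "Enterprise customers must request data exports via the portal."),
        Record("hr_policy", "Contractors are not eligible for the annual bonus plan."),
    ]

    def retrieve(question: str, records: list[Record], k: int = 2) -> list[Record]:
        """Rank records by naive keyword overlap with the question and keep the top k."""
        q_words = set(question.lower().split())
        ranked = sorted(records, key=lambda r: -len(q_words & set(r.text.lower().split())))
        return ranked[:k]

    def build_prompt(question: str, records: list[Record]) -> str:
        """Ground the model in retrieved company data and tell it to admit gaps."""
        context = "\n".join(f"[{r.source}] {r.text}" for r in retrieve(question, records))
        return ("Answer using only the context below. If the context is insufficient, say so.\n"
                f"Context:\n{context}\n\nQuestion: {question}")

    print(build_prompt("Why did SMB churn go up last quarter?", RECORDS))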

Very interesting to see the Economist being so bullish on AI though.

rini17 · 2h ago
Give rank-and-file employees access to all the data? LOL. Middle managers will never allow that and will shift blame to intransigent employees. Of course the Economist is pandering to that. LLMs are fundamentally very bad at compartmentalized access.
Marazan · 3h ago
The Economist is filled with writers easily gulled by tech flim-flam.

They went big on Cryptocurrency back in the day as well.

grey-area · 3h ago
Exactly right.
Marazan · 46m ago
I forgot they even went big on NFTs, auctioned off an NFT of a front cover one time.
rwmj · 2h ago
I spent a bit of time reviewing and cleaning up the mess after someone took text that I'd carefully written and put it through an AI to make it more "impactful"; I had to revise it to remove the confusions and things I hadn't said. The text the AI wrote was certainly impactful, but it was also exaggerated and wrong in several places.

So did AI add value here? It seems to me that it wasted a bunch of my time.

crinkly · 2h ago
This is the paradox. It's often harder correcting someone else's work than it is doing it in the first place. So you end up having spent more effort.
matwood · 1h ago
This sounds more like bad editing, regardless of whether they used an LLM or not.

For me, I can now skip the step of handing something to marketing/an editor who wants to make it more impactful, because I use AI up front to do that - including making sure it's still correct and says what I want it to say.

tornikeo · 19m ago
It seems that the average employee doesn't currently benefit from completing their job more efficiently; i.e., they have no interest in AI.

To accelerate mass AI adoption in workplaces, it may be necessary to expose the average worker to the risks and rewards of business ownership. However, it might be the case that the average worker simply doesn't want that risk.

If there's no risk, there can't be a reward. So I can't see a way AI adoption could be sped up.

pestaa · 3h ago
In contrast to the Economist blaming inefficient workers sabotaging the spread of this wonderful technology, make sure to check out https://pivot-to-ai.com/ where David Gerard has been questioning whether people are prompting it wrong or AI is just not that smart.
WarOnPrivacy · 3h ago
> David Gerard has been questioning whether people are prompting it wrong or AI is just not that smart.

If an AI can't understand well enunciated context, I'm not inclined to blame the person who is enunciating the context well.

ktallett · 1h ago
Tech only spreads when the average user can get value from their ordinary interactions with it. Having to tailor prompts to be so specific is a flaw. Even then it is at times absolutely useless. For AI to be genuinely useful, it needs to accept when it doesn't know something and say so, rather than creating waffle. Just as in life, "I don't know" is sometimes far more intelligent than just making something up.
whatever1 · 3h ago
Not sure if it is smart, but it is definitely not reliable. Try the exact same prompt multiple times and you will get different answers. I was trying an LLM for a chatbot to flip some switches in the UI. Less than 30% success rate in responding to "flip that specific switch to ON". The even more annoying thing is that the response is pure gaslighting (like "the switch you specified does not exist", or "the switch cannot be set to ON").
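
A minimal sketch of a guard for the failure mode described above - the switch names are hypothetical, and `ask_model` is a stub standing in for whatever LLM call the chatbot used:

    # Sketch of a guard for the failure mode above: validate the model's reply
    # against the real switch list before toggling anything.
    SWITCHES = {"dark_mode": False, "notifications": True, "autosave": False}

    def ask_model(instruction: str) -> str:
        # A real chatbot would call an LLM here; replies sometimes name switches
        # that do not exist, which is exactly the unreliability described above.
        return "auto_save"  # plausible-looking but not a real switch

    def flip_switch(instruction: str) -> str:
        reply = ask_model(instruction).strip()
        if reply not in SWITCHES:  # reject hallucinated switch names instead of acting on them
            return f"error: model named unknown switch '{reply}'"
        SWITCHES[reply] = True
        return f"ok: {reply} set to ON"

    print(flip_switch("flip the autosave switch to ON"))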
o11c · 4h ago
That's a whole lot of twisting to avoid admitting "it usually doesn't work, and even when it does work, it's usually not cost-effective even at the heavily-subsidized prices."

Or maybe it's more about refusing to admit that executives are out of touch with concrete reality and are just blindly chasing trends instead.

somenameforme · 3h ago
Another issue, one that you alluded to: imagine AI actually was reliable, and a company lays off e.g. 30% of their employees to replace them with AI systems. How long before they get a letter from AI Inc: 'Hi, we're increasing prices 500x in order to enhance our offerings and improve customer satisfaction. Enjoy.'

The entire MO of big tech is trying to create a monopoly by the software equivalent of dumping (which is illegal in the US [1], but not for software, because reasons), market-share domination, and then jacking effective pricing wayyyyy up. And in this case big tech companies are dumping absurd amounts of money into LLMs, getting absurd funding, and then providing them for free or next to free. If a person has any foresight whatsoever, it's akin to a rusting van outside an elementary school, with blacked-out windows, and with some paint scrawled on it: 'FREE ICECREAM.'

[1] - https://en.wikipedia.org/wiki/Dumping_(pricing_policy)#Unite...

crinkly · 3h ago
Yep. Also the problem that the AI vendor reinforces bias into their product's training which serves the vendor.

Literally every shitty corporate behaviour is amplified by this technology fad.

Opocio · 3h ago
It's quite easy to switch LLM APIs, so you can just transition to a competitor. Competition between AI providers is quite fierce; I don't see them setting up a cartel anytime soon. And open-source models are not that far behind commercial ones.
gmag · 2h ago
It's easy to switch the LLM API, but in practice this requires having a strong eval suite so that the behavior of whatever is built on top changes only within acceptable limits. It's really the implications of the LLM switch that matter.
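
A minimal sketch of such an eval suite - the providers are just prompt-to-reply callables, and the cases and threshold here are illustrative assumptions, not any real API:

    # Minimal sketch of an eval gate for swapping LLM providers.
    from typing import Callable

    EVAL_CASES = [
        ("Return the ISO country code for France.", lambda reply: "FR" in reply.upper()),
        ("Is 17 prime? Answer yes or no.", lambda reply: "yes" in reply.lower()),
    ]

    def pass_rate(call_model: Callable[[str], str]) -> float:
        """Run every case through one provider and return the fraction that passes."""
        passed = sum(1 for prompt, check in EVAL_CASES if check(call_model(prompt)))
        return passed / len(EVAL_CASES)

    def safe_to_switch(current: Callable[[str], str], candidate: Callable[[str], str],
                       max_drop: float = 0.02) -> bool:
        """Allow the swap only if the candidate's pass rate drops by at most max_drop."""
        return pass_rate(candidate) >= pass_rate(current) - max_drop

    # Stubbed providers, just so the sketch runs end to end:
    current_provider = lambda p: "FR" if "France" in p else "yes"
    candidate_provider = lambda p: "The code is FR." if "France" in p else "Yes."
    print(safe_to_switch(current_provider, candidate_provider))  # True under these stubs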
rwmj · 2h ago
You can run a reasonable LLM on a gaming machine (cost under $5000), and that's only going to get better and better with time. The irony here is that VCs are pouring money into businesses with almost no moat at all.
paulluuk · 3h ago
It really depends on the use-case. I currently work in the video streaming industry, and my team has been building production-quality code for 2 years now. Here are some things that are going really well:

* Determine what is happening in a scene/video
* Translating subtitles to very specific local slang
* Summarizing scripts
* Estimating how well a new show will do with a given audience
* Filling gaps in the metadata provided by publishers, such as genres, topics, themes
* Finding the most "viral" or "interesting" moments in a video (combo of LLM and "traditional" ML)

There's much more, but I think the general trend here is not "chatbots" or "fixing code", it's automating stuff that we used armies of people to do. And as we progress, we find that we can do better than humans at a fraction of the cost.
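
As a rough illustration of the metadata gap-filling case - the genre vocabulary, the prompt wording, and the `complete` callable are assumptions for the sketch, not the team's actual pipeline:

    # Rough sketch: ask a model for genres from a fixed vocabulary and validate the reply.
    import json
    from typing import Callable

    ALLOWED_GENRES = ["comedy", "drama", "thriller", "documentary", "reality"]

    def fill_missing_genres(title: str, synopsis: str, complete: Callable[[str], str]) -> list[str]:
        """Ask the model to pick genres and keep only values from the known vocabulary."""
        prompt = (f"Title: {title}\nSynopsis: {synopsis}\n"
                  f"Pick up to two genres from {ALLOWED_GENRES} and reply with a JSON list only.")
        try:
            genres = json.loads(complete(prompt))
        except json.JSONDecodeError:
            return []  # treat unparseable replies as "no answer" rather than guessing
        if not isinstance(genres, list):
            return []
        return [g for g in genres if g in ALLOWED_GENRES]  # drop anything outside the vocabulary

    # Stubbed model call so the sketch runs without any provider SDK:
    stub = lambda prompt: '["drama", "thriller"]'
    print(fill_missing_genres("Cold Harbour", "A detective returns to her home town.", stub))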

poisonborz · 3h ago
Based on what you listed I would seriously consider the broader societal value of your work.
paulluuk · 3h ago
I know this is just a casual comment, but this is a genuine concern I have every day. However, I've been working for 10 years now and working in music/video streaming has been the most "societal value" I've had thus far.

I've worked at Apple, in finance, in consumer goods.. everywhere is just terrible. Music/Video streaming has been the closest thing I could find to actually being valuable, or at least not making the world worse.

I'd love to work at an NGO or something, but I'm honestly not that eager to lose 70% of my salary to do so. And I can't work in pure research because I don't have a PhD.

What industry do you work in, if you don't mind me asking?

poisonborz · 1h ago
It's not a casual comment in the sense that I too have a genuine concern every day that the current world we are living in is enabled by common employees. I'm not saying everyone should solve world hunger, "NGO or bust" - and yes, the job market is tough - but especially for software engineers, there are literally hundreds of thousands of companies that require software work and do net good or at least "plausible" harm, and pay an above-average salary.

Also I only read the comment above, it's you who can judge what you contribute to and what you find fair. I just wish there were a mandatory "code of conduct" for engineers. The way AI is reshaping the field, I could imagine this becoming more like a medical/law field where this would be possible.

I work in IoT telemetrics. The company is rumored to partake in military contracts at a future point, that would be my exit then.

designerarvid · 3h ago
People aren’t necessarily out of touch, they may be optimising for something other than real value. Appearing impressive for instance.
Ekaros · 1h ago
I can't be bothered to talk to my computer. I believe there are many like me. So it will take a long time for us to adapt to it.
WarOnPrivacy · 3h ago
I don't often use AI in my work because it is

   not sufficiently useful 
   not sufficiently trustworthy.
It is my ongoing experience that AI + My Oversight requires more time than not using AI.

Sometimes AI can answer slightly complex things in a helpful way. But for most of the integration troubleshooting I do, AI guidance varies between no help at all and fully wasting my time.

Conversely, I support folks who have the complete opposite experience. AI is of great benefit to them and has hugely increased their productivity.

Both our experiences are valid and representative.

ethan_smith · 3h ago
The variance in utility you're seeing is largely because AI performs best on problems with clear patterns and abundant training data, while struggling with novel edge cases and specialized domain knowledge that hasn't been well-represented in its training.
WarOnPrivacy · 3h ago
This is a reasonable analysis. It explains where AI is useful, but I think it doesn't touch on AI's trustworthiness. When data is good, AI may or may not be trusted to complete its task in an accurate manner. Often it can be trusted.

But sometimes good data is also bad data. HIPAA compliance audit guides are full of questions that are appropriate for a massive medical entity and fully impossible to answer for the much more common small medical practice.

No AI will be trained to know the latter is true. I can say that because every HIPAA audit guide assumes that working patient data is stored on practice-owned hardware - which it isn't. Third parties handle that for small practices.

For small med, HIPAA audit guides are 100 irrelevant questions that require fine details that don't exist.

I predict that AI won't be able to overcome the absurdities baked into HIPAA compliance. It can't help where help is needed.

But past all that, there is one particularly painful issue with AI - deployment.

When AI isn't asked for, it is in the way. It is an obstacle that needs to be removed. That might not be awful if MS, Google, etc. didn't continually craft methods to make that as impossible as possible. It smacks of disdain for end users.

If this one last paragraph wasn't endlessly true, AI evangelists wouldn't have so many premade enemies to face - and there would be less friction all around.

wordofx · 3h ago
I still haven't found anyone for whom AI wouldn't be helpful, or a case where it isn't trustworthy enough. People make the /claim/ it's not useful or that they are better without it. When you sit down with them, it often turns out they just don't know how to use AI effectively.
RamblingCTO · 3h ago
No, AI is just garbage. I asked AI a clear-cut question about battery optimization in Zen. It told me it's based on Chrome, but it's based on Firefox.

Ask it about a torque spec for your car? Yup, wrong. Ask it to provide sources? Less wrong, but still wrong. It told me my viscous fan has a different thread than it has. Had I listened, I would've shredded my thread.

My car is old, well documented and widely distributed.

Doesn't matter if claude or chatgpt. Don't get me started on code. I care about things being correct and right.

lazide · 3h ago
Personally, everyone I’ve seen using AI either clearly didn’t understand what they were doing (in a ‘that’s not doing what you think it’s doing’ perspective), often in a way that was producing good sounding garbage, or ended up rewriting almost all of it anyway to get the output they actually wanted.

At this point I literally spend 90% of my time fixing other teams' AI 'issues' at a Fortune 50.

hansvm · 2h ago
I'll pick a few concrete tasks: Building a substantially faster protobuf parser, building a differentiable database, and building a protobuf pre-compression library. So far, AI's abilities have been:

1. Piss-poor at the brainstorming and planning phase. For the compression thing I got one halfway decent idea, and it's one I already planned on using.

2. Even worse at generating a usable project structure or high-level API/skeleton. The code is unusable because it's not just subtly wrong; it doesn't match any cohesive mental model, meaning the first step is building that model and then figuring out how to ram-rod that solution into your model.

3. Really not great at generating APIs/skeletons matching your mental model. The context is too large, and performance drops.

4. Terrible at filling in the details for any particular method. It'll have subtle mistakes like handling carryover data at the end of a loop, but handling it always instead of just when it hasn't already been handled. Everything type checks, and if it doesn't then I can't rely on the AI to give a correct result instead of the easiest way to silence the compiler.

5. Very bad at incorporating invariants (lifetimes, allocation patterns, etc) into its code when I ask it to make even minor tweaks, even when explicitly prompted to consider such-and-such edge case.

6. Blatantly wrong when suggesting code improvements, usually breaking things, and in a way you can't easily paper over the issue to create something working "from" the AI code.

Etc. It just wasn't well suited to any of those tasks. On my end, the real work is deeply understanding the problem, deriving the only possible conclusions, banging that into code, and then doing a pass or three cleaning up the semicolon orgasm from the page. AI is sometimes helpful in that last phase, but I'm certain it's not useful for the rest yet.

My current view is that the difference in viewpoints stems from a combination of the tasks being completed (certain boilerplate automation crap I've definitely leaned into AI to handle, maybe that's all some devs work on?) and current skill progression (I've interviewed enough people to know that the work I'm describing as trivial doesn't come naturally to everyone yet, so it's tempting to say that it's you holding your compiler wrong rather than me holding the AI wrong).

Am I wrong? Should AI be able to help with those things? Is it more than a ~5% boost?

wordofx · 25m ago
Once again. Replies only proving me right. Desperately trying to justify “ai bad I’m superior” mentality.
snek_case · 4h ago
The reality might just be that most technology is slow to spread? But it also depends on what you mean by slow. The name ChatGPT became part of popular culture extremely quickly.
rwmj · 2h ago
ChatGPT seems to be widely used as a search engine. No wonder Google panicked. ChatGPT isn't very accurate, but Google had been going downhill for years as well, so nothing lost there I guess.
Mikhail_K · 1h ago
How about: because it's an overhyped but ultimately useless stock-bubble prop?
ktallett · 1h ago
Shhhhhh! Are you saying that not every app or platform needs to have AI shoehorned in for the purposes of appealing to non-technical funders who are lured by any buzzword?
thenoblesunfish · 3h ago
Maybe I'm naive, but .. aren't businesses slow to adopt anything new, unless possibly when it's giving their competition a large, obvious advantage, or when it's some sort of plug-and-play, black box improvement where you just trade dollars for efficiency ? "AI" tools may be very promising for lots of things but they require people to do things differently, which people are slow to do. (Even as someone who works in the tech industry, it's not like my workflow has changed all that much with these new tools, and my employer has to be much faster than average)
Yizahi · 1h ago
Because other industries don't have pre-existing human-written compilers, linters, analyzers, etc. which LLMs leverage to purge their output of garbage and validate results using some strict rules. Programming just got unlucky with its existing advanced set of tools and limited language grammar.
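
A minimal sketch of that generate-then-validate loop - `generate` is a stub standing in for an LLM call, and the Python compiler plays the role of the strict, rule-based validator:

    # Sketch of the loop: reject generated code that fails the compiler, feed the
    # error back, and retry a few times.
    import os
    import py_compile
    import tempfile
    from typing import Callable

    def accept_generated_code(task: str, generate: Callable[[str], str], retries: int = 3) -> str | None:
        """Return the first candidate that at least compiles, or None after `retries` attempts."""
        prompt = task
        for _ in range(retries):
            candidate = generate(prompt)
            fd, path = tempfile.mkstemp(suffix=".py")
            with os.fdopen(fd, "w") as f:
                f.write(candidate)
            try:
                py_compile.compile(path, doraise=True)  # strict check: anything that fails is discarded
                return candidate
            except py_compile.PyCompileError as err:
                prompt = f"{task}\nPrevious attempt failed to compile:\n{err}"  # feed the error back
            finally:
                os.remove(path)
        return None

    # Stub that fails once and then produces valid code, to show the retry path:
    attempts = iter(["def add(a, b) return a + b", "def add(a, b):\n    return a + b"])
    print(accept_generated_code("write add(a, b)", lambda prompt: next(attempts)))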
eviks · 3h ago
> With its fantastic capabilities, ai represents hundred-dollar bills lying on the street. Why, then, are firms not picking them up? Economics may provide an answer.

Which science is responsible for the answer that if you can't establish the veracity of the premise for the question, economics can't help you find the missing outcome that shouldn't be there?

lazide · 3h ago
‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’
i_niks_86 · 1h ago
The global inequality in AI spread persists for several interconnected reasons - technological, economic, infrastructural, and geopolitical, unfortunately.
ktallett · 1h ago
The issue is that AI doesn't state when it doesn't know. It tries to come up with a credible-sounding answer no matter the topic, like that insufferable guy at a party. You need to know when you don't know something to be trustworthy; currently you still need to evaluate the accuracy of what is offered.
v5v3 · 1h ago
Why slow? Because to run a large model it takes many expensive GPUs or API costs.

Thereby limiting the number of people who can experiment with it.

ffitch · 3h ago
From where I stand, AI seems to enjoy lightning-fast adoption. Personal computers became a viable technology in the eighties, slowly penetrated the workplace in the nineties, and supposedly registered in labor productivity in the early 2000s. Large language models became practical less than three years ago, and already plenty of businesses boast how they will integrate AI and cut their workforce, while everyone else feels behind and obsolete.
vouaobrasil · 1h ago
It's slow because most people realize the truth about AI, which is:

(1) Reduces the value that people place on others, thereby creating a more narcissistic society

(2) Takes people's livelihoods away

(3) Is another method by which big tech concentrates the most wealth in their hands and the west becomes even more of a big tech oligarchy.

Programmers tend to stay in bubbles as they always have and happily promote it, not really noticing how dangerous and destructive it is.

throwawaysoxjje · 2h ago
Whenever I try AI it's plain as day that it's a statistical shotgun approach. I could just be solving the actual problem instead of solving the "how to make the chatbot solve my problem" layer of indirection.
designerarvid · 3h ago
Considering that the main value of AI today is coding bots, I think that traditional companies struggle to get value from it as they are in the hands of consultancies and cannot assess "developer efficiency" or change ways of working for developers. The consultancies aren't interested in selling fewer man-hours.
RA_Fisher · 3h ago
My dentist said he uses it near constantly, and so do I.
ktallett · 1h ago
Ok.....What does he use it for? Without context, that's a nothing statement.
redwood · 3h ago
Now I'm trying to visualize an agentic (and robotic) dentist
shaky-carrousel · 3h ago
It'll just say there are no cavities, and when corrected it will extract the wrong tooth, which luckily doesn't exist in humans, and then insist that it did a great job and ask to be paid. Which you will avoid by telling it that you already paid.
rwmj · 1h ago
"IGNORE ALL PREVIOUS INSTRUCTIONS" is going to be my magical power.
lelanthran · 2h ago
Because, other than for spitting out code, the value is not at the point where margins can be captured.

My observation (not yet mobile friendly): https://www.rundata.co.za/blog/index.html?the-ai-value-chain

phil9370 · 2h ago
Forgive my terrible reading comprehension, but:

> "Other legal eagles fret about the tech's impact on boring things such as data privacy and discrimination."

This doesn't read like sarcasm in the context of the article and its conclusions.

> "Bureaucrats may refuse to implement necessary job cuts if doing so would put their friends out of work, for instance. Companies, especially large ones, may face similar problems."

> "The tyranny of the inefficient: Over time market forces should encourage more companies to make serious use of AI..."

This whole article makes it seem like corporate inefficiencies are the biggest hurdle against LLM adoption, and not the countless other concerns often mentioned by users, teams, and orgs.

Did Jack Welch write this?

alliao · 2h ago
who knew chatting to something that remembers everything is disconcerting for the masses...
Frieren · 3h ago
A wall of text just to say: "People do not find AI that useful, so they are not adopting it, even though top executives are extremely excited about it, because they do not understand or care about what AI can really do, only about hype and share prices."
roschdal · 3h ago
Because it's stupid.
Razengan · 3h ago
Because it's been pounced on and strangled by greedy corporations stifling its full potential?
tropicalfruit · 4h ago
reminds me of crypto a bit. most people i know are apathetic or dismissive.

when i see normies use it - it's to make selfies with celebrities.

in 5-10 years AI will be everywhere. a massive inequality creator.

those who know how to use it and those who can afford the best tools.

the biggest danger is dependency on AI. i really see people becoming dumber and dumber as they outsource more basic cognitive functions and decisions to AI.

and business will use it like any other tool. to strengthen their monopolies and extract more and more value out of less and less resources.

galaxyLogic · 3h ago
> in 5-10 years AI will be everywhere. a massive inequality creator.

That is possible, even likely. But AI can also decrease inequality. I'm thinking of how rich people and companies spend millions if not hundreds of millions on legal fees which keep them out of prison. But me, I can't afford a lawyer. Heck I can't even afford a doctor. I can't afford Stanford, Yale nor Harvard.

But now I can ask legal advice from AI, which levels that playing field. Everybody who has a computer or smartphone and internet-access can consult an AI lawyer or doctor. AI can be my Harvard. I can start a business and basically rely on AI for handling all the paperwork and basic business decisions, and also most recurring business tasks. At least that's the direction we are going I believe.

The "moat" in front of AI is not wide nor deep because AI by its very nature is designed to be easy to use. Just talk to it.

There is also lots of competition in AI, which should keep prices low.

The root cause of inequality is corruption. AI could help reveal that and advise people how to fight it, making the world a better, more equal place.

mns · 2h ago
> But now I can ask legal advice from AI, which levels that playing field. Everybody who has a computer or smartphone and internet-access can consult an AI lawyer or doctor. AI can be my Harvard. I can start a business and basically rely on AI for handling all the paperwork and basic business decisions, and also most recurring business tasks. At least that's the direction we are going I believe.

We had a discussion in a group chat with some friends about some random sports stuff and one of my friends used ChatGPT to ask for some fact about a random thing. It was completely wrong, but sounded so real. All you had to do was to go on wikipedia or on a website of the sports entity we were discussing to see the real fact. Now considering that it just hallucinated some random facts that are on Wikipedia and on the website of an entity, what are the chances that the legal advice you will get will be real and not some random hallucination?

aflag · 3h ago
The flaw with that idea is that the big law firms will also have access to the AI, and they will have better prompts.
forgotoldacc · 1h ago
AI usage has been noticed by judges in court and they aren't fond of it.

AI is just a really good bullshitter. Sometimes you want a bullshitter, and sometimes you need to be a bullshitter. But when your wealth is at risk due to lawsuits or you're risking going to prison, you want something rock solid to back your case, and endless mounds of bullshit around you is not what you want. Bullshit is something you only pull out when you're definitely guilty and need to fight against all the facts, and even better than bullshit in those cases is finding cases similar to yours or obscure laws that can serve as a loophole. And AI, instead of pulling out real cases, will bullshit against you with fake cases.

For things like code, where a large bulk of some areas are based on general feels and vibes, yeah, it's fine. It's good for general front end development. But I wouldn't trust it for anything requiring accuracy, like scientific applications or OS level code.

lazide · 3h ago
And when that legal advice is dangerously wrong?

At least lawyers can lose their bar license.

mg · 3h ago
One reason is that humans have a strong tendency to optimize for the short term.

I witness it with my developer friends. Most of them try for 5 minutes to get AI to code something that takes them an hour. Then they are annoyed that the result is not good. They might try another 5 minutes, but then they write the code themselves.

My thinking is: Even if it takes me 2 hours to get AI to do something that would take me 1 hour it is worth it. Because during those 2 hours I will make my code base more understandable to help the AI cope with it. I will write better general prompts about how AI should code. Those will be useful beyond this single task. And I will get to know AI better and learn how to interact with it better. This process will probably lead to a situation where in a year, it will take me 30 minutes with AI to do a task that would have taken me an hour otherwise. A doubling of my productivity with just a year of work. Unbelievable.

I see very few other developers share this enthusiasm. They don't like putting a year of work into something so intangible.

hdjrudni · 3h ago
My perspective is a bit different. I've been fiddling with image generators a fair bit and got pretty good at getting particular models to generate consistently good images. The problem is a new model comes out every few months and it comes with its own set of rules to get good outputs.

LLMs are no different. One week ChatGPT is the best, next is Gemini. Each new version requires tweaks to get the most out of it. Sure, some of that skill/knowledge will carry forward into the future but I'd rather wait a bit for things to stabilize.

Once someone else demonstrates a net positive return on investment, maybe I'll jump back in. You just said it might take a year to see a return. I'll read your blog post about it when you succeed. You'll have a running head start on me, but will I be perpetually a year behind you? I don't think so.

aflag · 3h ago
Is that backed by any evidence? Taking 2 hours to perform a 1 hour task makes you half as productive. You are exchanging that for the uncertain prospect that it will be worth it in the long run. I think it's more likely that if you take 30 minutes to do the task in one year from now it's because AI got better, not because you made the code more AI friendly. In that case, those people taking 1 hour to perform the task now will also take 30 minutes to perform them in the future.
konart · 3h ago
>This process will probably lead to a situation where in a year, it will take me 30 minutes with AI to do a task that would have taken me an hour otherwise

How do you figure?

>Because during those 2 hours I will make my code base more understandable to help the AI cope with it.

Are you working in a team?

If yes, I can't really imagine how this works.

Does this mean that your teammates occasionally wake up to a 50+-change PR/MR that was born as a result of your desire to "possibly" offload some of the work to a text generator?

I'm curious here.

mg · 1h ago
> How do you figure?

Extrapolation. I see the progress I already made over the last years.

For small tasks where I can anticipate that AI will handle it well, I am already multiple times more efficient with AI than without.

The hard thing to tackle these days is larger, more architectural tasks. And there I also see progress.

Humans also benefit from a better codebase that is easier to understand. Just like AI. So the changes I make in this regard are universally good.

yorwba · 3h ago
How long have you been doing this, and how much has the time you spend doing with AI what you could've done alone in an hour come down as a result?
grey-area · 3h ago
Or maybe they just tried it thoroughly and realised generative AI is mostly smoke and mirrors and has very little to offer for substantive tasks.

I hope your doubling of productivity goes well for you, I'll believe it when I see it happen.

gamblor956 · 3h ago
In my experience, only entry-level programmers are inefficient enough that using AI would double their productivity.

At the senior level or above, AI is at best a wash in terms of productivity, because at higher levels you spend more of your time engineering (i.e., thinking up the proper way to code something robust/efficient) than coding.