Nobel Laureate Daron Acemoglu: Don't Believe the AI Hype

74 hannofcart 53 5/29/2025, 8:04:25 AM project-syndicate.org ↗

Comments (53)

jqpabc123 · 22h ago
The manufacturing sector has been heavily into automation, without AI, for quite some time.

Take a tour of a modern auto assembly line and, if you're like me, you'll be shocked by two things --- how few people are involved and the lack of lights (robots don't need them).

At the Hyundai assembly plant in Montgomery, Alabama, only about 24 hours of human labor goes into building each car.

At an average rate of about $30 per hour, less than $1000 of human labor goes into each new car.

This doesn't leave a lot of room for AI to have a major impact.
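
A quick back-of-the-envelope check of that claim (the 24-hour and $30/hour figures are the ones above; the $30,000 sticker price is an assumed round number):

    # Direct human labor embedded in each car, using the figures above
    # (24 hours, ~$30/hour) and an assumed $30,000 sticker price.
    hours_per_car = 24
    wage_per_hour = 30.0
    sticker_price = 30_000.0

    labor_cost = hours_per_car * wage_per_hour
    print(f"direct labor per car: ${labor_cost:.0f}")                   # $720
    print(f"share of sticker price: {labor_cost / sticker_price:.1%}")  # 2.4%

Even automating the remaining labor away entirely would shave only a couple of percent off the cost of a car.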

topaz0 · 21h ago
The column mentions manufacturing automation and claims that even where automation has been wildly successful, it gives less than a 30% increase in total productivity (edit to add: in those particular tasks where it is most effective). That's part of his intuition for why LLMs are unlikely to give much more than that on any particular task.
alephnerd · 21h ago
But even a 30% aggregate increase in TFP can have massive implications for a job market, as is already being seen.

The issue is a lot of people (especially policymaking adjacent) have an incentive to either use a "skynet is coming" story or a "there is nothing happening" story.

The reality is it's somewhere in the middle, and plenty of white-collar jobs are ripe for significant reductions in headcount.

topaz0 · 21h ago
The article also doesn't claim there will be no impact, and indeed acknowledges that for particular roles it could be very consequential. The headline is just that when you average over everything the economy does, a 0.5% to 1% productivity gain is a more plausible outcome than 10-30%. That's the "somewhere in the middle" conclusion that this particular economist comes to (as of last year).
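
For the curious, the arithmetic behind a number in that range is essentially Hulten's theorem: the aggregate gain is the GDP cost share of the affected tasks times the average savings on them. A minimal sketch in Python, with illustrative round numbers in the spirit of Acemoglu's paper rather than his exact inputs:

    # Hulten-style aggregation: aggregate TFP gain ~= (cost share of
    # tasks AI actually touches) x (average cost saving on those tasks).
    # All four inputs below are illustrative round numbers.
    exposed_task_share   = 0.20   # share of work tasks exposed to AI
    profitably_automated = 0.23   # fraction of those worth automating soon
    labor_cost_share     = 0.57   # labor's share of total costs
    avg_cost_saving      = 0.27   # saving on a task once automated

    affected_cost_share = exposed_task_share * profitably_automated * labor_cost_share
    tfp_gain = affected_cost_share * avg_cost_saving
    print(f"decade-scale TFP gain: {tfp_gain:.2%}")   # ~0.71%

Large per-task savings multiplied by a small affected cost share is how you land under 1% in aggregate.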
squidbeak · 20h ago
The reality is somewhere in the middle for now. Over long enough timescales, it will trend close to full automation (or all the way), unless society rejects AI and policy intervenes to stop it, or unless there's some hard final barrier that no amount of time, money, compute, ingenuity or labor can surmount. The latter seems improbable (are we at AI's vigil, or its nativity?), so policymakers anticipating the former are only being responsible.
squidbeak · 20h ago
Excuse me if I'm mistaken, but you seem to assume that the assembly plant is already maximally optimized and that only the human labor could be improved by AI. That discounts the potential effect of AI on re-engineering the plant with unorthodox insights, or through comprehensive fast simulations that haven't been feasible before. Then there are any new engineering techniques it may arrive at, new materials, the scope it will bring to robotics, and so on.
jqpabc123 · 20h ago
So basically, you're suggesting AI will replace or have a significant impact on the industrial engineers whose job is to promote efficiency?

That's expecting a lot from something that still struggles to count letters in words or take orders at a fast food drive thru.

squidbeak · 20h ago
> something that still struggles to count letters in words or take orders at a fast food drive thru.

You're expecting quite a bit more if you think that here in 2025 we're at the end state of AI development.

teleforce · 19h ago
AI has an unpopular but really clever cousin, namely intelligent automation (IA). It has already been helping humanity, since we actually know how to automate [1],[2],[3],[4].

[1] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:

https://www.youtube.com/live/TknN8fCQvRk

[2] "We Really Don't Know How to Compute!" - Gerald Sussman - MIT (2011) [video]:

https://youtube.com/watch?v=HB5TrK7A4pI

[3] Google OR-Tools:

https://developers.google.com/optimization

[4] MiniZinc:

https://www.minizinc.org/
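
For a concrete taste of [3], here is a minimal constraint model using OR-Tools' CP-SAT solver in Python (a sketch assuming pip install ortools; the toy bounds and objective are mine):

    # Toy constraint-programming model with Google OR-Tools CP-SAT:
    # choose integers x, y in [0, 10] maximizing 2x + 3y with x + y <= 10.
    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    x = model.NewIntVar(0, 10, "x")
    y = model.NewIntVar(0, 10, "y")
    model.Add(x + y <= 10)
    model.Maximize(2 * x + 3 * y)

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status == cp_model.OPTIMAL:
        print(f"x={solver.Value(x)}, y={solver.Value(y)}")  # x=0, y=10

The same declarative style scales up to the scheduling, routing and packing problems that have quietly run supply chains for decades.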

aurareturn · 21h ago
No one should be surprised. Manufacturing is producing the same thing over and over again. Little to no AI should be needed.
jqpabc123 · 21h ago
Ok, so we agree that manufacturing is pretty much out.

So how about service jobs? How about one of the lowest level service jobs imaginable --- taking orders at a fast food drive thru?

IBM and McDonald's spent 3 years trying to get AI to take orders at drive-thru windows.

Here are the results:

https://apnews.com/article/mcdonalds-ai-drive-thru-ibm-bebc8...

blooalien · 20h ago
I can actually see this bein' one task that current levels of language models would excel at, honestly... Given the limited list of items on a typical fast-food menu, and the accuracy of even some of the lowliest modern language models and speech recognition, I see no reason why fast-food order-taking needs to be handled by humans at all anymore, especially if you confirm the final order with the human ordering before proceeding; I could honestly see that bein' much more accurate than a human doing that job. (I can't count how many times over the years I've had a human order-taker completely screw the order up despite them repeating the order back exactly as given. A well-designed LLM-based system likely shouldn't have that problem. What it repeats back should end up bein' exactly the order that the system pushes through to completion.)
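
A hypothetical sketch of that confirm-before-commit loop (the menu, the toy parser and the function names here are all made up; a real system would put speech-to-text and an LLM where parse_order sits):

    import re

    # Fixed menu: the parser can only ever emit items from this dict,
    # so nothing off-menu can reach the till.
    MENU = {"burger": 4.99, "fries": 2.49, "shake": 3.29}

    def parse_order(utterance: str) -> dict[str, int]:
        """Toy stand-in for an LLM parser: count menu items named verbatim."""
        words = re.findall(r"[a-z]+", utterance.lower())
        return {item: words.count(item) for item in MENU if item in words}

    def take_order(utterance: str) -> dict[str, int] | None:
        order = parse_order(utterance)
        if not order:
            return None  # nothing recognized; a real system would re-prompt
        readback = ", ".join(f"{n} x {item}" for item, n in order.items())
        print(f"So that's {readback} --- is that right?")
        # What gets read back is the exact object that gets submitted,
        # which is the property the comment above is banking on.
        return order

    take_order("a burger, a burger and fries please")
    # So that's 2 x burger, 1 x fries --- is that right?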
jqpabc123 · 19h ago
> would excel at, honestly...

You would think so --- but well-financed tests in the real world suggest otherwise.

DrillShopper · 19h ago
Typical AI fanatic behavior - presented with the evidence that it doesn't work and goes "hmm, this should work perfectly!"

If that doesn't sum up AI hype and apologia then I don't know what does.

blooalien · 19h ago
Yeah, no. I'm not an "AI fanatic" by a long shot, but whatever... I use AI sometimes, and other times I don't. When I do use it, I use it for what it's good at. When I don't, it's because it's simply not capable of the task at hand. Simple as that. :shrug:
aurareturn · 20h ago
It's not clear what the problem is. Is it that the mic quality is not good enough for an AI? Is it that the AI is not smart enough? Is it that people generally don't like AI taking orders? Is the latency not good enough?

Or is it that people prefer to preorder on the phone instead and pick up?

jqpabc123 · 20h ago
After years of effort, I'd say that most of the simple non-AI issues were examined.

Lots of videos on TikTok illustrate the problem.

https://www.businessinsider.com/tiktokers-show-failures-with...

newyankee · 18h ago
Once you incentivise people to use AI by unbundling the cost of using humans versus the cost of using AI, you will see a lot of people fall in line, although legally I don't think it will be easy to implement. I am sure a lot of people already order via web or app nowadays.
perching_aix · 14h ago
There's no AI needed for that either. Just order on your phone, drive there, scan, machine dispenses your order and goodbye.

Much like how when you go to one of these places >>right now<< you just walk up to a kiosk, input your order, pay, then collect your order at the desk.

Couple more years and we'll rediscover that vending machines exist.

fzzzy · 13h ago
OK? Taco Bell is taking AI orders at drive-thrus right now.
jqpabc123 · 4h ago
Not "full self driving", human supervision is still required.

"a Taco Bell employee is still always listening on the other end of the ordering system with the ability to intervene"

romec · 21h ago
Yup, automation has been happening since (before) the 1970s, finance since the '80s, media since the '90s, digitalization since the '00s, and all of it more ever since. AI currently has the least impact of any major development and will for some time. Hacker News isn't the real world.
blooalien · 20h ago
> Yup, automation has been happening since (before) the 1970s ...

Yeah, significantly before the '70s, unless you're specifically talkin' about robotic automation. Folks been automating human labor with automated machinery of various kinds for quite a long time before that.

DrillShopper · 19h ago
Ned Ludd did nothing wrong
squidbeak · 20h ago
None of that automation thought for itself, or could undertake its own automation. The potential in all those former waves was limited by the skills of human beings, but the limit on AI eventually is only compute. These are not the same thing at all.
romec · 18h ago
A lot of automation isn't what people think of as automation. Is the average supermarket automated? A bit, but not a lot. But 90% of what they are actually selling has been automated in some way. Groceries have been cooked, picked, preserved, packed, shipped or otherwise processed with the help of automation. So regardless of how smart an AI becomes, it doesn't provide much more value as a cooking robot. We have automated cooking for decades by processing food. But we are still limited by physics, society and need.

AI will probably make music free. But it is already almost free with cheap instruments, recording equipment and distribution. And even before that, music wasn't that expensive. You can argue that we lose value in not performing it ourselves. That is some impact, but not one that strictly replaces the other. You can choose to have a society where you teach music, and it will still provide value over AI.

I do realize that the idea is often not that we will have cooking robots, but that AI will change chemistry or biology to where food is something else. Still hard to say if or when that happens, and what impact it would actually have.

mistercow · 1d ago
> Still, while it is basically impossible to predict with any confidence what AI will do in 20 or 30 years, one can say something about the next decade, because most of these near-term economic effects must involve existing technologies and improvements to them.

I think you would be hard pressed to find someone who was making adequate predictions about where we would be now back in 2020, much less 2015, and if you did, I doubt many people would have taken them seriously.

I’d argue that we can currently speak with some level of confidence about what things will be like in three years. After that, who knows?

fennecfoxy · 22h ago
Yup. Just because someone's a Nobel Laureate (he's an economist) doesn't mean they're right. Just like I won't let my doctor inform me on tabs vs. spaces.

Economists, businesspeople & their ilk have proven time & time again that 99% of them just throw darts at a board & see what sticks. The only ingredients required are money, connections and extroversion (height helps too). That's not to say that most scientists don't do the same thing; that is science, after all.

I doubt many people at all expected even the success of LLMs before Google's attention paper. NLP experienced a huge jump: previous models always seemed to me like handwritten sets of statistical rules stringing together text, and now we have trained sets of statistical rules orders of magnitude more complex... I have no idea what we'll end up with next.

ssivark · 21h ago
> I doubt many people at all would have expected even the success of LLMs before Google's attention paper. NLP experienced a huge jump

AI doing fantastically better on AI benchmarks is different from AI greasing the wheels of the economy towards greater productivity. Acemoglu doesn't have much to say about the former (he's an economist, after all) and is focusing on the latter.

It is still debated whether and how personal computing has influenced productivity: https://en.wikipedia.org/wiki/Productivity_paradox

Suffice to say that even though these technologies might change life to feel radically different -- it remains to be seen how that finally snowballs into overall productivity. Of course, this is also complicated by questions of whether we're measuring productivity correctly.

oidar · 22h ago
>Just because someone's a Nobel Laureate (he's an economist) doesn't mean they're right.

https://fivethirtyeight.com/features/the-economics-nobel-isn...

blooalien · 19h ago
> Just like I won't let my doctor inform me on tabs vs. spaces.

Tabs for indentation, spaces for alignment. 100% all the way. Anything else is Heresy... ;)

blooalien · 17h ago
Hahahaha! Okay. Down-vote fully accepted as totally justifiable... I clearly risked a flame-war by wading into religious territory like "tabs vs spaces"... :rofl:

(Seriously though, tabs all the way for me... It's just less key-presses.)

ivape · 22h ago
"Yup. Just because someone's a Nobel Laureate"

It's also worth pointing out that merely using technology is not the same as being part of the cohort of people who spend their whole lives building and working with technology and dreaming about where it can go.

"It is reasonable to suppose that AI’s biggest impact will come from automating some tasks and making some workers in some occupations more productive."

This person needs the Ghost of AI Present and Future to come show him a bit more of this tech first-hand (try out Google Flow and then try to make a statement like the one above; you won't be able to).

---

And oddly, this was just recommended to me on Youtube:

The AI Revolution Is Underhyped | Eric Schmidt (former Google CEO) | TED

https://www.youtube.com/watch?v=id4YRO7G0wE

TheAlchemist · 22h ago
Really?

A provocative question for sure, but how much have things changed since 2020? Or even 2015?

I'm talking about changes in the real economy. Except for the huge system shock that was Covid, not that much.

sillyfluke · 22h ago
Yeah, it's interesting to think about change in terms of the change in the economy. It might be rose-tinted glasses when looking at the past, but when "the internet" went mainstream, i.e. when it had its "ChatGPT 3.5" moment, within two years there was more significant economic impact than in this round of AI, as I recall. And I'm thinking of the normal economy, not the VC hype money sloshing around. If that's true, I'm guessing the cost factor of AI vs the internet is also a substantial factor.

EDIT: I see that someone on the thread posted that Krugman apparently doesn't think the internet brought real economic change either.

noobermin · 21h ago
I will kindly point out that the article isn't merely about culture or technology; it's specifically about AI's impact on the economy. This isn't just an "insightful observation", it's the whole point of the article.
mrkramer · 23h ago
Modern AI started with NLP, computer vision and speech recognition, and it was expected that as chips got more powerful and faster, software and AI people would figure out how to utilize the massive new computing capabilities. My prediction would've been the early 2010s for something like LLMs to occur, but I guess I am too optimistic. And probably, if it weren't for Google and their enormous spending on R&D, we would see LLMs not in the early 2020s but in the early 2030s.
Barrin92 · 1d ago
>you would be hard pressed to find someone who was making adequate predictions about where we would be now back in 2020, much less 2015

Macroeconomic and productivity forecasts from 10-15 years ago are pretty accurate, and if anything, were too optimistic on the productivity front, but there was certainly nothing wrong with taking them seriously.

spacebanana7 · 1d ago
Macro forecasts are generally much easier than those tied to specific technologies. We can be much more specific & confident about predicting next year's inflation rate than next year's ChatGPT+ pricing, for example.
naijaboiler · 19h ago
Yup, a lot of micro factors in aggregate average/cancel/smooth out at the macro level, which is why effects at the macro level are more muted and much more predictable.
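
A ten-line illustration of that averaging (the numbers are made up: 10,000 firms, each with a noisy annual productivity shock of mean 1% and standard deviation 20%):

    # Micro shocks are wild; their average is tame (law of large numbers).
    import random

    random.seed(0)
    n_firms = 10_000
    for year in range(5):
        shocks = [random.gauss(0.01, 0.20) for _ in range(n_firms)]
        macro = sum(shocks) / n_firms
        print(f"year {year}: one firm {shocks[0]:+.1%}, aggregate {macro:+.2%}")

Individual firms swing by tens of percent; the aggregate barely moves off its 1% mean, which is why the macro series is the forecastable one.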
mistercow · 1d ago
Is there a good source that tracks the performance of these forecasts? I’d be particularly interested in seeing what things looked like in, say, 2005, looking ahead ten years, and then maybe 2008. That would be right before and right after the smartphone boom started, which might be our best recent basis for comparison.
kgwxd · 22h ago
A ton of people saw today coming in 2016, based on one "if": a second term. Not enough people listened, or they liked those predictions.
jillesvangurp · 23h ago
For accurate predictions, or the lack thereof, it can be educational to look back in time. People in the late nineteenth century wrote down a lot of what in retrospect is hyperbole, nonsense, and rubbish. Some of it is pretty entertaining. The most outrageous ones actually got some of it right while completely missing the point at the same time. Jules Verne, for example, had a pretty lively imagination. We went to the moon. But not by cannon ball. And there wasn't a whole lot there to see and do. And flying around the world takes a lot less than 80 days. Even in a balloon it can be done a lot quicker.

I was born in the seventies. Much of what is science fact today was science fiction then. And much of that was pretty naive and enlightening at the same time.

My point is that nothing has changed when it comes to people's ability to predict the future. The louder people claim to know what it all means or rush to man-splain that to others, the more likely it is that they are completely and utterly missing the point. And probably in ways that will make them look pretty foolish in a few decades. Most people are just flailing around in the dark. And some of the crazier ones might actually be the ones to listen to. But you'd be well advised to filter out their interpretations and attempts to give meaning to it all.

HAL, Marvin the Paranoid Android, KITT, C3PO, R2D2, Skynet, Data, and all the other science fiction AIs from my youth are now pretty much science fact. Some of those actually look a bit slow in comparison. Are we going to build better versions of these? I'd be very disappointed in the human race if we didn't. And I'd also be disappointed if that ends up resembling the original fantasies of those things. I don't think many people are capable of imagining anything more coherent than versions of themselves dressed up in some glossy exterior. Which is of course what C3PO is. Very relatable, a bit stupid, and clownish. But also, why would you want such a thing? And the angry Austrian bodybuilder version of that of course isn't any better.

I think the raw facts are that we've invented some interesting software that passes the Turing test pretty much with flying colors. For much of my life that was the gold standard for testing AIs. I don't think anyone has bothered to actually deal with the formalities of letting AIs take that test and documenting the results in a scientific way; the test obviously became obsolete before people even thought of doing that. We now worry about abuse of AIs to deceive entire populations, with AIs pretending to be humans and manipulating people. You might actually have a hard time convincing people who have been abused in such a way that what they saw and heard was actually real. We imagined it would be hard to convince them that AIs are human. We failed to imagine that the job of convincing them they are not is much harder.

redlock · 3h ago
Really? Mansplain? Why bring gender-wars terms into this?
Nevermark · 15h ago
Value impact is always hard to predict.

1) It is, of course, hard to predict major capability advances.

2) But it is also hard to predict capability -> value thresholds. Some large advances won't cross a threshold. While some incremental advancements do.

3) This is all made infinitely harder, because the value chain has many layers and links, each with their own thresholds.

Major capability advances upstream may cross dramatic thresholds, generating reasonable hype, yet still hit downstream thresholds that stymie their impact.

And crossing a few small downstream thresholds can unlock massive latent upstream value, resulting in cascades of impact.

(This is something Apple aims to leverage: not over-prioritizing major advances, but carefully identifying and clearing numerous "trivial" yet critical downstream bottlenecks.)
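
A toy model of those chained thresholds (the numbers and the per-stage "capability"/"threshold" framing are entirely made up):

    # Value must clear every link's threshold to be delivered at all;
    # once it does, the weakest link caps what gets through.
    def delivered_value(capabilities, thresholds):
        if all(c >= t for c, t in zip(capabilities, thresholds)):
            return min(capabilities)
        return 0.0

    thresholds = [0.5, 0.5, 0.5]
    print(delivered_value([0.9, 0.9, 0.4], thresholds))  # 0.0: blocked at the last link
    print(delivered_value([0.9, 0.9, 0.5], thresholds))  # 0.5: tiny fix, big unlock

A huge upstream advance delivers nothing while the last link sits below threshold; a tiny fix to that link (0.4 -> 0.5) unlocks everything behind it.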

bko · 23h ago
I'm reminded of Nobel laureate Paul Krugman's 1998 quote:

> “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
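
(For reference, the "square" there is just the pairwise-connection count:

    # n participants can form n*(n-1)/2 distinct pairs, i.e. ~n^2 growth.
    for n in (10, 100, 1_000, 10_000):
        print(n, n * (n - 1) // 2)

Krugman's point was that most of those potential connections are worthless, not that the count is wrong.)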

The internet's impact on society and business didn't happen overnight. Naive studies of how people are using it today miss the point. I remember a clip of David Letterman mocking Bill Gates about the internet. Gates said you could get baseball recaps, to which Letterman replied, "Have you ever heard of a radio?"

But maybe I’m just more optimistic because these tools have made a huge impact on my life and productivity gain is more than low single digits. I’m not typical, but I imagine others will catch up.

nabla9 · 22h ago
"The Internet Was an Economic Disappointment" (April 4, 2023): https://archive.ph/3fidX

>Life being what it is, several people came back at me, citing a prediction I made in 1998 that the internet’s growth would soon slow and that “by 2005 or so, it will become clear that the internet’s impact on the economy has been no greater than the fax machine’s.” I did indeed say that, in a throwaway piece I wrote for the magazine The Red Herring — a piece I still don’t remember having written, but I guess I was trying to be provocative.

[...]

>But how wrong was I, really, about the internet’s economic impact? Or, since this shouldn’t be about me, have the past few decades generally vindicated visionaries who asserted that information technology would change everything? Or have they vindicated techno-skeptics like the economist Robert Gordon, who argued in a 2016 book that the innovations of the late 20th and early 21st century were far less fundamental than those between 1870 and 1940? Well, by the numbers, the skeptics have won the argument, hands down.

>In that last newsletter, we looked at 10-year rates of growth in labor productivity, which suggested that information technology did indeed produce a bump in economic growth between the mid-1990s and the mid-2000s, but one that was relatively modest and short-lived. Today, let me take a slightly different approach. The Bureau of Labor Statistics produces historical estimates, going back to 1948, of both labor productivity and “total factor productivity,” an estimate of the productivity of all inputs, including capital as well as labor, which is widely used by economists as a measure of technological progress. A truly fundamental technological innovation should cause sustained growth in both these measures, especially total factor productivity.

(read the article to see pictures)

>See the great productivity boom that followed the rise of the internet? Neither do I.

jappgar · 22h ago
People who can't admit when they are wrong are absolutely insufferable.
naijaboiler · 19h ago
His mistake was in couching it in the language of internet adoption rather than gross impact on economic productivity. He was wrong on the former and mostly right on the latter. It is very possible for a technology to have a dramatic effect on society without a big impact on overall economic productivity.

Daron Acemoglu is making only the latter argument. I'm betting he's right.

nabla9 · 22h ago
Fortunately Krugman is not one of them.
dgroshev · 23h ago
I think any analysis of the Internet's impact on the economy should also address the relatively constant slope of productivity growth: https://fred.stlouisfed.org/series/OPHNFB
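
If you want to pull that series yourself, a minimal sketch (assumes pip install pandas-datareader; OPHNFB is nonfarm business sector output per hour):

    # Fetch the FRED productivity series linked above and look at
    # year-over-year growth: a few percent of drift, with no dramatic
    # internet-era break in the trend.
    import pandas_datareader.data as web

    ophnfb = web.DataReader("OPHNFB", "fred", start="1990-01-01")
    yoy = ophnfb["OPHNFB"].pct_change(4).dropna()  # quarterly data -> YoY
    print(yoy.describe())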
r721 · 1d ago
(2024)
jonstewart · 22h ago
This is the same Nobel Prize-winning economist who hyped the fake AI-revolutionizes-materials-science paper by one of his students just a few months ago.

https://thebsdetector.substack.com/p/ai-materials-and-fraud-...
