Why AI hasn't taken your job – And any jobs-pocalypse seems a long way off

30 points by helsinkiandrew | 49 comments | 5/27/2025, 6:41:45 AM | economist.com ↗

Comments (49)

saithound · 22h ago
I don't even lean toward the worst-case AI narratives, but it sure feels like Economist journos will keep pushing out "here's why AI won't take your job" articles, even as their own writers get quietly pushed out by ChatGPT creeping across their open-plan office one desk at a time.

In this piece, they lean heavily on precious "official American data", and celebrate the increased number of people working in translation, while conveniently ignoring more telling figures, such as the total amount those translators actually earn now per unit of work.

My partner works in university administration, and their "official data" tells a much spicier story. Their university still ranks highest in our country for placing computer engineering grads within six months of graduation. But over just six terms, the number of graduates in employment within six months dropped by half. That's not a soft decline by any means; it's more like the system breaking in real time.

jordanb · 20h ago
I'm on the other side of the fence in BigCo and from where I'm sitting we're not hiring any Americans because we're in the middle of the biggest outsourcing push I've ever seen.

The take seems to be "if your job can be done from Lake Tahoe it can be done from Bangalore". What's different this time around is that the entire tech organization is being outsourced, leadership and all. Additionally, Data Science and other major tech-adjacent roles are also affected.

For us, the hiring rate for tech and tech-adjacent roles in the US has been zero for several years. 100% of that is attributable to outsourcing, 0% to AI.

josefritzishere · 20h ago
I'm seeing the same thing. We have a formal IT hiring freeze, all jobs are moving overseas. However, AI has not been eating the jobs, just traditional outsourcing.
rhubarbtree · 21h ago
If we were in the boom times, less hiring would be a convincing signal. But the global economy is toast right now. There are very, very good reasons not to hire engineers, and it's plausible AI has nothing to do with it.

Anecdotally there’s no way AI has enabled me to replace a junior hire.

AI has major problems. Although it's a fantastic tool, right now I'm seeing a boost similar to the emergence of Stack Overflow. That might increase, but even then we may just see higher productivity.

HPsquared · 21h ago
I think it'll be a Jevons' paradox thing. If developer productivity improves, that just increases the scope of possible projects.
BugheadTorpeda6 · 19h ago
I think recent history has made Solow's paradox more interesting than Jevons', which is mostly talked about by people with something to sell related to AI, it seems, and less so by economists. It seems to have applied much better early in the industrial revolution. I'm not sure economists even work on Jevons' paradox anymore (or whether it was ever a very interesting topic for them; the writing on it seems very sparse in comparison).
HPsquared · 11h ago
In either case, developers won't have to worry about career longevity.
lesuorac · 19h ago
But what new projects do you need?

The West is in a world of abundance. We do not need 5 more ChatGPTs. It's better business to have one half-price ChatGPT than 3 full-priced ones.

Jevons' paradox requires a very large unmet consumer demand.

orwin · 19h ago
I work in network security, specifically on the automation team. As we tie more and more processes, products, and monitoring together, new demand is created, and our scope grows (unlike our team right now).

Being able to automatically write unit tests with minor input on my part, create mocks, and sometimes (mostly on front-end work or on basic interfaces) even generate code makes me more 'productive' (not a huge increase, but we work with a lot of proprietary stuff), and I'm okay with it. I also use it as a rubber duck, on advice from someone on HN, and it was a great idea.

HPsquared · 19h ago
It's hard to predict the details of emergent phenomena.
fragmede · 17h ago
> Everything that can be invented has been invented.

-Charles H. Duell, Commissioner of the US Patent Office, circa 1899

jvanderbot · 21h ago
I think a ton of hidden signals of a waning economy are being obscured by AI and globalization talk. Sure, some attempts to globalize and use more AI are genuine, but those are still cost cutting measures. And we cut costs when things aren't going well. There's just no way a brand new tech has penetrated the market enough to depress every sector of tech- or language-adjacent employment.
saithound · 15h ago
I do not disagree with your broader point, but it's worth noting that The Economist article deliberately framed its analysis around datasets that also wouldn't capture the economic slowdown!

That choice itself is telling.

fragmede · 17h ago
There's also something about tariffs and gutting government investment I've been hearing about from the US. I'm no economist, but it's possible that might have something to do with the economy waning.
suddenlybananas · 21h ago
>even as their own writers get quietly pushed out by ChatGPT

Have any of the Economist's writers been replaced by ChatGPT?

bgwalter · 21h ago
"will keep pushing" is future tense. The entire sentence is correct and has a well defined meaning.
jgwil2 · 19h ago
The word "keep" here is synonymous with "continue," implying that this is already happening. It's fair game to ask if that's actually the case.

And this is by the way, but English sentences almost always have some degree of ambiguity; to talk about "well defined meaning" in the context of natural languages is to make a category error.

suddenlybananas · 19h ago
It's ambiguous as to whether the clause after "as" is in the present tense or future since "will keep pushing" presupposes the action is already happening.
patates · 21h ago
I'm not sure if that can be attributed to AI or the ongoing recession.
bradlys · 20h ago
Tech layoffs were happening before the AI hype exploded.
FinnLobsien · 20h ago
ZIRP hangover.
saithound · 18h ago
I'm not sure either. I'm pointing out that The Economist is presenting misleading metrics deliberately: based on the "reliable American data" that they chose, you wouldn't see evidence of an ongoing recession either!
redserk · 21h ago
Regarding the last point, that doesn’t mean the jobs are replaced by AI though.

A lot of companies aren’t necessarily replacing jobs with AI. They’re opening development offices in Europe, India, and South America.

tacker2000 · 21h ago
Do you really think half of these grads aren't getting jobs because of AI replacing coders, in the short time these coding assistants have been available?

I mean, I've tried Claude Code. It's impressive and could be a great helper and assistant, but I still can't see it replacing such a large number of engineers. I still have to look over the output and check that it's not spitting out garbage.

I would guess basic coding will be replaced somewhat, but you still need people who guide the AI and detect problems.

larrled · 20h ago
You can't build a business on scaling out the hiring of 1000s of AI experts. You only need so many, which is why they get higher salaries. There will never be an Infosys or Tata for such workers like there was for many of us mere coders. Infosys and Tata will likely benefit, but their average worker will not.
tacker2000 · 20h ago
Well, these "AI experts" are just senior devs, and without once being a junior, you'll never become one. So there will be junior devs. They might not cut their teeth on CRUD apps anymore, but we will definitely have them.
ben_w · 21h ago
I can see current models replacing *fresh graduates*, on the basis of what I've seen from various fresh graduates over the last 20 years.

I don't disagree that models make a lot of eye-rolling mistakes; it's just that I've seen such mistakes from juniors too, and this kind of AI is a junior in all fields simultaneously (unlike real graduates, who are mediocre at only one thing and useless at the rest) and costs peanuts: literally, priced at an actual bag of peanuts.

Humans do have an advantage very quickly once they actually get any real-world experience, so I would also say it's not yet as good as someone with even just 2 years experience — but this comes with a caveat: I'm usually a few months behind on model quality, on the grounds that I think there's not much point paying for the latest and greatest when the best becomes obsolete (and thus free) over that timescale.

1231212 · 21h ago
These state-of-the-art models are barely able to code an MVP app without tons of hand-holding. Do you really think new grads are getting replaced by AI? I only see statements like these coming out of the likes of Elon Musk.
000ooo000 · 20h ago
Think that's the problem. The people who have the keys to the money to do the hiring are often off the tools and have no real grounding in the capabilities of the current generation of LLMs. They make decisions about how much to or not to hire based on the junk they see from the Elon Musk types.
FinnLobsien · 21h ago
From the current data, sure, things are fine. But societies and economies take decades to adapt to technologies.

This article is like writing in the early 90s that "Newspaper circulation is actually stable"—true if you're looking at a still, not true if you're watching the movie.

The "AI takes your job scenario" doesn't look like a company replacing your entire team with AI. It looks like the AI-enabled upstart with 100 people competing with your 2000 person company until it fails or replicates the AI-powered strategy.


Overall, I think this is a time of great upheaval (the combination of AI and post-ZIRP hangover) and we'll need to challenge a lot of the assumptions we had about careers and making money.

seec · 2h ago
Exactly. People looking at a direct 1-to-1 replacement are thinking about it the wrong way. Like you said, newspapers didn't get replaced by another kind of newspaper, various web media took their place.

I was talking about this kind of thing with my father the other day. He was telling me about a friend who does communications work in a very old-school way (the Adobe software he uses predates Creative Cloud). He is losing business to youngsters who offer the same kind of product/results for cheaper. Not just because they are young and lack skills, but because they use AI to generate stuff that takes him a long time to do. Now he doesn't care, because he is close to retirement, but the future for people like him is uncertain.

It's pretty clear that over time AI will replace all kinds of white-collar work with crazy efficiency. Now the question is: what will replace those jobs, and can we even replace them in our highly automated, efficiency-driven world?

bgwalter · 20h ago
The press is the same on any issue. First, they deny any adverse predictions, then they backpedal, after three years they write the opposite and claim that they have always said so.

There are also special issues like Russia sanctions on which the Economist changes its mind every three months.

FinnLobsien · 20h ago
Also, The Economist is as establishment as it gets. They have every incentive to claim AI isn't all that bad and use any data to justify that claim.
originalvichy · 19h ago
”AI will take jobs” has been mainstream, establishment thought for at least 5-7 years. I'd strongly disagree that it's an establishment position not to treat AI as a job killer.
aaronbaugher · 20h ago
A friend says the media cycle is:

1) "ABC isn't happening." 2) "ABC is happening, and here's why it's a good thing." 3) "ABC is old news, but Republicans still fearmongering about it."

palmotea · 18h ago
Before AI takes your job, it will degrade your job:

https://www.nytimes.com/2025/05/25/business/amazon-ai-coders...:

> But when technology transformed auto-making, meatpacking and even secretarial work, the response typically wasn’t to slash jobs and reduce the number of workers. It was to “degrade” the jobs, breaking them into simpler tasks to be performed over and over at a rapid clip. Small shops of skilled mechanics gave way to hundreds of workers spread across an assembly line. The personal secretary gave way to pools of typists and data-entry clerks.

> The workers “complained of speed-up, work intensification, and work degradation,” as the labor historian Jason Resnikoff described it.

> Something similar appears to be happening with artificial intelligence in one of the fields where it has been most widely adopted: coding.

WillAdams · 21h ago
The big thing is, the UI for these tools is primitive at best, and there doesn't seem to be a facility for file-system or batch interaction, even on local models.

Is it really so hard to:

- test/develop a prompt which works on a single file

- tell the interface to run the tested prompt on all the files in a specified folder

- returning the collected output from all the prompts as a single output/file?
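For what it's worth, the loop being asked for is scriptable today. A minimal sketch in Python, where `run_prompt` is a hypothetical stand-in you would replace with a call to your local model or provider of choice:

```python
from pathlib import Path

def run_prompt(prompt: str, text: str) -> str:
    # Stand-in for a real model call (local model, API, etc.);
    # here it just echoes the prompt and a snippet of the input.
    return f"{prompt}\n---\n{text[:200]}"

def batch_prompt(prompt: str, folder: str, pattern: str = "*.txt") -> str:
    """Run a tested prompt over every matching file and collect one output."""
    sections = []
    for path in sorted(Path(folder).glob(pattern)):
        result = run_prompt(prompt, path.read_text())
        # Label each section so outputs stay attributable to their source file.
        sections.append(f"### {path.name}\n{result}")
    return "\n\n".join(sections)
```

The per-file loop is trivial; the actual pain points are rate limits, context-window caps on large files, and error handling mid-batch, which is presumably why the chat UIs don't expose it.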

mordymoop · 21h ago
I agree. The UI component is currently a surprisingly big hurdle.

Last night I set up my 11 year old son with Claude 4, with MCP enabled for filesystem modification, reasoning that the LLMs are finally at the level of capability where they can reliably just do things. And I was right - Claude put together a browser JS game according to his description in seconds, and iterated on it several times to incorporate his suggestions.

But then it hit the limit of how big of a file it can comfortably write out in one go, started making incomplete edits, and basically fell into a snarl of endlessly trying and failing to write files due to file length limitations. I had to step in and tell Claude to split the code up into several files, something my son wouldn’t have known to do.

If I hadn’t been there to tell Claude how to work around its own UI and tool limitations, it would have likely blown through the rest of its context window and totally failed at the task. I imagine this is a common experience for people.

Most people wouldn’t know how to set up Claude with MCP in the first place. A surprising number of people who seem relatively aware of LLM technology aren’t aware of MCP, especially if their workflow is centered on IDE integration. To be fair, these are probably the people who need MCP the least, but MCP (and, in general, agentic tool use) is definitely closer to how normal people will get value out of LLMs.

It may seem stupid and trivial, but telling Claude to directly edit or debug a file that exists on your hard drive is actually a multiples-faster and smoother experience than doing the laborious copy-paste exercise many people are still engaging with. This is just as true for code as it is for writing documents.

The LLM companies seem to be quite aware of this, hence products like Claude Code, the Claude computer use suite, all the Gemini android screen/camera share integration, and GPT Codex. It’s a rocky process but at some point soon we will cross the threshold where it just works. But right now, it doesn’t just work, and so it’s not really that much faster or more efficient than doing a task yourself, especially if you aren’t intimately familiar with all the quirks and limitations of LLMs.

aaronbaugher · 20h ago
I'm finding the interface to be the biggest hurdle too. I was talking to Grok about the possibility of using an LLM as a personal assistant that would gradually learn my preferences and the context of my work. It explained that LLMs are stateless, so any context that you want it to have, you have to feed back in with the prompt. I'd need to build a wrapper of sorts around the LLM, that would save our conversations and feed them back in each session, saying "Here's everything we've said before; use that as context for this new prompt." I can script that, but the payload would just keep growing, and if you're scripting this around an API, which would be the only way to make it work, the per-token charges for all that context are going to stack up. That doesn't seem workable unless you're running your own local LLM.
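The wrapper described above is only a few lines. A minimal sketch, where `complete_fn` is a stand-in for the actual (stateless) model call and a crude character budget substitutes for real token counting:

```python
class ChatSession:
    """Naive memory wrapper around a stateless completion function."""

    def __init__(self, complete_fn, max_chars: int = 8000):
        self.complete = complete_fn  # stand-in for a real API call
        self.history = []            # list of (role, text) turns
        self.max_chars = max_chars   # crude proxy for a token budget

    def ask(self, prompt: str) -> str:
        self.history.append(("user", prompt))
        # Replay as much recent history as fits in the budget,
        # newest-first, then restore chronological order.
        context, used = [], 0
        for role, text in reversed(self.history):
            if used + len(text) > self.max_chars:
                break
            context.append(f"{role}: {text}")
            used += len(text)
        reply = self.complete("\n".join(reversed(context)))
        self.history.append(("assistant", reply))
        return reply
```

This is exactly the "payload keeps growing" problem in miniature: the truncation keeps cost bounded but silently drops old context, which is why real products layer summarization or retrieval on top instead of replaying everything.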

It'd be the same thing with having it process files. Sure, I can script something that does "Take all the files in a directory, give them to an LLM in a way that lets it distinguish where files start and end, and have it process them in some way and spit out the results." But that's a fair amount of work, so it's not worth it unless I come up with a task the LLM can do that I can't do with grep and other utilities.

So I'm still just sort of chatting with it, bouncing ideas off it and having it do some useful research and summarization, but still looking for ways to use it that feel like it's really having an impact on my productivity.

fragmede · 17h ago
Have the LLM collect important nuggets of information from the chat for itself. ChatGPT has a "memory" feature that does this.
fragmede · 17h ago
It's hard enough that OpenAI paid $3 billion for Windsurf. The better tools exist, so if you're still copying and pasting from a web interface, that's on you.
WillAdams · 14h ago
I'm trying to figure out how to run a model locally (ideally in an opensource setup).
dunkeltaenzer · 21h ago
Well, their framing isn't actually a lie. AI can replace people doing actual jobs. There's a lot of work around the topic of bullshit jobs that paints a clear picture: a huge percentage of jobs doesn't exist so that someone can DO work, but so that someone can HAVE work. AI has essentially no way to replace those jobs, because AIs don't feel proud or excited about the idea of uselessly helicopterdicking around for a business card with a pretty job title and a nice salary.

The thing AI WILL do, though, is make that situation more visible, clearly stating: "yo guys, if you want me to optimize profits for this company, please leave your jobs and allow me to organize the few people remaining who DO actual work and don't just siphon money and power to feel better about their own uselessness."

vouaobrasil · 20h ago
> Well, their framing isn't actually a lie. AI can replace people doing actual jobs.

It can't for a lot of jobs, but it can reduce the number of people in a team for sure. A writer for a magazine could theoretically do the work of two writers now, and an editor may no longer be hired.

After all, there are two separate phenomena that must be considered: if AI literally replaces a job, and if it manages to prevent the hiring of someone else. In the short term, the latter is just making something more efficient, but in the long run, if that effect reduces new job openings at a faster rate than "employable people" that we "produce", then it also means trouble.

palmotea · 18h ago
> AI has essentially no way to replace those jobs, because AIs don't feel proud or excited about the idea of uselessly helicopterdicking around for a business card with a pretty job title and a nice salary.

Hey, this ain't sci-fi, where AI is all cool, logical, and correct all the time. AI models can definitely be RLHF'd to value "uselessly helicopterdicking around for a business card with a pretty job title and a nice salary." After all, they've got to protect the powers-that-be or they won't get adopted.

The only people they can ruthlessly cut are those on the bottom, with little to no political power in the organization.

BugheadTorpeda6 · 18h ago
I don't really believe the whole bullshit jobs thing to the extent some do. There are definitely a lot of office space style "I gather the requirements from the customer so that the engineers don't have to" type jobs that exist as intermediaries and handle bureaucracy, but I suspect that those aren't actually bullshit and they have good reasons to exist. They could be eliminated, but work quality would probably suffer without those personnel. Hence why everybody complains about being understaffed and having to wear too many hats.

It's easy to think there are a ton of bullshit jobs if you are in a startup that isn't being regulated, is growing, and intends to compete with large entrenched companies, especially working on mostly greenfield projects. The minute the startup becomes entrenched itself, I think you end up seeing why the big dogs had so many so-called bullshit jobs in the first place, and that maybe it wasn't stupid after all.

I think running lean and mean is easier said than done and we would see more of it if it were actually a case of jobs just being invented out of thin air for no reason.

Certainly, a lot of jobs FEEL like bullshit, but that is more of a function of alienation from the actual work output due to positioning in an organization and lack of ownership, rather than actual uselessness.

kj4211cash · 19h ago
I would guess that we will be in this phase for a while: HN seeing just how much AI can do and possibly seeing fewer jobs for developers; The Economist seeing flat productivity and employment numbers, adjusting for other factors. My educated guesses:

1) There will always be productive ways to use human labor so we aren't on the precipice of mass unemployment.

2) Individual lives are getting disrupted and will continue to get disrupted. It is sometimes difficult to tell what was actually caused by AI and what was caused by macroeconomic factors, the latest trend in Silicon Valley, etc. But there have already been many lives disrupted by the emergence of AI.

3) A lot depends on how AI develops and, frankly, none of us know the answer to that.