AI adoption linked to 13% decline in jobs for young U.S. workers: study

103 points by pseudolus | 8/28/2025, 2:13:44 PM | cnbc.com ↗

Comments (165)

fibers · 7h ago
The accounting note is not true in the traditional sense. The field in the US is just getting offshored to India/PH/Eastern Europe for better or for worse. There is even a big push to lower the educational requirements to attain licensure in the US (Big 4 partners want more bodies and are destroying the pipeline for US students). Audit quality will continue to suffer and public filers will issue bunk financials if they aren't properly attested to.
ACCount37 · 7h ago
The reports from the usual "offshoring centers" aren't exactly inspiring. It's a bloodbath over there.

Seems like the capabilities of current systems map onto "the kind of labor that gets offshored" quite well. Some of the jobs that would get offloaded to India now get offloaded to Anthropic's datacenters instead.

jameslk · 57m ago
How many of these jobs are getting offshored because of AI?

Language barriers, culture, and knowledge are some of the biggest challenges to overcome for offshoring. AI potentially solves many of those challenges.

mostlysimilar · 17m ago
> AI potentially solves many of those challenges

Isn't it exactly the opposite?

Language barriers: LLMs are language models and all of the major ones are built in English, speaking that language fluently is surely a prerequisite to interacting with them efficiently?

Knowledge: famously, LLMs "know" nothing; they make things up all of the time and only sometimes approximate "knowledge".

tootie · 1h ago
Found this article from last year saying IIT grads are facing the same grim outlook, as technology hiring in India for new grads has also dried up:

https://www.bloomberg.com/news/newsletters/2024-05-30/tough-...

So, that doesn't seem like a likely culprit unless you have some convincing evidence.

fibers · 1h ago
I think you are conflating two things. AI could be going after new entry-level jobs in software engineering. I am not a professional engineer but an accountant by trade (I like writing software as a hobby lol), and this article looks like evidence that IIT grads will have a harder time getting the jobs that AI is attacking. My comment rests on the fact that the report doesn't really reconcile with AI destroying entry-level jobs for accounting; rather, this type of work is being offshored to APAC/India. There are still new COEs being built up for mid-cap companies for shared services in India to this day, and I don't mean Cognizant and Wipro, but rather the end customer being the company in question, with really slick offices there.
elif · 2h ago
Do you have any evidence of this? The way it's being posited, the rationale seems like a coping strategy or a conspiracy theory.
thinkingtoilet · 1h ago
Do you have any actual evidence that supports the headline? The article does not. It simply mentions 13% decline in relative employment and then blames AI with no actual evidence. Given what I know about the current state of AI and off-shoring, I think off-shoring is a million times more likely to be the culprit than AI.
fibers · 1h ago
Have you seen how the profession has worked post-SOX? Did you know 2016 was the peak year for accounting student enrollment at universities in the States? I want you to think laterally about this.
the_real_cher · 6h ago
This is exactly right.

The H1B pipeline has not decreased at all whereas millions of American workers have been laid off.

fibers · 6h ago
Maybe for software engineering but not for accounting. I've had to interface with many offshored teams and interviewed at places where accounting ops were in COE centers in EU/APAC.
lazide · 7h ago
Yup, 95% of the AI hype is to apply pressure on the labor market and provide cover for offshoring/downsizing.
pipes · 5h ago
Where is the evidence for this? Who is "applying pressure on the labour market"?
runako · 1h ago
Every executive publicly saying obviously* false things like "X job will be done by AI in 18 months" is putting downward pressure on the labor market. The pressure is essentially peer pressure among executives: are we stupid for continuing to hire engineers instead of handing our engineering budget to Anthropic?

* - Someone should maintain a walkback list to track these. I believe recent additions are Amodei of Anthropic and the CEOs of AWS and Salesforce. (Benioff of Salesforce, in February: "We're not going to hire any new engineers this year." Their careers page shows a pivot from that position.)

lokrian · 39m ago
Maybe it's a good time to ask for advice. Which IT job roles and companies are least vulnerable to offshoring? Defense contractors and the like?
londons_explore · 1h ago
> Audit quality will continue to suffer

I wonder how much this actually matters? I understand that for an auditor, having a quality reputation matters. But if all audits from all firms are bad, how much would the world economy suffer?

Likewise for the legal profession, if all judges made twice the number of mistakes, how much would the world suffer?

cjbgkagh · 1m ago
The current system is not long term stable, and poor accounting is part of the reason more people don't know that. Even worse accounting would speed up the decline.
drusepth · 1h ago
> Likewise for the legal profession, if all judges made twice the number of mistakes, how much would the world suffer?

Is this hyperbole? It seems like the real question being asked here is "would the world be worse off without deterministic checks and balances", which I think most people would agree is true, no?

tobyjsullivan · 1h ago
I read it as assuming the deterministic checks and balances are already absent. We have the illusion of determinism but, in practice, audits (and justice) are mostly theatre as it is.

From that perspective, lowering the quality of something that is already non-rigorous might not have any perceivable effect. It’s only a problem if public perception declines, but that’s a marketing issue that the Big 4 already have a handle on.

fibers · 1h ago
Then you would have to think twice about the company you may be giving money to (i.e. via the stock market and private bank loans). That's the whole objective of this. Every company is going to need an accountant in one way or another, and you don't really need to follow strict GAAP for management requirements (what else is EBITDA for, if anything?), but that's something completely different from saying: I made x dollars and spent y dollars, here is what I have and what I owe, please give me money.

At the end of the day it is a question of convenience/standards. If GAAP didn't exist, maybe firms could use a modified accrual standard that is wholly compliant with tax reporting, and that's it.

bilsbie · 36m ago
AI is the popular cover excuse for layoffs.

I can’t think of a single job that modern AI could easily replace.

hillcrestenigma · 34m ago
I think the initial job loss from AI will come from individual workers being more productive, eliminating the need for larger teams to get the same work done.
cdrini · 32m ago
The way I like to describe it is that you can't go from 1 developer to 0 thanks to AI, but you might be able to go from 10 to 9. Although not sure what the exact numbers are.
GoatInGrey · 20m ago
For cost centers, maybe. If your development team or org is a revenue generator with a backlog, I don't see why the team would be trimmed.
jameslk · 33m ago
Have you taken a Waymo yet?
muldvarp · 7h ago
Brutal that software engineering went from one of the least automatable jobs to a job that is universally agreed to be "most exposed to automation".

Was good while it lasted though.

elif · 1h ago
I'm not sure it's that our job is the most automatable, but that the interface is the easiest to adapt to our workflow.

I have a feeling language models will be good at virtually every "sit at a desk" job in a virtually identical capacity; it's just that the act of plugging an AI into these roles is non-obvious.

Just as every business was eventually impacted by the Internet equally, the early applications were just an artifact of what was an easy business decision, e.g. it was easier to start a dotcom than to migrate a traditional corporate process.

What we will see here with AI is not the immediate replacement of jobs, but the disruption of markets with offerings that human labor simply can't out-compete.

throwaway31131 · 1h ago
> I'm not sure it's that our job is the most automatable

I don't know. It seems pretty friendly to automation to me.

When was the last time you wrote assembly? When was the last time you had to map memory? Think about blitting memory to a screen buffer to draw a square on a screen? Schedule processes and threads?

These are things that I routinely did as a junior engineer writing software a long time ago. Most people at that time did. For the most part, the computer does them all now. People still do them, but only when it really counts and applications are niche.

Think about how large code bases are now and how complicated software systems are. How many layers they have. Complexity on this scale was unthinkable not so long ago.

It's all possible because the computer manages much of the complexity through various forms of automation.

Expect more automation. Maybe LLMs are the vehicle that delivers it, maybe not. But more automation in software is the rule, not the exception.
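For anyone who never had to do it, here is a minimal sketch of the kind of hand-rolled screen-buffer blitting described above (the buffer, dimensions, and colour are hypothetical stand-ins; real code targeted an actual display surface):

    #include <stdint.h>
    #include <stdlib.h>

    /* Fill a w-by-h square at (x, y) in a 32-bit framebuffer,
       one pixel at a time; the sort of routine juniors once wrote by hand. */
    static void fill_rect(uint32_t *fb, int fb_w, int x, int y,
                          int w, int h, uint32_t colour) {
        for (int row = 0; row < h; row++) {
            uint32_t *dst = fb + (size_t)(y + row) * fb_w + x;
            for (int col = 0; col < w; col++)
                dst[col] = colour;
        }
    }

    int main(void) {
        enum { W = 640, H = 480 };
        uint32_t *fb = calloc((size_t)W * H, sizeof *fb); /* stand-in screen buffer */
        if (!fb) return 1;
        fill_rect(fb, W, 100, 80, 50, 50, 0xFF0000FFu);   /* draw one square */
        free(fb);
        return 0;
    }

A UI toolkit or GPU driver does this for you now, which is the point: that layer of work was automated away long before LLMs arrived.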

hex4def6 · 44m ago
This has been my argument as well. We've been climbing the abstraction ladder for years. Assembly -> C -> OOP ->... this just seems like another layer of abstraction. "Programmers" are going to become "architects".

The labor cost of implementing a given feature is going to dramatically drop. The Jevons paradox will hopefully still mean that the labor pool will just be used to create '10x' the output (or whatever the number actually is).

If the cost of a line of code / feature / app becomes basically '0', will we still hit a limit in terms of how much software can be consumed? Or do consumers have an infinite hunger for new software? It feels like the answer has to be 'it's finite'. We have a limited attention span of (say) 8hrs/person * 8 billion.

rebolek · 8m ago
The only thing that AI is good at is a job that someone has already done before.
robotnikman · 1h ago
If it gets to the point where I can no longer find a tech job I am just going to buy a trailer, live somewhere cheap, and just make money doing odd jobs while spending most of my time programming what I want. I don't want to participate in a society where all I have for job options is a McJob or some Amazon warehouse.
swader999 · 46m ago
That's plan C; plan B is to one-person-SaaS a better app than my current company makes.
robotnikman · 38s ago
That's actually a good idea. Now I just need to come up with an idea for a SaaS app.
bilsbie · 39m ago
Is it hard to date with a trailer?
sandspar · 52m ago
>Buy a trailer, live somewhere cheap, do odd jobs

Unrelated to the discussion, but I love these kinds of backup plans. I've found that most guys I talk to have one. Just a few days ago a guy was telling me that, if his beloved wife ever divorces him, then he'd move to a tropical island and become a coconut seller.

(My personal plan: find a small town in the Sonoran Desert that has a good library, dig a hole under a nice big Saguaro cactus, then live out my days reading library books in my cool and shady cave.)

bilsbie · 37m ago
Is it hard to date living under a cactus?
sandspar · 22m ago
Nah dating under a cactus is easy: just don't be a prick.
grim_io · 7h ago
Maybe it's just the nature of being early adopters.

Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.

In the meantime, memes aside, I'm not too worried about being completely automated away.

These models are extremely unreliable when unsupervised.

It doesn't feel like that will change fundamentally with just incrementally better training.

muldvarp · 7h ago
> These models are extremely unreliable when unsupervised.

> It doesn't feel like that will change fundamentally with just incrementally better training.

I could list several things that I thought wouldn't get better with more training and then got better with more training. I don't have any hope left that LLMs will hit a wall soon.

Also, LLMs don't need to be better programmers than you are, they only need to be good enough.

grim_io · 7h ago
No matter how much better they get, I don't see any actual sign of intelligence, do you?

There is a lot of handwaving around the definition of intelligence in this context, of course. My definition would be actual on-the-job learning, and reliability I don't need to second-guess every time.

I might be wrong, but those two requirements seem incompatible with the current approach/hardware limitations.

muldvarp · 6h ago
Intelligence doesn't matter. To quote "Superintelligence: Paths, Dangers, Strategies":

> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.

The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent" and software engineering will no longer be thought of as something requiring "actual intelligence".

Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.

grim_io · 5h ago
There is a reason chess was "solved" so fast. The game maps very nicely onto computers in general.

A grandmaster-level chess AI is no better at driving a car than my calculator from the 90s.

muldvarp · 5h ago
Yes, that's my point. AI doesn't need to be general to be useful. LLMs might replace software engineers without ever being "general intelligence".
grim_io · 4h ago
Sorry for not making my point clear.

I'm arguing that the category of the problem matters a lot.

Chess is, compared to self-driving cars and (in my opinion) programming, very limited in its rules, its fixed board size, and its lack of "fog of war".

romeros1 · 1h ago
"It is difficult to get a man to understand something when his salary depends upon his not understanding it" ~ Upton Sinclair

Your stance was the widely held stance, not just on Hacker News but also among the leading proponents of AI, when ChatGPT was first launched. A lot of people thought the hallucination aspect was something that simply couldn't be overcome; that LLMs were nothing but glorified stochastic parrots.

Well, things have changed quite dramatically lately. AI could plateau. But the pace at which it is improving is pretty scary.

Regardless of real "intelligence" or not, the current reality is that AI can already do quite a lot of traditional software work. This wasn't even remotely true if you were to go six months back.

svara · 1m ago
How will this work exactly?

I think I have a pretty good idea of what AI can do for software engineering, because I use it for that nearly every day and I experiment with different models and IDEs.

The way I've found to use the tools effectively is by asking for specific things, where the prompt itself would not be comprehensible to someone who's not in the field.

If you sat a rando with no CS background in front of Cursor, Windsurf, or Claude Code, what do you suppose would happen?

It seems really doubtful to me that overcoming that gap is "just more training", because it would require a qualitatively different sort of product.

And even if we came to a point where no technical knowledge of how software actually works was required, you would still need to be precise about the business logic in natural language. Now you're writing computer code in natural language that will read like legalese. At that point you've just invented a new programming language.

Now maybe you're thinking, I'll just prompt it with all my email, all my docs, everything I have for context and just ask it to please make my boss happy.

But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.

The most powerful way that I've found to conceptualize what LLMs do is that they execute routines from huge learnt banks of vector programs that re-combine stored textual information along common patterns. This view fits well with the strengths and weaknesses of LLMs - they are good at combining two well understood solutions into something new, even if vaguely described.

But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.

I strongly suspect this is intrinsic to their training, because doing this is simply not required to complete the vast majority of text that could realistically have ended up in training databases. Executing a sophisticated cut&paste scheme is in some ways just too effective; the technical challenge is how do you pose a training problem to force a model to learn beyond that.

anthem2025 · 1h ago
Ironic to post that quote about AI considering the hype is pretty much entirely from people who stand to make obscene wealth from it.
lawlessone · 30m ago
>That LLMs were nothing but glorified stochastic parrots.

Well yes, now we know they make kids kill themselves.

I think we've all fooled ourselves like this beetle

https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...

For thousands of years, up until 2020, anything that conversed with us could safely be assumed to be another sentient/intelligent being.

Now we have something that does that, but it is neither sentient nor intelligent, just a (complex) deterministic mechanism.

manmal · 1h ago
LLMs can code, but they can’t engineer IMO. They lack those other parts of the brain that are not the speech center.
anthem2025 · 1h ago
Do you actually believe this drivel or are you being paid to spew nonsense?
ACCount37 · 7h ago
Does it have to? Stack enough "it's 5% better" on top of each other and the exponent will crush you.
OtherShrezzing · 1h ago
AI training costs have been increasing around 3x annually over each of the last 8 years to achieve its performance improvements. Last year, spending across all labs was $150bn. Keeping the 3x trend means that, to keep pace with current advances, costs should rise to roughly $450bn in 2025, $1.35tn in 2026, $4tn in 2027, $12tn in 2028, $36tn in 2029, and $110tn in 2030. For reference, the GDP of the world is around $125tn.

I think the labs will be crushed by the exponent on their costs faster than white-collar work will be crushed by the 5% improvement exponent.
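A back-of-the-envelope sketch of that compounding (a hypothetical calculation assuming, as above, a $150bn 2024 baseline growing 3x per year, against roughly $125tn of world GDP):

    #include <stdio.h>

    int main(void) {
        double cost = 150e9;        /* assumed 2024 training spend, USD */
        double world_gdp = 125e12;  /* rough world GDP, USD */
        for (int year = 2025; year <= 2030; year++) {
            cost *= 3.0;            /* assumed 3x annual growth */
            printf("%d: $%.2f tn (%.1f%% of world GDP)\n",
                   year, cost / 1e12, 100.0 * cost / world_gdp);
        }
        return 0;
    }

By 2030 the projected spend is on the order of $110tn, i.e. roughly the size of the entire world economy, which is the point about the cost exponent.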

cjs_ac · 7h ago
Are LLMs stackable? If they keep misunderstanding each other, it'll look more like successive applications of JPEG compression.
ACCount37 · 7h ago
By all accounts, yes.

"Model collapse" is a popular idea among the people who know nothing about AI, but it doesn't seem to be happening in real world. Dataset quality estimation shows no data quality drop over time, despite the estimates of "AI contamination" trickling up over time. Some data quality estimates show weak inverse effects (dataset quality is rising over time a little?), which is a mindfuck.

The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.

Aloisius · 1h ago
Given the number of academic papers about it, model collapse is a popular idea among the people who know a lot about AI as well.

Model collapse is something demonstrated when models are recursively trained largely or entirely on their own output. Given most training data is still generated or edited by humans or synthetic, I'm not entirely certain why one would expect to see evidence of model collapse happening right now, but to dismiss it as something that can't happen in the real world seems a bit premature.

ACCount37 · 45m ago
We've found the conditions under which model collapse happens more slowly or fails to happen altogether. Basically all of them are met in real-world datasets. I do not expect that to change.
grim_io · 6h ago
The jpeg compression argument is still valid.

It's lossy compression at the core.

elif · 1h ago
In 2025 you can add quality to jpegs. Your phone does it and you don't even notice. So the rhetorical metaphor employed holds up, in that AI is rapidly changing the fundamentals of how technology functions beyond our capacity to anticipate or keep up with it.
lm28469 · 45m ago
> add quality to jpegs

Define "quality". You can make an image subjectively more visually pleasing, but you can't recover data that wasn't there in the first place.

ACCount37 · 6h ago
I don't think it is.

Sure, you can view an LLM as a lossy compression of its dataset. But people who make the comparison are either trying to imply a fundamental deficiency, a performance ceiling, or trying to link it to information theory. And frankly, I don't see a lot of those "hardcore information theory in application to modern ML" discussions around.

The "fundamental deficiency/performance ceiling" argument I don't buy at all.

We already know that LLMs use high-level abstractions to process data, very much unlike traditional compression algorithms. And we already know how to use tricks like RL to teach a model things its dataset doesn't contain, which is where an awful lot of the recent performance improvement is coming from.

grim_io · 5h ago
Sure, you can upscale a badly compressed JPEG using AI into something better looking.

Often the results will be great.

Sometimes the hallucinated details will not match the expectations.

I think this applies fundamentally to all of the LLM applications.

muldvarp · 4h ago
And if you get that "sometimes" down to "rarely" and then "very rarely" you can replace a lot of expensive and inflexible humans with cheap and infinitely flexible computers.

That's pretty much what we're experiencing currently. Two years ago code generation by LLMs was usually horrible. Now it's generally pretty good.

anthem2025 · 1h ago
Lots of technology is cool if you get to just say “if we get rid of the limitations” while offering no practical way to do so.

It’s still horrible btw.

grim_io · 4h ago
I think you are selling yourself short if you believe you can be replaced by a next token predictor :)
ACCount37 · 3h ago
I think humans who think they can't be replaced by a next token predictor think too highly of themselves.

LLMs show it plain and clear: there's no magic in human intelligence. Abstract thinking is nothing but fancy computation. It can be implemented in math and executed on a GPU.

anthem2025 · 1h ago
LLMs have no ability to reason whatsoever.

They do have the ability to fool people and exacerbate or cause mental problems.

lawlessone · 52m ago
What's actually happening is that all your life you've been taught by experience that if something can talk to you, it must be somewhat intelligent.

Now you can't get around the fact that this might not be the case.

You're like that beetle going extinct mating with beer bottles.

https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...

ACCount37 · 23m ago
"What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.

We've already found that LLMs implement the very same type of abstract thinking as humans do. Even with mechanistic interpretability being in the gutters, you can probe LLMs and find some of the concepts they think in.

But, of course, denying that is much less uncomfortable than the alternative. Another one falls victim to the AI effect.

abletonlive · 1h ago
This boring reductionist take on how LLMs work is so outdated that I'm getting second-hand embarrassment.
anthem2025 · 1h ago
Pretty crazy, and all you have to do is assume exponential performance growth for as long as it takes.
bdcravens · 1h ago
I'm sure those who lost a job to software at some point are feeling a great deal of sympathy for developers who are now losing out to automation.
devnullbrain · 5m ago
Despite being the target of a lot of schadenfreude, most software developers aren't working on automation.
lawlessone · 56m ago
Nice watching it tear down recruiters though.
anthem2025 · 1h ago
Which universe is that, the one consisting of the union of AI charlatans and people who don’t understand software engineering?

You know even the CEOs are backtracking on that nonsense right?

omnicognate · 34m ago
Universally? Nah.
polski-g · 7h ago
It's the least regulated (not at all), so it will be the first to be changed.

AI lawyers? Many years away.

AI civil engineers? Same thing, there is a PE exam that protects them.

DrewADesign · 1h ago
You don’t need to perfect AI to the point of becoming credentialed professionals to gut job markets— it’s not just developers, or creative markets. Nobody’s worried that the world won’t have, say, lawyers anymore — they’re worried that AI will let 20% of the legal workforce do 100% of the requisite work, making the skill essentially worthless for the next few decades because we’d have way too many lawyers. Since the work AI does is largely entry-level work, that means almost nobody will be able to get a foothold in the business. Wash, rinse, repeat to varying levels across many white collar professions and you’ve got some real bad times brewing for people trying to enter the white collar workforce from now on— all without there being a single AI lawyer in the world.
muldvarp · 7h ago
Same thing for doctors. Turns out radiologists are fine, it's software engineers that should be scared.
manmal · 1h ago
We might end up needing 20% or so fewer doctors, because all that bureaucracy can be automated. A simple automated form pre-filler can save a lot of time. It’s likely that hospitals will try saving there.
AndrewKemendo · 7h ago
Too bad engineers were “too important” to unionize because their/our labor is “too special.”

I think you could find 10,000 quotes from HN alone about why SDEs were immune to the labor market struggles that would make a union necessary.

Oh well, good luck everyone.

tick_tock_tick · 4m ago
So your argument is that we're so special we deserve to hold back human progress to have a privileged life? If it's not that, what would you want a union to do in this situation?
jordanb · 7h ago
This was when programmers were making software to time Amazon workers' bathroom breaks, so believing "this could never happen to me" was probably an important psychological crutch.
nradov · 7h ago
I'm not necessarily opposed to unionization in general but it's never going to save many US software industry jobs. If a unionization drive succeeds at some big tech company then the workers might do well for a few years. But inevitably a non-union startup competitor with a lower cost structure and more flexible work rules will come along and eat their lunch. Then all the union workers will get laid off anyway.

Unionization kind of worked for mines and factories because the company was tied to a physical plant that couldn't easily be moved. But software can move around the world in milliseconds.

FirmwareBurner · 7h ago
Indeed, just look at the CGI VFX industry of Hollywood. The US invented it and was the leader for a long time, but now it has been commodified, standardized, and run into the ground, because union or not, you can't stop US studios from offshoring the digital asset work to another country where labor is 80% cheaper than California and quality is 80% there. So the US is left with making the SW tools that VFX artists use, as the cutting-edge graphics & GPU know-how is all clustered there.

Similarly, a lot of non-cutting edge SW jobs will also leave the US as tooling becomes more standardized, and other nations upskill themselves to deliver similar value at less cost in exchange for USD.

JumpCrisscross · 7h ago
This is, if true, a fundamental shift in the value of labor. There really isn’t a non-Luddite way to save these jobs without destroying American tech’s productivity.

That said, I’m still sceptical it isn’t simply a reflection of an overproduction of engineers and a broader economic slowdown.

jordanb · 7h ago
Yeah I agree that outsourcing and oversupply are the real culprits and AI is a smoke screen. The outcome is the same though.
JumpCrisscross · 7h ago
> outcome is the same though

Not really. If it’s overproduction, the solution is tighter standards at universities (and students exercising more discretion around which programmes they enroll in). If it’s overproduction and/or outsourcing, the solutions include labour organisation and, under this administration, immigration curbs and possibly services tariffs.

Either way, if it’s not AI the trend isn’t secular—it should eventually revert. This isn’t a story of junior coding roles being fucked, but one of an unlucky (and possibly poorly planning and misinformed) cohort.

jordanb · 7h ago
It can be oversupply/outsourcing and also secular: you can have basically chronic oversupply due to a declining/maturing industry. Chronic oversupply because the number of engineers needed goes down every year and the pipeline isn't calibrated for that (academia has been dealing with this for a very long time now; look up the postdocalypse). Outsourcing, because as projects mature and new stuff doesn't come along to replace them, running maintenance offshore gets easier.

Software isn't eating the world. Software ate the world. New use cases have basically not worked out (metaverse!) or are actively harmful.

xienze · 4m ago
Unions work in physical domains that need labor “here and now”, think plumbers, electricians, and the like. You can’t send that labor overseas, and the union can control attempts at subversion via labor force importation. But even that has limitations, e.g. union factory workers simply having their factory shipped overseas.

Software development at its core can be done anywhere, anytime. Unionization would crank the offshoring that already happens into overdrive.

lispisok · 25m ago
Unions wouldn't stop any of this, but professionalization would.
orochimaaru · 7h ago
Unions won’t solve this for you. If a company decides it has enough automation to reduce the union workforce, that can happen the next time contracts get negotiated.

Either way, there are layoff provisions with union agreements.

jszymborski · 7h ago
Tell that to dock workers, who have successfully delayed ports being automated to the extent we see in e.g. the PRC [0].

Hell, they're even (successfully) pushing back against automated gates! [1]

[0] https://www.cnn.com/2024/10/02/business/dock-workers-strike-...

[1] https://www.npr.org/2024/10/03/nx-s1-5135597/striking-dockwo...

MangoCoffee · 7h ago
Isn't that just delaying the inevitable? Yangshan Deep-Water Port in Shanghai is one of the most automated ports. Even though there are more people in China than in the US, China still automated its ports.
jszymborski · 6h ago
I'm not making a value judgment on the specific case of dock workers, I'm rather saying that unions can and do prevent automation. If Software Devs had unionized earlier, a lot of positions would probably still be around.
est31 · 7h ago
In Hollywood, union bargaining bought some time at least. Unions did mandate limits on the use of AI for a lot of the creation process.

AI is still used in Hollywood but nobody is proud of it. No movie director goes around quoting percentages of how many scenes were augmented by AI or how many lines in the script were written by ChatGPT.

muldvarp · 6h ago
Unions can only prevent automation up to a point. Really the only thing that could have reasonably prevented this would have been for programmers to not produce as much freely accessible training data (formerly known as "open source software").
ivewonyoung · 7h ago
Unions would just delay the inevitable while causing other downsides like compressed salary bands, difficulty firing non-performers, union fees, an increased chance of corruption, etc.

For a recent example:

> Volkswagen has an agreement with German unions, IG Metall, to implement over 35,000 job cuts in Germany by 2030 in a "socially responsible" way, following marathon talks in December 2024 that avoided immediate plant closures and compulsory layoffs, according to CNBC. The deal was a "Christmas miracle" after 70 hours of negotiations, aiming to save the company billions by reducing capacity and foregoing future wage increases, according to MSN and www.volkswagen-group.com.

renewiltord · 52m ago
I mean, I still don't want to unionize with the guys who find `git` too complicated to use (which is apparently the majority of HN). Also, you guys all hate immigrants which is not my vibe, sorry.
shadowgovt · 7h ago
I really hope nobody had themselves convinced that software engineering couldn't be automated. Not with the code enterprises have been writing for decades now (lots and lots and lots of rules for gluing state to state, which are extremely structured but always just shy of being so structured that they were amenable to traditional finite-rule-based automation).

The goal of the industry has always been self-replacement. If you can't automate at least part of what you're working on you can't grow.

... unfortunately, as with many things, this meshes badly with capitalism when the question of "how do you justify your existence to society" comes up. Hypothetically, automating software engineering could lead to the largest open-source explosion in the history of the practice by freeing up software engineers to do something else instead of toil in the database mines... But in practice, we'll probably have to get barista jobs to make ends meet instead.

vitaflo · 1h ago
If you want to know what will happen to software engineers in the US just follow the path of US factory workers in the 90s.
manmal · 1h ago
The experiences people are having when working with big, complex codebases don’t line up with your gloomy outlook. LLMs just fall apart beyond a certain project size, and then the tech debt must be paid.
beepbooptheory · 8m ago
Thinly veiled economic propaganda aside, I am dealing with a different AI mess every day. Technical debt is exploding everywhere I turn. There is an ever-larger part of me these days that wishes I could just call the bluff all at once and let all the companies in question learn the inevitable lessons here the hard way.

The worst thing for me would be just needing to get a job like I had before being a dev; the stakes are so much grander for all the companies. It's only really existential for the side of this that isn't me/us. I've been working since I was 15; I can figure it out. I'll be happier cutting veggies in a kitchen than every single CEO out there when all is said and done!

throwawayq3423 · 54m ago
A recession could also explain this drop.
ArtTimeInvestor · 7h ago
Every day when I am out in the city, I am amazed by how many jobs we have NOT managed to replace with AI yet.

For example, cashiers. There are still many people spending their lives dragging items over a scanner, reading a number from a screen, holding out their hand for the customer to put money in, and then sorting the coins into boxes.

How hard can it be to automate that?

anthem2025 · 1h ago
They don’t need AI for that, they just cut staff to the bare minimum and put in self checkouts.
renewiltord · 51m ago
Pharmacists are my favourite. They're a human vending machine that is bad at counting and reading. But law protects them. Pretty good regulatory capture.
iamdelirium · 11m ago
Please actually understand what pharmacists do and _why_ AI is not a good replacement for them yet, unless you want to die of certain drug interactions.
deathanatos · 10m ago
Pharmacists are a fantastic example. My prescription is delivered to my pharmacy by computer. They text me, by computer, when it's ready to pick up. I drive over there … and it isn't ready, and I have to loiter for 15 minutes.

Also, after the prescription ends, they're still filling it. I just never pick it up. The autonomous flow has no ability to handle this situation, so now I get a monthly text that my prescription is ready. The actual support line is literally unmanned, and messages left there are piped to /dev/null.

The existing automation is hot garbage. But C-suite would have me believe our Lord & Savior, AI, will fix it all.

delfinom · 2h ago
> How hard can it be to automate that?

Self-checkout has been a thing for ages. Heck, in Japan the 7-Elevens have cashiers, but you put the money into a machine that counts it and dispenses change for them.

Supermarkets are actually getting rid of self checkouts due to crime. Surprise surprise, having less visible "supervision" in a store results in more shoplifting than having employees who won't stop it anyway.

anthem2025 · 1h ago
It’s also just resulting in atrocious customer experience.

I can go to Safeway or the smaller chain half a block away.

The Safeway went all in on self checkouts. The store is barely staffed, shelves are constantly empty, you have to have your receipt checked by security every time, they closed the second entrance permanently, and for some reason the place smells.

The other store has self checkouts, but they also have loads of staff. I usually go through the normal checkout because it’s easier, and since they have adequate staff and self-checkout lines it tends to be about the same speed too.

End result is I don’t shop at Safeway if I can avoid it.

lotsofpulp · 7h ago
The hard part is preventing theft, not adding numbers.
tux3 · 7h ago
Cashiers should not, and will not, prevent theft. They're not paid nearly enough to put themselves in danger, and it is not their job.

I'm sure you can find videos of thefts in San Francisco if you need a visual demonstration. No cashier is going to jump in front of someone to stop a theft.

loco5niner · 6h ago
That's not the type of theft they were talking about. Rather, people at self-scanners purposely not scanning items to get them for free, etc.
schnable · 1h ago
I had a roommate in college who used to stuff containers of beef into produce bags full of kale, and weigh that on the self-service scanner.
anthem2025 · 1h ago
They absolutely do. It’s not the cashiers acting as security; it’s having adequate staffing that makes people less likely to steal. It's not stopping crimes that have already occurred; it's reducing opportunistic theft.
HankStallone · 6h ago
True, but having a cashier standing there waiting to scan your items will prevent most normal people from stealing. Sure, some will brazenly walk right past with a TV on their shoulder, but most people won't.

If there's no cashier and you're doing it yourself, a whole lot more people will "forget" to scan a couple items, and that adds up.

tux3 · 6h ago
There's usually a security person or two in the store, looking over the self checkouts. I agree that job prevents a lot of people from becoming opportunistic thieves, but I'm making a distinction between cashiers and security. Today the store needs both.
delfinom · 1h ago
Pretty sure that if a "security person" worked so well, Walmart wouldn't be severely restricting self checkouts at their stores to Walmart Plus members only.
tux3 · 1h ago
That might be regional, then. I wouldn't say $COUNTRY is exactly a high-trust society, but it's not quite that bad for us over here.
graeme · 1h ago
A thief doesn't know what a cashier will do. And a cashier is an eyewitness, or can yell "hey, stop them!"

You're committing the all-or-nothing fallacy. The fact that a cashier does not prevent all thefts does not mean a cashier does NOTHING for theft.

dragonwriter · 1h ago
> The fact that a cashier does not prevent all thefts does not mean a cashier does NOTHING for theft.

Yes, for one thing, it ignores that a very large share of retail theft is insider theft, and that cash handling positions are the largest portion of that.

Cashiers absolutely do something for theft.

ArtTimeInvestor · 7h ago
Is the theft really happening at the checkout?

And if so, why can't we detect it via camera + AI?

anthem2025 · 1h ago
So take the broken god awful experience of self checkout and add another layer of “I think you did something wrong so now you have to stand around waiting for an actual person”?

No thanks.

distances · 1h ago
There are stores that are abandoning self-checkouts completely and going back to cashiers, as theft rose to unsustainable levels.
Lovesong · 1h ago
You detect someone leaving your store with a 4€ item. What then?
Workaccount2 · 1h ago
You ban them from coming back in after a few warnings. Stores seem really icy about facial recognition right now, though. The optics are pretty bad (pun intended?).
Ekaros · 6h ago
Checkouts are often the only egress points, so having a pair of eyes over them does have some effect compared to having none at all.
lotsofpulp · 6h ago
Detecting theft does not mean theft is prevented. You then need the government to prosecute, and impose sufficient punishment to deter theft. This is not cheap, nor a given that it will happen.
Spivak · 7h ago
You mean ordering kiosks and self-checkout machines? We have automated it, it's just not everywhere has implemented it.

The one I'm desperately waiting for is serverless restaurants—food halls already do it but I want it everywhere. Just let me sit down, put an order into the kitchen, pick it up myself. I promise I can walk 20 feet and fill my own drink cup.

freddie_mercury · 43m ago
Serverless restaurants have been common in Australia for decades. You just get a buzzer and then need to go pick up your food when it is ready. There's a single person behind the bar to take orders and pour beer/wine/soda.
ArtTimeInvestor · 7h ago
You seem to like self-checkout processes. I don't. I avoid any place where I have to interact with a screen. Be it a screen installed on-premise or the screen on my phone. It is not a relaxing experience for me.
distances · 1h ago
I don't use self-checkouts at the stores, nor would I eat at automated or self-service restaurants. I have a kitchen for that already.

But it's good if both are available, as apparently there will be customers for both.

slipperydippery · 4h ago
Self check-out machines aren't automation.
Spivak · 1h ago
There used to be two humans standing at the cash register; now, because of software, automatic change machines, and cameras, there is only one. One of those humans' jobs got automated.

Call it what you like but replacing the work of humans one for one is difficult and usually not necessary. Reformulating the problem to one that machines can solve is basically the whole game. You don't need a robot front desk worker to greet you, you just need a tablet to do your check in.

Ekaros · 6h ago
Seems like a perfect option for robots (not humanoid): bring me my food. You can still keep people in the kitchen for a bit, but servers in many restaurants are not really needed.
oytis · 7h ago
Looks like the study pretty arbitrarily picks "exposed industries" and notes that employment rate there has declined.
brandon272 · 7h ago
> Some examples of these highly exposed jobs include customer service representatives, accountants and software developers.

We seem to be in this illogical (delusional?) era where we are being told that AI is 'replacing' people in certain sectors or types of work (under the guise that AI is better or will soon be better than humans in these roles) yet those same areas seem to be getting worse?

- Customer service seems worse than ever as humans are replaced with "AI" that doesn't actually help customers more than 'website chatbots' did 20 years ago.

- Accounting was a field that was desperate for qualified humans before AI. My attempts to use AI for pretty much anything accounting related has had abysmal results.

- The general consensus around software development seems to be that while AI is lowering the barrier of entry to "producing code", the rate of production of tech debt and code that no one "owns" (understands) has exploded with yet-to-be-seen consequences.

chrisweekly · 7h ago
> "The general consensus around software development seems to be that while AI is lowering the barrier of entry to "producing code", the rate of production of tech debt and code that no one "owns" (understands) has exploded with yet-to-be-seen consequences."

^ This. (Tho I'm not sure about it being "general consensus".) Vibe code is the payday loan (or high-interest credit card) of tech debt. Demo-quality code has a way of making it into production. Now "everyone" can produce demos and PoCs. Companies that leverage AI as a powerful tool in the hands of experienced engineers may be able to iterate faster and increase quality, but I expect a sad majority to learn the hard way that there's no free lunch, and shipping something you don't understand is a recipe for disaster.

techpineapple · 8h ago
I’m suss about this paper when it makes this claim:

“where AI is more likely to automate, rather than augment, human labor.”

Where is AI currently automating human labor? Not software engineering. Or: what's the difference between AI that augments me so I can do the job of three people and AI that "automates human labor"?

tart-lemonade · 5h ago
I was also curious about this. Table A1 on page 56 lists examples of positions that are automated vs augmented, and these are the positions the authors think are going to be most augmented (allegedly taken from [0]):

- Chief Executives

- Maintenance and Repair Workers, General

- Registered Nurses

- Computer and Information Systems Managers

After skimming [0], I can't seem to find a listing of jobs that would be augmented vs automated, just a breakdown of the % of analyzed queries that were augmenting vs automating, so I'm a bit confused where this is coming from.

[0]: https://arxiv.org/abs/2503.04761

WillPostForFood · 7h ago
When the Stanford paper looked at augment vs automate, they used the data from Anthropic's AI Economic Index. That paper defined the terms like this:

We also analyze how AI is being used for tasks, finding 57% of usage suggests augmentation of human capabilities (e.g., learning or iterating on an output) while 43% suggests automation (e.g., fulfilling a request with minimal human involvement).

From the data, software engineers are automating their own work, not augmenting. Anthropic's full paper is here:

https://arxiv.org/html/2503.04761v1

techpineapple · 6h ago
Sounds like a snake eating its own tail.
lotsofpulp · 7h ago
What is the effective difference between augment and automate? Either way, fewer man hours are needed to produce the same output.
stonemetal12 · 6h ago
If your job is to swing a hammer, then a hammer-swinging robot automates your job.

If your job is to swing a hammer, then a drill robot augments your job (your job is now to swing the hammer and drill holes).

How that is different from a drill bot automating a human driller's job is an exercise left to the reader.

JumpCrisscross · 7h ago
> What is the effective difference between augment and automate?

If the field has a future.

HPsquared · 6h ago
The total output isn't going to stay the same, though.
farceSpherule · 1h ago
Sensationalist, alarmist, b.s. article.

It emphasizes "AI adoption linked to 13% decline," which implies causation. The study itself only claims "evidence consistent with the hypothesis."

The article also largely highlights job loss for young workers, while only briefly mentioning cases where AI complements workers.

The study's preliminary status -- it is not peer reviewed -- is noted, but only once and at the end. If the article were more balanced, it would have noted this at the beginning.

Articles on the same subject by the World Economic Forum, McKinsey, and Goldman Sachs are more balanced and less alarmist.

orochimaaru · 7h ago
The study is BS. While executives are blaming AI, it is nowhere near replacement levels.

What I bet is happening under the covers is reprioritization of work, offshoring, or both.

stonemetal12 · 7h ago
Why bet? In the news recently, the Australian bank CBA was caught offshoring positions and claiming the jobs had been replaced by AI.
anthem2025 · 1h ago
It’s also just natural cost cutting from businesses that were previously massively over-hiring and, outside of AI, don’t exactly have a ton of areas with huge growing investment.

Plus slashing jobs like this keeps the plebs in line. They don’t like software engineers having the money and job security to raise a stink over things. They want drones terrified of losing everything.

smt88 · 7h ago
> What I bet is happening under the covers is reprioritization of work, offshoring or both.

AI has been frequently used as an explanation for layoffs.

Before AI, layoffs would be a positive signal to investors, but they'd be demoralizing to staff and/or harm the brand.

Now, you can say, "Wow, we're so good at technology, we've eliminated ___ jobs!" and try to get the best of both worlds.

anthem2025 · 1h ago
Yeah, unquestioning “journalists” have allowed them to turn laying off thousands into an ad for their new tech.
coldpie · 7h ago
My company did exactly this earlier in the year. It was a blatant lie and everyone who works here knew it. None of the people laid off were actually replaced with AI, the work they did was just eliminated.
wslh · 7h ago
Short-term, discrete numbers like these are interesting to look at, but they don't really tell us much about the long-term trajectory. In parallel: [1].

[1] "Nvidia Forecasts Decelerating Growth After Two-Year AI Boom" <https://news.ycombinator.com/item?id=45053175>

seneca · 7h ago
This study feels pretty weak. Software as an occupation is collapsing, but it's not due to AI. Articles and "studies" like this are just a smoke screen to keep your eye off the ball.
dimgl · 1h ago
Why is it collapsing?
kelp6063 · 7h ago
Yet another clickbait "AI is taking jobs" study that doesn't investigate whether or not the employment decrease is directly caused by AI adoption.
ChrisArchitect · 7h ago
seneca · 7h ago
And a better source article.