The ‘white-collar bloodbath’ is all part of the AI hype machine

448 points by lwo32k | 757 comments | 5/30/2025, 1:38:21 PM | cnn.com


simonsarris · 8h ago
I think the real white collar bloodbath is that the end of ZIRP was the end of infinite software job postings, and the start of layoffs. I think it's easy to now point to AI, but it seems like a canard for the huge thing that already happened.

just look at this:

https://fred.stlouisfed.org/graph/?g=1JmOr

In terms of magnitude the effect is just enormous and still being felt; postings never recovered to pre-2020 levels, and they may never. (Pre-pandemic job postings are indexed to 100; software is at 61.)

Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv

For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, with pre-pandemic indexed to 100; banking jobs still hover around 100).
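For anyone unfamiliar with how FRED indexes these series: each one is rescaled so its pre-pandemic (Feb 2020) level reads 100, which lets different occupations share one axis. A minimal sketch with made-up posting counts (the numbers below are hypothetical, chosen only to reproduce the 61 figure):

```python
def index_to_base(series, base_value):
    """Rescale a series so that base_value maps to 100."""
    return [100.0 * v / base_value for v in series]

# Hypothetical monthly software job-posting counts;
# the first entry is the pre-pandemic (Feb 2020) base period.
software_postings = [5000, 9500, 3050]  # base, 2022 peak, today
indexed = index_to_base(software_postings, software_postings[0])
print(indexed)  # [100.0, 190.0, 61.0] -> today sits at 61% of the base
```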

I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.

bootsmann · 2h ago
Wonder what else happened on the 20th of January that would affect a lot of IT and operational jobs...
pydry · 1h ago
Trump didn't kick off the layoffs.

It was the war with Russia that drove the fed to raise interest rates in 2022 - a measure that was intended to curb inflation triggered by spikes in the prices of economic inputs (gas, oil, fertilizer, etc.).

The tech layoffs started later that year.

Widespread job cuts are an intended effect of raising interest rates - more unemployed = less spending = keeps a lid on inflation.

AI is just cashing in on the trend.

frontfor · 8m ago
I don’t believe the war specifically drove the Fed to raise interest rates. Inflation and asset prices had been rising sharply for a year prior to the war.
e40 · 3h ago
Also section 174’s amortization of software development had a big role.
gmerc · 1h ago
The flaw with the ZIRP narrative is that companies managed to raise more money than ever before the moment they had a somewhat believable narrative instead of the crypto/web3/metaverse nonsense.
jameslk · 6h ago
Keynes suggested that by 2030, we’d be working 15-hour workweeks, with the rest of the time used for leisure. Instead, we chose consumption, and helicopter money gave us bullshit jobs so we could keep buying more bullshit. This is fairly evident from the fact that when the helicopter money runs out, all the bullshit jobs get cut.

AI may give us more efficiency, but the slack will be filled with more bullshit jobs and consumption, not more leisure.

autobodie · 6h ago
Keynes lived in a time when the working class was organized and exerting its power over its destiny.

We live in a time when the working class is unbelievably brainwashed and manipulated.

kergonath · 4h ago
He was extrapolating, as well. Going from children in the mines to the welfare state in a generation was quite something. Unfortunately, progress slowed down significantly for many reasons but I don’t think we should really blame Keynes for this.

> We live in a time that the working class is unbelievably brainwashed and manipulated.

I think it has always been that way. Looking through history, there are many examples of turkeys voting for Christmas and propaganda is an old invention. I don’t think there is anything special right now. And to be fair to the working class, it’s not hard to see how they could feel abandoned. It’s also broader than the working class. The middle class is getting squeezed as well. The only winners are the oligarchs.

ireadmevs · 2h ago
There’s no middle class. You either have to work for a living or you don’t.
dagw · 48m ago
> You either have to work for a living or you don’t

The words 'have to' are doing a lot of work in that statement. Some people 'have to' work to literally put food on the table; other people 'have to' work to be able to make payments on their new yacht. The world is full of people who could probably live out the rest of their lives without working any more, but doing so would require drastic lifestyle changes they're not willing to make.

I personally think the metric should be something along the lines of how long would it take from losing all your income until you're homeless.

nosianu · 19m ago
> I personally think the metric should be something along the lines of how long would it take from losing all your income until you're homeless.

What income? Income from a job, or from capital? A huge difference. The latter is also a lot harder to lose (it takes gross incompetence or a revolution), while the former is much easier.

dgfitz · 39m ago
The sentence works without those two words. “You either work for a living or you don’t.”

Now what?

d4mi3n · 1h ago
While you’re not wrong about what differentiates those with wealth from those without, I think it ignores a lot of nuance.

Does one have savings? Can they afford to spend time with their children outside of working day to day? Do they have the ability to take reasonable risks without chancing financial ruin in pursuit of better opportunities?

These are things we typically attribute to someone in the middle class. I worry that boiling down these discussions to “you work and they don’t” misses a lot of opportunity for tangible improvement to quality of life for a large number of people.

eastbound · 5h ago
It is very possible that foreign powers use AI to generate social media content en masse for propaganda. If anything, the internet up to 2015 seemed open for discussion and swaying by real people’s opinions (and mockery of the elite classes), while manipulation and manufactured consent became the norm after 2017.
kergonath · 4h ago
> It is very possible that foreign powers use AI to generate social media content en masse for propaganda.

No need for AI. Troll farms are well documented and were in action before transformers could string two sentences together.

amarcheschi · 3h ago
The Italian party Lega (in the government coalition) has been using deepfakes for some time now. It's not only ridiculous, it's absolutely offensive to the people they mock (von der Leyen, other Italian politicians...).
genewitch · 1h ago
Queen Ursula deserves to be mocked.
rusk · 5h ago
This is a pre-/post-Snowden & Schrems thing; those events challenged the primary economic model of the internet as a surveillance machine.

All the free money dried up and the happy clapping Barney the Dinosaur Internet was no more!

hoseyor · 1h ago
He also lived in a time when the intense importance and function of a moral and cultural framework for society was taken for granted. He would have never imagined the level of social and moral degeneration of today.

I will not go into specifics because the authoritarians still disagree and think everything is fine with degenerative debauchery and try to abuse anyone even just pointing to failing systems, but it all does seem like civilization ending developments regardless of whether it leads to the rise of another civilization, e.g., the Asian Era, i.e., China, India, Russia, Japan, et al.

Ironically, I don’t see the US surviving this transitional phase, especially considering it essentially does not even really exist anymore at its core. Would any of the founders of America approve of any of America today? The forefathers of India, China, Russia, and maybe Japan would clearly approve of their countries and cultures. America is a hollowed out husk with a facade of red, white, and blue pomp and circumstance that is even fading, where America means both everything and nothing as a manipulative slogan to enrich the few, a massive private equity raid on America.

When you think of the Asian countries, you also think of distinct and unique cultures that all have their advantages and disadvantages, the true differences that make them true diversity that makes humanity so wonderful. In America you have none of that. You have a decimated culture that is jumbled with all kinds of muddled and polluted cultures from all over the place, all equally confused and bewildered about what they are and why they feel so lost only chasing dollars and shiny objects to further enrich the ever smaller group of con artist psychopathic narcissists at the top, a kind of worst form of aristocracy that humanity has yet ever produced, lacking any kind of sense of noblesse oblige, which does not even extend to simply not betraying your own people.

komali2 · 43m ago
That a capitalist society might achieve a 15 hour workweek if it maintained a "non debauched culture" and "culture homogeneity" is an extraordinary claim I've never seen a scrap of evidence for. Can you support this extraordinary claim?

That there's any cultural "degenerative debauchery" is an extraordinary claim. Can you back up this claim with evidence?

"Decimated," "muddled," and "polluted" imply you have an objective analysis framework for culture. Typically people who study culture avoid moralizing like this because one very quickly ends up looking very foolish. What do you know that the anthropologists and sociologists don't, to where you use these terms so freely?

If I seem aggressive, it's because I'm quite tired of vague handwaving around "degeneracy" and identity politics. Too often these conversations are completely presumptive.

seydor · 20m ago
Most people are leisuring at work (by Keynes-era standards) and also getting paid for it.
tim333 · 1h ago
I think something Keynes got wrong there, and much AI job discussion ignores, is that people like working, provided the job is fun. Look at the richest people with no need to work: Musk, Buffett, etc. Still working away, often well past retirement age, with no need for the money. Keynes himself, wealthy and probably with tenure, kept working away on his theories. In the UK you can quite easily do nothing by going on disability allowance, and many do, but they are not happy.

There can be a certain snobbishness with academics where they are like of course I enjoy working away on my theories of employment but the unwashed masses do crap jobs where they'd rather sit on their arses watching reality TV. But it isn't really like that. Usually.

timacles · 1m ago
What percentage of people would you say like working for fun? Would you really claim they make up a significant portion of society?

Even in my case: I work a job that I enjoy, building things that I’m good at, that is almost stress free, and after 10-15 years I find that I would much rather spend time with my family, or even spend a day doing nothing, than spend another hour doing work for other people. The work never stops coming and the meaninglessness is stronger than ever.

trinix912 · 1h ago
The reality for most people is that they need to work to financially sustain themselves. Yes, there are people who just like what they do and work regardless, but I think we shouldn't discount the majority, who would drop their jobs or at least work fewer hours if it weren't for the need for money.
navane · 1h ago
Meanwhile your examples of happy workers are all billionaires who do w/e tf they want, and your example of sad non-workers is disabled people.
davedx · 2h ago
Some countries are still trending in that direction:

https://www.theguardian.com/commentisfree/2024/nov/21/icelan...

Policy matters

SarahC_ · 3h ago
"Bullshit jobs" are the rubbish required to keep the paperwork tidy, assessed and filed. No company pays someone to do -nothing-.

AI isn't going to generate those jobs, it's going to automate them.

ALL our bullshit jobs are going away, and those people will be unemployed.

antonvs · 31m ago
> "Bullshit jobs" are the rubbish required to keep the paperwork tidy, assessed and filed.

It's also the jobs that involve keeping people happy somehow, which may not be "productive" in the most direct sense.

One class of people that needs to be kept happy are managers. What makes managers happy is not always what is actually most productive. What makes managers happy is their perception of what's most productive, or having their ideas about how to solve some problem addressed.

This does, in fact, result in companies paying people to do nothing useful. People get paid to do things that satisfy a need that managers have perceived.

tim333 · 1h ago
I foresee programmers replaced by AI, and the people who programmed becoming pointy-haired bosses to the AI.
dgfitz · 31m ago
I foresee that when people only employ AI for programming, it quickly hits the point where it trains on its own (usually wrong) code and spirals into an implosion.

When kids stop learning to code for real, who writes GCC v38?

This whole LLM thing is just the next bitcoin/NFT. People had a lot of video cards and wanted to find a new use for them. In my small brain it’s so obvious.

hansmayer · 21m ago
Ha-ha, this is very funny :) Say, have you ever tried seriously using the AI-tools for programming? Because if you do, and still believe this, I may have a bridge/Eiffel Tower/railroad to sell you.
pmlnr · 6h ago
Keynes was talking about work in every sense, including house chores. We're well below 15 hours of house chores by now, so that part came true.
LeonB · 4h ago
Washing machines created a revolution where we could now expend 1/10th of the human labour to wash the same amount of clothes as before. We now have more than 10 times as much clothes to wash.

I don’t know if it’s induced demand, revealed preference, or the Jevons paradox; maybe all 3.

tim333 · 1h ago
I saw some research once that the hours women spend doing housework haven't changed. I think because of human nature, not anything to do with the tech.
itishappy · 6h ago
We've got 10 whole hours left over for "actual" work!

(Quotes because I personally have a significantly harder time doing bloody housework...)

leoedin · 5h ago
Clearly you don’t have children!
antonvs · 30m ago
Life pro tip: teach your children to do chores.
tim333 · 1h ago
I was thinking it's a function of the social setting. Single bloke 1h/week. Couple 5h/week. With kids continuous. Or some such.
autobodie · 6h ago
Source? Keynes was a serious economist, not a charlatan futurist.
itishappy · 5h ago
John Maynard Keynes (1930) - Economic Possibilities for our Grandchildren

> For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter-to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!

http://www.econ.yale.edu/smith/econ116a/keynes1.pdf

https://www.aspeninstitute.org/wp-content/uploads/files/cont...

digitcatphd · 7h ago
As of now yes. But we are still in day 0.1 of GenAI. Do you think this will be the case when o3 models are 10x better and 100x cheaper? There will be a turning point but it’s not happened yet.
godelski · 4h ago
Yet we're what? 5 years into "AI will replace programmers in 6 months"?

10 years into "we'll have self driving cars next year"

We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"

Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?

jjani · 3h ago
> Yet we're what? 5 years into "AI will replace programmers in 6 months"?

Realistically, we're 2.5 years into it at most.

hn_throwaway_99 · 1h ago
I basically agree with you, and I think the thing that is missing from a bunch of responses that disagree is that it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling. That is, most folks were pretty astounded by the gains you could get from just stuffing more training data into these models, but like someone who argues a 15 year old will be 50 feet tall based on the last 5 years' growth rate, people who are still arguing that past growth rates will continue apace don't seem to be honest (or aware) to me.

I'm not at all saying that it's impossible some improvement will be discovered in the future that allows AI progress to continue at a breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.

godelski · 1h ago

  > it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling
What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

I don't know about the rest, but I spoke up because I didn't want to hit a brick wall; I want to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation, then we just keep making the same mistake over and over. We let in the conmen, and the people who get so excited by success that they become blind to pitfalls.

And hey, I'm not saying give me money. This account is (mostly) anonymous. There are plenty of people who made accurate predictions and tried working in other directions but never got funding to test how their methods scale up. We say there are no alternatives, but nothing else has been given a tenth of the effort. Apples and oranges...

antonvs · 25m ago
> What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

You need to model the business world and management more like a flock of sheep being herded by forces that mostly don't have to do with what actually is going to happen in future. It makes a lot more sense.

tim333 · 2h ago
Four years into people mocking "we'll have self driving cars next year" while they are on the street daily driving around SF.
godelski · 55m ago
I'm quoting Elon.

I don't care about SF. I care about what I can buy as a typical American, not as an enthusiast in one of the most technologically advanced cities on the planet.

roenxi · 2h ago
As far as I've seen we appear to already have self driving vehicles, the main barriers are legal and regulatory concerns rather than the tech. If a company wanted to put a car on the road that beetles around by itself there aren't any crazy technical challenges to doing that - the issue is even if it was safer than a human driver the company would have a lot of liability problems.
RivieraKid · 1h ago
This is just not true. Waymo, Mobileye, Tesla, and Chinese companies are not bottlenecked by regulations but by high failure rates and/or economics.
apwell23 · 2h ago
> the main barriers are legal and regulatory concerns rather than the tech

they have failed in SF, Phoenix, and other cities that rolled out the red carpet for them

roenxi · 1h ago
Pretty solid evidence that self driving cars already exist though.
antonvs · 22m ago
You're confusing "exist" with "viable".

When someone talks about "having" self-driving cars next year, they're not talking about what are essentially pilot programs.

laserlight · 1h ago
When people say “we'll have self-driving cars next year”, I understand that self-driving cars will be widespread in the developed world and accessible to those who pay a premium. Given the status quo, I find it pointless to discuss the semantics of whether they exist or not.
godelski · 58m ago
As prototypes, yes. But that's like pointing to Japanese robots in the 80's and expecting robot butlers any day now. Or maybe Boston dynamics 10 years ago. Or when OpenAI was into robotics.

There's a big gap between seeing something work in the lab and being ready for real world use. I know we do this in software, but that's a very abnormal thing (and honestly, maybe not the best)

pydry · 1h ago
I remember one reason Phoenix was chosen as a trial location was that it was supposed to be one of the easiest places to drive.

It's pretty damning that it failed there.

risyachka · 45m ago
They are only self-driving in the very controlled environments of a few very well mapped out cities, with good roads, in good weather.

And it took what, like 2 decades, to get there. So no, we aren't even close to having self-driving. Those examples look more like hard-coded solutions for custom test cases.

jeffreygoesto · 1h ago
What? If that stuff works, the liability never comes into play. How can you state that it works and claim liability problems at the same time?
tsunamifury · 4h ago
It’s hilarious how absurdly wrong you are here. Both of those things have happened and you don’t even know it.
seanhunter · 3h ago
I consulted a radiologist more than 5 years after Hinton said that it was completely obvious that radiologists would be replaced by AI in 5 years. I strongly suspect they were not an AI.

Why do I think this?

1) They smelled slightly funny. 2) They got the diagnosis wrong.

OK maybe #2 is a red herring. But I stand by the other reason.

godelski · 4h ago
I named 3 things...

You're going to have to specify which 2 you think happened

hengheng · 2h ago
I have a fusion reactor to sell to you.
laserlight · 1h ago
Some people are ahead of you by 3.5 years [0]:

> Helion has a clear path to net electricity by 2024, and has a long-term goal of delivering electricity for 1 cent per kilowatt-hour. (!)

[0] https://blog.samaltman.com/helion

antonvs · 20m ago
You're missing the big picture. Helion can still make their goal. Once they have a working fusion reactor they can use the energy to build a time machine.
apwell23 · 2h ago
did you by any chance send money to a Nigerian prince?
croes · 4h ago
Where did it happen?

They try it, but it’s not reliable

directevolve · 5h ago
We’re already heading toward the sigmoid plateau. The GPT-3 to 4 shift was massive. Nothing since has touched that. I could easily go back to the models I was using 1-2 years ago with little impact on my work.

I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved. But the base model powering the whole operation seems stuck.

threeseed · 5h ago
> I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved

It really hasn't.

The problem is that a GenAI system needs to understand not only the large codebase but also the latest stable version of every transitive dependency it depends on, which typically number in the hundreds or thousands.

Having it build a component with 10 year old, deprecated, CVE-riddled libraries is of limited use especially when libraries tend to be upgraded in interconnected waves. And so that component will likely not even work anyway.

I was assured that MCP was going to solve all of this but nope.

HumanOstrich · 4h ago
How did you think MCP was going to solve the issue of a large number of outdated dependencies?
threeseed · 3h ago
That large number of outdated dependencies is in the LLM "index", which can't be rapidly refreshed because of the training costs.

MCP would allow it to instead get this information at run-time from language servers, dependency repositories etc. But it hasn't proven to be effective.

nothercastle · 7h ago
I think they will be 10-100x cheaper. I'd be really surprised if we even doubled the quality, though.
makeitdouble · 6h ago
How does it work if they get 10x better in 10 years? Everything else will have already moved on, and the actual technology shift will come from elsewhere.

Basically, what if GenAI is the Minitel and what we want is the internet.

nradov · 7h ago
10× better by what metric? Progress on LLMs has been amazing but already appears to be slowing down.
jaggederest · 6h ago
All these folks are once again seeing the first 1/4 of a sigmoid curve and extrapolating to infinity.
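The extrapolation trap is easy to demonstrate numerically: on its early stretch a logistic curve is almost indistinguishable from an exponential, so a fit to that stretch wildly overshoots the eventual plateau. A small illustrative sketch (the parameters here are arbitrary, chosen only to show the effect):

```python
import numpy as np

def logistic(t):
    """Logistic (sigmoid) curve with plateau 1.0 and midpoint t = 0."""
    return 1.0 / (1.0 + np.exp(-t))

# Observe only the early quarter of the curve, well before the midpoint.
t_early = np.linspace(-6.0, -2.0, 50)
y_early = logistic(t_early)

# Fit an exponential a * exp(b * t) via a straight-line fit in log space.
b, log_a = np.polyfit(t_early, np.log(y_early), 1)
fit = lambda t: np.exp(log_a + b * t)

# The exponential tracks the early data closely...
early_error = np.max(np.abs(fit(t_early) - y_early) / y_early)

# ...but extrapolated past the midpoint it overshoots the plateau badly.
overshoot = fit(4.0) / logistic(4.0)
print(early_error, overshoot)  # small relative error early, large overshoot late
```

The same data that supports "it's going exponential" is equally consistent with a curve that is already a quarter of the way to flat.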
drodgers · 5h ago
No doubt from me that it’s a sigmoid, but how high is the plateau? That’s also hard to know from early in the process, but it would be surprising if there’s not a fair bit of progress left to go.

Human brains seem like an existence proof for what’s possible, but it would be surprising if humans also represent the farthest physical limits of what’s technologically possible without the constraints of biology (hip size, energy budget etc).

leoedin · 5h ago
Biological muscles are proof that you can make incredibly small and forceful actuators. But the state of robotics is nowhere near them, because the fundamental construction of every robotic actuator is completely different.

We’ve been building actuators for 100s of years and we still haven’t got anything comparable to a muscle. And even if you build a better hydraulic ram or brushless motor driven linear actuator you will still never achieve the same kind of behaviour, because the technologies are fundamentally different.

I don’t know where the ceiling of LLM performance will be, but as the building blocks are fundamentally different to those of biological computers, it seems unlikely that the limits will be in any way linked to those of the human brain. In much the same way the best hydraulic ram has completely different qualities to a human arm. In some dimensions it’s many orders of magnitudes better, but in others it’s much much worse.

audunw · 4h ago
I don’t think it’s hard to know. We’re already seeing several signs of being near the plateau in terms of capabilities. Most big breakthroughs these days seem to be in areas where we haven’t spent the effort in training and model engineering, like the recent improvements in video generation. So of course we could get improvements in areas where we haven’t tried to use ML yet.

For text generation, it seems like the fast progress was mainly due to feeding the models exponentially more data and exponentially more compute power. But we know that the growth in data is over. The growth in compute has shifted from a steep curve (just buy more chips) to a slow curve (we have to build exponentially more factories if we want exponentially more chips).

I'm sure we will have big improvements in efficiency. I'm sure nearly everyone will use good LLMs to support them in their work, and they may even be able to do all they need to do on-device. But that doesn't make the models significantly smarter.

jaggederest · 4h ago
The wonderful thing about a sigmoid is that, just as it seems like it's going exponential, it goes back to linear. So I'd guess we're not going to see 1000x from here - I could be wrong, but I think the low hanging fruit has been picked. I would be surprised in 10 years if AI were 100x better than it is now (per watt, maybe, since energy devoted to computing is essentially the limiting factor)

The thing about the latter 1/3rd of a sigmoid curve is, you're still making good progress, it's just not easy any more. The returns have begun to diminish, and I do think you could argue that's already happening for LLMs.

formerly_proven · 2h ago
Progress so far has been half and half technique and brute force. Overall technique has now been settled for a few years, so that's mostly in the tweaking phase. Brute force doesn't scale by itself, and semiconductors have been running into a wall for the last few years. Those (plus stagnating outcomes) seem like decent reasons to suspect the plateau is nigh.
GoblinSlayer · 4h ago
Human brains are easy to do, just run evolution for neural networks.
elif · 6h ago
With autonomous vehicles, the narrative of imperceptibly slow incremental change about chasing 9's is still the zeitgeist, despite an actual 10x improvement in fatality rates over human drivers already existing.

There is a lag in how humans are reacting to AI which is probably a reflexive aspect of human nature. There are so many strategies being employed to minimize progress in a technology which 3 years ago did not exist and now represents a frontier of countless individual disciplines.

intended · 5h ago
This is my favorite thing to point out from the day we started talking about autonomous vehicles on tech sites.

If you took a Tesla or a Waymo and dropped it into a tier 2 city in India, it would stop moving.

Driving data is cultural data, not data about pure physics.

You will never get to full self driving, even with more processing power, because the underlying assumptions are incorrect. Doing more of the same thing, will not achieve the stated goal of full self driving.

You would need to have something like networked driving, or government supported networks of driving information, to deal with the cultural factor.

Same with GenAI - the tooling factor will not magically solve the people, process, power and economic factors.

yusina · 4h ago
> You would need to have something like networked driving, or government supported networks of driving information, to deal with the cultural factor.

Or actual intelligence. That observes its surroundings and learns what's going on. That can solve generic problems. Which is the definition of intelligence. One of the obvious proofs that what everybody is calling "AI" is fundamentally not intelligent, so it's a blatant misnomer.

binoct · 5h ago
One of my favorite things to question about autonomous driving is the goalposts. What do you mean the “stated goal of full self driving”, which is unachievable? Any vehicle, anywhere in the world, in any conditions? That seems an absurd goal that ignores the very real value in having vehicles that do not require drivers and are safer than humans but are limited to certain regions.

Absolutely driving is cultural (all things people do are cultural), but given the tens of millions of miles driven by Waymo, clearly it has managed the cultural factor in the places where it has been deployed. Modern autonomous driving is about how people drive far more than the rules of the road, even on the highly regulated streets of Western countries. Absolutely the constraints of driving in Chennai are different, but what is fundamentally different? What leads to an impossible leap in processing power to operate there?

LegionMammal978 · 5h ago
> What do you mean the “stated goal of full self driving”, which is unachievable? Any vehicle, anywhere in the world, in any conditions? That seems an absurd goal that ignores the very real value in having vehicles that do not require drivers and are safer than humans but are limited to certain regions.

I definitely recall reading some thinkpieces along the lines of "In the year 203X, there will be no more human drivers in America!" which was and still is clearly absurd. Just about any stupidly high goalpost you can think of has been uttered by someone in the world early on.

Anyway, I'd be interested in a breakdown on reliability figures in urban vs. suburban vs. rural environments, if there is such a thing, and not just the shallow take of "everything outside cities is trivial!" I sometimes see. Waymo is very heavily skewed toward (a short list of) cities, so I'd question whether that's just a matter of policy, or whether there are distinct challenges outside of them. Self-driving cars that only work in cities would be useful to people living there, but they wouldn't displace the majority of human driving-miles like some want them to.

atleastoptimal · 5h ago
Why couldn’t an autonomous vehicle adapt to different cultures? American driving culture has specific qualities and elements to learn, same with India or any other country.

Do you really think Waymos in SF operate solely on physics? There are volumes of data on driver behavior, when to pass, change lanes, react to aggressive drivers, etc.

yusina · 4h ago
> a technology which 3 years ago did not exist

Decades of machine learning research would like to have a word.

threeseed · 5h ago
How are we in 0.1 of GenAI ? It's been developed for nearly a decade now.

And each successive model that has been released has done nothing to fundamentally change the use cases that the technology can be applied to i.e. those which are tolerant of a large percentage of incoherent mistakes. Which isn't all that many.

So you can keep your 10x better and 100x cheaper models because they are of limited usefulness let alone being a turning point for anything.

Flemlo · 4h ago
A decade?

The explosion of funding, awareness, etc. only happened after the GPT-3 launch.

hyperadvanced · 3h ago
Funding is behind the curve. Social networks existed in 2003 and Facebook became a billion dollar company a decade later. AI horror fantasies from the 90’s still haven’t come true. There is no god, there is no Skynet.
imtringued · 3h ago
That was five years ago not yesterday.
Flemlo · 2h ago
I didn't say yesterday.

Nonetheless, it took OpenAI until Nov 2022 to reach 1 million users.

The overall awareness and breakthrough probably didn't come in 2020.

ricardobayes · 5h ago
Frankly, we don't know. That "turning point" that seemed so close for many technologies never came for some of them. Think 3D-printing that was supposed to take over manufacturing. Or self-driving, which has been "just around the corner" for a decade now. And still is probably a decade away. Only time will tell if GenAI/LLMs are color TV or 3D TV.
kergonath · 4h ago
> Think 3D-printing that was supposed to take over manufacturing.

3D printing is making huge progress in heavy industries. It’s not sexy and does not make headlines but it absolutely is happening. It won’t replace traditional manufacturing at huge scales (either large pieces or very high throughput). But it’s bringing costs way down for fiddly parts or replacements. It is also affecting designs, which can be made simpler by using complex pieces that cannot be produced otherwise. It is not taking over, because it is not a silver bullet, but it is now indispensable in several industries.

godelski · 4h ago
You're misunderstanding the parent's complaint, and frankly the complaints about AI. Certainly 3D printing is powerful, but it hasn't changed things the way it was promised to. You forgot that 30 years ago people were saying there would be one in every house, because a printer can print a printer, and how this would revolutionize everything because you could just print anything at home.

The same thing with AI. You'd be blind or lying if you said it hasn't advanced a lot. People aren't denying that. But people are fed up with constantly being promised the moon and getting a cheap plastic replica instead.

The tech is rapidly advancing and doing good. But it just can't keep up with the bubble of hype. That's the problem. The hype, not the tech.

Frankly, the hype harms the tech too. We can't solve problems with the tech if we're just throwing most of our money at vaporware. I'm upset with the hype BECAUSE I like the tech.

So don't confuse the difference. Make sure you understand what you're arguing against. Because it sounds like we should be on the same team, not arguing against one another. That just helps the people selling vaporware

croes · 4h ago
If not when.
solumunus · 2h ago
I use LLMs daily and love them, but at the current rate of progress it's just not really something worth worrying about. Those who are hysterical about AI seem to think LLMs are getting exponentially better, when in fact diminishing returns are hitting hard. Could some new innovation change that? It's possible, but it's not inevitable, or at least not necessarily imminent.
apwell23 · 5h ago
> Do you think this will be the case when o3 models are 10x better and 100x cheaper?

why don't you bring it up then.

> There will be a turning point but it’s not happened yet.

do you know something that rest of us don't ?

lozenge · 3h ago
Macroeconomic policy always changes, recessions come and go, but it's not a permanent change in the way e-commerce or AI is.
leflambeur · 8h ago
It's simply the old Capital vs Labor struggle. CEOs and VCs all sing in the same choir, and for the past 3 years the tune is "be leaner".

p.s.: I'm a big fan of yours on Twitter.

saubeidl · 3h ago
Except Labor in Tech is unique in that it has zero class consciousness and often actively roots for their exploiters.

If we were to unionize, we could force this machine to a halt and shift the balance of power back in our favor.

But we don't, because many of us have been brainwashed to believe we're on the same side as the ones trying to squeeze us.

GoblinSlayer · 2h ago
>If we were to unionize

Last time it was tried the union coerced everyone to root for their exploiters. People that unionize aren't magically different.

godelski · 7h ago

  > the tune is "be leaner".
Seems like they're happy to start cutting limbs to lose weight. It's hard to keep cutting fat if you've been aggressively cutting fat for so long. If the last CEO did their job there shouldn't be much fat left.
chii · 5h ago
> If the last CEO did their job there shouldn't be much fat left

funny how that fat analogy works...because the head (brain) has a lot more fat content than muscles/limbs.

godelski · 4h ago
I never thought to extend the analogy like that, but I like it. It's showing. I mean, look how people think my comments imply I don't know what triage is. Not knowing that would be counter to everything I'm saying, which is that a lot of these value numbers are poor guesstimates at best. Happens every time I bring this up. It's absurd to think we could measure everything in terms of money. Even economists will tell you that's silly.
leflambeur · 7h ago
yet this will continue until it grinds to a halt.

It's amazing and cringy the level of parroting performed by executives. Independent thought is very rare amongst business "leaders".

godelski · 7h ago
Let's make the laptops thinner. This way we can clean the oil off of the keyboard, putting it on the screen.

At this point I'm not sure it's lack of independent thought so much as lack of thought. I'm even beginning to question whether people even use the products they work on. Shouldn't there be more pressure from engineers at this point? Is it yes men from top to bottom? Even CEOs seem to be yes men in response to shareholders, but that's like being a yes man to the wind.

When I bring this stuff up I'm called negative, a perfectionist, or told I'm out of touch with customers and or understand "value". Idk, maybe they're right. But I'm an engineer. My job is to find problems and fix them. I'm not negative, I'm trying to make the product better. And they're right, I don't understand value. I'm an engineer, it's not my job to make up a number about how valuable some bug fix is or isn't. What is this, "Whose Line Is It Anyways?" If you want made up dollar values go ask the business monkeys, I'm a code monkey

andsoitis · 5h ago
> I'm an engineer, it's not my job to make up a number about how valuable some bug fix is or isn't.

So you think all bugs are equally important to fix?

godelski · 4h ago
No, of course not. That would be laughably absurd. So do you think I'm trolling or you're misunderstanding? Because who isn't familiar with triage?

Do you think every bug's monetary value is perfectly aligned with user impact? Certainly that isn't true. If it were we'd be much better at security and would be more concerned with data privacy. There's no perfect metric for anything, and it would similarly be naïve to think you could place a dollar value on everything, let alone accurately. That's what I'm talking about.

My main concern as an engineer is making the best product I can.

The main concern of the manager is to make the best business.

Don't get confused and think those are the same things. Hopefully they align, but they don't always.

bawolff · 7h ago
Honestly, if anything i think AI is going to reverse the trend. Someone is going to have to be hired to clean up after them.
notepad0x90 · 5h ago
I think they said that about outsourcing software dev jobs. The reality is somewhere in the middle. Extreme cases will need cleanup, but overall it's here to stay, maybe with more babysitting.
godelski · 4h ago
I think the reality is Lemon Market Economics. We'll sacrifice quality for price. People want better quality but the truth is that it's a very information asymmetric game and it's really hard to tell quality. If it wasn't, we could all just rely on Amazon reviews and tech reviewers. But without informed consumers, price is all that matters even if it creates a market nobody wants.
tempodox · 7h ago
If anyone will actually bother with cleaning up.
xkcd1963 · 6h ago
That's the impression I got. Things overall just get worse in quality because people rely too much on low wages and copy-pasting LLM answers.
hattmall · 6h ago
I think that's true in software development. A lot of the focus is on coding because that's really the domain of the people interested in AI, because ultimately they ARE software. But the killer app isn't software; it's anything where the operation is formulaic, the formula is tedious to figure out, but once you know it you can confirm it's correct by working backwards. Software has far too many variables, not least of which is the end user. On the other hand, things like accounting, finance, and engineering are far more suitable for trained models and back-testing for conformity.
autobodie · 6h ago
Get worse for who? The ruling class will simply never care how bad things get for working people if things are getting better for the ruling class.
shswkna · 4h ago
The central problem with this statement is that we expect others to care, but we do not expect this from ourselves.

We have agency. Whether we are brainwashed or not. If we cared about ourselves, then we don’t need another class, or race, or whatever other grouping to do this for us.

xkcd1963 · 6h ago
I meant just regular products. As an example, if I log in to Bitpanda in the browser, the parts of the UI that should show translated text show the raw translation keys instead. Just countless examples, and many security issues as well.

Regarding class struggle I think class division always existed but we the mass have all the tools to improve our situation.

csomar · 6h ago
The Elon Musk experiment is the worst anchor for comparison, since the dude destabilized Twitter (re-branding, random layoffs, etc...). I'd be more interested in companies that went leaner but did it in a sane manner. The Internet user base grew between 2022 and now, but Twitter might have lost users in that period, and it certainly didn't make any new innovations beyond trying to charge its users more and confusing them.
tdeck · 9h ago
Maybe someone can help me wrap my head around this in a different way, because here's how I see it.

If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools were an amazing productivity boost in the end, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to be in a way that would be obvious to customers and users, not just in the form of some blog post or earnings call.

For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen one single company in this situation. So that makes me think that these decisions are hype-driven short term thinking.

topspin · 7h ago
"shouldn't it be painfully obvious in companies' output?"

No.

The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things; regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.

One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on and it'll be writers, actors, etc.

throwaway2037 · 5h ago

    > One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Great point. The perfect example: (From Wiki):

    > In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
AFAIK: They are talking about DeepMind AlphaFold.

Related: (Also from Wiki):

    > Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.
SirHumphrey · 1h ago
I think AlphaFold is where current AI terminology starts breaking down. Because in some real sense, AlphaFold is primarily a statistical model - yes, it's interesting that they developed it using ML techniques, but from the use standpoint it's little different than perturbation based black boxes that were used before that for 20 years.

Yes, it's an example of ML used in science (other examples include NN based force fields for molecule dynamics simulations and meteorological models) - but a biologist or meteorologist usually cares little how the software package they are using works (excluding the knowledge of different limitation of numerical vs statistical models).

The whole thing "but look AI in science" seem to me like Motte-and-bailey argument to imply the use of AGI-like MLLM agents that perform independent research - currently a much less successful approach.

ImaCake · 7h ago
Maybe this means that LLMs are ultimately good for small business. If large business is constrained by being large, and LLMs are equally accessible to 5 people or 100, then surely what we will see is increased productivity in small companies?
topspin · 7h ago
My direct experience has been that even very small tech businesses contend with IP issues as well. And they don't have the means to either risk or deliberately instigate a fight.
bawolff · 7h ago
Even still, in theory this should free up more money to hire more lawyers, marketers, etc. The effect should still be there, presuming the market isn't saturated with new ideas.
xkcd1963 · 6h ago
Something else will get expensive in the meantime, e.g. it doesn't matter how much you earn, landlords will always increase rent to the limit because a living space is a basic necessity
bawolff · 4h ago
No, landlords will increase rent as much as they can because they like money (they call it capitalism for a reason). This is true of all goods, both essential and non-essential. All businesses follow the rule of supply and demand when setting prices, or quickly go out of business.

In the scenario being discussed: if a bunch of companies hired a whole bunch of lawyers, marketers, etc., that might make salaries go up due to increased demand (but probably not by a huge amount, as tech isn't the only industry in the world). That still first requires companies to be hiring more of these types of people for the effect to happen, so we should still see some of the increased output even if there is a limiting factor. We would also notice the salary of those professions going up, which so far hasn't happened.

xkcd1963 · 3h ago
you say no for no reason. read what I wrote again
SteveNuts · 6h ago
>A little further on and it'll be writers, actors, etc.

The tech is going to have to be absolutely flawless, otherwise the uncanny-valley nature of AI "actors" in a movie will be as annoying as when the audio and video aren't perfectly synced in a stream. At least that's how I see it..

Izkata · 5h ago
This was made a little over a week ago: https://www.reddit.com/r/IndiaTech/comments/1ksjcsr/this_vid...

For most of them I'm not seeing any of those issues.

PeterHolzwarth · 5h ago
I get what you mean, but the last year has been a story of sudden limits and ceilings of capability. The (damned impressive) video you post is a bunch of extremely brief snippets strung together. I'm not yet sure we can move substantially beyond that to something transformative or pervasively destructive.

A couple years ago, we thought the trend was without limits - a five second video would turn into a five minute video, and keep going from there. But now I wonder if perhaps there are built in limits to how far things can go without having a data center with a billion Nvidia cards and a dozen nuclear reactors serving them power.

Again, I don't know the limits, but we've seen in the last year some sudden walls pop up that change our sense of the trajectory down to something less "the future is just ten months away."

genewitch · 1h ago
Approximately 1 second was how long AI could hold it together. If you had a lot of free time you could extend that out a bit, but it'll mess something up. So generally people who make them will run it slow-motion. This is the first clip I've seen with it at full speed.

The quick cuts thing is a huge turnoff so if they have a 15 second clip later on, I missed it.

When I say "1 second" I mean that's what I was doing with automatic1111 a couple years ago. And every video I've seen is the same 30-60 generated frames...

meander_water · 4h ago
I wonder if this is going to change the ad/marketing industry. People generally put up with shitty ads, and these will be much cheaper to produce. I dread what's coming next.
csomar · 6h ago
> where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.

Can you give an example, say in medicine, where AI made a significant advancement? That is, we are talking neural networks and up (i.e. LLMs) and not some local optimization.

pkroll · 5h ago
https://arxiv.org/abs/2412.10849

"Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning"

squigz · 5h ago
This isn't really applying LLMs to research in novel ways.
pera · 2h ago
Bullshit: Chatbots are not failing to demonstrate a tangible increase in companies' output because of regulations and IP law, they are failing because they are still not good for the job.

LLMs only exist because the companies developing them are so ridiculously powerful that they can completely ignore the rule of law, or if necessary even change it (as they are currently trying to do here in Europe).

Remember we are talking about a technology created by torrenting 82 TB of pirated books, and that's just one single example.

"Steal all the users, steal all the music" and then lawyer up, as Eric Schmidt said at Stanford a few months ago.

throwawayffffas · 1h ago
The things you mention in that legion of other things are actually what LLMs do better than core intellectual work. They can spew entire libraries of marketing BS, summarize decades of legal precedent, and fill out mountains of red-tape checklists.

They have trouble with debugging obvious bugs though.

casualscience · 3h ago
In big companies, this is a bit slower due to the need to migrate entrenched systems and org charts into newer workflows, but I think you are seeing more productivity there too. Where this is much more obvious is in indie games and software where small agile teams can adopt new ways of working quickly...

E.g. look at the indie games count on steam by year: https://steamdb.info/stats/releases/?tagid=492

bojan · 9m ago
The number of critically acclaimed games remains the same though. So for now we're getting quantity, but not the quality.
throwaway2037 · 5h ago
Regarding the impact of LLMs on non-programming tasks, check out this one:

https://www.ft.com/content/4f20fbb9-a10f-4a08-9a13-efa1b55dd...

    > The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.

    > The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.

    > “The last 5 per cent now matters because the rest is now a commodity,” he said.
In my eyes, that is major. Junior ibankers are not cheap -- they make about 150K USD per year minimum (total comp).
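For scale, a back-of-envelope sketch of that claim. Only the 6-person/2-week/95% figures and the 150K comp come from the quotes above; the hours-per-week assumption is mine:

```python
# Rough S-1 drafting cost before/after, using Solomon's "95 per cent" figure.
team_size = 6
weeks = 2
hours_per_week = 60                      # assumption: typical junior-banker workweek
annual_comp = 150_000                    # "about 150K USD per year minimum (total comp)"
hourly_rate = annual_comp / (52 * hours_per_week)

before_hours = team_size * weeks * hours_per_week   # person-hours for a full draft
after_hours = before_hours * 0.05                   # only "the last 5 per cent" remains

print(f"before: {before_hours} person-hours (~${before_hours * hourly_rate:,.0f})")
print(f"after:  {after_hours:.0f} person-hours (~${after_hours * hourly_rate:,.0f})")
```

Even with conservative assumptions, that's hundreds of person-hours reclaimed per prospectus, which is why banks care.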
bawolff · 7h ago
Realistically it's because layoffs have a high reputational cost. AI provides an excuse that lets companies do layoffs without suffering the reputation hit. In essence, AI hype makes layoffs cheaper.

Doesn't really matter if AI actually works or not.

CMCDragonkai · 9h ago
It's cause there are still bottlenecks. AI is definitely boosting productivity in specific areas, but the total system output is bottlenecked. I think we will see these bottlenecks get rerouted or refactored in the coming years.
_heimdall · 8h ago
> AI is definitely boosting productivity in specific areas

What makes you so sure of the productivity boost when we aren't seeing a change in output?

tdeck · 9h ago
What do you think the main bottlenecks are right now?
kergonath · 4h ago
Quality control, for one. The state of commercial software is appalling. Writing code itself is not enough to get a useable piece of software.

LLMs are also not very useful for long term strategy or to come up with novel features or combinations of features. They also are not great at maintaining existing code, particularly without comprehensive test suites. They are good at coming up with tests for boiler plate code, but not really for high-level features.

fhd2 · 3h ago
Considering how software is increasingly made out of separate components and services, integration testing can become pretty damn difficult. So quite often, the public release is the first serious integration test.

From my experience, this stuff is rarely introduced to save developers from typing in the code for their logic. Actual reasons I observe:

1. SaaS sales/marketing pushing their offerings on decision makers - software being a pop culture, this works pretty well. It can be hard for internal staff to push back on What Everyone Is Using (TM). Even if it makes little to no sense.

2. Outsourcing liability, maintenance, and general "having to think about it". Can be entirely valid, but often it indeed comes from an "I don't want to think of it" kind of place.

I don't see this stuff slowing down GenAI or not, mainly because it has usually little to do with saving time or money.

CMCDragonkai · 3h ago
Informational complexity bottlenecks. So many things are shackled to human decision making loops. If we were truly serious, we would unshackle everything and let it run wild. Would be chaotic, but chaos create strange attractors.
esperent · 8h ago
> It's cause there are still bottlenecks

How do you know this? What are the bottlenecks?

No comments yet

jayd16 · 8h ago
We'll take cheaper over faster but is that the case? If it's not cheaper or faster what is the point?
strangattractor · 6h ago
Most significant technology takes almost a generation to be fully adopted. I think it is unlikely we are seeing the full effect of LLM's at the moment.

Content producers are blocking scrapers of their sites to prevent AI companies from using their content. I would not assume that AI is either inevitable or on an easy path to adoption. AI certainly isn't very useful if what it "knows" is out of date.

grumpymuppet · 8h ago
The problem with this sort of analysis is that it's incremental and balanced across a large institution usually.

I think the reality is less like a switch and more like there are just certain jobs that get easier and you just need fewer people overall.

And you DO see companies laying off people in large numbers fairly regularly.

simonsarris · 8h ago
> And you DO see companies laying off people in large numbers fairly regularly.

Sure, but so far those layoffs are too regular to be AI-gains-driven (at least in software). We have some data on software job postings, and the job apocalypse, and the corresponding layoffs, coincided with the end of ultra-low interest rates. If AI has had an effect this year or last, it's quite tiny in comparison.

https://fred.stlouisfed.org/graph/?g=1JmOr

so one can argue more is to come, but it's hard to see how it's had a real effect on jobs/layoffs thus far.

hyperadvanced · 8h ago
Layoffs happen because cash is scarce. In fact, cash is so scarce for anything that’s not “AI” that it’s basically nonexistent for startup fundraising purposes.
AznHisoka · 8h ago
“we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products.”

Shipping features faster != innovation or improvements to existing products

tdeck · 8h ago
Granting that those don't fully overlap, is that relevant to the point? I'm not seeing either.
AznHisoka · 8h ago
Because they're just pushing out stuff that nobody might even need or want to buy. It's not even necessarily leading to more revenue. Software companies aren't factories. More stuff doesn't mean more $$$ made.
ngruhn · 6h ago
Unfortunately, I think it does. Even if customers don't want all that extra stuff and will never use it, it sells better.
epgui · 8h ago
And?
wcfrobert · 8h ago
If AI makes everyone 10x engineers, you can 2x the productive output while reducing headcount by 5x.

Luckily software companies are not ball bearings factories.
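The arithmetic in that hypothetical does check out; a trivial sketch with a made-up baseline headcount:

```python
# Hypothetical from the comment: every engineer becomes 10x as productive,
# while headcount is cut 5x.
baseline_engineers = 100
baseline_output = baseline_engineers * 1     # 1 output unit per engineer

new_engineers = baseline_engineers // 5      # "reducing headcount by 5x"
new_output = new_engineers * 10              # "10x engineers"

print(new_output / baseline_output)          # -> 2.0, i.e. "2x the productive output"
```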

tikhonj · 8h ago
unluckily, too many corporate managers seem to think they are :/
econ · 7h ago
The days of hating on idea men seem over.

I don't get it either. You hire someone in the hope of ROI. Some things work, some kinda don't. Now people will be n times more productive, therefore you should hire fewer people??

That would mean you have no ideas. It says nothing about the potential.

ccorcos · 8h ago
AI tools seem to be most useful for little things. Fixing a little bug, making a little change. But those things aren’t always very visible or really move the needle.

It may help you build a real product feature quicker, but AI is not necessarily doing the research and product design which is probably the bottleneck for seeing real impact.

droopyEyelids · 8h ago
If they're fixing all the little bugs that should give everyone much more time to think about product design and do the research.
ccorcos · 8h ago
Assuming a well functioning business, yes.
jajko · 2h ago
Or a lot of small fixes all over the place. Yet in reality we don't see this anywhere; not sure what exactly that means.

Maybe overall complexity creeping up rolls over any small gains, or devs are becoming lazier and just copy-paste LLM output without a serious look at it?

My company didn't even adopt or allow use of LLMs in any way for anything so far (private client data security is more important than any productivity gains, which seem questionable anyway when looking around... and serious data breaches can easily end up with fines in the hundreds-of-millions ballpark).

kraig911 · 8h ago
Effort in this equation isn't measured in man-hours saved but dollars saved. We all know this is BS and isn't going to manifest this way. It's tantamount to giving framers a nail gun versus a hammer. We'll still be climbing the same rafters and doing the same work.
autobodie · 6h ago
No, we would see profits increase, and we have been seeing profits increase.
wiseowise · 4h ago
I will never understand this argument. If you have a super tool that can magically double your output, why would you suddenly double your output publicly? So that you now essentially work twice as much for the same money? You use it to work less; your output stays static or marginally improves. That's the smart play.

Note: I'm talking about your run-of-the-mill SE wagie work, not startups where your food depends on your output.

conradkay · 4h ago
That only works if you're one of very few people with the tool. Otherwise the rest of your team is now 2x as productive as you.
wiseowise · 3h ago
That’s assuming they were as productive as me in the first place.
imtringued · 2h ago
How would you know? What if they are following your strategy and are hiding their "power level"?
bjt12345 · 8h ago
The problem seems to be two-fold.

Firstly, the capex is currently too high for all but the few.

This is a rather obvious statement, sure. But the consequence is a lot of companies saying they "have tried language models and they didn't work", when the capex they put in was laughable.

Secondly, there's a corporate paralysis over AI.

I received a panicky policy statement written in legalese forbidding employees from using LLMs in any form. Written both out of a panic about intellectual property leaking and a panic about how to manage and control staff going forward.

I think a lot of corporates still clutch at the view that AI will push workforce costs down, and are secretly wasting a lot of money failing at this.

The waste is extraordinary, but it's other people's money (it's actually the shareholders' money), and it's seen as being all for a good cause and not something to discuss after it's gone. I can never get it discussed.

Meanwhile, at a grassroots level, I see AI being embraced and improving productivity; every second IT worker is using it. It's just that because of this corporate panicking and mismanagement, its value is not yet measured.

bawolff · 7h ago
> Firstly, the capex is currently too high for all but the few.

> This is a rather obvious statement,

Nobody is saying companies have to make LLMs themselves.

SaaS is a thing.

bjt12345 · 2h ago
By SaaS I assume you mean public LLMs. The problem is the hand-wringing over intellectual property leaking from the company. Companies are actually writing policies banning their use.

In regards to private LLMs, the situation has become disappointing in the last 6 months.

I can only think of Mistral as being a genuine vendor.

But given the limitations in context window size, fine tuning is still necessary, and even that requires capex that I rarely see.

But my comment comes from the fact that I've heard several smart people say "we tried language models at work and it failed".

However, in my discussions with them, they have no concept of the size of the datacentres used by the webscalers.

tdeck · 8h ago
This is a good reminder that every org is different. However some companies like Microsoft are aggressively pushing AI tools internally, to a degree that is almost cringe.
throwaway2037 · 4h ago
I don't want to shill for LLMs-for-devs, but I think this is excellent corporate strategy by Microsoft. They are dog-fooding LLMs-for-devs. In a sense, this is R&D using real world tests. It is a product manager's dream.

The Google web-based office productivity suite is similar. I heard a rumor that at some point Google senior mgmt said that nearly all employees (excluding accounting) must use Google Docs. I am sure they fixed a huge number of bugs plus added missing/blocking features, which made the product much more competitive vs MSFT Office. Fifteen years ago, Google Docs was a curiosity -- an experiment in just how complex web apps could become. Today, Google Docs is the premier choice for new small businesses. It is cheaper than MSFT Office, and "good enough".

bjt12345 · 8h ago
But this is often a mixture of these two things.

The tools are often cringe because the capex was laughable. E.g. for one solution, the trial was done using public LLMs, and then they switched over to an internally built LLM that is terrible.

Or, secondly, the process is often cringe because the corporate aims are laughable.

I've had an argument with a manager making a multi-million dollar investment in a zero-coding solution that we ended up throwing in the bin years later.

They argued that they were going with this bad product because "they don't want to have to manage a team of developers".

When I objected, they responded "this product costs millions of dollars, how dare you?"

How dare me indeed...

They promptly left the company but it took 5 years before it was finally canned, and plenty of people wasted 5 years of their career on a dead-end product.

godelski · 7h ago

  > shipping features and fixes faster than ever before
Meanwhile Apple duplicated my gf's contact, creating duplicate birthdays on my calendar. It couldn't find the duplicates despite matching names, nicknames, phone numbers, birthdays, and the fact that both contacts were associated with her Apple account. I manually merged them and ended up with 3 copies of her birthday in my calendar...

Seriously, this shit can be solved with a regex...

The number of issues like these I see is growing exponentially, not decreasing. I don't think it's AI though, because it started before that. I think these companies are just overfitting to whatever silly metrics they have decided are best.
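The duplicate-contact case above really is plain field matching. A minimal sketch, assuming a hypothetical dict-based contact record (none of this reflects Apple's actual data model): group contacts by normalized name and phone, then merge each group's fields.

```python
# Sketch: dedupe contacts that agree on normalized name + phone.
# All field names here are illustrative assumptions, not a real API.
from collections import defaultdict

def normalize_phone(p):
    # Strip formatting so "+1 (555) 010-2000" and "15550102000" compare equal.
    return "".join(ch for ch in p if ch.isdigit())

def dedupe(contacts):
    groups = defaultdict(list)
    for c in contacts:
        key = (c["name"].strip().lower(), normalize_phone(c["phone"]))
        groups[key].append(c)
    merged = []
    for same in groups.values():
        base = dict(same[0])
        for dup in same[1:]:
            for k, v in dup.items():
                base.setdefault(k, v)  # keep first value, fill in missing fields
        merged.append(base)
    return merged

contacts = [
    {"name": "Jane Doe", "phone": "+1 (555) 010-2000", "birthday": "03-14"},
    {"name": "jane doe", "phone": "15550102000", "nickname": "Janey"},
]
print(len(dedupe(contacts)))  # -> 1 (one merged contact instead of two)
```

The point isn't that this is production-grade (real dedupe also has to handle partial matches), just that exact-field duplicates like the case described need no ML at all.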

ivape · 8h ago
Companies are not accepting that their entire business will mostly go away. They are mostly frogs in slowly boiling water; that's why they are kinda just incorporating these little chat bots and LLMs into their business, but the truth of the matter is it's all going away, and it's impossible to believe. Take something like JIRA: it's entirely laughable, because a simple LLM can handle entire project management with freaking voice and zero programming. They just don't believe that's the reality; we're talking about a Kodak moment.

Worker productivity is secondary to business destruction, which is the primary event we're really waiting for.

nradov · 8h ago
That's silly. You still need a way to track and prioritize tasks even if you use voice input. Jira may be replaced with something better, built around an LLM from the ground up. But the basic project management requirements will never go away.
ivape · 8h ago
Yes, that's quite easy. I say "Hey, reorganize the tasks like so, prioritize this, like so", and if I really need to, I can hook up some function calls, but I suspect this will be unnecessary with a few more LLM iterations (if even that). You can keep running from how powerful these LLMs are, but I'll just sit and wait for the business/startup apocalypse (which is coming). Jira will not be replaced by something better; it'll be replaced by some weekend project a high schooler makes. The very fact that it's valued at over a billion dollars in the market is just going to be a profound rug pull soon enough.

So let me keep it real, I am shorting Atlassian over the next 5 years. Asana is another, there's plenty of startup IPOs that need to be shorted to the ground basically.

petersellers · 6h ago
If replacing Jira is really as easy as you claim, then it would have happened by now. At the very least, we'd be getting hit by a deluge of HN posts and articles about how to spin up your very own project management application with an LLM.

I think that this sentiment, along with all of the hype around AI in general, is failing to grasp a lot of the complexity around software creation. I'm not just talking about writing the code for a new application - I'm talking about maintaining that application, ensuring that it executes reliably and correctly, thinking about the features and UX required to make it as frictionless as possible (and voice input isn't the solution there, I'm very confident of that).

ivape · 4h ago
You are not understanding what I am saying. I am saying it's the calm before the storm, before everyone realizes they are paying a bunch of startups for literally no comparative value given AI. First the agile people are going to get fired, then the devs are just going to go "oh yeah, I just manage everything in my LLM".

I'll be here in a year, we can have this exact discussion again.

petersellers · 4h ago
I understand what you are saying, I just don't agree with it.

"AI" is not going to wholesale replace software development anytime soon, and certainly not within a year's time because of the reasons I mentioned. The way you worded your post made it sound like you believed that capability was already here - nevertheless, whether you think it's here now or will be here in a year, both estimates are way off IMO.

hooverd · 7h ago
What sort of assurances can I get from that weekend project? I think we're going to build even more obscene towers of complexity as nobody knows how anything works anymore, because they choose not to.
ivape · 6h ago
What assurances do you get from the internals of an LLM?
badsectoracula · 4h ago
> Take something like JIRA, it's entirely laughable because a simple LLM can handle entire project management with freaking voice with zero programming

When I used a not-so-simple LLM to make it act as a text adventure game it could barely keep track of the items in my inventory, so TBH i am a little bit skeptical that an LLM can handle entire project management - even without voice.

Perhaps it might be able to use tools/MCP/RPC to call out to real project management software and pretend to be your accountant/manager/whoever, but i wouldn't call that the LLM itself doing the project management task - and someone would need to write that project management software.
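The tool-calling split described here can be sketched concretely. A minimal illustration, assuming a toy in-memory tracker (all names — `add_task`, `set_priority`, `TOOLS` — are hypothetical, not any vendor's API): the LLM only emits structured tool calls; the state and the actual project-management logic live in ordinary code.

```python
# Sketch: the LLM doesn't hold the project state itself; it emits tool
# calls that a dispatcher routes to real task-tracking code like this.
tasks = {}

def add_task(task_id, title):
    tasks[task_id] = {"title": title, "priority": "normal", "done": False}
    return tasks[task_id]

def set_priority(task_id, priority):
    tasks[task_id]["priority"] = priority
    return tasks[task_id]

TOOLS = {"add_task": add_task, "set_priority": set_priority}

def handle_tool_call(name, args):
    """Dispatch one structured call (e.g. from function calling or MCP)."""
    return TOOLS[name](**args)

# A model might emit: {"name": "add_task",
#                      "args": {"task_id": "T1", "title": "Fix login bug"}}
handle_tool_call("add_task", {"task_id": "T1", "title": "Fix login bug"})
handle_tool_call("set_priority", {"task_id": "T1", "priority": "high"})
print(tasks["T1"]["priority"])  # -> high
```

This is exactly the division of labor the comment describes: the consistency (inventory, task state) comes from the plain code, not from the model's context window.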

ivape · 4h ago
There are innovative ways to accomplish the consistency you seek for the example application you mentioned. They are coming a lot sooner than you think, but hey this thread is a bit of a poker game before the flop, I’m just placing my bet - you can call the bluff.

We just have to wait for the cards to flip, and that’s happening on a quadratic curve (some say exponential).

idkwhattocallme · 20h ago
I worked at two different $10B+ market cap companies during ZIRP. I recall that in most meetings over half of the knowledge workers attending were superfluous. I mean, we hired someone on my team to attend cross-functional meetings because our calendars were literally too full for us to attend them. Why could we do that? Because the company was growing, and hiring someone to attend meetings wasn't going to hurt the skyrocketing stock. Plus hiring someone gave my VP more headcount and therefore more clout. The market only valued company growth, not efficiency. But the market always capitulates to value (over time). When that happens, all those overlay hires get axed. Both companies have since laid off 10K+. AI was the scapegoat. But really, a lot of the knowledge worker jobs it "replaces" weren't providing real value anyway.
hn_throwaway_99 · 14h ago
This is so true. We had an (admittedly derogatory) term we used during the rise in interest rates: "zero interest rate product managers". Don't get me wrong, I think great product managers are worth their weight in gold, but I encountered so many PMs during the ZIRP era who were essentially just Jira-updaters and meeting-schedulers. The vast majority of folks I see who were in tech and are having trouble getting hired now were in those "adjacent" roles - think agile coaches, TPMs, etc. (I have a ton of sympathy for these folks - many of them worked hard for years and built their skills - but these roles were always somewhat "optional").

I'd also highlight that beyond over-hiring being responsible for the downturn in tech employment, I think offshoring is way more responsible for the reduction in tech than AI when it comes to US jobs. Video conferencing tech didn't get really good and ubiquitous (especially for folks working from home) until the late teens, and since then I've seen an explosion of offshore contractors. With so many folks working remotely anyway, what does it matter if your coworker is in the same city or a different continent, as long as there is at least some daily time overlap (which is also why I've seen a ton of offshoring to Latin America and Europe over places like India).

catigula · 14h ago
Off-shoring is pretty big right now but what shocks me is that when I walk around my company campus I see obscene amounts of people visibly and culturally from, mostly, India and China. The idea that literally massive amounts of this workforce couldn't possibly be filled by domestic grads is pretty hard to engage with. These are low level business and accounting analyst positions.

Both sides of the aisle retreated from domestic labor protection for their own different reasons so the US labor force got clobbered.

ajmurmann · 12h ago
I am VERY pro-immigration. I do have concerns about the H1B program though. IMO it's not great for either immigrant or non-immigrant workers, because it creates a class of workers for whom it's harder to change employers, which weakens their negotiating position. If this is the case for enough of the workforce, it artificially depresses wages for everyone. I want to see a reform that makes it much easier for H1B workers to change employers.
Spooky23 · 11h ago
In context of tech, H1B is great for the money people in the US and India. It suppresses wages in both countries and is a powerful plum for employee “loyalty”. There’s a whole industry of companies stoking the pipeline of cheap labor and corrupting the hiring process.

In big dollar markets, the program is used more for special skills. But when a big bank or government contractor needs marginally skilled people onshore, they open an office in Nowhere, Arizona, and have a hard time finding J2EE developers. So some company from New Jersey will appear and provide a steady stream of workers making $25/hr.

The calculus is that more H1B = less offshore.

The smart move would be to just let in skilled workers from India, China, etc. with a visa that doesn't tie them to an employer. That would end the abusive labor practices and probably reduce the number of lower-end workers, or the incentive to deny entry-level employment to US nationals.

senderista · 10h ago
H1-B also makes CS masters programs a cash cow for US schools.
rightbyte · 11h ago
How does H1B suppress wages in India?
Aeolun · 11h ago
All those people skilled enough to get hired in the US (for massive increase in wages) don’t try to get similar positions in India, thus, nobody has to compete to pay for them.
antithesizer · 11h ago
Because it suppresses wages in the US, so Indian employers do not need to offer as much compensation to keep local workers who are considering emigrating.
catigula · 12h ago
I want to use you as a bit of a sounding board, so don't take this as negative feedback.

The problem is that the left, which was historically pro-labor, abdicated this position for racial reasons, and the right was always about maximizing the economic zone.

hn_throwaway_99 · 2h ago
I saw a report recently about the political left in Denmark, which is basically one of the only progressive movements that understood what it takes to maintain support, and hence Denmark has had much less of a rise in support for far-right parties than other countries. Here's an article: https://www.nytimes.com/2025/02/24/magazine/denmark-immigrat....

Basically, progressives in Denmark have argued for very strict immigration rules, the essential argument being that Denmark has an expensive social welfare state, and to get the populace to support the high taxes needed to pay for this, you can't just let anyone in who shows up on your doorstep.

The American left could learn a ton of lessons from this. I may loathe Greg Abbott for lots of reasons, but I largely support what he did bussing migrants to NYC and other liberal cities. Many people in these cities wanted to bask in the feelings of moral superiority of being "sanctuary cities", but public sentiment changed drastically when they actually had to start bearing a large portion of the cost of a flood of migrants.

SpicyLemonZest · 12h ago
Employment-based immigration policy just isn't controversial outside of very specific bubbles. Everyone who's considered the problem seriously, left and right, realizes that the H1B system is bad and a points-based system is the way to go, which is why it's been part of every immigration reform proposal for over a decade with essentially no controversy. If this were the only aspect of immigration policy, or if people felt it was important enough to pull it out of broad immigration reform, it would pass in a heartbeat.
Aeolun · 11h ago
Japan will let everyone that can get a job in (and is willing to do the immigration process for them). This seems like a perfectly fair way to do things. If you don’t have a job, and can’t find a new one in 3-6 months, you have to leave again.

Don’t understand why other countries make it harder.

throwaway2037 · 4h ago
Can you give more details here? I don't fully understand your post.
tjpnz · 2h ago
Japan (the country) doesn't do this. You still need a company to sponsor you and not every company can.
jajko · 1h ago
Switzerland is the same. By far the best-implemented immigration policies in the whole of Europe; if only Germany's and France's egos would step down a notch, acknowledge their mistakes, and take inspiration from a clearly more successful neighbour. Switzerland has 3x more immigration than the next country and it just works, long term.

The EU would flourish economically and there would be no room for the ultra-conservative right to gain any real foothold (which is 95% just the failed immigration topic, just like Brexit was).

Alas, we are where we are; they slowly backpedal but it's too little too late, as usual. I blame Merkel for half of the EU's woes. She really was a horrible leader of an otherwise very powerful nation, made much weaker and less resilient by her flawed policies and lack of grokking where the world is heading.

Btw she still acknowledges nothing and keeps thinking how great she was. Also a nuclear physicist who turned off all existing nuclear plants too early, so Germany has to import massive amounts of electricity from coal-burning plants. You can't make it up.

catigula · 12h ago
My understanding is that Bernie Sanders used to say that mass immigration was a "Koch brothers thing" and his tune on this has since changed to align with "progressive" ideas, but I might be mistaken.

I already know that the right-wing supports h1bs, Trump himself said so.

gosub100 · 11h ago
He recently addressed Congress and brought up the abuse of H1B, such as for entry-level accounting positions. The program was meant to meet shortages in highly skilled positions. Now it's being abused to cheat new grads out of jobs and depress wages.
SpicyLemonZest · 12h ago
Even in his most immigration-skeptical era (https://www.computerworld.com/article/1367869/bernie-sanders...), Sanders always acknowledged that some companies genuinely need a skilled immigration program to hire the global best and brightest. And note his line about "offshore outsourcing companies"; the issue's become even less controversial now that the balance of H1B sponsors is shifting towards large American tech companies who genuinely pay market rate.
catigula · 12h ago
I don't really think that is what's being discussed here.

Even literal Nazis were exempted from immigration controls on the basis of extreme merit.

bradlys · 11h ago
What if tech roles at big tech firms actually paid more, like the same prestigious firms in finance in NYC?

People in tech are so quick to shoot themselves in the foot.

throwaway2037 · 4h ago
Regarding the first sentence, it is already true for software developers. You can (and probably will) make more money at FAANG compared to global ibanks in NYC.
SpicyLemonZest · 10h ago
Not sure what you're aiming to get out of this comparison. Software engineers make quite a bit more at prestigious tech companies than they do at prestigious finance firms in NYC, and prestigious finance firms in NYC extensively recruit people from outside the US. Even if you want to compare engineers in tech to bankers in finance, I'm not sure Goldman is paying all that much better than OpenAI these days.
throwaway2037 · 4h ago
Why do people think Goldman pays software developers so well? They do not. They pay whatever is required compared to their competition (mostly other ibanks). There is a tiny sliver (less than 5%) of the dev staff who work in front office and are called "Strats". (Some other banks have "Strats" [Morgan?] or put you into a quant team to pay you more [JPM/UBS/etc].) They make about 25-50% more money compared to vanilla software devs in the IT division.
fijiaarone · 10h ago
The job of the highly paid people in finance at prestigious firms is to look nice in an expensive suit. Know many people in tech with those qualifications?
bradlys · 7h ago
I'd be good at it but I won't get hired cause I didn't go to the right boarding school.

Tech has its barriers too. Most people I've met in tech come from relatively rich families. (Families where spending $70k+/yr on college is not a major concern for multiple kids - that's not normal middle class at all even for the US)

DonHopkins · 9h ago
>Trump himself said so

TACO Trump himself said he'd reveal his health care plan in two weeks, many many years ago, many many times. But then he chickened out again and again and again and again and again. So what the buk buk buk are you talking about?

bdangubic · 12h ago
amen! that will never happen though, nothing ever happens here that helps the workers and whatever rights we have now are slowly dwindling (immigrants or otherwise…)
andrekandre · 8h ago

  > nothing ever happens here that helps the workers and whatever rights we have now are slowly dwindling
its almost as if we need a 'workers party' or something... though i'd imagine first-past-the-post in the u.s makes that difficult.
kstrauser · 12h ago
I agree with all of that. I've seen employers treat workers with H1B visas as slaves, basically. Local employees had a pretty decent work-life balance, but H1B employees got calls at 8PM on a Friday night to add a feature. And why not? What were they going to do, quit (and have, what is it, something like 48 hours to get out of the country)?

I felt enormous sympathy for my coworkers here with that visa. Their lives sucked because there was little downside for sociopathic managers to make them suck.

Most frustrating was when they were doing the same kind of work I was doing, like writing Python web services and whatnot. We absolutely could hire local employees to do those things. They weren't building quantum computers or something. Crappy employers gamed the system to get below-market-rate-salary employees and work them like rented mules. It was infuriating.

lokar · 11h ago
It sucks that people are treated that way.

While working at Google I worked with many many amazing H1B (and other kinds) visa holders. I did 3 interviews a week, sat on hiring committees (reading 10-15 packets a week) and had a pretty good gauge of what we could find.

There was just no way I could see that we could replace these people with Americans. And they got paid top dollar and had the same wlb as everyone else (you could not generally tell what someone’s status was).

kstrauser · 10h ago
I fully, completely support the idea of visa programs running like that. If you want to pay top dollar for someone with unique skills to move here and help build our economy, I am fully behind this.

But wanna use it as a way to undercut American jobs with 80-hour-a-week laborers, as I've personally witnessed? Nah.

My criticisms against the H1B program are completely against the companies who abuse it. By all means, please do use it to bring in world-class scientists, researchers, and engineers!

guestbest · 9h ago
If the foreign candidates were so much better than locally born candidates, as you explained, why not just open a campus in that country and thus save the best employees from having to uproot from their native culture?
lokar · 8h ago
Good question. In many cases they did. The Zurich office has people from all over Europe.

But, for existing teams they wanted (reasonably) to avoid splitting between locations. So you need someone local.

disgruntledphd2 · 3h ago
I think the real reason for hiring locally is both that communication works better, and that the higher ups don't want to give the impression that their jobs could also be outsourced.
yobbo · 13h ago
> The idea that literally massive amounts of this workforce couldn't possibly be filled by domestic grads

One theory is that the benefit they might be providing over domestic "grads" is lack of prerequisites for promotion above certain levels (language, cultural fit, and so on). For managers, this means the prestige of increased headcount without the various "burdens" of managing "careerists". For example, less plausible competition for career-ladder jobs which can then be reserved for favoured individuals. Just a theory.

boredatoms · 13h ago
I think that would backfire as the intrinsic culture of the company changes as it absorbs more people. Verticals would form from new hires who did manage to get promoted
catigula · 12h ago
It's also not correct to view people as atomized individuals. People band together on shared culture and oftentimes ethnicity.
bradlys · 11h ago
Which is exactly what has happened. Anyone in the industry for 15 years can easily see this.
A4ET8a8uTh0_v2 · 12h ago
I will admit that this is the most plausible explanation of this phenomenon that explains the benefit to managers I have read on this issue so far.
catigula · 12h ago
Putting aside economic incentives, which the wealthy were eager to reap, the vast majority of the technical labor force in this country came and still comes from (outside of SF) a specific race and we have huge incentives that literally everyone reading this has brushed up against, whether in support or against, to alter that racial makeup.

Obviously the only real solution to creating an artificial labor shortage is looking externally from the existing labor force. Simply randomly hiring underserved groups didn't really make sense because they weren't participants.

Where I work, we have two main goals when I'm involved in the technical hiring process: hire the cheapest labor and try to increase diversity. I'm not necessarily against either, but those are our goals.

throwaway2037 · 4h ago
Careerists: What does this term mean?
lostlogin · 12h ago
> The idea that literally massive amounts of this workforce couldn't possibly be filled by domestic grads is pretty hard to engage with.

I hear this argument where I live for various reasons, but surely it only ever comes down to wages and/or conditions?

If the company paid a competitive rate (ie higher), locals would apply. Surely blaming a lack of local interest is rarely going to be due to anything other than pay or conditions?

catigula · 12h ago
The company having access to the global labor force is the problem we're explicitly discussing. This isn't seen as something desirable by US workers.
lanstin · 11h ago
I was born in NC, and I mostly have experienced the large amount of immigration as a positive. Most of the people I grew up with were virulently anti-intellectual, mocking math and science learning, and most of them have gone on to be realtors and business folks, bankers even. All the people I've met from China or South Asia (the two demographics I work most closely with) value learning and science and math - not as some "let's have STEM summer camps" gesture, but when they meet some new 8 year old they will ask them to solve some math problems (like precisely 1 of my kids' dozens of relatives).

I enjoy meeting the very smart people from all sorts of backgrounds - they share the values of education and hard work that my parents emphasized, and they have an appreciation for what we enjoy as software engineers; US born folks tend to have a bit of entitlement, and want success without hard work.

I interview a fair number of people, and truly first rate minds are a limited resource - there's just so many in each city (and not everyone will want to or be able to move for a career). Even with "off-shoring" one finds after hiring in a given city for a while, it gets harder, and the efficient thing to do is to open a branch in a new city.

I don't know, perhaps the realtors from my class get more money than many scientists or engineers, and certainly more than my peers in India (whose salaries have gone from 10% of mine to about 40% of mine in the past decade or two), but the point is the real love of solving novel problems - in an industry where success leads to many novel problems.

Hard work, interesting problems, and building things that actual people use - these are the core value prop for software engineering as a career; the money is pretty new and not the core; finding people who share that perspective is priceless. Enough money to provide a good start to your children and help your family is good, but never the heart of the matter.

spoaceman7777 · 2h ago
It's also worth noting that it's almost entirely native born Americans that are pushing back against nepotism. Extreme nepotism is still the norm (an expectation even) in most South and East Asian cultures. And it's quite readily acknowledged if you speak to newer hires who haven't realized yet that it is best kept quiet.

It's a hard truth for many Americans to swallow, but it is the truth nonetheless.

Not to say there isn't an incredible amount of merit... but the historical impact of rampant nepotism in the US is widely acknowledged, and this newer manifestation should be acknowledged just the same.

therealpygon · 12h ago
My opinion is that off-shore teams are also going to be some of the jobs more easily replaced, because many of these are highly standardized with instructions due to the turnover they have. I wouldn’t be surprised if these outsourcing companies are already working toward that end. They are definitely automating and/or able to collect significant training data from the various tools they require their employees to use for customers.
gedy · 14h ago
I was working at a SoCal company a couple years ago (where I’m from), and we had a lot of Chinese and Indian folks. I remember cracking up when one of the Indian fellows pulled me aside and asked me where I was from, because I sounded so different with my accent and lingo. He thought I was from some small European country, lol.
catigula · 14h ago
Just to note, interpersonally I find pretty much any group to be great on average, but as a participant in US labor who is sympathetic to other US laborers, this is clearly not something I can support.
hluska · 11h ago
You can’t support having a good enough relationship with coworkers from outside of your country that you can relate cheerful anecdotes about them?
tcdent · 13h ago
The language I use being from southern California has, on more than one occasion, sparked conversation about it.

Sorry, dude, it's like, all I know.

jayd16 · 12h ago
I mean, aren't 3 out of 8 humans from India or China? If the company is big enough to appeal to a global applicant pool, it's a bit expected.
sokoloff · 12h ago
It’s presumably (from context) a company campus in the US that they’re talking about. I wouldn’t expect 3 of 8 people legally authorized to work in the US to be Chinese or Indian combined.

Other than a few international visitors, I’d expect the makeup to look like the domestic tech worker demographics rather than like the global population demographics.

apex3stoker · 4h ago
I think most software companies hire from computer science graduates of US colleges. It’s likely that international students make up a large percentage of these graduates.
bradlys · 11h ago
Also, anyone who has worked in these companies also know it’s much larger than 3 out of 8… comical to act like it’s only 3/8.
senderista · 10h ago
I estimate AWS engineering is maybe 80% Indian and another 10% Chinese. Less at higher levels though.
bradlys · 7h ago
It always blows my mind that 75% of H1B admittance is Indian. Then you live in SFBA for 10 years and it's not really a surprise anymore.
underlipton · 12h ago
We all get 5 conspiracy theories before we advance from "understandably suspicious, given the complexity of the modern world" to "reliable tinfoil purchasers", and one of mine is that the prevalence of Indian execs and, to a lesser extent, Indian and Chinese workers in tech is a backdoor concession to countries who could open a demographic can of whoop-ass on us if they really wanted to. We let them bleed off the ambitious intellectuals who could become a political issue for their elite, and ours get convenient scapegoats for why businesses can't hire, train, and pay domestic workers well. As far as top men are concerned, it's a good deal.

Nadella ascending to the leadership of Micro"I Can't Believe It's Not Considered A State-Sponsored Defense Corp"soft is what got my mildly xenophobic (sorry) gears turning.

hluska · 11h ago
Edited:

Actually disregard, this isn’t worth it, but I don’t grant any freebies.

underlipton · 10h ago
Well, now I'm curious.
renewiltord · 10h ago
Any immigrants should read these threads carefully. If you're pro-union you're going to get screwed by your fellow man. Don't empower a union unless you want to be kicked out of the country.

According to these people, politicians like having you here and labour doesn't. If that's true, do you want to empower labour to kick you out?

VonTum · 8h ago
What a weird crabs-in-a-bucket argument against unions. "Don't empower yourself and the rest of your colleagues because they might get powerful enough to kick you out"?

The whole reason H1Bs were invented is to disempower the existing workforce. Not reaching for a (long overdue) tool of power for tech workers is playing right into their hand.

renewiltord · 4h ago
The colleagues are all screaming to kick you out. Someone would have to be mentally differently abled to want to lend their voice to the chorus of people asking to kick them out.

You can call it what you want to legitimize it but these people want immigrants out and empowering them means immigrants get kicked out.

If you want to get kicked out as an immigrant definitely support them.

catigula · 8h ago
The funny thing is that you're not wrong and this is yet another feather in the cap of "foreign labor are literal scabs" argument.
renewiltord · 4h ago
The history of unions and the past of the AFL-CIO is filled with successful lobbying to prevent immigrants from becoming American. They’re not going to stop suddenly today.

Knowing one’s enemy is key to fighting them.

boogieknite · 11h ago
first job out of college i was one of these pms. luckily i figured it out quickly and would spend maybe 2 hours a day working, 6 hours a day teaching myself to program. i cant believe that job existed and they gave it to me. one of my teammates was moved to HR and he was distraught over how he actually had work to do
adamtaylor_13 · 6h ago
I’m realizing that 100% of all product managers I have ever worked with were just ZIRP-PMs.

I have never once worked with a product manager who I could describe as “worth their weight in gold”.

Not saying they don’t exist, but they’re probably even rarer than you think.

icedchai · 12h ago
I worked at a small company with more PMs than developers. It was incredible how much bull it created.
mlsu · 14h ago
I suspect that these "AI layoffs" are really "interest rate" layoffs in disguise.

Software was truly truly insane for a bit there. Straight out of college, no-name CS degree, making $120k, $150k (back when $120k really meant $120k)? The music had to stop on that one.

spamizbad · 14h ago
Yeah, my spiciest take is that Jr. Dev salaries really started getting silly during the 2nd half of the 2010s. It was ultimately supply (too little) and demand (too much) pushing them upward, but it was a huge signal we were in a bubble.
LPisGood · 12h ago
As someone who entered the workforce just after this, I feel like I missed the peak. A ton of those people got boatloads of money, great stock options, and many years of experience that they can continue to leverage for excellent positions.
trade2play · 11h ago
I joined in 2018.

Honestly it was 10 years too late. The big innovations of the 2010 era were maturing. I’ve spent my career maintaining and tweaking those, which does next to zero for your career development. It’s boring and bloated. On the bright side I’ve made a lot of money and have no issues getting jobs so far.

Aeolun · 10h ago
I think my career started in 2008? That was a great time to start for the purpose of learning, but a terrible one for compensation. Basically nobody knew what they were doing, and software wasn’t the ticket to free money that it became later yet.
dustingetz · 1h ago
data engineering was free money for nothing at all circa 2014, they got paid about 1.5x a fullstack application developer for .5x the work because frontend/ui work was considered soft, unworthy
lurking_swe · 10h ago
there’s always interesting work out there. It just doesn’t always align with ethical values, good salary, or work life balance. There’s always a trade off.

For example think of space x, Waymo, parts of US national defense, and the sciences (cancer research, climate science - analyzing satellite images, etc). They are doing novel work that’s certainly not boring!

I think you’re probably referring to excitement and cutting edge in consumer products? I agree that has been stale for a while.

idkwhattocallme · 12h ago
Don't worry, there is always another bubble on the horizon
nyarlathotep_ · 14h ago
The irony now is that 120k is basically minimum wage for major metros (and in most cases that excludes home ownership).

Of course, that growth in wages in this sector was a contributing factor to home/rental price increases as the "market" could bear higher prices.

rekenaut · 12h ago
I feel that saying "120k is basically minimum wage for major metros" is absurd. As of 2022, there are only three metro areas in the US that have a per capita income greater than $120,000 [1] (Bay Area and Southwest Connecticut). Anywhere else in the US, 120k is doing pretty well for yourself, compared to the rest of the population. The average American working full time earns $60k [2]. I'm sure it's not a comfortable wage in some places, but "basically minimum wage" just seems ignorant.

[1] https://en.wikipedia.org/wiki/List_of_United_States_metropol...

[2] https://en.wikipedia.org/wiki/Personal_income_in_the_United_...

lamename · 11h ago
I disagree. Your data doesn't make the grandparent's assertion false. Cost of living != per capita or median income. Factoring in sensible retirement, expensive housing, inflation, etc., I think the $120k figure may not be perfect, but it is close enough to reality.
BlueTemplar · 11h ago
Since when does "minimum wage" mean "sensible retirement"?

More like it means ending up with government-provided bare minimum handouts to not have you starve (assuming you somehow manage to stay on minimum wage all your life).

lamename · 10h ago
We agree, minimum wage doesn't mean that. And in a large metro area, that's why $120k is closer to min wage than to a good standard of living and building retirement.
nyarlathotep_ · 10h ago
Correct, I mean in the sense of "living a standard of life that my parents and friends parents (all of very, very modest means) had 20 years ago when I was a teenager."

I mean a real wage associated with standards of living that one took for granted as "normal" when I was young.

impossiblefork · 4h ago
It actually is basically minimum wage for major metros.

If I took a job for ~100k in Washington, I'd live worse than I did as a PhD student in Sweden. It would basically suck. I'm not sure ~120k would make things that different.

bravesoul2 · 12h ago
Yeah, 120k is the maximum I have earned over 20 years in the industry. I started off at circa 40k; maybe that's 70k adjusted for inflation. Not in the US.
lostlogin · 12h ago
It’s always going to be difficult to compare countries. Things like healthcare, housing, childcare, schooling, taxes and literally every single thing are going to differ.
bravesoul2 · 12h ago
The arbitrage is when you are young and healthy get that US salary and save then retreat home in your 40s and 50s. Stay healthy of course.
adaptbrian · 12h ago
Lots of tech folks get burnt out without knowing it. If you're tired all the time drastically alter your diet, it could change your life for the better.
foobiekr · 11h ago
To what?
alephnerd · 13h ago
CoL in London or Dublin is comparable to much of the US, but new grad salaries are in the $30-50k range.

The issue is salary expectations in the US are much higher than those in much of Western Europe despite having similar CoL.

And $120k for a new grad is only a tech specific thing. Even new grad management consultants earn $80-100k base, and lower for other non-software roles and industries.

ponector · 12h ago
I've seen recently an open position for senior dev with 60k salary and hybrid 3 days per week in London. Insane!
alephnerd · 10h ago
Yep. And costs are truly insane in Greater London. Bay Area level housing prices and Boston level goods prices, but Mississippi or Alabama level salaries.

But that's my point - salaries are factored based on labor market demands and comparative performance of your macroeconomy (UK high finance and law salaries are comparable with the US), not CoL.

lurk2 · 5h ago
> Boston level goods prices

I’ve never been to Boston. Why are the prices high there?

FirmwareBurner · 13h ago
>but new grad salaries are in the $30-50k range

But in the UK and Ireland they get free healthcare, paid vacation, sick leave, and labor protections, no?

__turbobrew__ · 10h ago
> free healthcare

I pay over 40% effective tax rate. Healthcare is far from free.

alephnerd · 13h ago
The labor protections are basically ignored (you will be expected to work off the clock hours in any white collar role), and the free healthcare portion gets paid out of employer's pockets via taxes so it comes out the same as a $70-80k base (and associated taxes) would in much of the US.

There's a reason you don't see new grad hiring in France (where they actually try to enforce work hours), and they have a subsequently high youth unemployment rate.

Though even these new grad roles are at risk to move to CEE, where their administrations are giving massive tax holidays on the tune of $10-20k per employee if you invest enough.

And the skills gap I mentioned about CS in the US exists in Western Europe as well. CEE, Israel, and India are the only large tech hubs that still treat CS as an engineering discipline instead of as only a form of applied math.

lazyasciiart · 4h ago
> The labor protections are basically ignored (you will be expected to work off the clock hours in any white collar role),

I happen to have a sibling in consulting who was seconded from London to New York for a year, doing the same work for the same company, and she found the work hours in NY to be ludicrously long (and not for a significant productivity gain: more required time-at-desk). So there are varying levels of "expected to work off the clock hours".

0xpgm · 7h ago
What is the difference between treating CS as an engineering discipline vs a branch of applied math?
kilpikaarna · 6h ago
(According to this guy apparently) low level vs algorithms focus. CE or CS basically.
rcpt · 12h ago
Maybe the EU is different but in the US there's no software engineering union. Our wages are purely what the market dictates.

Think they're too high? You're free to start a company and pay less.

catigula · 14h ago
That really only happened in HCOL areas.
bravesoul2 · 12h ago
HCOL wasn't the driver though. It was the abundance of investment and the desire to hire. If the titans could collude to pay engineers half as much, they would. They tried.
xp84 · 14h ago
Sure, but there was a massive concentration of such people in those areas.
bachmeier · 20h ago
> I mean, we hired someone on my team to attend cross functional meetings because our calendars were literally too full to attend.

Some managers read Dilbert and think it's intended as advice.

trhway · 12h ago
AI has been also consuming Dilbert as part of its training...
DonHopkins · 8h ago
Worse yet, AI has been consuming Scott Adams quotes as part of its training...

"The reality is that women are treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It’s just easier this way for everyone. You don’t argue with a four-year old about why he shouldn’t eat candy for dinner. You don’t punch a mentally handicapped guy even if he punches you first. And you don’t argue when a women tells you she’s only making 80 cents to your dollar. It’s the path of least resistance. You save your energy for more important battles." -Scott Adams

"Women define themselves by their relationships and men define themselves by whom they are helping. Women believe value is created by sacrifice. If you are willing to give up your favorite activities to be with her, she will trust you. If being with her is too easy for you, she will not trust you." -Scott Adams

"Nearly half of all Blacks are not OK with White people. That’s a hate group." -Scott Adams

"Based on the current way things are going, the best advice I would give to White people is to get the hell away from Black people. Just get the fuck away. Wherever you have to go, just get away. Because there’s no fixing this. This can’t be fixed." -Scott Adams

"I’m going to back off from being helpful to Black Americas because it doesn’t seem like it pays off. ... The only outcome is that I get called a racist." -Scott Adams

icedchai · 19h ago
I've worked at smaller companies where half the people in the meetings were just there because they had nothing else to do. Lots of "I'm a fly on the wall" and "I'll be a note taker" types. Most of them contributed nothing.
xp84 · 13h ago
My friend's company (he was VP of Software & IT at a non-tech company) had a habit of meetings with no particular agenda and no decisions that needed making. Just meeting because it was on the calendar, discussing any random thing someone wanted to blab about. Not how my friend ran his team but that was how the rest did.

Then they had some disappointing results due to their bad decision-making elsewhere in the company, and they turned to my friend and said "Let's lay off some of your guys."

osigurdson · 9h ago
It is almost like once a company gets rolling, there is sufficient momentum to keep it going even if many layers aren't doing very much. The company becomes a kind of meta-economic zone where nothing really matters. Politics / fights emerge between departments / layers but has nothing to do with making a better product / service. This can go on for decades if the moat is large enough.
Nasrudith · 6h ago
The first mistake is thinking that contribution must be in the form of output instead of ingestion. Of course meetings aren't often the most efficient form of doing so. More being forced to listen (at least officially) so there isn't an excuse.
JSR_FDED · 11h ago
I don’t doubt there’s a lot of knowledge workers who aren’t adding value.

I’m worried about the shrinking number of opportunities for juniors.

hn_throwaway_99 · 1h ago
I agree with this, but I still think that offshoring is much more responsible for this than AI.

I have definitely seen real world examples where adding junior hires at ~$100k+ is being completely forgone when you can get equivalent output from someone making $40k offshore.

phendrenad2 · 20h ago
To the contrary - they were providing value to the VP who benefitted from inflated headcount. That's "real value", it's just a rogue agent is misaligned with the company's goals.

And AI cannot provide that kind of value. Will a VP in charge of 100 AI agents be respected as much as a VP in charge of 100 employees?

At the end of the day, we're all just monkeys throwing bones in the air in front of a monolith we constructed. But we're not going to stop throwing bones in the air!

idkwhattocallme · 19h ago
True! I golfed with the president of the division on a Friday (during work) and we got to the root of this. Companies would rather burn money on headcount (counted as R&D) than show profits and pay the government taxes. When you have 70%+ margin on your software, you have money to burn. Dividends back to shareholders were not rewarded during ZIRP.

On VPs being respected: I found at the companies I worked at that VPs and their directs were like nobles in a feudal kingdom, constantly quibbling/battling for territory. There were alliances with others and full-on takeouts at points. One VP described it as Game of Thrones. Not sure how this all changes when your kingdom is a bunch of AI agents that presumably anyone can operate.
myko · 18h ago
Not so fun in real life but I kind of like this as a video game concept
lotsofpulp · 12h ago
> Companies would rather burn money on headcount (counted as R&D) than show profits and pay the govt taxes

The data does not support this. The businesses with the highest market caps are the ones with the highest earnings.

https://companiesmarketcap.com/

Sort by # of employees and you get a list of companies with lower market caps.

trade2play · 10h ago
Google/Facebooks earnings are so high they can afford to be wildly wasteful with headcount and still be market leaders
Ekaros · 2h ago
Those two are perfect examples of burning insane amounts of money and still showing profits beyond that... the whole metaverse investment, and all the products that Google has abandoned, even refunding all the payments, like with Stadia...
versteegen · 10h ago
If you sort by number of employees you get companies where those employees aren't in R&D divisions.
lotsofpulp · 10h ago
Their comment reads to me as if businesses hire employees (regardless of the work they do, since we are discussing employees that don't do anything) because investors consider employees as R&D (even useless ones).

Either way, there is no data I have seen to suggest market cap correlates with number of employees. The strongest correlation I see is to net income (aka profit), and after that would be growing revenues and/or market share.

BriggyDwiggs42 · 19h ago
We really oughta work on setting up systems that don’t waste time on things like this. Might be hard, but probably would be worth the effort.
PeterStuer · 19h ago
"Hiring someone gave my VP more headcount and therefore more clout"

Which is the sole reason automation will not make most people obsolete until the VP level themselves are automated.

dlivingston · 14h ago
No, not if the metric by which VPs get clout changes.
monkeyelite · 12h ago
That metric is evaluated deep in the human psyche.
thfuran · 13h ago
The more cloud spend the better. Take 10% of it as a bonus?
0xpgm · 7h ago
It's about to change to doing more with less headcount and higher AI spend
Nasrudith · 6h ago
Automation is just one form of "face a sufficiently competitive marketplace such that the company can no longer tolerate the dead-weight loss of their egos".
__turbobrew__ · 10h ago
Turns out 50% of white collar jobs are just daycare for adults.
federiconafria · 4h ago
"my VP more headcount and therefore more clout"

This had me thinking, how are they going to get "clout", by comparing AI spending?

paulcole · 14h ago
Just curious, did you put yourself in the superfluous category either time?
idkwhattocallme · 12h ago
Ultimately (and sadly) yes. While I never habitually or intentionally attended meetings to just look busy, I did work on something I knew had a long shot of creating value for the business. I worked on 0-1 products that if the company was more disciplined would not (or should not) have attempted. I left both on my own accord seeing the writing on the wall.
dehrmann · 10h ago
> I worked on 0-1 products that if the company was more disciplined would not (or should not) have attempted.

You said you were at large companies, so this is a hard call to make. A lot of large companies work on lots of small products knowing they probably won't work, but one of them might, so it's still worth it to try. It's essentially the VC model.

ozim · 12h ago
Bad part is all those guys attending meetings start feeling important. They start feeling like they are doing the job.

I’ve seen those guys it is painful to watch.

lukev · 12h ago
Whenever I think about AI and labor, I can't help thinking about David Graeber's [Bullshit Jobs](https://en.wikipedia.org/wiki/Bullshit_Jobs).

And there's multiple confounding factors at play.

Yes, lots of jobs are bullshit, so maybe AI is a plausible excuse to downsize and gain efficiency.

But also, the dynamic that causes the existence of bullshit jobs hasn't gone away. In fact, assuming AI does actually provide meaningful automation or productivity improvement, it might well be the case that the ratio of bullshit jobs increases.

alvah · 11h ago
Exactly. For as long as I can remember, in any organisation of reasonable size I have worked in, you could get rid of the ~50% of the headcount who aren't doing anything productive without any noticeable adverse effects (on the business at least; obviously the effects on the individuals would be somewhat adverse). This being the case, there are obviously many factors other than pure efficiency keeping people employed, so why would an AI revolution on its own create some kind of massive Schumpeterian shockwave?
ryandrake · 10h ago
People keep tossing around this 50% figure like it's a fact, but do you really think these companies just have half their staff just not doing anything? It just seems absurd, and I honestly don't believe it.

Everywhere I've ever worked, we had 3-4X more work to do than staff to do it. It was always a brutal prioritization problem, and a lot of good projects just didn't get done because they ended up below the cut line, and we just didn't have enough people to do them.

I don't know where all these companies are that have half their staff "not doing anything productive" but I've never worked at one.

What's more likely? 1. Companies are (for reasons unknown) hiring all these people and not having them do anything useful, or 2. These people actually do useful things, but HN commenters don't understand those jobs and simply conclude they're doing nothing?

trade2play · 10h ago
All of the big software companies are like the parent describes, in most of their divisions.

Managers always want more headcount. Bigger teams. Bigger scope. Promotions. Executives have similar incentives or don’t care. That’s the reason why they’re bloated.

alvah · 9h ago
Have you heard of Twitter? 80-90% reduction in numbers, visible effects to the user (resulting from the headcount cuts, not the politics of the owner)? Pretty much zero.
hnaccount_rng · 1h ago
That’s a difficult example. I don’t think anyone would reasonably expect the engineering artifact twitter.com to break. But the business artifact did break. At least to a reasonable degree. The Ad revenue is still down (both business news and the ads I’m experiencing are from less well resourced brands). And yes, that has to do with “answering emails with poop emojis” and “laying off content checkers”
daxfohl · 11h ago
Agree, but two questions:

First, is AI really a better scapegoat? "Reducing headcount due to end of ZIRP" maybe doesn't sound great, but "replacing employees with AI" sounds a whole lot worse from a PR perspective (to me anyway).

Second, are companies actually using AI as the scapegoat? I haven't followed it too closely, but I could imagine that layoffs don't say anything about AI at all, and it's mostly media and FUD inventing the correlation.

ledauphin · 10h ago
the one does actually sound worse because... it's actually worse. it clarifies that the companies themselves were playing games with people's livelihoods because of the potential for profit.

whereas "AI" is intuitively an external force; it's much harder to assign blame to company leadership.

daxfohl · 10h ago
I'd read the first as adjusting to market demand, not playing with people's lives. If if were construed as playing with lives, that could apply to basically any investment.
leflambeur · 7h ago
isn't the scapegoat he or she who gets sacrificed? I think engineers are that
827a · 11h ago
Half of everyone at most large companies could be retired with no significant impact to the company's ability to generate revenue. The problem has always been figuring out which half.
ivape · 7h ago
I've said this many times, that the abundance and wealth of the tech industry basically provided vast amounts of Universal Basic Income to a variety of roles (all of agile is one example). We're at a critical moment where we actually have to look at cost-cutting on this UBI.
matthest · 11h ago
Does anyone else think the fact that companies hire superfluous employees (i.e. bullshit jobs) is actually fantastic?

Because they don't have to do that. They could just operate at max efficiency all the time.

Instead, they spread the wealth a bit by having bullshit jobs, even if the existence of these jobs is dependent on the market cycle.

nyarlathotep_ · 10h ago
> Does anyone else think the fact that companies hire superfluous employees (i.e. bullshit jobs) is actually fantastic?

I do.

It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.

I'm a person that would be hypothetically supportive of something like DOGE cuts, but I'd rather have people earning a living even with Soviet-style make work jobs than unemployed. I don't desire to live in a cutthroat "competitive" society where only "talent" can live a dignified life. I don't know if that's "wealth distribution" or socialism or whatever; I don't really care, nor make claim it's some airtight political philosophy.

andrekandre · 5h ago

  > It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.
it's just my intuition, but talking to many people around me, I get the feeling that this is why people on both the "left" and the "right" are in a lot of ways (for lack of a better word) irate at the system as a whole... if that's true, I doubt AI will improve the situation for either...
leflambeur · 7h ago
tech bros think not only that that system is good, but that they'd be the winners
disambiguation · 18h ago
> But really, a lot of the knowledge worker jobs it "replaces" weren't providing real value anyway.

I think quotes around "real value" would be appropriate as well. Consider all the great engineering it took to create Netflix, valued at $500b - which achieves what SFTP does for free.

jsnider3 · 10h ago
Netflix's value comes from being convenient and compatible with the copyright system in a way sharing videos P2P definitely isn't.
disambiguation · 3h ago
I'm not advocating for p2p, but rather drawing attention to the word "value" and what it means to create it. For example, would netflix as a piece of software hold any value if the company were to suddenly lose all its copyrights and IP licenses? Whereas something like an operating system or excel has standalone utility, netflix is only as valuable as its IP. The software isn't designed to create value, but instead to fully utilize the value of a piece of property. It's an important distinction to keep in mind especially when designing such software. Now consider that in the streaming world there isn't just netflix, but prime, Hulu, HBO, etc. Etc.

The parent comment was complaining about certain employees' contributions to "real value" or lack thereof. My question is, how do you ascertain the value of work in this context, where the software isn't what's valuable but the IP is? And further, how do you justify working on a product that's already a solved problem and still refer to it as "creating 'real' value"?

lazyasciiart · 4h ago
And their increasingly restrictive usage policies are basically testing how important the 'convenient' piece is.
CKMo · 14h ago
There's definitely a big problem with entry-level jobs being replaced by AI. Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.

Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.

Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.

ChrisMarshallNY · 14h ago
This is where the horrific disloyalty of both companies and employees, comes to bite us in the ass.

The whole idea of interns, is as training positions. They are supposed to be a net negative.

The idea is that they will either remain at the company, after their internship, or move to another company, taking the priorities of their trainers, with them.

But nowadays, with corporate HR, actively doing everything they can to screw over their employees, and employees, being so transient, that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.

At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers, upon returning to Japan. It was well worth it.

neilv · 11h ago
I agree that interns are pretty much over in tech. Except maybe for an established company to offer as a semester/summer trial/goodwill period for students near graduation. You usually won't get work output worth the mentoring cost, but you might identify a great potential hire, and be on their shortlist.

Startups are less enlightened than that about "interns".

Literally today, in a startup job posting, to a top CS department, they're looking for "interns" to bring (not learn) hot experience developing AI agents, to this startup, for... $20/hour, and get called an intern.

It's also normal for these startup job posts to be looking for experienced professional-grade skills in things like React, Python, PG, Redis, etc., and still calling the person an intern, with a locally unlivable part-time wage.

Those startups should stop pretending they're teaching "interns" valuable job skills, admit that they desperately need cheap labor for their "ideas person" startup leadership, to do things they can't do, and cut the "intern" in as a founding engineer with meaningful equity. Or, if you can't afford to pay a livable and plausibly competitive startup wage, maybe they're technical cofounders.

geraneum · 13h ago
> horrific disloyalty of both companies and employees

There’s no such thing as loyalty in employer-employee relationships. There’s money, there’s work, and there’s [collective] leverage. We need to learn a thing or two from blue collars.

ChrisMarshallNY · 13h ago
> We need to learn a thing or two from blue collars.

A majority of my friends are blue-collar.

You might be surprised.

Unions are adversarial, but the relationships can still be quite warm.

I hear that German and Japanese unions are full-force stakeholders in their corporations, and the relationship is a lot more intricate.

It's like a marriage. There's always elements of control/power play, but the idea is to maximize the benefits.

It can be done. It has been done.

It's just kind of lost, in tech.

sabarn01 · 10h ago
I have been in union shops before working in tech. In some places they are fine; in others, it's where the worst employee on your team goes to make everyone else less effective.
FirmwareBurner · 13h ago
>It's just kind of lost, in tech.

Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.

You're dependent on a trained and licensed local showing up at your door, which gives him actual bargaining power, since he's only competing with the other locals to fix your issue and not with the entire planet in a race to the bottom.

Unionization only works in favor of the workers in the cases when labor needs to be done on-site (since the government enforces the rules of unions) and can't be easily moved over the internet to another jurisdiction where unions aren't a thing. See the US VFX industry as a brutal example.

There are articles discussing how LA risks becoming the next Detroit, with many of the successful blockbusters of 2025 now being produced abroad due to the obscene costs of production in California, caused mostly by the unions there. Like $350 per hour for a guy to push a button on a smoke machine, because only a union man is allowed to do it. Or that it costs more to move across a Cali studio parking lot than to film a scene in the UK. Letting unions bleed companies dry is only gonna result in them moving all the jobs that can be moved abroad.

yardie · 10h ago
Almost every Hollywood movie you see that wasn’t filmed in LA was basically a taxpayer-backed project. Look at any film with international locations and in the film credits you’ll see lots of state-backed loans, grants, and tax credits. A large part of the film crew and cast are flown out to those locations. And if you think LA was expensive, location pay is even more so. So production is flying out the most expensive parts of the crew to save a few dollars on craft services?
madaxe_again · 3h ago
> Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.

Yet. You can’t yet. Humanoids and VR are approaching the point quite rapidly where a teleoperated or even autonomous robot will be a better and cheaper tradesman than Joe down the road. Joe can’t work 24 hours a day. Joe realises that, so he’ll rent a robot and outsource part of his business, and will normalise the idea as quickly as LLMs have become normal. Joe will do very well, until someone comes along with an economy of scale and eats his breakfast.

FirmwareBurner · 14h ago
>At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers,

Damn, I wish that was me. Having someone mentor you at the beginning of your career instead of having to self learn and fumble your way around never knowing if you're on the right track or not, is massive force multiplier that pays massive dividends over your career. It's like entering the stock market with 1 million $ capital vs 100 $. You're also less likely to build bad habits if nobody with experience teaches you early on.

dylan604 · 13h ago
I really think the loss of the mentor/apprentice type of experience is one of those baby-with-the-bath-water losses. There are definitely people with the personality type of knowing everything, where nothing can be learned from others. But for those of us who would much rather learn from those with more experience on the hows and whys of things, rather than getting all of those paper cuts ourselves, working with mentors is definitely a much better way to grow.
ChrisMarshallNY · 13h ago
Yup. It was a standard part of their HR policy. They are all about long, long-term employment.

They are a marquee company, and get the best of the best, direct from top universities.

Also, no one has less than a Master's, over there.

We got damn good engineers as interns.

FirmwareBurner · 13h ago
>Also, no one has less than a Master's, over there.

I feel this is pretty much the norm everywhere in Europe and Asia. No serious engineering company in Germany even looks at your resume if there's no MSc degree listed, especially since education is mostly free for everyone, so not having a degree is seen as a "you problem". But it also leads to degree inflation, where only PhDs or post-docs get taken seriously for some high-level positions. I don't remember ever seeing a senior manager/CTO without the "Dr." or even "Prof. Dr." title in the top German engineering companies.

I think mostly the US has the concept of the cowboy self taught engineer who dropped out of college to build a trillion dollar empire in his parents garage.

yardie · 10h ago
Graduate school assistantships in the US pay such shit wages compared to Europe that you would be eligible for food stamps. The opportunity cost is better spent getting your bachelor's degree, finding employment, and then using that salary to pay for grad school, or having your employer pay for it. I've worked in Europe with just my bac+3. I also had 3-4 years of applied work experience that a fresh-faced MSc holder was just starting to acquire.
fn-mote · 11h ago
Possibly also because they don’t observe added value of the additional schooling.

Also because US salaries are sky high compared to their European counterparts, so I could understand if the extra salary wasn’t worth the risk that they might not have that much extra productivity.

I’ve certainly worked with advanced degree people who didn’t seem to be very far along on the productivity curve, but I assume it’s like that for everything everywhere.

mechagodzilla · 14h ago
Interns and new grads have always been a net-negative productivity-wise in my experience, it's just that eventually (after a small number of months/years) they turn into extremely productive more-senior employees. And interns and new grads can use AI too. This feels like asking "Why hire junior programmers now that we have compilers? We don't need people to write boring assembly anymore." If AI was genuinely a big productivity enhancer, we would just convert that into more software/features/optimizations/etc, just like people have been doing with productivity improvements in computers and software for the last 75 years.
lokar · 11h ago
Where I have worked new grads (and interns) were explicitly negative.

This is part of why some companies have minimum terminal levels (often 5/Sr) before which a failure to improve means getting fired.

0xpgm · 6h ago
Isn't that every new employee? The first few months you are not expected to be firing on all cylinders as you catch up and adjust to company norms.

An intern is much more valuable than AI in the sense that everyone makes micro-decisions that contribute to the business. An intern can remember what they heard in a meeting a month ago, or some important water-cooler conversation, and incorporate that into their work. AI cannot do that.

alephnerd · 14h ago
It's a monetary issue at the end of the day.

AI/ML and Offshoring/GCCs are both side effects of the fact that American new grad salaries in tech are now in the $110-140k range.

At $70-80k the math for a new grad works out, but not at almost double that.

Also, going remote first during COVID for extended periods proved that operations can work in a remote first manner, so at that point the argument was made that you can hire top talent at American new grad salaries abroad, and plenty of employees on visas were given the option to take a pay cut and "remigrate" to help start a GCC in their home country or get fired and try to find a job in 60 days around early-mid 2020.

The skills aspect also played a role to a certain extent - by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe where universities still treat CS as engineering instead of an applied math discipline - I don't care if you can prove Dixon's factorization method using induction if you can't tell me how threading works or the rings in the Linux kernel.

The Japan example mentioned above only works because Japanese salaries in Japan have remained extremely low and Japanese is not an extremely mainstream language (making it harder for Japanese firms to offshore en masse - though they have done so in plenty of industries where they used to hold a lead like Battery Chemistry).

sarchertech · 8h ago
> by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe where universities still treat CS as engineering instead of an applied math discipline

That doesn’t fit my experience at all. The applied math vs engineering continuum is mostly dependent on whether a CS program at a given school came out of the engineering department or the math department. I haven’t noticed any shift on that spectrum coming from CS departments except that people are more likely to start out programming in higher level languages where they are more insulated from the hardware.

That’s the same across countries though. I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.

brookst · 14h ago
I just can't agree with this argument at all.

Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.

Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.

The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.

But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.

So... why exactly is the entry level screwed?

chongli · 13h ago
> Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more.

Maybe tomorrow's interns will be "AI experts" who need less hand-holding, but the day after that will be kids who used AI throughout elementary school and high school and know nothing at all, deferring to AI on every question, and have zero ability to tell right from wrong among the AI responses.

I tutor a lot of high school students and this is my takeaway over the past few years: AI is absolutely laying waste to human capital. It's completely destroying students' ability to learn on their own. They are not getting an education anymore, they're outsourcing all their homework to the AI.

sibeliuss · 7h ago
It's worth reminding folks that one doesn't _need_ a formal education to get by. I did terrible in school and never went to college and years later have reached a certain expertise (which included many fortunate moments along the way).

What I had growing up though were interests in things, and that has carried me quite far. I worry much more about the addictive infinite immersive quality of video games and other kinds of scrolling, and by extension the elimination of free time through wasted time.

alephnerd · 13h ago
I mean, a lot of what you mentioned is an issue around critical thinking (and I'm not sure that's something that can be taught), which has always remained an issue in any job market, and to solve that deskilling via automation (AI or traditional) was used to remediate that gap.

But if you deskill processes, it makes it harder to argue in favor of paying the same premium you did before.

gerad · 14h ago
They don't have the experience to tell bad AI responses from good ones.
xp84 · 14h ago
True, but this becomes less of an issue as AI improves, right? Which is the 'happier' direction to see the problem moving in, since if AI doesn't improve, it threatens the jobs less.
hnthrow90348765 · 12h ago
I would be worried about the eventual influence of advertising and profits over correctness
sarchertech · 8h ago
If AI improves to the point that an intern doesn’t need to check its work, you don’t need the intern.

You don’t need managers, or CEOs. You don’t even need VCs.

einpoklum · 13h ago
> will need less hand-holding, will be able to leverage AI to deliver more

Well, maybe it'll be the other way around: Maybe they'll need more hand-holding since they're used to relying on AI instead of doing things themselves, and when faced with tasks they need to do, they will be less able.

But, eh, what am I even talking about? The _senior_ developers in many companies need a lot of hand-holding that they aren't getting, write bad code with poor practices, and teach the newbies to get used to doing that. So that's why the entry-level people are screwed, AI or no.

brookst · 12h ago
You’ve eloquently expressed exactly the same disconnect: as long as we think the purpose of internships is to write the same kind of code that interns write today, sure, AI probably makes the whole thing less efficient.

But if the purpose of an internship is to learn how to work in a company, while producing some benefit for the company, I think everything gets better. Just like we don’t measure today’s interns by words per minute typed, I don’t think we’ll measure tomorrow’s interns by lines of code hand-written.

So much of the doom here comes from a thought process that goes “we want the same outcomes as today, but the environment is changing, therefore our precious outcomes are at risk.“

diogolsq · 14h ago
You’re right that AI is fast and often more efficient than entry-level humans for certain tasks — but I’d argue that what you’re describing isn’t delegation, it’s just choosing to do the work yourself via a tool. Implementation costs are lower now, so you decide to do it on your own.

Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.

You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.

Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.

Many reasons why AI can't fill your shoes:

1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.

2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.

Junior and Interns can also use AI tools.

dasil003 · 13h ago
You said exactly what I came here to say.

Maybe some day AI will truly be able to think and reason in a way that can approximate a human, but we're still very far from that. And even when we do, the accountability problem means trusting AI is a huge risk.

It's true that there are white collar jobs that don't require actual thinking, and those are vulnerable, but that's just the latest progression of computerization/automation that's been happening steadily for the last 70 years already.

It's also true that AI will completely change the nature of software development, meaning that you won't be able to coast just on arcane syntax knowledge the way a lot of programmers have been able to so far. But the fundamental precision of logical thought and mapping it to a desirable human outcome will still be needed, the only change is how you arrive there. This actually benefits young people who are already becoming "AI native" and will be better equipped to leverage AI capabilities to the max.

snowwrestler · 9h ago
Companies reducing young hires because of AI are doing it backward. Returns on AI will be accelerated by early-career staff because they are already eagerly using AI in daily life, and have the least attachment to how jobs are done now.

You’re probably not going to transform your company by issuing Claude licenses to comfortable middle-aged career professionals who are emotionally attached to their personal definition of competency.

Companies should be grabbing the kids who just used AI to cheat their way through senior year, because that sort of opportunistic short-cutting is exactly what companies want to do with AI in their business.

sarchertech · 8h ago
If the AI can write code to a level that doesn’t need an experienced person to check the output, you don’t need tech companies at all.
Loughla · 14h ago
So what happens when you retire and have no replacement because you didn't invest in entry level humans?

This feels like the ultimate pulling up the ladder after you type of move.

phailhaus · 13h ago
I don't get this because someone has to work with the AI to get the job done. Those are the entry-level roles! The manager who's swamped with work sure as hell isn't going to do it.
mirkodrummer · 10h ago
IMO comparing entry-level people with AI is very short-sighted. I was smarter than every dumb dinosaur at my first job; I was so eager to learn, and proactive, and positive. I probably was very lucky too, but my point is I don't believe this whole thing that a junior is worse than AI. I'd rather say the contrary.
pedalpete · 13h ago
We've been doing the exact opposite for some positions.

I've been interviewing marketing people for the last few months (I have a marketing background from long ago), and the senior people were either way too expensive for our bootstrapped start-up, or not of the caliber we want in the company.

At the same time, there are some amazing recent grads and even interns who can't get jobs.

We've been hiring the younger group, and contracting for a few days a week with the more experienced people.

Combine that with AI, and you've got a powerful combination. That's our theory anyway.

It's worked pretty well with our engineers. We are a team of 4 experienced engineers, though as CEO I don't really get to code anymore, and 1 exceptional intern. We've just hired our 2nd intern.

aloknnikhil · 14h ago
It's not that entry-level jobs / interns are irrelevant. It's more that entry-level has been redefined and it requires significant uplevelling in terms of skills necessary to do a job at that level. That's not necessarily a bad thing. As others have said here, I would be more willing to hand-off more complex tasks to interns / junior engineers because my expectation is they leverage AI to tackle it faster and learn in the process.
sauercrowd · 10h ago
"intern" and "entry level" are proxies for complexity with these comparisons, not actual seniority. We'll keep hiring interns and entry level positions, they'll just do other things.
mjburgess · 14h ago
This is always the case though. A factor of 50x productivity between expert and novice is small. Consider how long it would take you to conduct foot surgery vs. a foot surgeon -- close to a decade of medical school + medical experience -- just for a couple hours of work.

There have never been that many businesses able to hire novices for this reason.

pc86 · 14h ago
This is a big part of why a lot of developers' first 1-3 jobs are small mom & pop shops of varying levels of quality, almost none of which have "good" engineering cultures. Market rate for a new grad dev might be X, it's hard to find an entry level job at X but mom & pop business who needs 0.7 FTE developers is willing to pay 0.8X and even though the owner is batshit insane it's not a bad deal for the 22 and 23 year olds willing to do it.
mjburgess · 14h ago
Sure. I mean perhaps LLMs will accelerate a return to a more medieval culture in tech where you "have to start at 12 to be any good". Personally, I think that's a good (enough) idea. By 22, I'd had at least a decade of experience; my first job at 20 was as a contractor for a major national/multinational.

Programming is a craft, and just like any other, the best time to learn it is when it's free to learn.

InitialLastName · 14h ago
I think for a surgeon as an example, quality may be a better metric than time. I'll bet I could conduct an attempted foot surgery way faster than a foot surgeon, but they're likely to conduct successful foot surgeries.
nradov · 6h ago
Sure, but no one has found a good metric for actually quantifying quality for surgeons. You can't look at just the rate of positive outcomes because often the best surgeons take on the worst cases that others won't even attempt. And we simply don't have enough reliable data to make proper metric adjustments based on individual patient attributes.
dylan604 · 14h ago
Are you honestly trying to tell us that the code you receive from an AI requires none of your time to review and tweak, and is 100% correct every time and ready to deploy into your code base with no changes whatsoever? You, my friend, must be a steely-eyed missile man of prompting.
throwuxiytayq · 10h ago
Consider that there are no humans in existence that fulfill your requirements, not to mention $20/mo ones
dylan604 · 10h ago
Why would I consider that when there absolutely are humans that can do that? Your dollar value is just ridiculous. If you're a hot-shit dev that no longer needs junior devs, then if you spend 15 minutes refactoring the AI output, you're underwater on that $20/mo value.
baxtr · 13h ago
I think it’s the other way around.

If LLMs continue to become more powerful, hiring more juniors who can use them will be a no-brainer.

phatfish · 11h ago
Yup, apart from a few companies at the cutting edge the most difficult problems to solve in a work environment are not technical.
necheffa · 13h ago
> Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

AI can barely provide the code for a simple linked list without dropping NULL pointer dereferences every other line...
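To illustrate the kind of guards I mean - here sketched in Python, where `None` plays the role of NULL (a minimal illustration of my own, not production code): the bugs show up when code steps past the tail or unlinks the head without checking first.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        self.head = Node(value, self.head)

    def find(self, value):
        node = self.head
        while node is not None:  # guard: stepping past the tail would dereference None
            if node.value == value:
                return node
            node = node.next
        return None

    def remove(self, value):
        node, prev = self.head, None
        while node is not None:
            if node.value == value:
                if prev is None:        # removing the head is a special case
                    self.head = node.next
                else:
                    prev.next = node.next
                return True
            prev, node = node, node.next
        return False
```

Three guards (empty list, past-the-tail, head removal) in ~30 lines - exactly the spots where the generated versions I've seen fall over.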

Been interviewing new grads all week. I'd take a high performing new grad that can be mentored into the next generation of engineer any day.

If you don't want to do constant hand holding with a "meh" candidate...why would you want to do constant hand holding with AI?

> I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now.

Not sure what you are working on. I would never prioritize speed over quality - but I do work in a public safety context. I'm actually not even sure of the legality of using an AI for design work but we have a company policy that all design analysis must still be signed off on by a human engineer in full as if it were 100% their own.

I certainly won't be signing my name on a document full of AI slop. Now an analysis done by a real human engineer with the aid of AI - sure, I'd walk through the same verification process I'd walk through for a traditional analysis document before signing my name on the cover sheet. And that is something a jr. can bring to me to verify.

einpoklum · 13h ago
> Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

1. Because, generally, they don't.

2. Because an LLM is not a person, it's a chatbot.

3. "Hire an intern" is that US thing when people work without getting real wages, right?

Grrr :-(

aianus · 9h ago
Interns make $75k+ in tech in the US. It's definitely not unpaid. In fact my school would not give course credit for internships if they were unpaid.
jmyeet · 11h ago
This is basically what happened after 2008. The entry level jobs college grads did basically disappeared and didn't really come back for many years. So we kind of lost half a generation. Those who missed out are the ones who weren't able to buy a house or start a family and are now in their 40s, destined to be permanent renters who can never retire.

The same thing will happen to Gen Z because of AI.

In both cases, the net effect of this (and the desired outcome) is to suppress wages. Not only of entry-level job but every job. The tech sector is going to spend the next decade clawing back the high costs of tech people from the last 15-20 years.

The hubris here is that we've had an unprecedented boom such that many in the workforce have never experienced a recession, what I'd call "children of summer" (to borrow a George RR Martin'ism). People have fallen into the trap of the myth of meritocracy. Too many people think that those who are living paycheck to paycheck (or are outright unhoused) are somehow at fault when spiralling housing costs, limited opportunities and stagnant real wages are pretty much responsible for everything.

All of this is a giant wealth transfer to the richest 0.01% who are already insanely wealthy. I'm convinced we're beyond the point where we can solve the problems of runaway capitalism with electoral politics. This only ends in tyranny of a permanent underclass or revolution.

abletonlive · 14h ago
This is a big issue in the short term but in the long term I actually think AI is going to be a huge democratization of work and company building.

I spend a lot of time encouraging people to not fight the tide and spend that time intentionally experimenting and seeing what you can do. LLMs are already useful and it's interesting to me that anybody is arguing it's just good for toy applications. This is a poisonous mindset and results in a potentially far worse outcome than over-hyping AI for an individual.

I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.

I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.

The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)

The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.

In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.

Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.

My message to the entry-level folks that are in this weird time period. It's tough, and we can all acknowledge that - but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of cynicism brought upon by years of industry. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.

The amount of people trying to convince you that it's just a sham and hype means that you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people that have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating it and experimenting with it. LLM capabilities are still changing every 6 months-1 year. Anybody that has decided concretely that there is nothing to see here is misleading you.

Even in the current state of LLMs, if the critics don't see the value and how powerful it is, that's mostly a lack of imagination at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important and I'd agree - but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.

izabera · 13h ago
what does "huge democratization of work" even mean? what world do you people live in? the current global unemployment rate on my planet is around 5% so that seems pretty democratised already?
tdeck · 9h ago
I've noticed that when people use the term "democratization" in business speak, it makes sense to replace it with "commodification" 99% of the time.
abletonlive · 13h ago
What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

Just as the internet was a democratization of information, llms are a democratization of output.

That may be in terms of production or art. There is clearly a lower barrier for achieving both now compared to pre-llm. If you can't see this then you don't just have your head stuck in the sand, you have it severed and blasted into another reality.

The reason why you reacted in such a way is again, a lack of imagination. To you, "work" means "employment" and a means to a paycheck. But work is more than that. It is the output that matters, and whether that output benefits you or your employer is up to you. You now have more leverage than ever for making it benefit you because you're not paying that much time/money to ask an LLM to do it for you.

Pre-llm, most for-hire work was only accessible to companies with a much bigger bank account than yours.

There is an ungodly amount of white collar workers maintaining spreadsheets and doing bullshit jobs that LLMs can do just fine. And that's not to say all of those jobs have completely useless output, it's just that the amount of bodies it takes to produce that output is unreasonable.

We are just getting started getting rid of them. But the best part of it is that you can do all of those bullshit jobs with an LLM for whatever idea you have in your pocket.

For example, I don't need an army of junior engineers to write all my boilerplate for me. I might have a protege if I am looking to actually mentor someone and hire them for that reason, but I can easily also just use LLMs to make boilerplate and write unit tests for me at the same time. Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

The junior engineer can also do this too, albeit in most cases less effectively.

That's democratization of work.

In your "5% unemployment" world you have many more gatekeepers and financial barriers.

hn_acc1 · 12h ago
Just curious what area you work in? Python or some kind of web service / Jscript? I'm sure the LLMs are reasonably good for that - or for updating .csv files (you mention spreadsheets).

I write code to drive hardware, in an unusual programming style. The company pays for Augment (which is now based on o4, which is supposed to be really good?!?). It's great at me typing: print_debug( at which point it often guesses right as to which local variables or parameters I want to debug - but not always. And it can often get the loop iteration part correct if I need to, for example, loop through a vector. The couple of times I asked it to write a unit test? Sure, it got the basic function call / lambda setup correct, but the test itself was useless. And a bunch of times, it brings back code I was experimenting with 3 months ago and never kept / committed, just because I'm at the same spot in the same file.

I do believe that some people are having reasonable outcomes, but it's not "out of the box" - and it's faster for me to write the code I need to write than to try 25 different prompt variations.

abletonlive · 12h ago
A lot of python in a monorepo. Mono repos have an advantage right now because the LLM can pretty much look through the entire repo. But I'm also applying LLM to eliminate a lot of roles that are obsolete, not just using it to code.

Thanks for sharing your perspective with ACTUAL details unlike most people that have gotten bad results.

Sadly hardware programming is probably going to lag or never be figured out because there's just not enough info to train on. This might change in the future when/if reasoning models get better but there's no guarantee of that.

> which is now based on o4

based on o4 or is o4, those are two different things. augment says this: https://support.augmentcode.com/articles/5949245054-what-mod...

  Augment uses many models, including ones that we train ourselves. Each interaction you have with Augment will touch multiple models. Our perspective is that the choice of models is an implementation detail, and the user does not need to stay current with the latest developments in the world of AI models to fully take advantage of our platform.
Which IMO is....a cop out, a terrible take, and just...slimy. I would not trust a company like this with my money. For all you know they are running your prompts against a shitty open source model running on a 3090 in their closet. The lack of transparency here is concerning.

You might be getting bad results for a few reasons:

  - your prompts are not specific enough
  - your context is poisoned. how strategically are you providing context to the prompt? a good trick is to give the llm an existing file as an example to how you want it to produce the output and tell it "Do X in the style of Y.file". Don't forget with the latest models and huge context windows you could very well provide entire subdirectories into context (although I would recommend being pretty targeted still)
  - the model/tool you're using sucks
  - you work in a problem domain that LLMs are genuinely bad at
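The "Do X in the style of Y.file" trick above amounts to stitching concrete examples into the prompt before the task. A tiny sketch of what I mean (my own illustration; `build_prompt` and its layout are made up, not any vendor's API):

```python
def build_prompt(task, example_files):
    """Assemble a prompt that shows the model concrete style examples
    before stating the task, so it imitates rather than guesses."""
    parts = []
    for path, contents in example_files.items():
        # each example file becomes a clearly delimited block
        parts.append(f"--- example: {path} ---\n{contents}")
    parts.append(f"Task: {task}. Match the style of the examples above.")
    return "\n\n".join(parts)
```

Being targeted about which files go in is the whole game - dumping the entire repo into context is how you poison it.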
Note: your company is paying a subscription to a service that isn't allowing you to bring your own keys. they have an incentive to optimize and make sure you're not costing them a lot of money. This could lead to worse results.

see here for Cline team's perspective on this topic: https://www.reddit.com/r/ChatGPTCoding/comments/1kymhkt/clin...

I suggest this as the bare minimum for the HN community when discussing their bad results with LLMs and coding:

  - what is your problem domain
  - show us your favorite prompt
  - what model and tools are you using?
  - are you using it as a chat or an agent? 
  - are you bringing your own keys or using a service?
  - what did you supply in context when you got the bad result? 
  - how did you supply context? copy paste? file locations? attachments?
  - what prompt did you use when you got the bad result?
I'm genuinely surprised when someone complaining about LLM results provides even 2 of those things in their comment.

Most of the cynics would not provide even half of this because it'd be embarrassing and reveal that they have no idea what they are talking about.

rini17 · 10h ago
But how is AI supposed to replace anyone when you have either to get lucky or to correctly set up all these things you write about first? Who will do all that and who will pay for it?
abletonlive · 10h ago
So your critique of AI is that it can't read your mind and figure out what to do?

> But how is AI supposed to replace anyone when you have either to get lucky or to correctly set up all these things you write about first? Who will do all that and who will pay for it?

I mean... I'm doing it and getting paid for it, so...

blibble · 13h ago
> What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

> Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

this sounds like the death of employment and the start of plutocracy

not what I would call "democratisation"

abletonlive · 12h ago
> plutocracy

Well, I've said enough about cynicism here so not much else I can offer you. Good luck with that! Didn't realize everybody loved being an employee so much

blibble · 12h ago
not everyone is capable of starting a business

so, employee or destitute? tough choice

abletonlive · 12h ago
I spent a lot of time arguing the barrier to entry for starting one is lower than ever. But if your only options are employee or being destitute, I will again point right to -> cynicism.
snowwrestler · 9h ago
Historically, people have been pretty good at predicting the effects of new technologies on existing jobs. But quite bad at predicting the new jobs / careers / industries that are eventually created with those technologies.

This is why free market economies create more wealth over time than centrally planned economies: the free market allows more people to try seemingly crazy ideas, and is faster to recognize good ideas and reallocate resources toward them.

In the absence of reliable prediction, quick reaction is what wins.

Anyway, even if AI does end up “destroying” tons of existing white collar jobs, that does not necessarily imply mass unemployment. But it’s such a common inference that it has its own pejorative: Luddite.

And the flip side of Luddism is what we see from AI boosters now: invoking a massive impact on current jobs as a shorthand to create the impression of massive capability. It’s a form of marketing, as the CNN piece says.

digdugdirk · 8h ago
More people need to understand the actual history of the luddites. The real issue was the usage of mechanized equipment to overwhelm an entire sector of the economy of the day - destroying the labor value of a vast swath of craftspeople and knocking them down a peg on the social ladder.

Those people who were able to get work were now subject to a much more dangerous workplace and forced into a more rigid legalized employer/employee structure, which was a relatively new "corporate innovation" in the grand scheme of things. This, of course, allowed/required the state to be on the hook for enforcement of the workplace contract, and you can bet that both public and private police forces were used to enforce that contract with violence.

Certainly something to think about for all the users on this message board who are undoubtedly more highly skilled craftspeople than most, and would never be caught up in a mass economic displacement driven by the introduction of a new technological innovation.

At the very least, it's worth a skim through the Wikipedia article: https://en.wikipedia.org/wiki/Luddite

nopinsight · 8h ago
My thesis is that this could lead to a booming market for “pink-collar” service jobs. A significant latent demand exists for more and better services in developed countries.

For instance, upper-middle-class and middle-class individuals in countries like India and Thailand often have access to better services in restaurants, hotels, and households compared to their counterparts in rich nations.

Elderly care and health services are two particularly important sectors where society could benefit from allocating a larger workforce.

Many others will have roles to play building, maintaining, and supervising robots. Despite rapid advances, they will not be as dexterous, reliable, and generally capable as adult humans for many years to come. (See: Moravec's paradox).

csomar · 5h ago
I think the takeaway is that interest rates have to be kept relatively high, as the ZIRP era showed that near-zero rates break the free market. There is a reason Trump wants to lower the interest rate.

Sure it is painful but a ZIRP economy doesn't listen to the end consumers. No reason to innovate and create crazy ideas if you have plenty of income.

beepbooptheory · 9h ago
So, we are doomed to work forever, just maybe different jobs?
satvikpendem · 9h ago
Of course. I mean this has never not been the case unless you are independently wealthy. Work always expands, that's why it's a fallacy to think that if we just had more productivity gains that we'd work half the time; no, there are always new things to do tomorrow that were not possible yesterday.
absurdo · 9h ago
Basically yeah. You live in a world of layered servitude and, short of a financial windfall that hoists you up for some time, you’re basically guaranteed to work your entire life, and grow old, frail and poor. This isn’t a joke, it’s reality for many people that’s hidden from us to keep us deluded. Similar to my other mini-rant, I don’t have any valid answers to the problem at hand. Just acknowledging how fucked things are for humanity.
aianus · 9h ago
No, it's quite easy to make $1mm in a rich country and move to a poorer country and chill if you so desire.
lurk2 · 5h ago
> No, it's quite easy to make $1mm in a rich country and move to a poorer country and chill if you so desire.

On an aggregate level this is true, and contrary to the prevailing sentiment of doomer skepticism, the developed world is usually still the best place to do it. On an individual level, a lot of things can go wrong between here and a million dollars.

andrekandre · 5h ago

  > make $1mm in a rich country and move to a poorer country and chill if you so desire
i wonder if such trends are good for said poorer country in the long run (e.g. real estate costs)?
satvikpendem · 9h ago
It's not that easy, as in, you can make the money but the logistics of moving and living in another country are always harder than expected, both culturally and bureaucratically.
Johanx64 · 8h ago
>logistics of moving and living in another country are always harder than expected, both culturally and bureaucratically.

You know what's hard? Moving from a poor "shithole" to a wealthy country, with expensive accommodation, where a month of rent is something you'd save up months for.

Knowing and displaying (faking really) 'correct' cultural status signifiers to secure a good job. And all the associated stress, etc.

Moving the other direction to a low-cost-of-living or poor shithole country is extremely easy in comparison with a fat stack of resources.

You literally don't have to worry about anything in the least.

eastbound · 5h ago
Apart from the tax office suing you into oblivion because the startup you’ve founded is now worth 10x its revenue, so you need to pay 40% CGT with only 1/10th the income (at least that’s how the French exit tax works).

So basically once you are rich, you have to choose to leave most of it on the table to go to a poor country.

77pt77 · 6h ago
Just like the Red Queen.

You have to always keep on moving just to stay in the same place.

madaxe_again · 3h ago
When steam engines came along, an awful lot of people argued that being able to pump water from mines faster, while inarguably useful, would not have any broad economic impact. Only madmen saw the Newcomen engine and thought “ah, railways!”. Those madmen became extraordinarily wealthy. Vast categories of work were eliminated, others were created.

I think this situation is very similar in terms of the underestimation of the scope of application, but differs in the availability of new job categories - though that may be me underestimating categories as yet unforeseen, as stokers and train conductors once were.

tw04 · 9h ago
But also it potentially means mass unemployment and we have literally no plan in place if that happens beyond complete societal collapse.

Even if you think all the naysayers are “luddites”, do you really think it’s a great idea to have no backup plan beyond “whupps we all die or just go back to the Stone Age”?

snowwrestler · 7h ago
We actually have many backup plans. The most effective ones will be the new business plans that unlock investment which is what creates new jobs. But behind that are a large set of government policies and services that help people who have lost work. And behind that are private resources like charities, nonprofits, even friends and family.

People don’t want society to collapse. So if you think it’s something that people can prevent, feel comforted that everyone is trying to prevent it.

alluro2 · 6h ago
Compared to 30-40 years ago, I believe many in the US would argue that society has already collapsed to a significant extent, with regards to healthcare, education, housing, cost of living, homelessness levels etc.

If these mechanisms you mention are in place and functioning, why is there, for example, such large growth of the economic inequality gap?

ccorcos · 8h ago
> do you really think it’s a great idea to have no backup plan

What makes you think people haven’t made back up plans?

Or are you saying government needs to do it for us?

argomo · 8h ago
Ah yes the old "let's make individuals responsible for solving societal problems" bit. Nevermind that the state is sometimes the only entity capable of addressing the situation at scale.
ccorcos · 8h ago
Yes, I believe individuals should take responsibility for themselves and their future prosperity. We all know what happens when you don’t…

History has shown us quite clearly what happens if governments, and not individuals, are responsible for finding employment.

Voloskaya · 1h ago
I fail to understand what it is you are suggesting a 20 something year old is supposed to do to prepare their backup plan.

They should all just find a way to be set for life within the next 3 years, is that your proposal?

Nasrudith · 6h ago
That is literally part of the deal of not living in a literal dictatorship. It is your responsibility to solve societal problems. I mean, geeze, what did they teach in civics classes in your generation?
lazyasciiart · 4h ago
So if you believe that it is your individual responsibility to solve societal problems, and assuming you believe in the possibility of human-driven mitigation of climate change: presumably you individually are solving that, by devoting your life to it? Or do you not really mean it's your individual responsibility?
sevensor · 20h ago
What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task. In 2025, we can get LLMs to do that for us. Unfortunately, the kind of executive who thinks AI is a legitimate replacement for actual work does not recognize the difference. I expect to see the more credulous CEOs dynamiting their companies as a result. Whether the rest of us can survive this remains to be seen. The CEOs will be fine, of course.
const_cast · 18h ago
> What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task.

The only reason this existed in the first place is because measuring performance is extremely difficult, and becomes more difficult the more complex a person's job is.

AI won't fix that. So even if you eliminate 50% of your employees, you won't be eliminating the bottom 50%. At best, and probably what happens on average, your choices are about as good as random chance. So you end up with the same proportion of shitty workers as you had before. At worst, you actively select the poorest workers because you have some shitty metrics, which happens more often than we'd all like to think.
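A toy simulation of the "as good as random" case (the headcount and weak-performer numbers here are illustrative, not measured):

```python
import random

def weak_share_after_random_layoffs(n=10_000, weak_rate=0.2, cut=0.5, seed=42):
    """If layoff selection is no better than chance, the expected share of weak
    performers among survivors equals the pre-layoff share."""
    rng = random.Random(seed)
    workers = [rng.random() < weak_rate for _ in range(n)]  # True = weak performer
    survivors = rng.sample(workers, int(n * (1 - cut)))     # random 50% cut
    return sum(survivors) / len(survivors)
```

Run with these parameters, the survivor pool lands within sampling noise of the original 20% weak-performer share; only a selection signal genuinely better than chance changes that.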

cjs_ac · 19h ago
There's a connection to the return to office mandates here: the managers who don't see how anyone can work at home are the ones who've never done anything but yap in the office for a living, so they don't understand how sitting somewhere quiet and just thinking counts as work or delivers value for the company. It's a critical failure to appreciate that different people do different things for the business.
Jubijub · 14h ago
That is a hugely simplistic take that tells me you never managed people or coordinated work across many people. I mean, I am more productive individually at home too, and so are probably all the folks on my team. But we don’t always work independently from each other, at which point having some days in common is a massive booster
cjs_ac · 14h ago
There is a spectrum: at one extremity is mandatory in-office presence every day; at the other is a fully-remote business. For any given individual, and for any given team, the approach needs to be placed on that spectrum according to what it is that that individual or team does. I'm not arguing in favour of any position on that spectrum; I'm arguing against blanket mandates that don't involve any consideration for what individuals in the business do.
xg15 · 9h ago
> What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task.

Note that AI wipes out the jobs, but not the tasks themselves. So if that's true, as a consumer, expect more sleepwalked, half-assed products, just created by AI.

richardw · 14h ago
CEOs will be fine until their customers disappear. Are the AIs going to click ads and buy iPhones?
psadauskas · 20h ago
AIs are great at generating bullshit, so if your job involves generating bullshit, you're probably on the chopping block.

I just wish that instead of getting more efficient at generating bullshit, we could just eliminate the bullshit.

TeMPOraL · 12h ago
> AIs are great at generating bullshit, so if your job involves generating bullshit, you're probably on the chopping block.

That covers majority of sales, advertising and marketing work. Unfortunately, replacing people with AI there will only make things worse for everyone.

potatoman22 · 13h ago
Some of the best applications of LLMs I've seen are for reducing bullshit. My goal for creating AI products is to let us act more like humans and less like oxen. I know it's idealistic, but I need to act with some goal.
leeroihe · 14h ago
I don't really care what kind of work it is - you are my enemy if it's your objective to create a machine that will systematically devalue my work and kick me to the curb without really caring about it. Explicitly in a "pure" bs capitalistic way that "well you'll just figure something out, not my problem". I say this as someone who's a big proponent of capitalism and has owned a business.

It's a perversion of the free market and isn't good for anyone.

geraneum · 14h ago
> It's a perversion of the free market

We can, together, overcome such challenges when we accept that "The purpose of a system is what it does".

TeMPOraL · 12h ago
There's a "purpose of a system", but there's also a purpose which we want that system to serve, and which prompts us to correct the system should it deviate from the goals we set for it.
xanthor · 14h ago
So you think the free market should serve social ends?
abletonlive · 14h ago
Thanks for saying it out loud. I meet a lot of people like you that think the same way as part of my job and they aren't willing to say it out loud.

It's about protecting your work, even if an LLM can do it better.

The only way an LLM can devalue your work is if it can do it better than you. And I don't just mean quality, I mean as a function of cost/quality/time.

Anyway, we can be enemies I don't care - I've been getting rid of roles that aren't useful anymore as much as I can. I do care that it affects them personally but I do want them to be doing something more useful for us all whatever that may be.

horns4lyfe · 14h ago
lol “I do care, but not enough to actually care”
abletonlive · 14h ago
Caring doesn't mean that you stop everything you're doing to address someone's needs. It would be a pretty binary world if that were the case, and maybe that's a convenient way to look at motives when you don't want nuance.

Caring about climate change doesn't mean you need to spend your entire life planting trees instead of doing what you're doing.

einpoklum · 13h ago
I haven't worked in the US, and I have not yet worked in a company where such employees exist. Some are slower, some are faster or more efficient or productive - but they're all, everyone, under the pressure of too many tasks assigned to them, and it's always obvious that more personnel is needed but budget (supposedly) precludes it.

So, what you're describing is a mythical situation for me. But - US corporations are fabulously rich, or perhaps I should say highly-valued, and there are lots of investors to throw money at things I guess, so maybe that actually happens.

ryandrake · 10h ago
No, it's the same in the US, too. I don't know what these mythical companies are where people are saying 50% of the workforce does nothing, but I've never seen such a place. Everywhere I've ever worked had way more projects to get done than people available to do them. Everyone was working at capacity.
johnbenoe · 20h ago
Yea
Der_Einzige · 20h ago
Consulting companies like the Big 4, where this happens most, are bigger/stronger than ever (primarily due to AI-related consulting). Try again.
sevensor · 20h ago
What makes you think productive work is what consulting companies are selling? They're there for laundering accountability. When you bring in consultants to roll out your corporate AI strategy, and it all falls apart in a few years, you can say, "we were following best practices, nobody could have anticipated X," where X is whatever failure mode ultimately tanks the AI strategy.
code_for_monkey · 20h ago
you hire consultants so you can cut staff and quality, but the CEOs were already going to do that.
SpicyLemonZest · 19h ago
Do you think that it's possible in principle to have a better or worse corporate AI strategy? I do, and because I do, it seems clear that companies paying top dollar are doing so because they expect a better one. There's no reason to pay KPMG's rates if all you need is a fall guy.

Most criticisms I see of management consulting seem to come from the perspective, which I get the sense you subscribe to, that management strategy is broadly fake so there's no underlying thing for the consultants to do better or worse on. I don't think that's right, but I'm never sure how to bridge the gap. It'd be like someone telling me that software architecture is fake and only code is real.

ElevenLathe · 19h ago
I'm willing to believe that one can be better or worse at management, and that in principle somebody could coach you on how to get better.

That said, how would we measure if our KPMG engagement worked or not? There's no control group company, so any comparison will have to be statistical or vibes-based. If there is a large enough sample size this can work: I'm sure there is somebody out there that can prove management consulting works for dentist practices in mid-size US cities or whatever, though any well-connected group that discovers this information can probably make more money by just doing a rollup of them. This actually seems to be happening in many industries of this kind. Why consult on how to be a more profitable auto repair business when you can do a leveraged buyout of 30 of them, make them all more profitable, and pocket that insight yourself? I can understand if you're a poorly-connected individual short on capital, but the big consulting firms are made up entirely of well-connected people who rub elbows with rich people all day.

Fundamentally, there will never be enough data to prove that IBM engaging McKinsey on AI in 2025 will have made any difference in IBM's bottom line. There's only one IBM and only one 2025!

PeterStuer · 19h ago
The fall guy market is very sensitive to credentials. Hiring Joey Blow from Juice-My-AI just doesn't have that CYA shield of approval.
Der_Einzige · 19h ago
Given that "design patterns" as a concept basically doesn't exist outside of Java and a few other languages no one actually uses, I'm apt to believe that "software architecture is fake and only code is real".
SAI_Peregrinus · 15h ago
Design patterns (as in commonly re-used designs that solve commonly encountered problems) exist in every language used enough to have commonly encountered problems. Gang-of-Four style named design patterns are mostly a Java thing, and repeatedly lead to the terrible outcome of (hopefully junior) developers hunting for a problem on which to use the design pattern they just learned about.
airstrike · 20h ago
Consulting companies don't sell productive advice. They sell management insurance.
code_for_monkey · 20h ago
I think this is the kind of logic you wind up with when you start with the assumption that the Big 4 tell the truth about absolutely everything all the time
michaeldoron · 20h ago
Every time an analyst gives the current state of AI-based tools as evidence supporting AI disruption being just a hype, I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.

Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and giving an alternative, already circulated theory that AI companies CEOs are creating a false hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ ".

Is this really the level of analysis CNN has to offer on this topic?

They could have sketched the growth in foundation model capabilities vs. finite resources such as data, compute and hardware. They could have written about the current VC market and the need for companies to show results and not promises. They could have even written about the giant biotech industry and its struggle to incorporate novel, exciting drug discovery tools under slow-moving FDA approvals. None of this was done here.

Terr_ · 14h ago
> I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.

Compare: "Whenever I think of skeptics dismissing completely novel and unprecedented outcomes occurring by mechanisms we can't clearly identify or prove (will) exist... I think of skeptics who dismissed an outcome that had literally hundreds of well-studied historical precedents using proven processes."

You're right that humans don't have a good intuition for non-linear growth, but that common thread doesn't heal over those other differences.

actuallyalys · 8h ago
Yeah, for this analogy to work, we’d have to see AI causing a small but consistently doubling amount of lost jobs.
bgwalter · 20h ago
Why not use the promised exponential growth of home ownership that led to the catastrophic real estate bubble that burst in 2008 as an example?

We are still dealing with the aftereffects, which led to the elimination of any working class representation in politics and suppression of real protests like Occupy Wall Street.

When this bubble bursts, the IT industry will collapse for some years like in 2000.

michaeldoron · 19h ago
The growth of home ownership was an indicator of real estate investment, not of real world capabilities - once the value of real estate dropped and the bubble burst, those investments were worth less than before, causing the crisis. In contrast, the growth in this scenario is the capabilities of foundation models (and to a lesser extent, the technologies that stem out of these capabilities). This is not a promise or an investment, it's not an indication of speculative trust in this technology, it is a non-decreasing function indicating a real increase in performance.
mjburgess · 14h ago
You can pick and choose problems from history where folk belief was wrong: WW1 vs. Y2K.

This isn't very informative. Indeed, engaging in this argument by analogy betrays a lack of actual analysis, credible evidence and justification for a position. Arguing "by analogy" in this way, picking and choosing the analogy, just restates your position - it doesn't give anyone reasons to believe it.

TheOtherHobbes · 20h ago
I'm not seeing how comparing AI to a virus that killed millions and left tens of millions crippled is an effective way to support your argument.
drewcon · 20h ago
Humans are not familiar with exponential change, so they have almost no ability to manage through it.

It's an apt comparison. The criticisms in the CNN article are already out of date in many instances.

bayarearefugee · 5h ago
As a developer that uses LLMs, I haven't seen any evidence that LLMs or "AI" more broadly are improving exponentially, but I see a lot of people applying a near-religious belief that this is happening or will happen because... actually, I don't know? because Moore's Law was a thing, maybe?

In my experience, for practical usage LLMs aren't even improving linearly at this point as I personally see Claude 3.7 and 4.0 as regressions from 3.5. They might score better on artificial benchmarks but I find them less likely to produce useful work.

geraneum · 14h ago
> Humans are not familiar with exponential change

Humans are. We have tools to measure exponential growth empirically. It was done for COVID (i.e. epidemiologists do that usually) and is done for the economy and other aspects of our lives. If there's to be exponential growth, we should be able to put it in numbers. "Trust me bro" is not a good measure.

Edit: typo
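The "put it in numbers" point is easy to make concrete: under an exponential-growth assumption, a log-linear least-squares fit on a count series recovers the doubling time. A minimal sketch with synthetic data (not real COVID or jobs numbers):

```python
import math

def doubling_time(counts: list[float]) -> float:
    """Estimate doubling time (in sampling intervals) from positive counts,
    assuming roughly exponential growth: fit log(count) = a + b*t, return log(2)/b."""
    t = list(range(len(counts)))
    y = [math.log(c) for c in counts]
    n = len(counts)
    tbar, ybar = sum(t) / n, sum(y) / n
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
        / sum((ti - tbar) ** 2 for ti in t)
    return math.log(2) / b
```

The same fit doubles as a sanity check on noisy real-world data: if log(count) isn't roughly linear in time, the growth isn't exponential in the first place.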

margalabargala · 14h ago
There's individual persons modelling exponential change just fine, and then there's what happens when you apply to the populace at large.

"A person is smart. People are dumb, panicky dangerous animals and you know it."

geraneum · 13h ago
> when you apply to the populace at large

What does this mean? What do you apply to populace at large? Do you mean a populace doesn’t model the exponential change right?

margalabargala · 13h ago
Yep that's what I meant! Context clues did you well here.
geraneum · 6h ago
“A populace modeling exponential change”. Yeah, that’s just word salad.
margalabargala · 6h ago
We can agree to disagree. After all, even you were able to figure out what I meant :-)
geraneum · 19m ago
Disagree on what? You have not put forward a coherent statement. I had to fix your sentence. ;)
const_cast · 18h ago
Viruses spread and propagate themselves, often changing along the way. AI doesn't, and probably shouldn't. I think we've made a few movies on why that's a bad idea.
agarren · 13h ago
> The criticisms in the CNN article are already out of date in many instances.

Which ones, specifically? I’m genuinely curious. The ones about “[an] unfalsifiable disease-free utopia”? The one from a labor economist basically equating Amodei’s high-unemployment/strong economy claims to pure fantasy? The fact that nothing Amodei said was cited or is substantiated in any meaningful way? Maybe the one where she points out that Amodei is fundamentally a sales guy, and that Anthropic is making the rounds saying scary stuff just after they released a new model - a techbro marketing push?

I like anthropic. They make a great product. Shame about their CEO - just another techbro pumping his scheme.

IshKebab · 4h ago
I think you missed the point. AI is dismissed by idiots because they are looking at its state now, not what it will be in future. The same was true in the pandemic.
dingnuts · 19h ago
especially when the world population is billions and at the beginning we were worried about double digit IFR.

Yeah. Imagine if COVID had actually killed 10% of the world population. Killing millions sucks, but mosquitos regularly do that too, and so does tuberculosis, and we don't shut down everything. Could've been close to a billion. Or more. Could've been so much worse.

timr · 14h ago
> I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.

Uh, not to be petty, but the growth was not exponential — neither in retrospect, nor given what was knowable at any point in time. About the most aggressive, correct thing you could’ve said at the time was “sigmoid growth”, but even that was basically wrong.

If that’s your example, it’s inadvertently an argument for the other side of the debate: people say lots of silly, unfounded things at Peak Hype that sound superficially correct and/or “smart”, but fail to survive a round of critical reasoning. I have no doubt we’ll look back on this period of time and find something similar.

monkeyelite · 12h ago
> I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.

But that didn’t happen. All of the people like pg who drew these accelerating graphs were wrong.

In fact, I think just about every commenter on COVID was wrong about what would happen in the early months regardless of political angle.

SoftTalker · 20h ago
Analysis == Opinion when it comes to mainstream news reporting. It's one guy's thinking on something.
qgin · 12h ago
This is the exact thing I’ve expressed as well.

This moment feels exactly to me like that moment when we were going to “shut down for two weeks” and the majority of people seemed to think that would be the end of it.

It was clear where the trend was going, but exponentials always seem ridiculous on an intuitive level.

biophysboy · 19h ago
It's an article reformulated from a daily newsletter. Newsletters take the form of a quick, casual follow-up to current events (e.g. an Amodei interview). It's not intended to be exhaustive analysis.

Besides the labor economist bit, it also makes the correct point that tech people regularly exaggerate and lie. A great example of this is biotech, a field I work in.

PeterStuer · 19h ago
"Is this really the level of analysis CNN has to offer on this topic?"

It's not CNN-exclusive. News media that did not evolve toward clicks, riling people up, hatewatching and paid propaganda for the highest bidder went extinct a decade ago. This is what did evolve.

biophysboy · 19h ago
This is outdated. Most of journalism has shifted to subscription models, offering a variety of products under one roof: articles, podcasts, newsletters, games, recipes, product reviews, etc.
deadbabe · 11h ago
It goes both ways. Once the exponential growth of COVID started, I heard wildly outrageous predictions of what was going to happen next, none of which ever really came to fruition.
aaronbaugher · 20h ago
> Is this really the level of analysis CNN has to offer on this topic?

Not just this topic.

bckr · 20h ago
That’s not what major news outlets are for. I’m not sure exactly what they’re for.
leeroihe · 14h ago
The best heuristic is what people are realizing happened with unchecked "skilled" immigration in places like Canada (and soon the U.S.). Everyone was sold on the idea that we "need these workers" because nobody was willing to work and they added to GDP. When in reality, there's now significant evidence that all these new arrivals did was put a net drain on welfare, devalue the labor of endemic citizens (regardless of race - in many cases affecting endemic minorities MORE), and in the end just reduce costs while degrading the companies that did this.

We will wake up in 5 yrs to find we replaced people with a dependence on a handful of companies that serve LLMs and make inference chips. It's beyond dystopian.

matteotom · 14h ago
Can you provide more details about said "significant evidence"? This seems to be a pretty popular belief, despite being contrary to generally accepted economics, and I've yet to see good evidence for it.
CSMastermind · 11h ago
Huge amounts of white collar jobs have been automated since the advent of computers. If you look at the work performed by office workers in the 1960s and compared it to what people today do it'd be almost unrecognizable.

They spent huge amounts of time on things that software either does automatically or makes 1,000x faster. But by and large that actually created more white collar jobs because those capabilities meant more was getting done which meant new tasks needed to be performed.

janalsncm · 11h ago
I don’t like this argument because 1) it doesn’t address the social consequences of rapid onset and large scale unemployment and 2) there is no law of nature that a job lost here creates a new job there.

On the first point, unemployment during the Great Depression was “only” 30%. And those people were eventually able to find other jobs. Here, we are talking about permanent unemployment for even larger numbers of people.

The Luddites were right. Machines did take their jobs. Those individuals who invested significantly in their craft were permanently disadvantaged. And those who fought against it were executed.

And on point 2, to be precise, a lack of jobs doesn’t mean a lack of problems. There are a ton of things society needs to have accomplished, and in a perfect world the guy who was automated out of packing Amazon boxes could open a daycare for low income parents. We just don’t have economic models to enable most of those things, and that’s only going to get worse.

ccorcos · 8h ago
What makes you so concerned about rapid onset if we haven't seen any significant change in the (USA) unemployment rate?

And there are some laws of nature that are relevant such as supply-demand economics. Technology often makes things cheaper which unlocks more demand. For example, I’m sure many small businesses would love to build custom software to help them operate but it’s too expensive.

ryukoposting · 9h ago
I'll preface this by saying I agree with most of what you said.

It'll be a slow burn, though. The projection of rapid, sustained large-scale unemployment assumes that the technology rapidly ascends to replace a large portion of the population at once. AI is not currently on a path to replacing a generalized workforce. Call center agents, maybe.

Second, simply "being better at $THING" doesn't mean a technology will be adopted, let alone quickly. If that were the case, we'd all have Dvorak keyboards and commuter rail would be ubiquitous.

Third, the mass unemployment situation requires economic conditions where not leveraging a presumably exploitable underclass of unemployed persons is somehow the most profitable choice for the captains of industry. They are exploitable because this is not a welfare state, and our economic safety net is tissue-paper thin. We can, therefore, assume their labor can be had at far less than its real worth, and thus someone will find a way to turn a profit off it. Possibly the Silicon Valley douchebags who caused the problem in the first place.

t-writescode · 9h ago
> > it doesn’t address the social consequences of rapid onset and large scale unemployment

> It'll be a slow burn, though.

Have you been watching the current developer market?

It's really, really rough out here for unemployed software developers.

anthomtb · 11h ago
> Huge amounts of white collar jobs have been automated since the advent of computers

One of which was the occupation of being a computer!

lambdasquirrel · 5h ago
Anecdotal, but AI was what enabled me to learn French, when I was doing that. Before LLMs, I would've had to pay a lot more money to get the class time I'd need, but the availability of Google Translate and DeepL meant that some meaningful, casual learning was within reach. I could reasonably study, try to figure things out, and have questions for the teachers the two or three times a week I had lessons.

Nowadays I'm learning my parents' tongue (Cantonese) and Mandarin. It's just comical how badly the LLMs do sometimes. I swear they roll a natural 1 on a d20 and then just randomly drop a phrase. Or at least that's my head canon. They're just playing DnD on the side.

PeterHolzwarth · 5h ago
The classic example is the 50's/60's photograph of an entire floor of a tall office building replaced by a single spreadsheet. This passed without comment.
darth_avocado · 20h ago
I don’t understand how any business leader can be excited about humans being replaced by AI. If no one has a job, who’s going to buy your stuff? When the unemployment in the country goes up, consumer spending slows down and recession kicks in. How could you be excited for that?
ben_w · 20h ago
Game theory/Nash equilibrium/Prisoner's Dilemma, and the turkey's perspective in the problem of induction.

So far, for any given automation, each actor gets to cut their own costs to their benefit — and if they do this smarter than anyone else, they win the market for a bit.

Every day the turkey lives, they get a bit more evidence the farmer is an endless source of free food that only wants the best for them.

It's easy to fool oneself that the economics are eternal with reference to e.g. Jevons paradox.
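
The automation dilemma above can be sketched as a toy payoff matrix. All numbers here are hypothetical, chosen purely to illustrate the equilibrium structure, not measured from any real market:

```python
from itertools import product

# payoffs[(a, b)] = (firm A's payoff, firm B's payoff); 1 = automate, 0 = don't
payoffs = {
    (0, 0): (3, 3),  # both keep staff: healthy consumer demand
    (1, 0): (4, 1),  # A cuts costs first and "wins the market for a bit"
    (0, 1): (1, 4),  # same, with roles swapped
    (1, 1): (2, 2),  # both automate: lower costs, but weaker demand
}

def is_nash(a, b):
    """A strategy pair is a Nash equilibrium if neither firm gains by deviating alone."""
    pa, pb = payoffs[(a, b)]
    return payoffs[(1 - a, b)][0] <= pa and payoffs[(a, 1 - b)][1] <= pb

equilibria = [cell for cell in product((0, 1), repeat=2) if is_nash(*cell)]
print(equilibria)  # [(1, 1)]: mutual automation is the only equilibrium
```

With these payoffs, both firms automating is the only Nash equilibrium even though mutual restraint pays each firm more: individually rational cost-cutting, collectively weaker demand.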

JKCalhoun · 20h ago
> turkey's perspective in the problem of induction…

Had to look that up: https://en.wikipedia.org/wiki/Turkey_illusion

abracadaniel · 20h ago
My long term fear with AI is that by replacing entry level jobs, it breaks the path to train senior level employees. It could take a couple of decades to really feel the heat from it, but could lead to massive collapse as no one is left with any understanding of how existing systems work, or how to design replacements.
xp84 · 13h ago
> It could take a couple of decades to really feel the heat from it, but could lead to massive collapse

When you consider how this interacts with the population collapse (which is inevitable now everywhere outside of some African countries) this seems even worse. In 20 years, we will have far fewer people under age 60 than we have now, and among that smaller cohort, the percentage of people at any given age who have useful levels of experience will be less because they may not be able to even begin meaningful careers.

Best case scenario, people who have gotten 5 or more years of experience by now (college grads of 2020) may scrape by indefinitely. They'll be about 47 then and have no one to hire that's more qualified than AI. Not necessarily because AI is so great; rather, how will there be someone with 20 years of experience when we simply don't hire any junior people this year?

Worst case, AI overtakes the Class of 2020 and moves up the experience-equivalence ladder faster than 1 year per year, so it starts taking out the classes of 2015, 2010, etc.

baby_souffle · 9h ago
> Worst case, AI overtakes the Class of 2020 and moves up the experience-equivalence ladder faster than 1 year per year, so it starts taking out the classes of 2015, 2010, etc.

This is my bet. Similar to Moore's law. Where it plateaus is anybody's guess…

pseudo0 · 19h ago
Juniors and offshore teams will probably be the most severely impacted. If a senior dev is already breaking off smaller tightly scoped tasks and fixing up the results, that loop can be accomplished much more quickly by iterating with a LLM. Especially if you have to wait a business day for someone in India to even start on the task when a LLM is spitting out a similar quality PR in minutes.

Ironically a friend of mine noticed that the team in India they work with is now largely pushing AI-generated code... At that point you just need management to cut out the middleman.

teitoklien · 14h ago
lol, what it’s soon going to lead to is unfortunately the very opposite of what you’re thinking.

Management will cut down your team's headcount and outsource even more to India, Vietnam, and the Philippines.

A CFO looks at the balance sheet, not the operations context; even if your idea is better, the opposite of what you think is likely going to happen very soon.

dagw · 40m ago
Management will cut down your team’s headcount and outsource even more to India ,Vietnam and Philippines

Management did all that at companies I've worked for for years before 'AI'. The big change is that the teams in India won't be 200 developers, but 20 developers handholding an AI.

lurkshark · 19h ago
I’m actually worried we’ve gotten a kickstart on that process already. Anecdotally it seems like entry level developer jobs are harder to come by today than a decade ago. Without the free-money growth we were seeing for a long time it seems like companies are more incentivized to only hire senior developers at the loss of the greater good that comes with hiring and mentoring junior developers.

Caveat that this is anecdotal, not sure if there are numbers on this.

Traubenfuchs · 44m ago
As a senior software engineer code monkey this is my greatest hope!
scarlehoff · 13h ago
This is what I fear as well: some companies might adopt a "sustainable" approach to AI, but others will dynamite the entry path to their companies. Of course, if your only goal is to sell a unicorn and be out after three years, who cares... but serious companies with lifelong employees that adopt the AI-first strategy are in for a surprise (looking at you, Microsoft).
cjs_ac · 19h ago
This isn't AI-specific, though; businesses decided that it was everyone else's responsibility to train their employees over a decade ago.
BriggyDwiggs42 · 14h ago
If it takes a few decades, they may actually automate all but the most impressive among senior positions though.
Nasrudith · 6h ago
The worst case for such a cycle is generating new jobs in reverse engineers. Although in practice with what we have seen with machinists it tends to just accelerate existing trends towards outsourcing to countries who haven't had the 'entry level collapse'.

We've already eliminated certain junior-level domains essentially by design. There aren't any 'barber-surgeons' with only two years of training, for good reason. Instead we have integrated surgery into a lengthier and more complicated educational path to become what we now would consider a 'proper' surgeon.

I think the answer is that if the 'junior' is uneconomical or otherwise unacceptable be prepared to pay more for the alternative, one way or another.

socalgal2 · 14h ago
I agree with your worry.

That said, the first thing that jumps to my mind is cars. Back when they were first introduced you had to be a mechanically inclined person to own one and deal with it. Today, people just buy them and hire the very small number of experts (relative to the population of drivers) to deal with any issues. Same with smartphones. The majority of users have no idea how they really work. If it stop working they seek out an expert.

ATM, AI just seems like another level of that. JS/Python programmers don't need to know bits and bytes and memory allocation. Vibe coders won't need to know what JS/Python programmers need to know.

Maybe there won't be enough experts to keep it all going though.

absurdo · 8h ago
Basically if anyone has an iota of sensibility you should have never taken sama, Zuckerberg, Gates, or anyone else of that sort at face value. When they tell you they’re doing things for the good of humanity, look at what the other hand is up to.
spacemadness · 18h ago
And we as humans figured all this out and still do nothing with this knowledge. We fight as hard as we can against collective wisdom.
anvandare · 20h ago
A cancerous cell does not care that it is (indirectly) killing the lifeform that it is a part of. It just does what it does without a thought.

And if it could think, it would probably be very proud of the quarter (hour) figures that it could present. The Number has gone up, time for a reward.

thmsths · 20h ago
Tragedy of the commons: no one being able to buy stuff is a problem for everyone, but being able to save just a bit more by getting rid of your workforce is a huge advantage for your business.
bckr · 20h ago
“tragedy of the commons” is treated as a Theory of Human Nature when it’s really a religious principle underlying how we operate our society.
Jensson · 10h ago
People hunted large mammals to extinction long before modern society, so tragedy of the commons is nature in general. We know other predators do it as well, not just humans.
JKCalhoun · 20h ago
… in the interim, of course.
untrust · 20h ago
Another question: If AI is going to eat up everyone's jobs, how will any business be safe from a new competitor showing up and unseating them off their throne? I don't think that the low level peons would be the only ones at stake as a company could be easily outcompeted as well since AI could conceivably outperform or replace any existing product anyways.

I guess funding for processing power and physical machinery to run the AI backing a product would be the biggest barrier to entry?

zhobbs · 20h ago
Yeah this will likely lead to margin compression. The best companies will be fine though, as brand and existing distribution is a huge moat.
azemetre · 19h ago
“Best” is carrying a lot of weight. More accurate to say the monopolistic companies that engage in regulatory capture will be fine.
jrs235 · 14h ago
Empowering the current US President to demand more bribes.
layer8 · 20h ago
Institutional knowledge is key here. Third parties can’t replicate it quickly just by using AI.
lubujackson · 20h ago
Luckily we are firing all those people so they will be available for new roles.

This feels a lot like the dot boom/dot bust era where a lot of new companies are going to sprout up from the ashes of all this disruption.

floatrock · 20h ago
Also: network effects, inertia, cornering the market enough to make incumbents uneconomical, regulatory capture...

AI certainly will increase competition in some areas, but there are countless examples where being the best at something doesn't make you the leader.

JKCalhoun · 20h ago
The beginning of the AI Wars?
onlyrealcuzzo · 20h ago
> If no one has a job, who’s going to buy your stuff?

All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?

It's easy to imagine a world in which there are way less white collar workers and everything else is pretty much the same.

It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has less widgets.

It's also easy to imagine a world in which you're able to cut more workers than everyone else, and on aggregate, barely anyone is impacted, but your margins go up.

There's tons of other scenarios, including the most cited one - that technology thus far has always led to more jobs, not less.

They're probably believing any combination of these concepts.

It's not guaranteed that if there's 5% less white-collar workers per year for a few decades that we're all going to starve to death.

In the future, if trends continue, there's going to be way less workers - since there's going to be a huge portion of the population that's old and retired.

You can lose x% of the work force every year and keep unemployment stable...

A large portion of the population wants a lot more people to be able to not work and get entitlements...

It's pretty easy to see how a lot of people can think this could lead to something good, even if you think all those things are bad.

Two people can see the same painting in a museum, one finds it beautiful, and the other finds it completely uninteresting.

It's almost like asking - how can someone want the Red team to win when I want the Blue team to win?

darth_avocado · 20h ago
> All the people employed by the government and blue collar workers

If people don’t have jobs, government doesn’t have taxes to employ other people. If CEOs are salivating at the thought of replacing white collar workers, there is no reason to think next step of AI augmented with robotics won’t replace blue collar workers as well.

trealira · 20h ago
> If CEOs are salivating at the thought of replacing white collar workers, there is no reason to think next step of AI augmented with robotics won’t replace blue collar workers as well.

Robotics seems harder, though, and has been around for longer than LLMs. Robotic automation can replace blue collar factory workers, but I struggle to imagine it replacing a plumber who comes to your house and fixes your pipes, or a waiter serving food at a restaurant, or someone who restocks shelves at grocery stores, that kind of thing. Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.

ben_w · 19h ago
> or a waiter serving food at a restaurant,

Over the last few years, I've seen a few in use here in Berlin: https://www.alibaba.com/showroom/robot-waiter-for-sale.html

> or someone who restocks shelves at grocery stores

For physical retail, or home delivery?

People are working on this for traditional stores, but I can't tell which news stories are real and which are hype — after around a decade of Musk promising FSD within a year or so, I know not to simply trust press releases even when they have a video of the thing apparently working.

For home delivery, this is mostly kinda solved: https://www.youtube.com/watch?v=ssZ_8cqfBlE

> Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.

Sure… if they have the money.

But can we make an economy where all the stuff is free, and we're "working" n-hours a day smiling at bad jokes and manners of people we don't like, so we can earn money to spend to convince someone else who doesn't like us to spend m-hours a day smiling at our bad jokes and manners?

trealira · 19h ago
> Over the last few years, I've seen a few in use here in Berlin: https://www.alibaba.com/showroom/robot-waiter-for-sale.html

Wow. I genuinely didn't think robotic waiters would ever exist anytime soon.

> For physical retail, or home delivery?

I was thinking for physical retail. Thanks for the video link.

pesus · 10h ago
I've seen robot waiters at one restaurant in SF as well, and I wouldn't be surprised if there were more. They'll most likely be here on a large scale faster than we think.
ryandrake · 19h ago
> I struggle to imagine it replacing a plumber who comes to your house and fixes your pipes, or a waiter serving food at a restaurant, or someone who restocks shelves at grocery stores, that kind of thing.

These are three totally different jobs requiring different kinds of skills, but they will all be replaced with automation.

1. Plumber is a skilled trade, but the "skilled" parts will eventually be replaced with 'smart' tools. You'll still need to hire a minimum wage person to actually go into each unique home and find the plumbing, but the tools will do all the work and will not require an expensive tradesman's skills to work.

2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.

3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.

9x39 · 10h ago
> 1. Plumber is a skilled trade, but the "skilled" parts will eventually be replaced with 'smart' tools. You'll still need to hire a minimum wage person to actually go into each unique home and find the plumbing, but the tools will do all the work and will not require an expensive tradesman's skills to work.

But if you have to be trained in the use of a variety of 'smart' tools - that sounds like engineering to know what tool to deploy and how.

It's also incredibly optimistic about future tools - what smart tool fixes leaky faucets, hauls and installs water heaters, unclogs or replaces sewer mains, runs new pipes, does all this work and more to code, etc? There are cool tools and power tools and cool power tools out there, but vibe plumbing by the unskilled just fills someone's house with water or worse...

> 2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.

Takeout culture is popular among GenZ, and we're more likely to see walk-up orders with online order ahead than a facsimile of table service.

Why would cheap restaurants buy robots and allow a dining room to go unmanned and risk walkoffs instead of just skipping the whole make-believe service aspect and run it like a pay-at-counter cafeteria? You're probably right that waiters will disappear outside of high-margin fine dining as labor costs squeeze margins until restaurants crack and reorganize.

>3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.

Do-anything-like-a-human robots might crack that, but today it's still sci-fi. Humans are going to haul things from A to B for a bit longer, I think. I bet we see drive-up and delivery groceries win via lights-out warehouses well before "I, Robot" shelf stockers.

trealira · 19h ago
> 1. Plumber is a skilled trade, but the "skilled" parts will eventually be replaced with 'smart' tools. You'll still need to hire a minimum wage person to actually go into each unique home and find the plumbing, but the tools will do all the work and will not require an expensive tradesman's skills to work.

I'm not a plumber, but my background knowledge was that pipes can be really diverse and it could take different tools and strategies to fix the same problem for different pipes, right? My thought was that "robotic plumber" would be impossible for the same reasons it's hard to make a robot that can make a sandwich in any type of house. But even with a human worker that uses advanced robotic tools, I would think some amount of baseline knowledge of pipes would always be necessary for the reasons I outlined.

> 2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.

That's true. I forgot about fast-food kiosks. And the other person showed me a link to some robotic waiters, which I didn't know about. Seems kind of depressing, but you're right.

> 3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.

The way I imagine it, to automate it, you'd have to have some sort of 3D design software to choose where all the items would go, and customize it in the case of those special display stands for certain products, and then choose where in the backroom or something for it to move the products to, and all that doesn't seem to save much labor over just doing it yourself, except the physical labor component. Maybe I just lack imagination.

hnthrow90348765 · 12h ago
>or a waiter serving food at a restaurant

I've seen this already at a pizza place. Order from a QR code menu and a robot shows up 20-25 minutes later at your table with your pizza. Wait staff still watched the thing go around.

rufus_foreman · 12h ago
>> or someone who restocks shelves at grocery stores

They've already replaced part of that job at one of the grocery stores that I go to, there's a robot that checks the level of stock on the shelves, https://www.simberobotics.com/store-intelligence/tally.

DrillShopper · 19h ago
> a waiter serving food at a restaurant

I have already eaten at three restaurants that have replaced the vast majority of their service staff with robots, and they're fine at that. Do I think they're better than a human? No, personally, but they're "good enough".

JKCalhoun · 20h ago
Yeah, it's as though the "middle class" was a brief miracle of our age. Serfs and nobility is the more probable human condition.

Hey, is there a good board game in there somewhere? Serfs and Nobles™

kevin_thibedeau · 18h ago
ML models don't make fully informed decisions and will not until AGI is created. They can make biased guesses at best and have no means of self-directed inquiry to integrate new information with an understanding of its meaning. People employed in a decision making capacity are safe, whether that's managing people or building a bridge from a collection of parts and construction equipment.
whattheheckheck · 14h ago
Has anyone made a fully informed decision?
madaxe_again · 2h ago
Look, human cognition is obviously better than machine cognition, and nobody has ever made a poor argument or decision.

End of conversation.

spamizbad · 14h ago
> All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?

I can tell you for many of those professions their customers are the same white collar workers. The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...

Jensson · 10h ago
> The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...

That is exactly what blue collar economy used to be though: people making and fixing stuff for each other. White collar jobs is a new thing.

munksbeer · 20h ago
>It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has less widgets.

History seems to show this doesn't happen. The trend is not linear, but the trend is that we live better lives each century than the previous century, as our technology increases.

Maybe it will be different this time though.

ryandrake · 19h ago
"Technology increases" have not made my life better than my boomer parents' and they will probably not make the next generation's lives better than ours. Big things like housing costs, education costs, healthcare costs are not being driven down by technology, quite the opposite.

Yes, the lives of "people selling stuff" will likely get better and better in the future, through technology, but the wellbeing of normal people seems to have peaked at around the year 2000 or so.

carlosjobim · 12h ago
I think that's mostly myth, and a very very deeply ingrained myth. That's why probably hundreds of people already feel the rage boiling up inside of them right now after reading my first sentence.

But it is myth. It has always been in the interest of the rulers and the old to try to imprint on the serfs and on the young how much better they have it.

Many of us, maybe even most of us, would be able to have fulfilling lives in a different age. Of course, it depends on what you value in life. But the proof is in the pudding, humanity is rapidly being extinguished in industrial society right now all over the world.

neutronicus · 19h ago
There are also blue- and pink-collar industries that we all tacitly agree are crazy understaffed right now because of brutal work conditions and low pay (health care, child care, K-12, elder care), with low quality-of-service a concern across the board, and with many job functions that seem very difficult to replace with AI (assuming liability for preventing children and elderly adults from physically injuring themselves and others).

If you, a CEO, eliminate a bunch of white-collar workers, presumably you drive your former employees into all these jobs they weren't willing to do before, and hey, you make more profits, your kids and aging parents are better-taken-care-of.

Seems like winning in the fundamental game of society - maneuvering everyone else into being your domestic servants.

const_cast · 18h ago
Right, but the elephant in the room is that despite those industries being constantly understaffed and labor being in extreme demand, they're underpaid. It seems nobody gives a flying fuck about the free market when it comes to the labor market, which is arguably the most important market.

So, flooding those industries with more warm bodies probably won't help anything. I imagine it would make the already fucked labor relations even more fucked.

neutronicus · 15h ago
It would be bad for compensation in the field(s) but the actual working conditions might improve, just by dint of having enough people to do all the work expected.
JKCalhoun · 20h ago
> All the people employed by the government and blue collar workers?

You forgot the born-wealthy.

I feel increasingly like a rube for having not made my little entrepreneurial side-gigs focused strictly on the ultra-wealthy. I used to sell tube amplifier kits, for example, so you and I could have a really high-end audio experience with a very modest outlay of cash (maybe $300). Instead I should have sold the same amps, but completed, for $10K. (There is no upper bound for audio equipment though — I guess we all know.)

ryandrake · 20h ago
This is the real answer. Eventually, when 95% of us have no jobs because AI and robotics are doing everything, then the rich will just buy and sell from each other. The other 7 billion people are not economically relevant and will just barely participate in the economy. It'll be like the movie Elysium.

I briefly did a startup that was kind of a side-project of a guy whose main business was building yachts. Why was he OK with a market that just consisted of rich people? "Because rich people have the money!"

hnthrow90348765 · 11h ago
>It'll be like the movie Elysium.

The rich were able to insulate themselves in space which is much harder to get to than some place on Earth. If the rich want to turtle up on some island because that's the only place they're safe, that's probably a better outcome for us all. They lose a lot of ability to influence because they simply can't be somewhere in person.

It also relies heavily on a security force (or military) being complicit, but they have to give those people a better life than average to make it worth it. Even those dumb MAGA idiots won't settle for moldy bread and leaky roofs. That requires more and more resources, capital, and land to sustain and grow it, which then takes more security to secure it. "Some rich dude controlling everything" has an exponential curve of security requirements and resources. This even comes down to how much land they need to be able to farm and feed their security guys.

All this assuming your personal detail and larger security force actually likes you enough, because if society has broken down to this point, they can just kill the boss and take over.

bluefirebrand · 13h ago
> This is the real answer. Eventually, when 95% of us have no jobs because AI and robotics are doing everything, then the rich will just buy and sell from each other

My prediction is that the poor will reinvent the guillotine

FeteCommuniste · 20h ago
I guess the idea is that the people left working will be made so productive and wealthy thanks to the miracle of AI that they can more than make up the difference with extravagant consumption.
isoprophlex · 20h ago
I too plan to buy 100.000 liters of yogurt each day once AI has transported me into the socioeconomic strata of the 0.1%
FeteCommuniste · 20h ago
My many robots will be busy building glorious mansions out of yogurt cups.
Terr_ · 14h ago
Or, as per a Love, Death, and Robots short film, the new superintelligence will be inextricable from yogurt...
darth_avocado · 20h ago
If you want to see what that looks like, just look at the economy of India. Do we really want that?
FeteCommuniste · 20h ago
Certainly not what I want, but it looks like we could be headed there. And the "industry leaders" seem cool with it, to judge by their politics.

No comments yet

munksbeer · 20h ago
The economy of India is trending in the opposite direction to this narrative. More and more people lifted out of poverty as they modernise.
darth_avocado · 20h ago
The comment wasn’t on the trend, where things are going, or the historical progress the country has made. The comment was on the current state of the economy: the fact that wealth concentration creates its own unique challenges. If as many people were unemployed and in poverty (or in the low income bracket) in the US or any other developed nation, living conditions would have drastically deteriorated. The consumer market would have shrunk to the point where most people couldn’t afford to buy chips and soda.
munksbeer · 20h ago
The point is, I don't see that happening. The reverse is happening in the world. The percentage of people in poverty globally is decreasing each year.

I still fail to see why people think we're going to innovate ourselves into global poverty, it makes no sense.

darth_avocado · 19h ago
Poverty is decreasing because innovation is creating more jobs. Everything hinges on the fact that people can earn a living and spend their money to generate more jobs. If AI replaces those jobs you’re going the other way.
const_cast · 17h ago
Right, every economic system we've thought up relies on the assumption that everyone works. Or, close to everyone. Capitalism is just as much about consumption as it is production.
SpicyLemonZest · 12h ago
Close to everyone doesn't work today. The labor force participation rate is only about 62%.
JKCalhoun · 20h ago
I'd been thinking modern day Russia, but I admit to being ignorant of a lot of countries outside the U.S.
al_borland · 20h ago
A single rich person can only order so much DoorDash. Scaling a customer base needs to be done horizontally.
leeroihe · 14h ago
They want an omnipresent, lobotomized and defeated underclass who only exists to "respond" to the AI to continue to improve it. This is basically what Alexander Wang from Scale AI explained at a recent talk, which was frankly terrifying.

Your UBI will be controlled by the government, you will have even less agency than you currently have and a hyper elite will control the thinking machines. But don't worry, the elite and the government are looking out for your best interest!

pdfernhout · 10h ago
We already have that "defeated underclass" courtesy of a century of mainstream schooling (according to NYS Teacher of the Year John Taylor Gatto): "The Underground History of American Education -- A conspiracy against ourselves" https://www.lewrockwell.com/2010/10/john-taylor-gatto/the-cu... "As soon as you break free of the orbit of received wisdom you have little trouble figuring out why, in the nature of things, government schools and those private schools which imitate the government model have to make most children dumb, allowing only a few to escape the trap. The problem stems from the structure of our economy and social organization. When you start with such pyramid-shaped givens and then ask yourself what kind of schooling they would require to maintain themselves, any mystery dissipates — these things are inhuman conspiracies all right, but not conspiracies of people against people, although circumstances make them appear so. School is a conflict pitting the needs of social machinery against the needs of the human spirit. It is a war of mechanism against flesh and blood, self-maintaining social mechanisms that only require human architects to get launched. I’ll bring this down to earth. Try to see that an intricately subordinated industrial/commercial system has only limited use for hundreds of millions of self-reliant, resourceful readers and critical thinkers. In an egalitarian, entrepreneurially based economy of confederated families like the one the Amish have or the Mondragon folk in the Basque region of Spain, any number of self-reliant people can be accommodated usefully, but not in a concentrated command-type economy like our own. Where on earth would they fit? In a great fanfare of moral fervor some years back, the Ford Motor Company opened the world’s most productive auto engine plant in Chihuahua, Mexico. 
It insisted on hiring employees with 50 percent more school training than the Mexican norm of six years, but as time passed Ford removed its requirements and began to hire school dropouts, training them quite well in four to twelve weeks. The hype that education is essential to robot-like work was quietly abandoned. Our economy has no adequate outlet of expression for its artists, dancers, poets, painters, farmers, filmmakers, wildcat business people, handcraft workers, whiskey makers, intellectuals, or a thousand other useful human enterprises — no outlet except corporate work or fringe slots on the periphery of things. Unless you do "creative" work the company way, you run afoul of a host of laws and regulations put on the books to control the dangerous products of imagination which can never be safely tolerated by a centralized command system...."

In 2010, I put together a list of alternatives here to address the rise of AI and Robotics and its effect on jobs: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

SpicyLemonZest · 20h ago
Business leaders in AI are _not_ excited and agree with your concerns. That's what the source article is about - the CEO of AI lab Anthropic said he sees major social problems coming soon. The problem is that the information environment is twisted in knots. The author, like many commentators, characterizes your concerns as "optimism" and "hype", because she doesn't think AI will actually have these large impacts.
geraneum · 13h ago
They are. The audience of this talk is not normal people. He’s excited and is targeting a specific group in his messaging. The author is a person like the majority.
SpicyLemonZest · 12h ago
I don't understand what you mean. The audience of this talk is Axios, a large news website targeting the general public.
spacemadness · 18h ago
I think he says this just to hype up how powerful of a force AI is which helps these CEOs bottom line eventually. Cynically “we’ve created something so powerful it will eliminate jobs and cause strife” gets those investors excited for more.
johnbenoe · 20h ago
You ever thought there’s more to life than work lol. Maybe humans can approach a new standard of living…
darth_avocado · 20h ago
I’m yet to be convinced that if majority of the humans are out of work, the government will be able to take care of them and allow them to “pursue their calling”. Hunger games is a more believable outcome to me.

No comments yet

JKCalhoun · 20h ago
If someone is going to suggest UBI, I wish they could explain to me how Reservations have failed so hard in the U.S. I think that would be a cautionary tale.
duderific · 12h ago
Decades and decades of mistreatment are not going to be remedied by some modest handouts. That doesn't mean that UBI as a whole could never work.
9x39 · 10h ago
Shouldn't we be able to find at least one pilot or prototype with a lasting success story to build off of before concluding we need to do it on a huge scale?
codr7 · 20h ago
Excellent choice of words there: new standard.

I'm sure we are, but it doesn't look like an improvement for most people.

johnbenoe · 20h ago
Not yet at least, but there’s no stopping this kind of efficiency jump. Anyone who thinks otherwise is in denial.
codr7 · 9h ago
I would say anyone who sees that happening is in denial, because all the proof out there points in the opposite direction.
myko · 20h ago
Maybe, but aren't LLM companies burning cash? The efficiency gains I see from LLMs typically come from agents which perform circular prompts on themselves until they reach some desired outcome (or give up until a human can prod them along).

It seems like we'll need to generate a lot more power to support these efficiency gains at scale, and unless that is coming from renewables (and even if it is) that cost may outweigh the gains for a long time.

johnbenoe · 20h ago
They’re burning cash at a high rate because of the grand potential, and they are of course keeping some things behind closed doors.

I also respect the operative analysis, but the strategic, long-term thinking is that this will come, and it will only speed up everything else.

codr7 · 9h ago
The grand potential of short sighted profits with no concern for society nor other humans, yes.
rfrey · 20h ago
The most powerful nation on earth isn't even willing to extend basic health care to the masses, nevermind freeing them to pursue a higher calling than enriching billionaires.
keybored · 20h ago
We have consumer capitalism now. Before we didn’t. There’s no reason it can’t be replaced.

Sure there can be rich people who are radical enough to push for another phase of capitalism.

That’s a kind of a capitalism which is worse for workers and consumers. With even more power in the hands of capitalists.

carlosjobim · 13h ago
That's a very pessimistic view. People can borrow money against their property, then later they can borrow money against their diploma and professional certificates (and nobody should be allowed to work without being certified, that's dangerous). Then later I think it's time for banks to start offering consumers the reproductive right of mortgaging their children, either born or unborn.
roenxi · 20h ago
You're being confused by the numbers. We aren't trying to maximise consumer spending, the point is to maximise living standards. If the market equilibrium price of all goods was $0 consumer spending would be $0 and living standards would be off the charts. It'd be a great outcome.

It just happens that up to this point there have been things that couldn't be done by capital. Now we're entering a world where there isn't such a thing and it is unclear what that implies for the job market. But people not having jobs is hardly a bad thing as long as it isn't forced by stupid policy, ideally nobody has to work.

amanaplanacanal · 20h ago
In theory. In reality, how are the benefits of all this efficiency going to be distributed to the people who aren't working? I sure don't see any calls for higher taxes and more wealth redistribution.
SpicyLemonZest · 20h ago
The source article is an analysis of an interview (https://www.axios.com/2025/05/28/ai-jobs-white-collar-unempl...) where the CEO of Anthropic called for higher taxes and more wealth redistribution.
DrillShopper · 19h ago
I'm sure the Republican liches in the Senate have some views on that which kill it out of the gate
ikrenji · 13h ago
Let's face it: almost all work will be automated in the next 50 years. Either capitalism dies or humanity dies.
alluro2 · 5h ago
Given the current mechanics evident in the society - declining education, healthcare and rising cost of living, homelessness and exploding economic inequality - who is "we", trying to maximise living standards, and what movement do you see leading towards such an outcome?
qgin · 9h ago
I often see people say “AI can’t do ALL of my job, so that means my job is safe.”

But what this means at scale, over time, is that if AI can do 80% of your job, AI will do 80% of your job. The remaining 20% human-work part will be consolidated and become the full time job of 20% of the original headcount while the remaining 80% of the people get fired.

AI does not need to do 100% of any job (as that job is defined today) to still result in large scale labor reconfigurations. Jobs will be redefined and generally shrunk down to what still legitimately needs human work to get it done.

As an employee, any efficiency gains you get from AI belong to the company, not you.
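The consolidation arithmetic above can be sketched as a toy model (the numbers are illustrative, not a labor-market forecast):

```python
import math

def remaining_headcount(original: int, ai_fraction: float) -> int:
    """Full-time humans still needed once AI handles `ai_fraction` of each
    job and the leftover human work is consolidated into fewer roles."""
    human_fraction = 1.0 - ai_fraction
    # Round up: you can't employ a fraction of a person.
    return math.ceil(original * human_fraction)

# A 100-person team where AI does 80% of each job consolidates to 20 people.
print(remaining_headcount(100, 0.8))  # → 20
```

The point of the sketch is only that the cut scales with the automated fraction, even though no single job was 100% automated.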

sram1337 · 9h ago
...or your job goes from commanding a $200k/yr salary to $60k/yr. Hopefully that's enough to pay your mortgage.
spcebar · 20h ago
Something is nagging me about the AI-human replacement conversation that I would love insight from people who know more about startup money than me. It seems like the AI revolution hit as interest rates went insane, and at the same time the AI that could write code was becoming available, the free VC money dried up, or at least changed. I feel like that's not usually a part of the conversation and I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero. I know next to nothing about this and would love to hear informed opinions.
sfRattan · 13h ago
> It seems like the AI revolution hit as interest rates went insane...

> ...I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero.

The end of free money probably has to do with why C-level types are salivating at AI tools as a cheaper potential replacement for some employees, but describing the interest rates returning to nonzero percentages as going insane is really kind of a... wild take?

The period of interest rates at or near zero was a historical anomaly [1]. And that policy clearly resulted in massive, systemic misallocation of investment at global scale.

You're describing it as if that was the "normal?"

[1]: https://www.macrotrends.net/2015/fed-funds-rate-historical-c...

swyx · 20h ago
its not part of the conversation because the influence here is tangential at best (1) and your sense of how much vc money is on the table at any given time is not good (2).

1a. most seed/A stage investing is acyclical because it is not really about timing for exits, people just always need dry powder

1b. tech advancement is definitely acyclical - alexnet, transformers, and gpt were all just done by very small teams without a lot of funding. gpt2->3 was funded by microsoft, not vc

2a. (i have advance knowledge of this bc i've previewed the keynote slides for ai.engineer) free vc money slowed in 2022-2023 but has not at all dried up and in fact reaccelerated in a very dramatic way. up 70% this yr

2b. "vc" is a tenous term when all biglabs are >>10b valuation and raising from softbank or sovereign wealth. its no longer vc, its about reallocating capital from publics to privates because the only good ai co's are private

mjburgess · 14h ago
I'm not seeing how you're replying to this comment. I'm not sure you've understood their point.

The point is that there's a correlation between macroeconomic dynamics (ie., the price of credit increasing) and the "rise of AI". In ordinary times, absent AI, the macroeconomic dynamics would fully explain the economic shifts we're seeing.

So the question is: why do we even need to mention AI in our explanation of recent economic shifts?

What phenomena, exactly, require positing AI disruption?

munificent · 14h ago
> What phenomena, exactly, require positing AI disruption?

AI company CEOs trying to juice their stock evaluations?

golol · 20h ago
> To be clear, Amodei didn’t cite any research or evidence for that 50% estimate.

I truly believe these types of papers don't deserve to be valued so much.

righthand · 20h ago
Yes, we live in a world where no “experts” are required to provide any evidence or truth, but media outlets will gladly publish every false word and idea. For the same reason, these CEOs want to wipe out their workforce for more money, not a functioning society.
airstrike · 20h ago
The attention economy is ruining society.
madaxe_again · 2h ago
And the journalist cited what research or evidence, precisely, in his rebuttal?
bachmeier · 20h ago
> AI is starting to get better than humans at almost all intellectual tasks

"Starting" is doing a hell of lot of work in that sentence. I'm starting to become a billionaire and Nobel Prize winner.

Anyway, I agree with Mark Cuban's statement in the article. The most likely scenario is that we become more productive as AI complements humans. Yesterday I made this comment on another HN story:

"Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.

But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises."

SoftTalker · 19h ago
Is it sustainable? I know when I program, it's sometimes nice to get to something that's easy, even if it's tedious and repetitive. It's like stopping to walk for a bit when you're on a run. You're still moving, but you can catch your breath and recharge.
bachmeier · 19h ago
Oh, I agree, but I'd say that it's probably easier to do those small things than it is to figure out a prompt to have Copilot do them. If it feels good, there's no reason not to do it yourself. I think we'd all agree that it's a joy to be able to tell Copilot to write out the scaffolding at the start of a new project.
JKCalhoun · 20h ago
> I'm starting to become a billionaire

Suggests you are accumulating money, not losing it. That I think is the point of the original comment: AI is getting better, not worse. (Or humans are getting worse? Ha ha, not ha ha.)

bachmeier · 19h ago
> That I think is the point of the original comment: AI is getting better, not worse.

Well, in order to meet the standard of the quote "wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years" we need more than just getting better. We need considerably better technology with a better cost structure to wipe out that many jobs. Saying we're starting on that task when the odds are no better than me becoming a billionaire within two years is what we used to call BS.

joshdavham · 11h ago
This type of hype is pretty perplexing to me.

Supposing that you are trying to increase AI adoption among white-collar workers, why try to scare the shit out of them in the process? Or is he more so trying to sell to the C-suite?

taormina · 10h ago
He’s selling exclusively to the C-suite. Why would he care about the white-collar workers? He wouldn’t be trying to put them all out of work if he cared.
chr15m · 10h ago
Because it creates FOMO which creates sales.
econ · 7h ago
There used to be a cookie factory here that had up to 12 people sitting there all day doing nothing. If the machines broke down, it really took all of them to clean up. This pattern will be rediscovered.
ghm2180 · 6h ago
I wonder whether the investors and inventors of the early printing press, the steam engine, or the Excel spreadsheet foresaw the ways their tech would be used: soul-crushing homework (books), rapid and cruel colonization (steam engines and trains), innovative project management (Excel).

The demand for these products was not where it was intended at the time probably. Perhaps the answer to its biggest effect lies in how it will free up human potential and time.

If AI can do that — and that is a big if — then how and what would you do with that time? Well ofc, more activity, different ways to spend time, implying new kinds of jobs.

infinitebit · 5h ago
So glad to see an MSM outlet take the words of an AI CEO with even a single grain of salt. I’ve been really disappointed with the way so many publications have just been breathlessly repeating what is essentially a sales pitch.

(ftr i’m not even taking a side re: is AI going to take all the jobs. regardless of what happens the fact remains that the reporting has been absolute sh*t on this. i guess “the singularity is here” gets more clicks than “sales person makes sales pitch”)

elktown · 17h ago
Tech has a big problem of selective critical thinking due to a perpetual gold rush causing people to adopt a stockbroker mentality of not missing out on the next big thing - be it the next subfield like AI, the next cool tech that you can be an early adopter on etc. But yeah, nothing new under the sun; it's corruption.
mjburgess · 14h ago
In many spheres today, "thought leadership" is a kind of marketing and sales activity. It is no wonder, then, that no one can think and no one can lead: either would be fatal to healthy sales.
chris_armstrong · 12h ago
The wildest claims are those of increased labor productivity and economic growth: if they were true, our energy consumption would be increasing wildly beyond our current capacity to add more (dwarfing the increase from AI itself).

Productivity doesn’t increase on its own; economists struggle to separate it from improved processes or more efficient machinery (the “multi factor productivity fudge”). Increased efficiency in production means both more efficient energy use AND being able to use a lot more of it for the same input of labour.

WaltPurvis · 12h ago
I plugged those two quotes from Amodei into ChatGPT along with this prompt: "Pretend you are highly skeptical about the potential of AI, both in general and in its potential for replacing human workers the way Amodei predicts. Write a quick 800-word takedown of his predictions."

I won't paste in the result here, since everyone here is capable of running this experiment themselves, but trust me when I say ChatGPT produced (in mere seconds, of course) an article every bit as substantive and well-written as the cited article. FWIW.

Animats · 14h ago
The real bloodbath will come when coordination between multiple AIs, in a company sense, starts working. Computers have much better I/O than humans. Once a corporate organization can be automated, it will be too fast for humans to participate. There will be no place for slow people.

"Move fast and break things" - Zuckerberg

"A good plan violently executed now is better than a perfect plan executed next week." - George S. Patton

catigula · 14h ago
This doesn't even make sense. What corporations do you think will exist in this world?

You're not going to sell me your SaaS when I can rent AIs to make faster cheaper IP that I actually own to my exact specifications.

ofjcihen · 14h ago
This is always the indicator I look for whether or not someone actually knows what they’re talking about.

If you can’t extrapolate on your own thesis you can’t be knowledgeable in the field.

A good example was a guy on here who was convinced every company would be run by one person because of AI. You’d wake up in the morning and decide which products your AI came up with while you slept would be profitable. The obvious next question is “then why are you even involved?”

catigula · 14h ago
I agree, I was actually leaving the question open-ended because I can't necessarily scale it all the way up, it's too complex. Why would they even rent me AIs when they can just be every company? Who is "they"?

All that needs to be understood is that the narcissistic delusion that you, singularly, will be positioned to benefit from a sweeping restructuring of how we understand labor must be forcibly divested from some people's brains.

Only a very select few are positioned to benefit from this and even their benefit is only just mostly guaranteed rather than perfectly guaranteed.

sbierwagen · 6h ago
https://slatestarcodex.com/2016/05/30/ascended-economy/

Robot run iron mine that sells iron ore to a robot run steel mill that sells steel plate to a robot run heavy truck manufacturer that sells heavy trucks to robot run iron mines, etc etc.

The material handling of heavy industry is already heavily automated, almost by definition. You just need to take out the last few people.

rule2025 · 3h ago
The real "white-collar massacre" is not caused by AI, but you have no irreplaceable, or the value created by hiring you is not higher than using AI. Businesses will not hesitate to use AI, you can't say that companies are ruthless, but that's the pursuit of efficiency. Just as horse-drawn carriages were replaced by cars and coachmen lost their jobs, you can't say it's a problem with cars.

History is always strikingly similar, the AI revolution is the fifth industrial revolution, and it is wise to embrace AI and collaborate with AI as soon as possible.

fny · 20h ago
I think everyone is missing the bigger picture.

This is not a matter of whether AI will replace humans wholesale. There are three more predominant effects:

1. You’ll need fewer humans to do the same task. In other forms of automation, this has led to a decrease in employment.

2. The supply of capable humans increases dramatically.

3. Expertise is no longer a perfect moat.

I’ve seen 2. My sister nearly flunked a coding class in college, but now she’s writing small apps for her IT company.

And for all of you who poo poo that as unsustainable. I became proficient in Rust in a week, and I picked up Svelte in a day. I’ve written a few shaders too! The code I’ve written is pristine. All those conversations about “should I learn X to be employed” are totally moot. Yes APL would be harder, but it’s definitely doable. This is an example of 3.

Overall, this will surely cause wage growth to slow and maybe decrease. In turn, job opportunities will dry up and unemployment might ensue.

For those who still don’t believe, air traffic controllers are a great thought experiment—they’re paid quite nicely. What happens if you build tools so that you can train and employ 30% of the population instead of just 10%?

ironman1478 · 14h ago
"I became proficient in Rust in a week". How did you evaluate that if you weren't an expert in Rust to begin with? What does proficient mean to you? Also, are you advocating we get rid of air traffic controllers with AI? How would we train the AI? What model would you use? If you can't solve a safety critical problem from first principles, there is no way an AI should be in the loop. This makes no sense.

Cynically, I'm happy we have this AI generated code. It's gonna create so much garbage and they'll have to pay good senior engineers more money to clean it all up.

ofjcihen · 14h ago
To your second point, we’re seeing a huge comeback of vulnerabilities that were “mostly gone”. Things like very basic RCEs and SQLi. This is a great thing for security workers as well.
BigJono · 19h ago
> I became proficient in Rust in a week, and I picked up Svelte in a day. I’ve written a few shaders too! The code I’ve written is pristine. All those conversations about “should I learn X to be employed” are totally moot.

fucking lmao

fny · 19h ago
My point is you learn X and your time to learn and ship Y is dramatically reduced.

It would have taken me a month to write the GPU code I needed in Blender, and I had everything working in a week.

And none of this was "vibed": I understand exactly what each line does.

whyowhy3484939 · 14h ago
You did not and you are not proficient. LLMs and AI in general cater to your insecurities. An actual good human mentor will wipe the floor with your arrogance and you'll be better for it.
ofjcihen · 14h ago
It would have taken you a month and you would have been able to understand it 100x more.

LLMs are great but what they really excel at is raising the rates of Dunning-Kruger in every industry they touch.

whyowhy3484939 · 14h ago
Yes, this is definitely missing a /s, I hope.

Please for the love of god tell me this is a joke.

lexandstuff · 11h ago
Re the last sentence, is the answer that more people will die in aviation disasters?
hooverd · 19h ago
Can you talk about Rust without your friend computer?
fny · 19h ago
Of course not! But I can definitely ship useful tools, and I could learn to talk the talk in a tenth of the time it would otherwise have taken.

Which is my point, this is not about replacement, it's about reducing the need and increasing supply.

kttjoppl · 5h ago
How are you going to ship a tool you don't understand? What are you going to do when it breaks? How are you going to debug issues in a language you don't understand? How do you know the code the LLM generated is correct?

LLMs absolutely help me pick up new skills faster, but if you can't have a discussion about Rust and Svelte, no, you didn't learn them. I'm making a lot of progress learning deep learning and ChatGPT has been critical for me to do so. But I still have to read books, research papers, and my framework's documentation. And it's still taking a long time. If I hadn't read the books, I wouldn't know what question to ask or how to evaluate if ChatGPT is completely off base (which happens all the time).

MattSayar · 16h ago
Can you talk about assembly without the internet?

I fully understand your point and even agree with it to an extent. LLMs are just another layer of abstraction, like C is an abstraction for asm is an abstraction for binary is an abstraction for transistors... we all stand on the shoulders of giants. We write code to accomplish a task, not the other way around.

bluefirebrand · 12h ago
> Can you talk about assembly without the internet?

Yes.

Can you not?

hooverd · 13h ago
I think friction is important to learning and expertise. LLMs are great tools if you view them as compression. I think calculators are a good example; people like to bring those up as a gotcha, but an alarming number of people are now innumerate on basic receipt math or comprehending orders of magnitude.
MattSayar · 13h ago
It is absolutely essential that we still have experts who know the details. LLMs are just the tide that lifts all ships.
stefan_ · 12h ago
I don't understand, no one ever needed an LLM to automate air traffic controllers. 1980s tech could do that just fine. The reason they continue to exist is essentially cultural. Fell into a local maximum trap and now the entire industry and governance is incapable of lifting itself out of it and instead come up with stuff like "standardized phrases for the voice coms that we have inexplicably made crucial to the entire system" while riding cultural cliches like "the pilot must be in control" as they continue manual flight into big rocks.
randomname4325 · 7h ago
The only way to know for sure you're safe from replacement is if your job is a necessary part of something generating revenue and you're not easily replaceable. Otherwise you should assume the company won't hesitate to replace you. It's just business.
Voloskaya · 1h ago
> if your job is a necessary part of something generating revenue and you're not easily replaceable.

The first part of this statement is clearly false. People on the phone at a tech support company are very much necessary to generate revenue; people tending fields were very much necessary to extract the value of the fields. Draftsmen before CAD were absolutely necessary, etc.

Yet technology replaced them, or is in the process of doing so.

So then, your statement simplifies to “if you want to be safe from replacement, have a job that’s hard to replace,” which isn’t very useful.

snackernews · 5h ago
Anyone who thinks an executive considers them necessary or irreplaceable in the current environment is fooling themselves.
Tokkemon · 7h ago
Yeah I thought that too. Then they laid me off anyway.
cadamsdotcom · 17h ago
CEOs’ jobs involve hyping their companies. It’s up to us whether we believe.

I’d love a journalist using Claude to debunk Dario: “but don’t believe me, I’m just a journalist - we asked Dario’s own product if he’s lying through his teeth, and here’s what it said:”

geraneum · 13h ago
I’d love journalists that do their job. For example, when someone like this CEO pulls a number out of their ass, maybe push them on how they arrived at it? Why does it displace 50%? Why 70? Why not 45?
globalnode · 10h ago
i really liked this article, it puts into perspective how great claims require great proof, and so far all we've heard are great claims. i love ml tech but i just dont trust it to replace a human completely. sure it can augment roles but thats not the vision we're being sold.
topherPedersen · 6h ago
I could be wrong, but I think us software developers are going to become even more powerful, in demand, and valuable.
phendrenad2 · 20h ago
These are the moments that make millionaires. A majority of people believe that AI is going to thoroughly disrupt society. They've been primed to worry about an "AI apocalypse" by Hollywood for their entire lives. The prevailing counter-narrative is that AI is going to flop. HARD. You can't get more diametrically opposed than that. If you can correctly guess (or logically determine) which is correct, and bet all of your money on it, you can launch yourself into a whole other echelon of life.

I've been a heavy user of AI ever since ChatGPT was released for free. I've been tracking its progress relative to the work done by humans at large. I've concluded that its improvements over the last few years are not across-the-board changes, but benefit specific areas more than others. And unfortunately for AI hype believers, it happens to be areas such as art, which provide a big flashy "look at this!" demonstration of AI's power to people. But... try letting AI come up with a nuanced character for a novel, or design an amplifier circuit, or pick stocks, or do your taxes.

I'm a bit worried about YCombinator. I like Hacker News. I'm a bit worried that YC has so much riding on AI startups. After machine learning, crypto, the post-Covid 19 healthcare bubble, fintech, NFTs, can they take another blow when the music stops?

barchar · 8h ago
It's not really enough to predict the outcome, you need something concrete to actually bet on, and you need to time things right (particularly for the pessimistic bet).

For any bet that involves purchasing bits of profits, you could be right and lose money, because the government generally won't allow the entire economy to implode.

By the time a bubble pops literally everyone knows they're in a bubble, knowing something is a bubble doesn't make it irrational to jump on the bandwagon.

SoftTalker · 19h ago
> The prevailing counter-narrative is that AI is going to flop. HARD.

Why is that the counter-narrative? Doesn't it seem more likely that it will continue to gradually improve, perhaps asymptotically, maybe be more specifically trained in the niches where it works well, and it will just become another tool that humans use?

Maybe that's a flop compared to the hype?

ls612 · 13h ago
At the rate the hyperscalers are increasing capex anything less than 1990s internet era growth rates will not be pretty. So far its been able to sustain those growth rates at the big boy AI companies (look at OpenAI revenue over time) but will it continue? Are we near the end of major LLM advances or are we near the beginning? There are compelling arguments both ways (running out of data is IMO the most compelling bear argument).
barchar · 8h ago
It's been able to sustain 90s-era revenue growth rates, not 90s-era income growth rates, no?
ls612 · 7h ago
I think all of the dot com boom companies other than the shovel sellers like MS and Cisco were not profitable in the 90s? Not even future behemoths like Amazon.
hollerith · 7h ago
Amazon would've been profitable if it weren't investing so much in growth. Also, eBay, Yahoo!, AOL, Priceline, Cisco Systems, E*TRADE and DoubleClick became profitable in the 90s according to DeepSeek.
j_w · 10h ago
Re: running out of data

LLM bulls will say that they are going to generate synthetic data that is better than the real data.

ryandrake · 19h ago
I wouldn't worry too much about YCombinator. Although individual investors can get richer or poorer, "investors" as a class effectively have unlimited money. Collectively, they will always be looking for a place to put it so it keeps growing even more, so there will always be work for firms like YCombinator to sprinkle all that investment money around.
tokioyoyo · 8h ago
Not the biggest fan of crypto companies, but YC probably did well because of Coinbase.
ramesh31 · 5h ago
>The prevailing counter-narrative is that AI is going to flop. HARD. You can't get more diametrically opposed than that.

The answer (as always) lies somewhere in the middle. Expert software developers who embrace the tech wholeheartedly while understanding its limitations are now in an absolute golden era of being able to do things they never could have dreamed of before. I have no doubt we will see the first unicorns made of "single pizza"-sized teams here shortly.

1vuio0pswjnm7 · 18h ago
"If the CEO of a soda company declared that soda-making technology is getting so good it's going to ruin the global economy, you'd be forgiven for thinking that person is either lying or fully detached from reality.

Yet when tech CEOs do the same thing, people tend to perk up."

Silicon Valley and Redmond make desperate attempts to argue for their own continued relevance.

For Silicon Valley VC, software running on computers cannot be just a tool. It has to cause "disruption". It has to be "eating the world". It has to be a source of "intelligence" that can replace people.

If software and computers are just boring appliances, like yesterday's typewriters, calculators, radios, TVs, etc., then Silicon Valley VC may need to find a new line of work. Expect the endless media hype to continue.

No doubt soda technology is very interesting. But people working at soda companies are not as self-absorbed, detached from reality and overfunded as people working for so-called "tech" companies.

digianarchist · 10h ago
I saw a tweet the other day that stated AI will cure all diseases within 5-10 years. The tweet cites scientists and CEOs but only lists CEOs of AI companies.

https://x.com/kimmonismus/status/1927843826183589960

atleastoptimal · 2h ago
Losing jobs is the most predictable hazard of AI, but far from the biggest.

however there seems to be a big disconnect on this site and others

If you believe AGI is possible and that AI can be smarter than humans in all tasks, naturally you can imagine many outcomes far more substantial than job loss.

However many people don’t believe AGI is possible, thus will never consider those possibilities

I fear many will deny the probability that AGI could be achieved in the near future, thus leaving themselves and others unprepared for the consequences. There are so many potential bad outcomes that could be avoided merely if more smart people realized the possibility of AGI and ASI, and would thus rationally devote their cognitive abilities to ensuring that the potential emergence of smarter than human intelligences goes well.

ArtTimeInvestor · 20h ago
Imagine you had a crystal ball that lets you look 10 years into the future, and you asked it about whether we underestimate or overestimate how many jobs AI will replace in the future.

It flickers for a moment, then it either says

"In 2025, mankind vastly underestimated the amount of jobs AI can do in 2035"

or

"In 2025, mankind vastly overestimated the amount of jobs AI can do in 2035"

How would you use that information to invest in the stock market?

elcapitan · 20h ago
If I had a crystal ball that lets me look 10 years into the future and I wanted to invest in the stock market, I would ask it about the stock market.
JKCalhoun · 20h ago
I'm already assuming the first answer but nonetheless have absolutely no idea how I would use that to make a guess about the stock market.

So it's index funds (as always) with me anyway.

usersouzana · 19h ago
Heads or tails, then proceed accordingly. You won't waste any more time analyzing it in hopes of getting it right.
dehrmann · 10h ago
Ah, so a straddle.
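
The straddle quip above can be made concrete. A long straddle (buying a call and a put at the same strike and expiry) profits from a large move in either direction, which is exactly the bet "the consensus is vastly wrong, but I don't know which way." A minimal sketch, with purely illustrative prices:

```python
def straddle_pnl(spot_at_expiry, strike, call_premium, put_premium):
    """Profit/loss per share of a long straddle held to expiry."""
    call_value = max(spot_at_expiry - strike, 0.0)  # call pays off above strike
    put_value = max(strike - spot_at_expiry, 0.0)   # put pays off below strike
    return call_value + put_value - (call_premium + put_premium)

# Hypothetical numbers: strike 100, each option costs 5.
# Big moves in either direction win; a quiet market loses the premiums.
for spot in (70, 95, 100, 105, 130):
    print(spot, straddle_pnl(spot, 100, 5.0, 5.0))
```

The worst case is capped at the combined premiums (here, a loss of 10 if the price doesn't move at all), which is why "heads or tails, proceed accordingly" still needs the move to be large enough to cover the cost of both options.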
leeroihe · 14h ago
I used to be a big proponent of AI tools and LLMs, even built products around them. But to be honest, with all of the big AI CEOs promising that they're going to "replace all white collar jobs," I can't see that they want what's best for the country or the American people. It's legitimately despicable and ghoulish that they just expect everyone to "adapt" to the downstream effects of their knowledge-machine lock-in.
ggm · 13h ago
Without well paid middle classes, who is buying all the fancy goods and services?

Money is just rationing. If you devalue the economy, you implicitly accept that, and the consequences for society at large.

Lenin's dictum comes to mind: "A capitalist will sell you the rope you hang him with."

Hilift · 12h ago
> Without well paid middle classes, who is buying all the fancy goods and services?

People charging on their credit cards. Consumers are adding $2 billion in new debt every day.

"Total household debt increased by $167 billion to reach $18.20 trillion in the first quarter"

https://www.newyorkfed.org/microeconomics/hhdc
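
The two figures quoted above are consistent with each other, as a quick back-of-the-envelope check shows: $167 billion of new household debt in one quarter works out to roughly $1.9 billion per day, matching the "$2 billion every day" framing (the 90-day quarter is an approximation):

```python
new_debt_quarter = 167e9   # dollars, Q1 figure per the NY Fed report cited
days_in_quarter = 90       # approximation
per_day = new_debt_quarter / days_in_quarter
print(round(per_day / 1e9, 2))  # billions of dollars per day
```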

ramesh31 · 5h ago
>Without well paid middle classes, who is buying all the fancy goods and services?

Rich people buying even fancier goods and services. You already see this in the auto industry. Why build a great $20,000 car for the masses when you can make the same revenue selling $80,000 cars to rich people (and at higher margins)? This doesn't work of course when you have a reasonably egalitarian society with reasonable wealth inequality. But the capitalists have figured out how to make 75% of us into willing slaves for the rest. A bonus of this is that a good portion of that 75% can be convinced to go into lifelong debt to "afford" those things they wish they could actually buy, further entrenching the servitude.

monero-xmr · 20h ago
https://en.m.wikipedia.org/wiki/List_of_predictions_for_auto...

It wasn’t just Elon. The hype train on self-driving cars was extreme only a few years ago, pre-LLM. Self-driving cars exist, sort of, in a few cities. Quibble all you want, but it appears to me that “uber driver” is still a popular widespread job, let alone truck driver, bus driver, and “car owner” itself.

I really wish the AI ceos would actually make my life useful. For example, why am I still doing the dishes, laundry, cleaning my house, paying for landscaping, painters, and on and on? In terms of white collar work I’m paying my fucking lawyers more than ever. Why don’t they solve an actual problem

Philpax · 20h ago
Because textual data is plentiful and easy to model, and physical data is not. This will change - there are now several companies working on humanoid robots and the models to power them - but it is a fundamentally different set of problems with different constraints.
GardenLetter27 · 20h ago
Bureaucracy and regulation is the main issue there though.

Like in Europe, where you're forced to pay a notary to start a business. It's not really even necessary, never mind something that couldn't be automated; it's just part of the establishment propping up bureaucrats.

Whereas LLMs and generative models in art and coding for example, help to avoid loads of bureaucracy in having to sort out contracts, or even hire someone full-time with payroll, etc.

xxs · 20h ago
>Like in Europe where you're forced to pay a notary to start a business

Do you have a specific country in mind, as the statement is not true for quite a lot of EU member states... and likely untrue for most of the European countries.

dosinga · 20h ago
> Like in Europe

Like in the US you have a choice of which jurisdiction you want to start your company. Not all require a notary

jellicle · 20h ago
We are going to have an ever-increasing supply of stories along the lines of "used a LLM to write a contract; contract gave away the company to the counterparty; now trying to get a court to dissolve the contract".

Sure you'll have destroyed the company, but at least you'll have avoided bureaucracy.

Hilift · 12h ago
Self-driving cars are required to beep when in reverse. In both San Francisco and San Diego, Waymo charging facilities are a nuisance to nearby homes: the neighbors hate the beeping, the facilities operate late hours, and they use loud equipment like shop vacs. Whoever thought of this hates self-driving cars and people. There is no way this can work in mixed urban areas.
edent · 20h ago
Buy a dishwasher - they're cheap, work really well, and don't use much energy / water.

Same as a washing machine / drier. Chuck the clothes in, press a button, done.

There are Roomba style lawnmowers for your grass cutting.

I'll grant you painting a house and plumbing a toilet aren't there yet!

al_borland · 20h ago
With the laundry machine and dishwasher, it still requires effort. A human needs to collect the dirty stuff, put it into the machine properly, decide when it should run, load the soap, select a cycle type, start it, monitor the machine to know when it’s done, empty the machine, and put the stuff away properly, thus starting the human side of the process again.

It’s less work than it used to be, but remove the human who does all that and the dirty dishes and clothes will still pile up. It’s not like we have Rosie, from The Jetsons, handling all those things (yet). How long before the average person has robot servants at home? Until that day, we are effectively project managers for all the machines in our homes.

Kirby64 · 14h ago
> A human needs to collect the dirty stuff, put it into the machine properly, decide when it should run, load the soap, select a cycle type, start it, monitor the machine to know when it’s done, empty the machine, and put the stuff away properly, thus starting the human side of the process again.

The really modern stuff is pretty much as simple as “load, start, unload” - you can buy combo washing machines that wash and dry your clothes, auto dispense detergent, etc. It’s not folding or putting away your clothes, and you still need to maintain it (clean the filter, add detergent occasionally, etc)… but you’re chipping away at what is left for a human to do. Who cares when it’s done? You unload it when you feel like it, just like every dishwasher.

al_borland · 11h ago
Unload timing on the washer/dryer matters.

Leave things wet in the washer too long and they smell like mold and you have to run it again. Leave them in the dryer too long and they are all wrinkled, and you have to run it again (at least for a little while).

I grew up watching everyone in my family do this, sometimes multiple times for the same load. That’s why I set timers and remove stuff promptly.

The dishwasher I agree, and it’s usually best to leave them in there at least for a little while once it’s done. However, not unloading it means dirty dishes start to stack up on the counter or in the sink, so it still creates a problem.

As far as “load, start, unload” goes, we covered unload, but load is also an issue for some people. They load the dishwasher wrong and things don’t get clean, or they start it wrong and are left with spots all over everything. Washing machines can be overloaded or unbalanced. Washing machines and dryers can also be started wrong; the settings need to match the garments being washed. Some clothes are forgiving, others are not. There is still human error in the mix.

Kirby64 · 10h ago
> Leave things wet in the washer too long and they smell like mold and you have to run it again. Leave them in the dryer too long and they are all wrinkled, and you have to run it again (at least for a little while).

The two-in-one washer/dryers don’t have the mildew issue, and for the wrinkles, most dryers have a cycle that keeps tumbling intermittently for hours after the cycle finishes to mitigate most of the wrinkling. You’ve got a much longer window before wrinkles are an issue with that setup.

ghaff · 12h ago
My understanding is combo machines aren't ideal. But running a load of laundry in a couple separate machines is pretty low effort.
MangoToupe · 20h ago
> I really wish the AI ceos would actually make my life useful.

TBH, I do think that AI can deliver on the hype of making tools with genuinely novel functionality. I can think of a dozen ideas off the top of my head just for the most-used apps on my phone (photos, music, messages, email, browsing). It's just going to take a few years to identify how to best integrate them into products without just chucking a text prompt at people and generating stuff.

coffeefirst · 14h ago
You know what I want? A LM that navigates customer support phone trees for me.

If you want to waste my time with an automated nonsense we should at least even the playing field.

This is feasible with today’s technology.

DrillShopper · 20h ago
> In terms of white collar work I’m paying my fucking lawyers more than ever. Why don’t they solve an actual problem

Rule 0 is that you never put your angel investors out of work if you want to keep riding on the gravy train

osigurdson · 10h ago
The real value is going to be in areas that neither machines nor humans could do previously.
deadbabe · 13h ago
Something I’ve come to realize in the software industry is: if you have more smart engineers than the competition, you win.

If you don’t snatch up the smartest engineers before your competition does, you lose.

Therefore at a certain level of company, hiring is entirely dictated by what the competition is doing. If everyone is suddenly hiring, you better start doing it too. If no one is, you can relax, but you could also pull ahead if you decide to hire rapidly, but this will tip off competitors and they too will begin hiring.

Whether or not you have any use for those engineers is irrelevant. So AI will have little impact on hiring trends in this market. The downturn we’ve seen in the past few years is mostly driven by the interest rate environment, not because AI is suddenly replacing engineers. An engineer using AI gives more advantage than removing an engineer, and hiring an engineer who will use AI is more advantageous than not hiring one at all.

AI is just the new excuse for firing or not hiring people, previously it was RTO but that hype cycle has been squeezed for all it can be.

bawana · 14h ago
When are we going to get AI CEOs as a service?
0x5f3759df-i · 6h ago
I asked ChatGPT to be a CEO and decide if everyone should work in office 5 days a week:

“ Final Thought (as a CEO):

I wouldn’t force a full return unless data showed a clear business case. Culture, performance, and employee sentiment would all guide the decision. I’d rather lead with transparency, flexibility, and trust than mandates that could backfire.

Would you like a sample policy memo I’d send to employees in this scenario?”

A better, more reasonable CEO than the one I have. So I’m looking forward to AI taking that white collar job especially.

crims0n · 12h ago
You may be onto something… sell strategic decisions by an AI cohort as a service, insure against the inevitable duds, profit.
josefritzishere · 20h ago
I don't think we've seen a technology more over-hyped in the history of industrialized society. Cars, which did fully replace horses, were not even hyped this hard.
rjurney · 20h ago
Workers in denial are like lemmings, headed for the cliff... not putting myself above that. A moderate view indicates great disruption before new jobs replace the current round being lost.
notyouraibot · 3h ago
The hype around AI replacing software engineers is truly delusional. Yes, they are very good at solving known problems, writing for loops and boilerplate code, but introduce a little bit of complexity and creativity and it all fails. There have been countless tasks I have given to AI which it simply concluded weren't possible, suggesting I use several external libraries to get them done; after a little bit of manual digging, I was able to achieve the same task without any libraries, and I'm not even a seasoned engineer.
infinitebit · 5h ago
I am SO thankful to see a news outlet take what tech CEOs say with a grain of salt re: AI. I feel like so many have just been breathlessly repeating anything they say without even an acknowledgement that there might be, you know, some incentive for them to stretch the truth.

(ftr i’m not even taking a side re: will AI take all the jobs. even if they do, the reporting on this subject by MSM has been abysmal)

keybored · 20h ago
> If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality.

Exactly. These people are growth-seekers first, domain experts second.

Yet I saw progressive[1] outlets reacting to this as a neutral reporting. So it apparently takes a “legacy media” outlet to wake people out of their AI stupor.

[1] American news outlets that lean social-democratic

gcanyon · 11h ago
...everyone here saying "someday AI will <fill in the blank> but not today" while failing to acknowledge that for a lot of things "someday" is 2026, and for an even larger number of things it's 2027, and we can't even predict whether or not in 2028 AI will handle nearly all things...
causal · 11h ago
The problem is that it's hard to pin down any job that's been eliminated by AI even after years of having LLMs. I'm sure it will happen. It just seems like the trajectory of intelligence defies any simple formula.
gcanyon · 9h ago
There's definitely an element of what we saw in the '90s -- software didn't always make people faster, it made the quality of their output better (wysiwyg page layout, better database tools/validation, spell check in email, etc. etc.).

But we're going to get to a point where "the quality goes up" means the quality exceeds what I can do in a reasonable time frame, and then what I can do in any time frame...

brokegrammer · 5h ago
We don't need AI to wipe out entry-level office jobs. David Graeber wrote about this in Bullshit Jobs. But now that we have AI, it's a good excuse to wipe out those jobs for good, just like Elon did after he acquired Twitter. After that, we can blame AI for the deed.
kilroy123 · 3h ago
I've thought a lot about this. I think this is exactly what is happening. I've seen this first hand.

A lot of the BS jobs are being killed off. Do some non-BS jobs get burned up in the fire along the way? Yes. But it's only the beginning.

JanisErdmanis · 4h ago
The productivity gains in activities will be countered by the same gains in counter activities. Everything is going to become more sophisticated, but bullshit will remain.
stephc_int13 · 12h ago
The main culprit behind the hype of the AI revolution is a lack of understanding of its true nature and capabilities. We should know better: Eliza demonstrated decades ago how easily we can be fooled by language. This is different and more useful, but we rely so heavily on language fluency and knowledge retrieval as proxies for intelligence that we are fooled again.

I am not saying this is a nothing burger, the tech can be applied to many domains and improve productivity, but it does not think, not even a little, and scaling won’t make that magically happen.

Anyone paying attention should understand this fact by now.

There is no intelligence explosion in sight, what we’ll see during the next few years is a gradual and limited increase in automation, not a paradigm change, but the continuation of a process that started with the industrial revolution.

theawakened · 1h ago
I've said this before and I'll say it again: the idea that 'AI' will EVER take over any programmer's job is ridiculous. These idiots think they are going to create AGI; it's never going to happen, not with this race of people. There is far too much ignorance in humanity. AI will never be able to be any better than its source, humanity. It's a soon-to-be realization for these billionaire talking heads. Nothing can rise higher than its source. Even if they cover every square foot of land with data centers, it'll never work like they expect it to. The AI bubble will burst so hard the entire world will quake. I give it 5 years max.
whynotminot · 20h ago
There’s a hype machine for sure.

But the last few paragraphs of the piece kind of give away the game — the author is an AI skeptic judging only the current products rather than taking in the scope of how far they’ve come in such a short time frame. I don’t have much use for this short sighted analysis. It’s just not very intelligent and shows a stubborn lack of imagination.

It reminds me of that quote “it is difficult to get a man to understand something, when his salary depends on his not understanding it.”

People like this have banked their futures on AI not working out.

codr7 · 20h ago
The opposite is more true imo.

It's the AI hype squad that are banking their future on AI magically turning into AGI; because, you know, it surprised us once.

whynotminot · 20h ago
Not really — even if AGI doesn’t work and these models don’t get any better, there’s still enormous value to be mined just from harnessing the existing state of the art.

Or these guys pivot and go back to building CRUD apps. They’re either at the front of something revolutionary… or not… and they’ll go back to other lucrative big tech jobs.

SoftTalker · 19h ago
Is there enormous value? AI is burning cash at an extraordinary rate on the promise that it will be an enormous value. But if it plateaus, then all the servers, GPUs, data centers, power and cooling, and other infrastructure will have to be paid for out of revenue. Will customers be willing to pay the actual costs of running this stuff?
whynotminot · 19h ago
I don’t know if what they’ve built and are building in the future will justify the level of investment. I’m not an economist or a VC. It’s hard to fathom the huge sums being so casually thrown around.

All I can tell you is that for what I use AI for now in both my personal and professional life, I would pay a lot of money (way more than I already am) to keep just the current capabilities I already have access to today.

hatefulmoron · 6h ago
I'm not trying to make a point, just curious -- what's stopping you from spending more money on AI? You could be using more API tokens, more Claude Code and whatever else.
codr7 · 9h ago
May I ask what exactly AI provides that's worth so much to you?

Because I wouldn't miss it at all if it disappeared tomorrow, and I'm pretty sure society would be better off without it.

asadotzler · 14h ago
They've so far spent about what the world spent to build out almost all of the broadband internet: the fiber, cable, cellular, etc. If AI companies stop now, about 10 years after they got going, does their effort give us trillions of dollars added to the economy each year from today forward, like we got for every year after the 1998-2008 internet build-out? I'm not seeing it. If they stop now, that's a trillion dollars in the dumper, because no one can afford to operate the existing tech without a continual influx of investor cash that may never pay off.
bgwalter · 20h ago
Using the Upton Sinclair quote in this context is a sign of not understanding the quote. The original quote means that you ignore gross injustices of your employer in order to stay employed.

It was never used in the sense of denigrating potential competitors in order to stay employed.

> People like this have banked their futures on AI not working out.

If "AI" succeeds, which is unlikely, what is your recommendation to journalists? Should they learn how to code? Should they become prostitutes for the 1%?

Perhaps the only option would be to make arrangements with the Mafia like dock workers to protect their jobs. At least it works: Dock workers have self confidence and do not constantly talk about replacing themselves. /s

whynotminot · 20h ago
I think the quote makes perfect sense in this context, regardless of the prior application.

As to my recommendation to what they do — I dunno man. I’m a software engineer. I don’t know what I am going to do yet. But I’m sure as shit not burying my head in the sand.

bgwalter · 20h ago
Even if you apply the quote in a different sense, which would take away all its pithiness, you are still presupposing that "AI" will turn out to be a success.

The gross injustices in the original quote were already a fact, which makes the quote so powerful.

whynotminot · 20h ago
AI as is, is already a success, which is why I find it so baffling that people continue to write pieces like this.

We don’t need AGI for there to be large displacement of human labor. What’s here is already good enough to replace many of us.

AnimalMuppet · 20h ago
At least temporarily, it can be somewhat self-fulfilling, though. Companies believe it, think they'd better shed white-collar jobs to stay competitive. If enough companies believe that, white-collar jobs go down, even if AI is useless.

Of course, in the medium term, those companies may find out that they needed those people, and have to hire, and then have to re-train the new people, and suffer all the disruption that causes, and the companies that didn't do that will be ahead of the game. (Or, they find out that they really didn't need all those people, even if AI is useless, and the companies that didn't get rid of them are stuck with a higher expense structure. We'll see.)

trhway · 12h ago
Read up on PLTR in recent days: all these government layoffs (including by DOGE, well connected to PLTR), with the money redirected toward the Grand Unification Project using PLTR's Foundry (with AI) platform.
ck2 · 14h ago
LLM is going to be used for oppression by every government, not just dictatorships but USA of course

Think of it as an IQ test of how new technology is used

Let me give you an easier example of such a test

Let's say they suddenly develop nearly-free unlimited power, ie. fusion next year

Do you think the world will become more peaceful or much more war?

If you think peaceful, you fail, of course more war, it's all about oppression

It's always about the few controlling the many

The "freedom" you think you feel on a daily basis is an illusion quickly faded

jatora · 2h ago
While I agree that the current 'bloodbath' narrative is all hype, I'm honestly confused by a lot of the sentiment I see on here towards AI, namely the dismissal of continual improvement and the rampant whistling-past-the-graveyard attitude toward what is coming.

It is confusing because many of the dismissals come from programmers, who are unequivocally the prime beneficiaries of genAI capability as it stands.

I work as a marketing engineer at a ~1B company, and the gains I have been able to provide as an individual are absolutely multiplied by genAI.

One theory I have is that maybe it is a failing of prompt ability that is causing the doubt. Prompting, fundamentally, is querying vector space for a result - and there is a skill to it. There is a gross lack of tooling to assist in this, which I attribute to a lack of awareness of that fact. The vast majority of genAI users don't have any sort of prompt library or methodology to speak of, beyond a set of usual habits that work well for them.

Regardless, the common notion that AI has only marginally improved since GPT-4 is criminally naive. The notion that we have hit a wall has merit, of course, but you cannot ignore the fact that we just got accurate 1M context in a SOTA model with gemini 2.5pro. For free. Mere months ago. This is a leap. If you have not experienced that as a leap, then you are using LLMs incorrectly.

You cannot sleep on context. Context (and proper utilization of it) is literally what shores up 90% of the deficiencies I see complained about.

AI forgets libraries and syntax? Load in the current syntax. Deep research it. AI keeps making mistakes? Inform it of those mistakes and keep those stored in your project for use in every prompt.

I consistently make 200k+ token queries of code and context and receive highly accurate results.

I build 10-20k LOC tools in hours for fun. Are they production ready? No. Do they accomplish highly complex tasks for niche use cases? Yes.

The empowerment of the single developer who is both skilled at steering AI and an experienced dev/engineer is absolutely incredible.

Deep research alone has netted my company tens of millions in pipeline, and I just pretend it's me. Because that's the other part that maybe many aren't realizing - it's right under your nose, constantly.

The efficiency gains in marketing are hilariously large. There are countless ways to avoid 'AI slop', and it involves, again, leveraging context and good research, and a good eye to steer things.

I post this mostly because I'm sad for all of the developers who have not experienced this. I see it as a failure of effort (based on some variant of emotional bias or arrogance), not a lack of skill or intellect. The writing on the wall is so crystal clear.

johnwheeler · 20h ago
I previously worked at a company called Recharge Payments, directly supporting the CTO, Mike—a genuinely great person, and someone learning to program. Mike would assign me small tasks, essentially making me his personal AI assistant. Now, I approach everything I do from his perspective. It’s clear that over time, he’ll increasingly rely on AI, asking employees less frequently. Eventually, it’ll become so efficient to turn to AI that he’ll rarely need to ask employees anything at all.
lexandstuff · 11h ago
I've never had a job like that. My job has always involved helping my company, not just figure out how to build something, but what to build. We typically collaborate on a few ideas and then go away, let them percolate in our brains, before coming back with some new ideas to try. The whole point of the Agile Manifesto is that we don't know what to build in the first place.

Sometimes my boss has asked me to do something that in the long run would cost the company dearly. Luckily for him, I am happy to push back, because I understand what we're trying to achieve and can help figure out the best option for the company based on my experience, intuition, and the data I have available.

There's so much more to working with a team than: "Here is a very specific task, please execute it exactly as the spec says". We want ideas, we want opinions, we want bursts of creative inspiration, we want pushback, we want people to share their experiences, their intuition, the vibe they get, etc.

We don't want AI agents that do exactly what we say; we want teams of people with different skill sets who understand the problem and can interpret the task through the lens of their skill set and experience, because no single person has all the answers.

I think your ex-boss Mike will very soon find himself trapped in a local minimum of innovation, with only his own understanding of the world and a sycophantic yes-man AI employee that will always do exactly as he says. The fact that AI mostly doesn't work is only part of the problem.

smeeger · 12h ago
If being redundant led to mass layoffs, half of white-collar workers would have been laid off decades ago. White-collar people will fiddle with rules and regulations to make their ever more bloated redundancy even more brazen with the addition of AI... and then later, when AI has the ability to replace blue-collar workers, it will do so immediately and swiftly while the white-collar people get all the money. It's happened a thousand times before and will happen again.
paulluuk · 20h ago
Around the time when bitcoin started to get serious public attention, late 2017, I remember feeling super hyped about it, and yet everyone told me that money spent on bitcoin was wasted money. I really believed that bitcoin, or at least cryptocurrency as a whole, would fundamentally change how banking and currencies would work. Now, almost 10 years later, I would say that it did not live up to my belief that it would "fundamentally" change currencies and banking. It made some minor changes, sure, but if it weren't for the value of bitcoin, it would still be a nerdy topic about as well known as perlin noise. I did make quite a lot of money from it, though I sold out way too soon.

As a research engineer in the field of AI, I am again getting this feeling. People keep doubting that AI will have any kind of impact, and I'm absolutely certain that it will. A few years ago people said "AI art is terrible" and "LLMs are just autocomplete" or the famous "AI is just if-else". By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.

Is it going to fulfill all the promises made by billionaire tech CEOs? No, of course not, at least not on the time scale that they're projecting. But they are incredibly useful tools that can enhance the efficiency of almost any job that involves sitting behind a computer. Even just something like copilot autocomplete, or talking with an LLM about a refactor you're planning, is often incredibly useful. And the amount of "intelligence" that you can get from a model that can actually run on your laptop is also getting much better very quickly.

The way I see it, either the AI hype will end up like cryptocurrency: forever a part of our world, never quite living up to its promises, but I made a lot of money in the meantime. Or the AI hype will live up to its promises, likely over a much longer period of time, and we'll have to test whether we can live with that. Personally I'm all for a fully automated luxury communism model for government, but I don't see that happening in the "better dead than red" US. It might become reality in Europe though, who knows.

jollyllama · 20h ago
Crypto is a really interesting point, because even the subset of people who have invested in it don't use it on a day to day basis. The entire valuation is based on speculative use cases.
layer8 · 20h ago
> already have a huge impact on tech

As a user, I haven’t seen a huge impact yet on the tech I use. I’m curious what the coming years will bring, though.

surgical_fire · 19h ago
Something can be useful and massively overhyped at the same time.

LLMs are good productivity tools. I've been using them for coding, and they are massively helpful - they really speed things up. There are a few asterisks there, though:

1) It does generate bullshit, and this is an unavoidable part of what LLMs are. The ratio of bullshit seems to come down with reasoning layers on top, but some will always be there.

2) LLMs, for obvious reasons, tend to be more useful the more mainstream the languages and libraries I am working with. The more obscure they are, the less useful LLMs get. This may have a chilling effect on technological advancement: new, improved things see less use because LLMs are bad at them for lack of available material, and so the new things shrivel and die on the vine without a chance of organic growth.

3) The economics of it are super unclear. With the massive hype there's a lot of money sloshing around AI, but those models seem obscenely expensive to create and even to run. It is very unclear how things will look when the appetite for losing money on this wanes.

All that said, AI is multiple breakthroughs away from replacing humans, which does not mean these tools are not useful assistants. An increase in productivity can lead to lower demand for labor, which leads to higher unemployment. Even modest unemployment rates can have grim societal effects.

The world is always ending anyway.

paulluuk · 20h ago
On a side note, I do worry about the energy consumption of AI. I'll admit that, like the silicon valley tech bros, there is a part of me that hopes that AI will allow researchers to invent a solution to that -- something like fusion or switching to quantum-computing AI models or whatever. But if that doesn't happen, it's probably the biggest problem related to AI. More so even than alignment, perhaps.
rvz · 13h ago
> By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.

Enough to cause the next financial crash, with at worst a steady climb to 10% global unemployment over the next decade.

That is the true definition of AGI.

DrillShopper · 20h ago
I look forward to the day where executive overpromises and engineering underdeliveries bring about another AI winter so the useful techniques can continue without the stench of the "AI" association and so the grifters go bankrupt.
sevensor · 20h ago
The implosion of this AI bubble is going to have a stupendous blast radius. It’s never been harder to distinguish AI from “things people do with computers” more generally. The whole industry is implicated, complicit, and likely to suffer when AI winter arrives. Dotcom bust didn’t just hit people who were working for pets.com.
pixl97 · 20h ago
Just like the internet was a fad, right?
threeseed · 3h ago
The internet only became trendy once it was already large and had tens of millions of users.

I remember the pre-Web days of Usenet and BBS and no one thought those were trendy.

AI is far more akin to crypto.

DrillShopper · 20h ago
More like the dot-com bubble