Companies like Nvidia and OpenAI base their answers to any questions on economic risk on their own best interests and a pretty short view of history. They are fighting like hell to make sure they are among a small set of winners while waving away the risk or claiming that there's some better future for the majority of people on the other side of all this.
To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.
When AI finally does cause massive disruption to white collar work, what happens then? Do we really think that most of the American economy is just going to downshift into living off a meager universal basic income allotment (assuming we could ever muster the political will to create a social safety net)? Who gets the nice car and the vacation home?
Once people are robbed of what remaining opportunities they have to exercise agency and improve their life, it isn't hard to imagine that we see some form of swift and punitive backlash, politically or otherwise.
zozbot234 · 6h ago
"Massive disruption" of what kind? Current AI abilities make white-collar work more productive and potentially higher-paid, not less.
kbos87 · 6h ago
Why would my employer pay me more for using their AI? I am already massively more productive at work using AI. I'm not getting paid more, and I'm not working fewer hours. The road we are headed down is one where all of the economic benefits go straight to the owning class.
MostlyStable · 5h ago
For the same reasons that, on average and in general, increases in productivity have led to increases in wages across history. It's not universal and it's not instantaneous, but the base-level assumption should be that increases in productivity will lead to increases in wages in the long run, and we should have specific reasons why it won't happen in a given case before believing otherwise.
bluefirebrand · 5h ago
> For the same reasons that, on average and in general, increases in productivity have led to increases in wages across history
Historically, increased productivity has almost literally never increased wages or benefits without worker uprisings
Seriously look into the history of labor and automation
MostlyStable · 5h ago
Cool. What worker uprising occurred in the US in 1995? I (and basically every economist) disagree with your interpretation of historical events.
Luckily for me, I didn't claim (nor do I believe) that workers/wages capture 100% of productivity.
bluefirebrand · 5h ago
Where is worker productivity on that graph exactly?
MostlyStable · 4h ago
Why does that matter? According to you, productivity is not what leads to increases in wages. You claimed that wages only ever go up with a worker uprising. I showed that worker wages went up starting in 1995. According to you, there must have been a worker uprising to explain that.
tobr · 4h ago
I don’t have a strong opinion on the topic, but bluefirebrand did not claim that wages only go up because of worker uprisings.
MostlyStable · 4h ago
In the long term, the only thing that _can_ increase wages on average and in general is productivity. Productivity increases mean the pie is bigger; without that, there is nowhere for increased wages to come _from_. So saying that productivity only results in increased wages in the case of a worker uprising is equivalent to saying that wages can only rise from a worker uprising. The only way this isn't true is if there's some other proposed mechanism for wages to go up, and the only other methods would be short-term, one-time increases where all you are doing is taking from someone else (presumably from capital, I guess?)
Sustained rates of increase _require_ increasing productivity. So no, they didn't explicitly state it; what they explicitly said was that productivity only results in increased wages if a worker uprising forces it. But the former is the logical consequence of that statement.
bluefirebrand · 3h ago
> there is nowhere for increased wages to come _from_
The pie doesn't have to grow when the pie is already massive, but only 1% of people are taking 90% of the pie
MostlyStable · 3h ago
You can get short term or one time increases by taking from somewhere else, but without productivity gains, it's an inherently limited, zero sum game.
hollerith · 3h ago
20-22% of the pie in the US.
bluefirebrand · 3h ago
Sure, whatever.
The point was not in the exact number
The point is that there is an answer to "Where could increased wages come from if we don't increase productivity"
We don't have to increase productivity to pay some of the population less and other parts of the population more
bluefirebrand · 4h ago
> You claimed that wages only ever go up with a worker uprising.
I'm not sure you really understood my original post. I never said or meant to imply that wages never grow ever
I was talking about how increases in productivity do not lead to proportionally increased wages
Look, here's an example. Let's say I'm a worker producing Widgets for $20/hour by hand, and I can produce 10 widgets an hour. The company sells widgets for $10 each
In one hour I have produced $100 worth of widgets. The company pays me $20, the company keeps $80
Now the company buys a WidgetMachine. Using the WidgetMachine I can now produce 20 widgets an hour
I now produce $200 worth of Widgets per hour. The company still pays me $20, the company has now earned $180
My productivity has doubled, but my wage hasn't
So next year, to keep up with inflation, the company raises my wage by $5 to $25/hour, and starts charging a couple of cents more per Widget so they can absorb my wage increase without any change to their bottom line
My wage matches inflation, it still "grows" but it is completely divorced from my productivity
More importantly, my wage growing to match inflation doesn't help my buying power even remotely. If my wage only ever goes up to exactly match inflation, then all I'm ever doing is treading water. At best I can keep the exact same standard of living and lifestyle
Increases in "real wages" should have a "real" impact on your life; they should let you live better than before
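That arithmetic can be checked directly (toy numbers from the example above, not real labor data):

```python
def worker_share(widgets_per_hour, price_per_widget, hourly_wage):
    """Fraction of the hourly output value that is paid out as wages."""
    return hourly_wage / (widgets_per_hour * price_per_widget)

# By hand: 10 widgets/hour at $10 each, paid $20/hour
before = worker_share(10, 10, 20)  # 20 / 100 = 0.20

# With the WidgetMachine: 20 widgets/hour, same wage
after = worker_share(20, 10, 20)   # 20 / 200 = 0.10

# Productivity doubled, but the worker's share of the value halved
print(before, after)
```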
MostlyStable · 4h ago
I never claimed that workers get 100% of the increase in productivity. I said that increased productivity leads to increased wages. Increased productivity is in fact a _requirement_ for wages to go up (note that I am _not_ claiming that an increase in productivity _must_ result in increased wages; it is necessary, but not always sufficient), or else there is nowhere for those increases to come from. So when you claim that productivity increases _only_ result in worker wages increasing with the co-occurrence of a worker uprising, that is the same thing as claiming that worker uprisings are a requirement.
So I showed you an increase in worker wages. If there is no corresponding worker uprising, your original claim is false.
And, again, you keep ignoring that the plot I showed is already inflation adjusted. It is a real increase, not a nominal increase.
bluefirebrand · 3h ago
> Increased productivity is in fact a _requirement_ for wages to go up
No it isn't. This is an extremely naive understanding of how any of this works
You even say it yourself, with the silly graph you keep posting
That graph doesn't show productivity it just shows "real inflation adjusted wages", like you keep harping on about.
But in general a person is not increasing their productivity year over year. So why would their wage go up to match inflation if, as you say, wages only go up when productivity increases? That doesn't make sense
The reality is that people already provide their employers with vastly more productivity than they are paid for. Their employers capture the majority of the value from that productivity. If someone's wage goes up to match inflation, their productivity hasn't increased; inflation has increased the nominal value of their existing productivity
You seriously don't seem to understand how any of this works
MostlyStable · 3h ago
Given that economists almost universally agree with me, I'm going to have to suggest that it is in fact you who doesn't know how any of this works.
You keep shifting the goalposts, talking about unrelated things, not addressing the core claims, and not responding to the main refutation of your own original claim. I'm done beating my head against this wall. I hope you have a nice day.
zozbot234 · 5h ago
You could easily rephrase that as "Historically, worker uprisings have almost literally never increased wages or benefits, absent increased productivity" and it would be a lot closer to the truth of how automation impacts wages.
DrillShopper · 5h ago
> For the same reasons that, on average and in general, increases in productivity have led to increases in wages across history.
This trend is likely to accelerate (productivity skyrocketing, wages stagnant)
bdangubic · 4h ago
you are confusing increases in profits with increases in wages :)
JumpCrisscross · 5h ago
> Why would my employer pay me more for using their AI?
We’re on HN. AI makes it easier for you to disrupt your employer.
ethbr1 · 3h ago
* Unless your employer happens to have engineered a monopoly
jstummbillig · 5h ago
Owning what? Your employer (likely) owns little of value in a world of cheap AI software generation. Open source models are already good enough to code with, and will obviously get better. We have to believe some really weird stuff for this vision to be coherent - for example an Nvidia-everything world, where they are not only the sole provider of hardware but also control access to all relevant models and to all software products that any (non-tech) business needs to use AI.
If by "owning class" you actually mean "all people with agency" then, yeah, I agree.
bigbadfeline · 1h ago
> Your employer (likely) owns little of value in a world of cheap AI software generation.
That applies to you, not to your employer - in your hands, "cheap AI software generation" is, well... cheap. On the other hand, your employer owns patents, copyrights, distribution channels, politicians and connections - those become more valuable as the coding skills get cheaper. The "owning class" are those who own most of the high value items enumerated above.
jstummbillig · 1h ago
I don't know who you are describing, but it's certainly nothing close to the median employer.
bigbadfeline · 10m ago
The median employer isn't in the software business either, so "cheap coding" has little direct effect on them and it doesn't help the agency-rocking small fish in any way. However, as customers, the median employers are highly dependent upon precisely the type of employer I described, not upon "people with agency".
charcircuit · 5h ago
You need to switch jobs in order to get paid at market rate. Companies have found that they do not need to keep up with the market rate to retain employees.
ethbr1 · 3h ago
It varies by company.
I know of at least one major company that continually benchmarks market rates and uses those as default raises.
Unsurprisingly, they have an average tenure of 10+ years...
mjr00 · 5h ago
Why would your employer pay you more for using Python/Java/JavaScript? You're massively more productive when using those languages instead of C for many common development tasks.
Did the introduction of Python drastically reduce software developer salaries?
ethbr1 · 3h ago
{Total demand for software} is oddly not being mentioned by most people in this thread.
First approximation, there are two AI coding futures:
1. AI coding increases software development productivity, decreasing the cost of software, stimulating more demand for the now more efficient development.
2. AI coding increases software development productivity such that the existing labor pool is too large for demand.
I'd hazard (1) in the short term and (2) in the long term.
Arainach · 5h ago
Productivity per capita is dramatically up since the 1970s. Wages are flat. Employers are greedy and short sighted.
Employers would rather pay more to hire someone new who doesn't know their business than give a raise to an existing employee who's doing well. They're not going to pay someone more because they're more productive, they'll pay them the same and punish anyone who can't meet the new quota.
MostlyStable · 5h ago
This is not true. Wages did stagnate from about the mid 70s until about the mid 90s, but median, real wages have been increasing steadily since then [0]
Capital has earned more than labor since the 90s-2000s (depending on the country).
bluefirebrand · 5h ago
Sounds like wages are about 20 years removed from where they should be, then, since cost of living didn't exactly stagnate for those 20 years
MostlyStable · 5h ago
"Real" as opposed to "nominal" in this context means inflation adjusted, which means that wages have been increasing faster than inflation for 30 years. Now, it is true that some components of cost of living, most notably housing, have (especially in particular regions) also increased in cost faster than the general inflation level. But that's not due to some inherent breakdown between productivity and wages. It's due to a series of bad policy and development choices mostly in cities.
So if you want to have that discussion, that's fine, but it's totally separate from the original discussion about productivity and wages.
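As a toy illustration of the nominal-vs-real distinction (made-up numbers, not actual CPI data):

```python
def real_wage(nominal_wage, cpi, base_cpi=100.0):
    """Deflate a nominal wage into base-year dollars using a price index."""
    return nominal_wage * base_cpi / cpi

# Base year: $20/hour at CPI 100 -> $20 real
# Later: $30/hour nominal, but prices rose 25% (CPI 125)
later = real_wage(30.0, 125.0)  # 24.0 - a real gain, smaller than the nominal one
print(later)
```

The 50% nominal raise is only a 20% real raise once the 25% price increase is deflated out.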
bluefirebrand · 5h ago
> which means that wages have been increasing faster than inflation for 30 years
If you look at the graph you posted and carried the slope of the pre-70s trajectory forward, assuming that the 70s-90s slump had not happened, would the graph end in the same place it is currently?
No. Not even close
> So if you want to have that discussion, that's fine, but it's totally separate from the original discussion about productivity and wages.
It's absolutely not a separate discussion, the end result is that the same "real wage" that used to provide a comfortable life is now poverty
You cannot just shrug your shoulders and say "well incomes are matching productivity so this is fine actually"
tiahura · 2h ago
The average US worker now had to compete with more women entering the workforce and entering more fields and careers. They also had to compete with ever-increasing numbers of immigrants, and with foreign workers.
When you massively increase the supply of labor, you’re going to have downward pressure on wages.
thayne · 5h ago
Increasing and stagnating are not mutually exclusive. Wages may be increasing, but the benefits of productivity gains have not been evenly distributed. Productivity has increased faster than wages have, and the difference has gone to the already wealthy.
andsoitis · 5h ago
> Productivity has increased faster than wages have, and difference has gone to the already wealthy.
What/who has financed productivity increases? Isn’t it tools and infrastructure etc. for the most part, paid for by asset owners? There are likely exceptions, but big picture.
tiahura · 5h ago
Now, imagine a 70s-to-present where the labor supply curve didn't massively shift because of women and foreign trade.
uhhhhhhh · 5h ago
Companies are actively not hiring, expecting AI to compensate while still delivering growth. I have seen these same companies giving smaller raises and fewer promotions, and eliminating junior positions.
The endgame isn't more employees or paying them more. It's paying fewer people, or less-skilled people where possible.
That's a fairly massive disruption.
seadan83 · 5h ago
We're just 3 years past a giant hiring binge and a similar amount of time past zero interest rates; the US economy has been threatening recession for two years, economic uncertainty is very high, and post-Covid a glut of junior engineers came onto the market. Given all of these plausible explanations for why hiring is way down, is there macroeconomic evidence it really is AI and not anything else?
lovich · 5h ago
That hiring binge was already nullified by the waves of layoffs.
I think everything else you’re saying is happening/has happened but companies hiring less because of anticipated AI productivity gains is also occurring. Like the scuttlebutt I hear about certain faangs requiring managers have 9-10 direct reports now instead of 7
shanemhansen · 5h ago
I don't believe them. I believe that as we exit zero interest rates, companies have to cut back and "we are doing AI" is easier to sell the investors and even their own employees than "yeah we want to spend less on people".
candiddevmike · 5h ago
All that I've seen with AI in the workplace is my coworkers becoming dumber. Asking them technical questions they should know the answer to has turned into "let me LLM that for you" and giving me an overly broad response. It's infuriating. I've also hilariously seen this in meetings, where folks are being asked something and awkwardly fill time while they wait for a response which they try and not read word for word.
zozbot234 · 5h ago
That's totally normal actually. When you ask, you have to tell them "Think through this step by step."
jaredklewis · 1h ago
Salaries are determined by the replacement cost of the employee in question, not their productivity. How does AI increase wages?
ffsm8 · 5h ago
You seem to have the same opinion as kbos87 then, because given your higher productivity, do you honestly think there won't be fewer job openings from your employer going forward?
What you offered as a rebuttal was pretty much his point; you just didn't internalize what the productivity gains mean at the macro level, looking only at the select few who will continue to have a job
zozbot234 · 5h ago
Were there "fewer job openings from our employers" when the software industry shifted en masse from coding in FORTRAN to C/C++, then to Java, and later on to Python and JavaScript? Each of these shifts came with a massive gain in productivity. Why then did the software sector grow, not shrink, at the macro level?
VWWHFSfQ · 5h ago
USA is presently in the midst of a massive offshoring of software jobs which will only continue to accelerate as AI becomes better. These are "white collar" jobs that will never come back.
mjr00 · 5h ago
The USA has "presently" been offshoring software development jobs since around 2001.
I remember, because the same type of people dooming about AI were also telling me, a university student at the time, that I shouldn't get into software development, because salaries would cap out at $50k/year due to competition with low-cost offshore developers in India and Bangladesh.
zozbot234 · 5h ago
> a massive offshoring of software jobs
Where have I heard this before? The drawbacks of offshoring are well known by now and AI does not really mitigate them to any extent.
seadan83 · 5h ago
So many of the drawbacks have been mitigated at this point too. Offshore teams are well organized with their own managers and orgs, the offshore talent pool has really grown in the last few decades with many excellent engineers, and remote/async work has become a first-class citizen. Despite offshoring getting way better, the US IT jobs market has not collapsed.
mixmastamyk · 4h ago
US tech jobs market has collapsed. Although partially self-inflicted wounds. If you don’t think so, you haven’t looked for a job in a while.
AI should improve code quality for these offshore teams. That leaves time zone issues, which may or may not be a problem. If it is, offshore to Latin America.
sbierwagen · 4h ago
>Who gets the nice car and the vacation home?
AI will crash the price of manufactured goods. Since all prices are relative, the price of rivalrous goods will rise. A car will be cheap. A lakeside cabin will be cheap. A cottage in the Hamptons will be expensive. Superbowl tickets will be a billion dollars each.
>meager universal basic income allotment
What does a middle class family spend its money on? You don't need a house within an easy commute of your job, because you won't have one. You don't need a house in a good school district, because there's no point in going to school. No need for the red queen's race of extracurriculars that look good on a college application, or to put money in a "college fund", because college won't exist either.
The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.
bigbadfeline · 2h ago
> AI will crash the price of manufactured goods.
Quite the opposite: persistent inflation has been with us for a long time despite automation. It's not driven by labor cost (even mainstream econ knows it); it's driven by monopolization, which corporate AI facilitates and shifts into overdrive.
> The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.
AI will blow up only what its controllers tell it to; that control is the crux of the problem. AI-driven monopolization allows a few controllers to keep the multitudes in their crosshairs and do whatever they want, with whomever they want. J. Huang will make sure they have the GPUs they need.
> You don't need a house within an easy commute of your job, because you won't have one.
Remote work has been a thing for quite some time, but remote housing is still rare anyway - a house provides access not only to jobs and schools but also to medical care, supply lines, and social interaction. There are places in Montana and the Dakotas that see specialist doctors only once a week or month, because the doctors fly in from places as far away as Florida.
> You don't need a house in a good school district, because there's no point in going to school... and college won't exist either.
What you're describing isn't a house, it's a barn! Can you lactate? Because if you can't, nobody is going to provide you with a stall in the glorious AI barn.
amazingman · 3h ago
The main flaw in your framing is that physical resources are still scarce. All prices are not relative in the sense you're building your projections on.
lend000 · 5h ago
Friends and others who have described the details of their non-technical white collar work to me over the last 15 or so years have typically evoked the unspoken response... "Hmm, I could probably automate about 50-80% of your job in a couple weeks." That's pre-AI. And yet years later, they would still have similar jobs with repetitive computer work.
So I'm quite confident the future will be similar with AI. Yes, in theory, it could already replace perhaps 90% of the white collar work in the economy. But in practice? It will be a slow, decades-long transition as old-school / less tech savvy employers adopt the new processes and technologies.
Junior software engineers trying to break into high-paying tech jobs will be hit the hardest, IMO, since employers are tech savvy, the supply of junior developers is as high as ever, and juniors will simply take too long to add more value than using Claude unless you have a lot of money to burn on training them.
brookst · 6h ago
I’m very skeptical of claims that all things will always do this and never do that, etc.
IMO Jensen and others don’t know where AI is going any more than the rest of us. Your imaginary dystopia is certainly possible, but I would warn against having complete conviction it is the only possible outcome.
danans · 5h ago
> Your imaginary dystopia is certainly possible, but I would warn against having complete conviction it is the only possible outcome.
Absent some form of meaningful redistribution of the economic and power gains that come from AI, the techno-feudalist dystopia becomes a more likely outcome (though not a certain outcome), based on a straightforward extrapolation of the last 40 years of increasing income and wealth inequality. That trend could be arrested (as it was just after WW2), but that probably won't happen by default.
kbos87 · 5h ago
Fair point and I absolutely acknowledge that the future AI will usher in is still very much an unknown. I do think it's worth recognizing that there is one part of the story that is very predictable because it's happened over and over again - the part where some sort of innovation creates new efficiencies and advantages. I think it's fair to debate the extent to which AI will completely disrupt the white collar working class, but to whatever extent it does, I don't think there's much argument about where the benefit will accrue under our current economic system.
hnlmorg · 5h ago
That backlash is already happening. Which is why we are seeing the rise in right wing extremism. People are voting for change. The problem is they’re also voting for the very establishment they’re protesting against.
willis936 · 5h ago
Surveys aren't revealing that AI legislation is a top 3 issue for constituents on either side. It might as well be under the noise floor politically.
Aurornis · 5h ago
AI doesn’t really register on polls of voter priorities.
timewizard · 5h ago
> base their answers to any questions on economic risk on their own best interests and a pretty short view of history.
We used to just call that lying.
> When AI finally does cause massive disruption to white collar work
It has to exist first. Currently you have a chat bot that requires terabytes of copyrighted data to function and shows sublinear increases in performance for exponential increases in cost. These guys genuinely seem to be arguing over a dead end.
> what happens then?
What happened when gasoline engines removed the need to have large pools of farm labor? It turns out people are far more clever than a "chat bot" and entire new economies became invented.
> that we see some form of swift and punitive backlash, politically or otherwise.
Or people just move onto the next thing. It's hilarious how small imaginations become when "AI" is being discussed.
zer00eyz · 5h ago
> and a pretty short view of history
Great, let's see an example!
> To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't a speculation at this point - it's a bold faced lie.
Go back to the 1960s, when automation was new. Those first robotic arms were an expensive, long-running failure for GM. Today there are people with CNC shops in their garage, and the cost of starting that business is in the same price range as the pickup truck you might park next to it. You no longer need accountants or payroll, and you're not spending as much time doing these things yourself; it's all software. You don't need a retail location or wholesale channels: build your website and app, leverage marketplaces and social media. The reality is that it is cheaper and easier than ever to be your own business... and lots of people are figuring this out and thriving.
> Do we really think that most of the American economy is just going to downshift
No I think my fellow Americans are going to scream and cry and hold on to dying ways of life -- See coal miners.
willis936 · 5h ago
I struggle to see how AI innovation falls into the "automate creation of material goods" camp and not the "stratification of wealth" camp.
How about the computers, the people who used to do math, at desks, with slide rules, before we replaced them with machines.
These are all white collar jobs that we replaced with "automation".
Amazon existed before - it was called Sears. It was a catalog, so pictures, printing, and mailing in checks; we replaced all of that with a website and CC processing.
willis936 · 3h ago
I'm saying that AI isn't in the class of "inventions that automate useful work".
imperialdrive · 6h ago
Finally gave Claude a go after trying OpenAI for a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead, at least for my daily flavor, which is PowerShell. No way a double-digit percentage of jobs isn't at stake. This stuff feels like it is really starting to take off. Incredible time to be in tech, but you gotta be clever and work hard every day to stay on the ride. Many folks got comfortable and/or lazy. AI may be a kick in the pants. It is for me anyway.
WXLCKNO · 6h ago
I've been trying every flavor of AI powered development and after trying Claude Code for two days with an API key, I upgraded to the full Max 20x plan.
Cursor, Windsurf, Roo Code / Cline, they're fine but nothing feels as thorough and useful to me as Claude Code.
The Codex CLI from OpenAI is not bad either, there's just something satisfying about the LLM straight up using the CLI
dandaka · 5h ago
Claude Code works surprisingly well and is also cheaper, compared to Windsurf and Cline + Sonnet 4. The rate of errors dropped dramatically for my side projects, from "I have to check most changes" to "I have not written a line".
wellthisisgreat · 6h ago
hey can you explain the appeal of Claude Code vs Cursor?
I know the context window part and Cursor RAG-ing it, but isn't IDE integration a true force multiplier?
Or does Claude Code do something similar with "send to chat" / smart (Cursor's TAB feature) autocomplete etc.?
I fired it up but it seemed like just Claude in terminal with a lot more manual copy-pasting expected?
I tried all the usual suspects in AI-assisted programming, and Cursor's TAB is too good to give up vs Roo / Cline.
I do agree Claude's the best for programming so would love to use it full-featured version.
JoeMattie · 5h ago
I would also like to know this. I've only very briefly looked into Claude code and I may just not understand how I'm supposed to be using it.
I currently use cursor with Claude 4 Sonnet (thinking) in agent mode and it is absolutely crushing it.
Last night i had it refactor some Django / react / vite / Postgres code for me to speed up data loading over websocket and it managed to:
- add binary websocket support via a custom hook
- added missing indexes to the model
- clean up the data structure of the payload
- add messagepack and gzip compression
- document everything it did
- add caching
- write tests
- write and use scripts while doing the optimizations to verify that the approaches it was attempting actually sped up the transfer
All entirely unattended. I just walked away for 10 minutes and had a sandwich.
The best part is that the code it wrote is concise, clean, and even stylistically similar to the existing codebase.
If claude code can improve on that I would love to know what I am missing!
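For anyone curious what the messagepack/gzip step buys, here's a minimal stdlib-only sketch of the compression half (json stands in for messagepack, which is a third-party package, and the payload is invented):

```python
import gzip
import json

# A repetitive, tabular payload like typical websocket data (invented example)
rows = [{"id": i, "price": 9.99, "symbol": "WIDGET"} for i in range(1000)]

raw = json.dumps(rows).encode("utf-8")
compressed = gzip.compress(raw)

# Repeated keys and values compress very well; messagepack's binary
# encoding would also shrink the pre-compression size further.
print(f"{len(raw)} bytes raw -> {len(compressed)} bytes gzipped")
```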
WXLCKNO · 5h ago
My best comparison is that it's like MacBooks/iPhones etc.
Apple builds both the hardware and the software so it feels harmonious and well optimized.
Anthropic builds the model and the tool, and it just works. Sonnet 4 in Cursor is good too, but if you've got the $20 plan you're often crippled on context size (not sure if that's true with Sonnet 4 specifically).
I had actually heard about the OpenAI Codex CLI before Claude Code and had the same thought initially, not understanding the appeal.
Give it a shot and maybe you'll change your mind, I just tried because of the hype and the hype was right for once.
steveklabnik · 5h ago
If you use VS Code, install the plugin, and run Claude Code from the terminal, you get the same experience as Cursor.
cybrjoe · 6h ago
i rewrote a code base i've been tinkering on for the last 2 years or so this weekend. a complete replatform: new tech stack, ui, infra, the whole nine yards. the rewrite took exactly 3 days, referencing the old code base, online documentation, and github issues, all (mostly) without ever leaving claude.
it completely blew my mind. i wrote maybe 10 lines of code manually. it’s going to eliminate jobs.
lazystar · 5h ago
> it’s going to eliminate jobs.
that's the part i'm not sold on yet. it's a tool that allows you to do a year's work in a week - but every dev in every company will be able to use that tool, thus it will increase the productivity of each engineer by an equal amount. that means each company's products will get much better much faster - and it means that any company that cuts head count will be at risk of falling behind it's competitors.
I could see it getting rid of some of the infosec analysts, I guess. Since it'll be easier to keep a codebase up to date, the folks who run a Nessus scan and cut tickets asking teams to upgrade their codebase will have less work available.
bluefirebrand · 5h ago
> it's a tool that allows you to do a year's work in a week
Exaggerations like this really don't help your credibility
lazystar · 5h ago
The amount isn't relevant to the argument; the point is that the amount, whatever it may be, is applied equally to all companies, which means the competitive balance stays the same. It's a great build tool, but you still need builders to use the tool.
owebmaster · 5h ago
How many teams of 10 took a year and USD 5 million to develop a CRUD app that failed? That can be done in one week now.
bluefirebrand · 5h ago
I'm not sure that spending $5 Million to fail every week is actually better than spending $5 million to fail once a year
Brings a crazy new meaning to "fail fast" though
owebmaster · 1h ago
if the success rate is the same, that would be actually good
sorcerer-mar · 6h ago
> I fired it up but it seemed like just Claude in terminal with a lot more manual copy-pasting expected?
You should never have to copy/paste something from Claude Code...?
WXLCKNO · 5h ago
Claude Code has a VS Code (and therefore cursor / windsurf) extension so it will show you changes it wants to make directly in the IDE.
I still use the Cursor auto complete but the rest is all Claude Code.
Even without the extension Claude is directly modifying and creating files so you never have to copy paste.
solumunus · 6h ago
It really is night and day. Most of these tools feel like cool toys; Claude Code is a genuine workhorse. It immediately became completely integral to my workflow. I own a small business, and I can say with absolute confidence this will reduce the number of devs I need to hire going forward.
mirkodrummer · 5h ago
I don't get claims like that. If AI lets me do more and be more productive with fewer people, I could also grow and scale more, which means I can hire more and multiply growth again, because each dev brings more and more. I'm skeptical because I don't see it happening. If anything it's the contrary: more people doing more things, maybe, but not 10x or 100x, otherwise we would see products that used to take 5 years coming out in literally 15 days.
davnicwil · 5h ago
It might be that the value more software can add is already at its limit in any given business - or at least returns will be diminishing. Meaning in those particular businesses the appetite to hire devs might stay flat (or even shrink!) as AI makes existing devs more efficient.
The more interesting question is whether this is true across the economy as a whole. In my view the answer is clearly no. Are we already operating at the limit of more software to add value at the margin? No.
So though any particular existing business might stop hiring or even cut staff, it won't matter if more businesses are created to do yet more things in the world with software. We might even end up in a place where across the economy, more dev jobs exist as a result of more people doing more with software in a kind of snowball effect.
More conservatively, though, you'd at least expect us to just reach equilibrium with current jobs if indeed there is new demand for software to soak up.
GardenLetter27 · 6h ago
I find it's good if you can get a really clean context, but on IRL problems with 100k+ lines of code that's extremely hard to manage.
But note the problems it got wrong are troubling, especially the off-by-one error the first time as that's the sort of thing a human might not be able to validate easily.
Aurornis · 5h ago
> Finally gave Claude a go after trying OpenAI a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead,
I've been avoiding LLM-coding conversations on popular websites because so many people tried it a little bit 3-6 months ago, spotted something that didn't work right, and then wrote it off completely.
Everyone who uses LLM tools knows they're not perfect: they sometimes hallucinate, their solutions to some problems will be laughably bad, and all the other things that come with LLMs.
The difference is some people learn the limits and how to apply them effectively in their development loop. Other people go in looking for the first couple failures and then declare victory over the LLM.
There are also a lot of people frustrated with coworkers using LLMs to produce and submit junk, or angry about the vibe coding glorification they see on LinkedIn, or just feel that their careers are threatened. Taking the contrarian position that LLMs are entirely useless provides some comfort.
Then in the middle, there are those of us who realize their limits and use them to help here and there, but are neither vibe coding nor going full anti-LLM. I suspect that’s where most people will end up, but until then the public conversations on LLMs are rife with people either projecting doomsday scenarios or claiming LLMs are useless hype.
neilfrndes · 5h ago
Yup, Claude Code is the real deal. It's a massive force multiplier for me. I run a small SaaS startup, and I've gotten more done in the last month than in the previous 3 months or more combined. Not just code, but also emails, proposals, planning, legal, etc. I feel like I'm working in slow motion when Claude is down (which unfortunately happens every couple of days). I believe tools like Claude Code will help smaller companies disproportionately.
finlayson_point · 4h ago
How are you using Claude Code for emails? With an MCP connection, or just taking the output from the terminal?
unshavedyak · 6h ago
I purchased Max a week ago and have been using it a lot. A few experiences so far:
- It generates slop in high volume if not carefully managed. It's still working, tested code, but easily illogical. This tool scares me if put in the hands of someone who "just wants it to work".
- It has proven to be a great mental-block remover for me. A tactic I've often used in my career is to build the most obvious, worst implementation I can when I'm stuck, because I find it easier to find flaws in something and iterate than to build a perfect implementation right away. Claude makes it easy to straw-man a build and iterate on it.
- All the low-stakes projects I want to work on but am too tired to after real work have gotten new life. It has updated library usage (Bevy updates were always a slog for me), cleaned up tooling and system configs, etc.
- It seems incapable of seeing the larger picture of why classes of bugs happen. E.g., on a project I'm "vibing" on with Claude Code, it has made a handful of design mistakes that started to cause bugs. It will happily fix individual issues all day rather than re-architect toward a less error-prone API, despite being capable of actually fixing the API woes if prompted to. I'm still toying with the memory though, so perhaps I can get it to reconsider this behavior.
- Robust linting, formatting and testing tools for the language seem necessary. My pet peeve is how many spaces the LLM will add in. Thankfully cargo-fmt clears up most LLM gunk there.
levocardia · 6h ago
Nvidia is also very mad about Anthropic's advocacy for chip export controls, which is not mentioned in this article. Dario has an entire blog post explaining why preventing China from getting Nvidia's top of the line chips is a critical national security issue, and Jensen is, at least by his public statements, furious about the export controls. As it currently stands, Anthropic is winning in terms of what the actual US policy is, but it may not stay that way.
KerrAvon · 6h ago
Jensen is right, though. If we force China to develop their own technology they’ll do that! We don’t have a monopoly on talent or resources. The US can have a stake at the table or nothing at all. The time when we, the US, could do protectionism without shooting ourselves in the foot is well and truly over. The most we can do is inconvenience China in the short term.
orangecat · 5h ago
The most we can do is inconvenience China in the short term.
If scaling holds up enough to make AGI possible in the next 5-10 years, slowing down China by even a few years is extremely valuable.
cedws · 4h ago
Nothing says we’re the good guys like “we’ll do whatever it takes to sandbag our competitors.” Of course, we’re the benevolent ones who will only use this tool for wealth and prosperity.
DaSHacka · 17m ago
Nothing says intelligent geopolitical strategy like conceding your advantage to foreign adversaries for short-term private gains.
nickysielicki · 4h ago
> If we force China to develop their own technology they’ll do that!
They’re going to do that anyway. They already are. The reason that they want to buy these cards in the first place is because developing these accelerators takes time. A lot of time.
sorcerer-mar · 6h ago
Should we also give them the plans for all of our military equipment then, by the same logic?
Neither side is obviously right.
dsign · 6h ago
Why look at five years and say "everything is gonna be fine in five years, thus, everything is gonna be fine and we should keep this AI thing going"?
It's early days and nobody knows how things will go, but to me it looks like humans are going the way of the horse over the next century or so, at least when it comes to jobs. And if our society doesn't change radically, let's remember that the only way most people have to feed and clothe themselves is to sell their labor.
I'm an AI pessimist-pragmatist. If this AI thing gets really bad for wage slaves like me, I would prefer to have enough savings to put AIs to work in some profitable business of mine, or to handle my healthcare when disease strikes.
quonn · 6h ago
> It's early days and nobody knows how things will go, but to me it looks that in the next century or so
How is it early days? AI has been talked about since at least the 50s, neural networks have been a thing since the 80s.
If you are worried about how technology will be in a century, why stop right here? Why not take the state of computers in the 60s and stop there?
Chances are, if the current wave does not achieve strong AI, then there will be another AI winter, and what people will research in 30, 40, or 100 years is not something our current choices can affect.
Therefore the interesting question is what happens short-term not what happens long-term.
dsign · 6h ago
I said that one hundred years from now humans would have likely gone the way of the horse. It will be a finished business, not a thing starting. We may take it with some chill, depending on how we value our species and our descendants and the long human history and our legacy. It's a very individual thing. I'm not chill.
There's no comparing the AI we have today with what we had 5 years ago; there's a huge qualitative difference. The AI we had five years ago was reliable but uncreative. The one we have now is quite unreliable but creative at a level comparable to a person. To me it's just a matter of time before we finish putting the two together, and we have already started. Another AI winter of the sort we had before seems highly unlikely to me.
quonn · 5h ago
I think you severely underestimate what the 8 billion human beings on this planet can and will do. They are not like horses at all. They will not allow themselves to be ruled by, like, 10 billionaires operating an AI, and furthermore, if all work vanishes, we will find other things to do. Just ask a beach bum, a monk, children in school, athletes, students, artists, or the filthy rich. There _are_ ways to spend your time.
You can't just judge humans in terms of economic value, given that the economy is something those humans made for themselves. There can't be an "economy" without humankind.
The only problem is the current state where perhaps _some_ work disappears, creating serious problems for those holding those jobs.
As for being creative, we had GPT-2 more than 5 years ago and it did produce stories.
And the current AI is nothing like a human being in terms of output quality. Not even close. It's laughable, and to me it seems like ChatGPT specifically is getting worse and worse while they put more and more lipstick on the pig, making it more submissive and producing more emojis.
falcor84 · 5h ago
> How is it early days?
When you have exponential growth, it's always early days.
Other than that I'm not clear on what you're saying. What is in your mind the difference between how we should plan for the societal impact of AI in the short vs the long term?
seadan83 · 5h ago
Is it early days of exponential growth? The growth of AI to beat humans at chess and then Go took a long time; that looks like step-function growth. LLMs have limitations and can't keep doubling for much longer. I'd argue they never did double: just a step function, with slow linear growth since.
The crowd claiming exponential growth has been at it for not quite a decade now. I have trouble separating fact from AI-company CEOs shilling to attract VC money. VCs desperately want to solve the expensive-software-engineer problem; you don't get that cash by claiming AI will be 3% better YoY.
quonn · 4h ago
> When you have exponential growth, it's always early days.
Let's take the development of CPUs, where for 30-40 years observable performance actually did grow exponentially (unlike the current AI boom, where it does not).
Was it always early days? Was it early days for computers in 2005?
davemp · 4h ago
> …to me it looks that in the next century or so humans are going the way of the horse, at least when it comes to jobs.
I'm not sure. I think we can extrapolate that repetitive knowledge work will require much less labor. For actual AGI capable of applying rigor, I don't think it's clear that the computational requirements are achievable without a massive breakthrough. And for general-purpose physical tasks, humans are still pretty dang efficient at ~100 watts, and self-maintaining.
fmbb · 6h ago
We have only been selling our labor for a couple of hundred years. Humanity has been around for hundreds of thousands of years.
We will manage. Hey, we can always eat the rich!
dsign · 6h ago
>> we can always eat the rich!
As long as they are not made out of silicon....
falcor84 · 5h ago
And even then, we could perhaps genetically engineer ourselves to metabolize silicon.
skeledrew · 5h ago
What about metabolizing light and carbon dioxide directly instead?
lostmsu · 41m ago
Does not sell as a narrative.
pixl97 · 6h ago
"Dinosaurs have been around 100 million years and they will be around 100 million more" --Dinosaurs 65.1 million years ago.
bearjaws · 6h ago
Pretty bad example, maybe something more like "Horses have been working for thousands of years -- horse in 1927".
falcor84 · 5h ago
How are the two examples different? In both cases the extrapolation was a reasonable bet that ended with a bust, due to a "black swan" event, no? Taleb himself would say that we should always try to include black swans in our predictions, but it's really difficult to do.
jjfoooo4 · 6h ago
The trend of AI executives predicting AI doomsday has been pretty tiresome, and I'm glad it's getting pushback. It's impossible to take it seriously given Anthropic's CEO's incentives: to thrill investors and to shape regulation of competitors.
The biggest long term competitor to Anthropic isn't OpenAI, or Google... it's open source. That's the real target of Amodei's call for regulation.
scuol · 6h ago
Just this morning, I had Claude come up with a C++ solution containing undefined behavior (it assumed iterator stability in a vector that was being modified) that even a mid-level C++ dev could have easily caught just by reading the code.
These AI solutions are great, but I have yet to see any solution that makes me fear for my career. It just seems pretty clear that no LLM actually has a "mental model" of how things work that can avoid the obvious pitfalls amongst the reams of buggy C++ code.
Maybe this is different for JS and Python code?
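To make the bug class above concrete, here is a minimal sketch in Python rather than C++ (function names are hypothetical). In C++, erasing from a vector mid-iteration is undefined behavior; Python makes the same mutate-while-iterating mistake observable deterministically, which is part of why the pitfall feels milder there:

```python
def remove_evens_buggy(nums):
    # Removing items from the list we're iterating over: the iterator's
    # index keeps advancing while removal shifts later items left,
    # so elements get silently skipped.
    for n in nums:
        if n % 2 == 0:
            nums.remove(n)
    return nums

def remove_evens_correct(nums):
    # Build a new list instead of mutating during iteration.
    return [n for n in nums if n % 2 != 0]

print(remove_evens_buggy([2, 2, 3, 4]))   # → [2, 3]: an even survived
print(remove_evens_correct([2, 2, 3, 4])) # → [3]
```

The buggy version neither crashes nor warns, which is exactly the kind of code an LLM (or a reviewer skimming it) can wave through.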
jsrozner · 6h ago
This is exactly right. LLMs do not build appropriate world models. And no...python and JS have similar failure cases.
Still, sometimes it can solve a problem like magic. But since it does not have a world model it is very unreliable, and you need to be able to fall back to real intelligence (i.e., yourself).
unshavedyak · 6h ago
I agree, but I think the thing we often miss in these discussions is how much potential LLMs have to be productivity multipliers.
Yea, they still need to improve a bit, but I suspect there will be a point at which individual devs get 1.5x more work done in aggregate. If everyone is doing that much more work, it has the potential to "take the job" of someone else.
Yea, software is needed more and more, so perhaps it'll just make us that much more dependent on devs and software. But I do think it's important to remember that productivity gains always have the potential to replace devs, and LLMs imo have huge productivity potential.
scuol · 6h ago
Oh I agree it can be a multiplier for sure. I think it's not "AI will take your job" but rather "someone who uses AI well will take your job if you don't learn it".
At least for C++, I've found it does a very mediocre job of suggesting project code (because it tends to drop subtle bugs in all over the place, you basically have to review it as carefully as if you'd written it yourself). But for asking things in Copilot like "Is there any UB in this file?" (not that it's perfect, but sometimes it points something out), and especially for writing tests, I absolutely love it.
unshavedyak · 6h ago
Yea, I'm a big fan of using it with Rust for that same reason. I watch it work through compile errors constantly; I can't imagine what it would be like in JS or Python.
skerit · 5h ago
Sonnet or Opus?
Well, I guess they both can still do that. But I just keep asking it to review all its code, to make sure it works. Eventually it'll catch its errors.
Now this isn't a viable way of working if you're paying token-by-token, but with the Claude Code $200 plan... this thing can work for the entire day, and you will get a benefit from it. But you will have to hold its hand quite a bit.
rangestransform · 6h ago
> assuming iterator stability in a vector that was being modified
This is the crux of an interview question I ask, and you'd be amazed how many experienced C++ devs need heavy hints to get it.
phamilton · 5h ago
(not trolling)
Would that undefined behavior have occurred in idiomatic rust?
Will the ability to use AI to write such a solution correctly be enough motivation to push C++ shops to adopt Rust? (Or perhaps a new language that caters to the blind spots of AI somehow.)
There will absolutely be a tipping point where the potential benefits outweigh the costs of such a migration.
ddaud · 6h ago
I agree. That mental model is precisely why I don’t use LLMs for programming.
pepinator · 6h ago
This is where one notices that LLMs are, after all, just stochastic parrots. If we don't have a reliable way to systematically test their outputs, I don't see many jobs being replaced by AI either.
mistrial9 · 6h ago
> just stochastic parrots
This is flatly false for two reasons. One is that not all LLMs are equal; the models and capacities are quite different, by design. Second, a lot of standardized LLM testing checks for sequences of logic or other "reasoning" capacity. Repeating the stochastic-parrots fallacy is basically proof of not having looked at the battery of standardized tests that are common in LLM development.
pepinator · 5h ago
Even if not all LLMs are equal, almost all of them are based on the same architecture: transformers. So the general idea is always the same: predict the next token. It becomes more obvious when you try to use LLMs to solve things you can't find on the internet (even simple ones).
And the testing does not always work. You can only be sure that 80% of the time it will be really, really correct, and that forces you to check everything. Of course, using LLMs makes you faster at some tasks, and the fact that they can do so much is super impressive, but that's it.
mistrial9 · 6h ago
a difference emerges when an agent can run code and examine the results. Most platforms are very cautious about this extension. Recent MCP does define toolsets and can enable these feedback loops in a way that can be adopted by markets and software ecosystems.
fassssst · 6h ago
It’s another league for JS and python, yes.
zozbot234 · 6h ago
> undefined behavior that even a mid-level C++ dev could have easily caught (assuming iterator stability in a vector that was being modified)
This is not an AI thing, plenty of "mid-level" C++ developers could have made that same mistake. New code should not be written in C++.
(I do wonder how Claude AI does when coding Rust, where at least you can be pretty sure that your code will work once it compiles successfully. Or Safe C++, if that ever becomes a thing.)
sampullman · 5h ago
It does alright with Rust, but you can't assume it works as intended if it compiles successfully. The issue with current AI when solving complex or large scale coding problems is usually not syntax, it's logical issues and poor abstraction. Rust is great, but the borrow checker doesn't protect you from that.
I'm able to use AI for Rust code a lot more now than 6 months ago, but it's still common to have it spit out something decent looking, but not quite there. Sometimes re-prompting fixes all the issues, but it's pretty frustrating when it doesn't.
zozbot234 · 5h ago
That's why I said "work" (i.e. probably will do something and won't crash) not "work as intended". Big difference there!
steveklabnik · 5h ago
I use Claude Code with Rust regularly and am very happy with it.
jeffreygoesto · 6h ago
Go ahead and modify a Python dict while iterating over it, then.
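For the record, CPython detects this particular mutation and raises rather than silently corrupting state, which is the contrast with C++'s undefined behavior. A minimal sketch (hypothetical helper names):

```python
def prune_buggy(d):
    # Deleting keys while iterating: the dict iterator refuses to
    # continue after a resize and raises RuntimeError
    # ("dictionary changed size during iteration").
    for k in d:
        if d[k] is None:
            del d[k]
    return d

def prune_correct(d):
    # Snapshot the keys first; then deletion during the loop is safe.
    for k in list(d):
        if d[k] is None:
            del d[k]
    return d

try:
    prune_buggy({"a": None, "b": 1})
except RuntimeError as e:
    print("caught:", e)

print(prune_correct({"a": None, "b": 1}))  # → {'b': 1}
```

A loud, deterministic failure is much easier for both a human and an LLM-assisted workflow to catch than C++'s silent iterator invalidation.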
bugglebeetle · 6h ago
I haven’t tried with the most recent Claude models, but for the last iteration, Gemini was far better at Rust and what I still use to write anything in it. As an experiment, I even fed it a whole ebook on Rust design patterns and a small script (500 lines) and it was able to refactor to use the correct ones, with some minor back and forth to fix build errors!
rectang · 6h ago
The Anthropic CEO wants companies to lay off workers and pay Anthropic to do the work instead. Is Anthropic capable enough to replace those workers, and will it actually happen? Such pronouncements should be treated with the skepticism you'd apply to any sales pitch.
leetrout · 6h ago
Anthropic warns unemployment is a serious risk. Nvidia has an inflated stock and knows how to play the game so of course they deny any such thing with a view not much past the next quarterly earnings call.
No surprises here.
zozbot234 · 6h ago
Unemployment will be a serious risk indeed once the current AI bubble bursts and we enter a new economic downturn. As it is, AI-generated unemployment is just FUD.
seadan83 · 5h ago
When that bubble bursts, perhaps non-AI tech companies will start receiving VC funds again.
Tepix · 5h ago
The reason AI companies are valued so highly is the inherent promise of replacing humans in the labor force.
akomtu · 4h ago
In the movie The Big Short there is a scene where the old trader scolds two newbies: "Why are you so happy? Don't you understand that if your bet is right, the American economy is going to crash, that millions will lose their jobs?"
Right now we're betting on the S&P 500 going up, which is mostly backed by the belief that machines are going to replace us soon.
swalsh · 5h ago
Sonnet 4 changed my mind on AI safety. It can do A LOT of work unattended, real work like configuring servers. If you give it a goal and a set of tools, it will get the job done. But I got freaked out the first time I used it, since I didn't realize just how good it was at pursuing its goal. I gave it a custom MCP server with limited bash commands, but one of those commands was python (I assumed Anthropic would have trained it not to be so relentless... I was wrong), and with that it gladly used Python to build and execute any command I hadn't given it direct access to. Sonnet 4 is scary smart and efficient.
The only hesitation I have is that it's messy. For example, since it does not have memory (I'm using Claude Desktop), I've seen it duplicate installations/configurations of containers when it failed to find the original installation. The solution is to add language to the prompt instructing it to drop documentation as it goes, and to read the documentation on everything it does.
tonyhart7 · 3h ago
I don't care whether AI eliminates jobs or not, tbh. Maybe, like the internet, it will create jobs; maybe not.
The only thing I'm certain of is that I will take advantage of this so-called "AI revolution". Maybe, just maybe, humans will be replaced by humans + AI tools, for now at least.
jasonsb · 5h ago
So far they're both wrong. Jensen says AI will open more career opportunities in the future, with zero evidence to support his claim. Dario says unemployment will skyrocket, but we're not seeing a spike in unemployment yet. If Dario's claims were valid, we should already be observing at least a slight spike in the unemployment data.
theptip · 5h ago
Anthropic is the most open of the major labs: an RSP, actual model cards, and they publish their red-teaming results.
Cf. OpenAI, who released o3 and didn't publish any model card or safety eval at all, their justification being "it's not GA; only the top paid subscription can use it." That's not how safety works.
mirkodrummer · 5h ago
Am I a skeptic/fool if I think every claim made by these CEOs is just BS, and that they're exaggerating to make their product and self-image look like a real game changer? I still don't see any white-collar jobs lost, except at companies using AI as an excuse for massive layoffs. Plus, throughout history, every game-changing technology has enabled more jobs than before. I don't get the FOMO they're injecting into society, causing all this anxiety just for selfish returns. IMO we should hold Amodei accountable in five years if no jobs are lost but more are created.
Spivak · 6h ago
Hey now, let's not criticize the Anthropic CEO just yet. He made a totally not just pulling a number out of his ass prediction, but a prediction that's nonetheless falsifiable.
> that 50% of all entry-level white-collar jobs could be wiped out by artificial intelligence, causing unemployment to jump to 20% within the next five years
I'm not a betting woman but I feel extremely confident taking the other end of this bet.
tossandthrow · 5h ago
The other end is that we continue a roughly 3% unemployment over the next 5 years.
I am curious to hear why you think that?
seadan83 · 5h ago
False dichotomy; we don't have to continue at 3% for the 20% prediction to be wrong.
So far I've seen jobs lost to tariffs. I've yet to see a job lost to AI. Observations are not evidence, but so far I see nothing showing AI to be a stronger macroeconomic force than, say, recessions, tariffs (trade wars), or actual wars.
syspec · 6h ago
I use Claude Opus 4, and it's pretty good at writing code, for very common languages.
It makes a lot of mistakes in its own code, trivial ones, like creating functions and calling them with the arguments reversed.
The idea that it's going to blackmail me somehow by showing me what /looks/ like an email sounds laughable.
ninetyninenine · 5h ago
The theory that, if AI becomes sufficiently advanced and cheap, company productivity will go up and companies will in fact need to hire more people is completely false.
If AI becomes sufficiently advanced and cheap, productivity will go up and companies will, as a result, need to "hire" more AI.
Let's be clear: if an AI is developed that is equal to a human in intelligence and cheaper than a human to employ, capitalism in general is impacted in a major way.
If that happens, just pray that robotics doesn't become sufficiently advanced that jobs requiring craft or manual work get replaced as well.
Also, to be clear, I'm not predicting whether any of these things will happen. I am simply saying that, hypothetically, if AI progresses in a certain way, then this consequence is inevitable.
croes · 5h ago
What people like Nvidia's CEO (deliberately) don't understand is that lost jobs are lost jobs, but job opportunities aren't necessarily real new jobs.
The same happened with blue collar jobs.
artemsokolov · 6h ago
Strange to see such unfounded criticism directed at Anthropic and Dario. So far, they seem to be the most transparent and responsible in the AI race.
jjfoooo4 · 6h ago
Anthropic's marketing has been successful at positioning them as the responsible alternative to OpenAI, but what do they concretely do differently from any other model provider?
It feels very akin to the Uber vs Lyft situation, two companies with very different perceptions pursuing identical business models
sorcerer-mar · 6h ago
Because this has nothing to do with wanting more transparency or responsibility: Jensen just doesn't want a chilling of demand, export controls, or other regulation on chips.
qoez · 6h ago
Definitely responsible, but transparent? Definitely not. Anyway, he's probably just saying this because it's beneficial for Nvidia's bottom line if it's open.
sorcerer-mar · 6h ago
> If you want things to be done safely and responsibly, you do it in the open
AFAICT this is a complete article of faith. Or insofar as it's true, it's true because doing it in the open allows other stakeholders to criticize and shape its direction – which seems precisely the dialogue that Jensen seems allergic to (makes sense given his incentives, of course)
Historically, increased productivity has almost literally never increased wages or benefits without worker uprisings
Seriously look into the history of labor and automation
https://x.com/LettieriDC/status/1922704140834099370
Capital doesn't pay labor unless it has to.
Average wages = f(labor productivity, demand for labor, labor supply)
The AGI future is that demand for labor crashes.
https://www.epi.org/productivity-pay-gap/
Change 1979q4–2025q1:
Productivity +86.0%
Hourly pay +32.0%
Productivity has grown 2.7x as much as pay
Sustained rates of wage increase _require_ increasing productivity. So no, they didn't explicitly state it; what they effectively said was that productivity only results in increased wages when a worker uprising forces it. That's the logical requirement of the statement.
The pie doesn't have to grow when the pie is already massive, but only 1% of people are taking 90% of the pie
The point was not in the exact number
The point is that there is an answer to "Where could increased wages come from if we don't increase productivity"
We don't have to increase productivity to pay some of the population less and other parts of the population more
I'm not sure you really understood my original post. I never said or meant to imply that wages never grow ever
I was talking about how increases in productivity do not lead to proportionally increased wages
Look, here's an example. Let's say I'm a worker producing Widgets for $20/hour by hand, and I can produce 10 widgets an hour. The company sells widgets for $10 each
In one hour I have produced $100 worth of widgets. The company pays me $20, the company keeps $80
Now the company buys a WidgetMachine. Using the WidgetMachine I can now produce 20 widgets an hour
I now produce $200 worth of Widgets per hour. The company still pays me $20, the company has now earned $180
My productivity has doubled, but my wage hasn't
So next year, inflation calls for a $5 cost-of-living raise. The company increases my wage to $25 and starts charging a couple of cents more per Widget so it can absorb my wage increase without any change to its bottom line
My wage matches inflation, it still "grows" but it is completely divorced from my productivity
More importantly, my wage growing to match inflation doesn't help my buying power at all. If my wage only goes up to exactly match inflation, then all I'm ever doing is treading water. At best I can keep the exact same standard of living and lifestyle
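The arithmetic in the Widget example above can be sketched directly (all numbers are the hypothetical ones from the example, not real data):

```python
WIDGET_PRICE = 10  # the company sells each widget for $10

def hourly_split(widgets_per_hour, wage_per_hour):
    """Return (worker's wage, company's take) for one hour of output."""
    revenue = widgets_per_hour * WIDGET_PRICE
    return wage_per_hour, revenue - wage_per_hour

print(hourly_split(10, 20))  # by hand: (20, 80)
print(hourly_split(20, 20))  # with the WidgetMachine: (20, 180)
```

Same $20 wage either way; doubling output only doubles the company's take.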
Increases in "real wages" should have "real" impact on your life, it should let you have a better life than before
So I showed you an increase in worker wages. If there is no corresponding worker uprising, your original claim is false.
And, again, you keep ignoring that the plot I showed is already inflation adjusted. It is a real increase, not a nominal one.
No it isn't. This is an extremely naive understanding of how any of this works
You even say it yourself, with the silly graph you keep posting
That graph doesn't show productivity; it just shows "real inflation-adjusted wages", like you keep harping on about.
But in general a person is not increasing their productivity year over year. So why would their wage go up to match inflation if, as you say, wages only go up when productivity increases? That doesn't make sense
The reality is that people already provide their employers with vastly more productivity than they are paid for. Their employers are capturing the majority of the value from that productivity. If someone's wage goes up to match inflation, their productivity hasn't increased; inflation has increased the value of their current productivity
You seriously don't seem to understand how any of this works
You keep shifting the goalposts, talking about unrelated things, not addressing the core claims, and not responding to the main refutation of your own original claim. I'm done beating my head against this wall. I hope you have a nice day.
Not in the last 40 years in real income terms: https://www.epi.org/productivity-pay-gap/
This trend is likely to accelerate (productivity skyrocketing, wages stagnant)
We’re on HN. AI makes it easier for you to disrupt your employer.
If by "owning class" you actually mean "all people with agency" then, yeah, I agree.
That applies to you, not to your employer - in your hands, "cheap AI software generation" is, well... cheap. On the other hand, your employer owns patents, copyrights, distribution channels, politicians and connections - those become more valuable as the coding skills get cheaper. The "owning class" are those who own most of the high value items enumerated above.
I know of at least one major company that continually benchmarks market rates and uses those as default raises.
Unsurprisingly, they have an average tenure of 10+ years...
Did the introduction of Python drastically reduce software developer salaries?
First approximation, there are two AI coding futures:
1. AI coding increases software development productivity, decreasing the cost of software, stimulating more demand for the now more efficient development.
2. AI coding increases software development productivity such that the existing labor pool is too large for demand.
I'd hazard (1) in the short term and (2) in the long term.
Employers would rather pay more to hire someone new who doesn't know their business than give a raise to an existing employee who's doing well. They're not going to pay someone more because they're more productive, they'll pay them the same and punish anyone who can't meet the new quota.
[0] https://x.com/LettieriDC/status/1922704140834099370
Capital has earned more than labor since the 90s-2000s (depending on the country).
So if you want to have that discussion, that's fine, but it's totally separate from the original discussion about productivity and wages.
If you look at the graph you posted and carried the slope of the pre-70s trajectory forward, assuming that the 70s-90s slump had not happened, would the graph end in the same place it is currently?
No. Not even close
> So if you want to have that discussion, that's fine, but it's totally separate from the original discussion about productivity and wages.
It's absolutely not a separate discussion; the end result is that the same "real wage" that used to provide a comfortable life is now poverty
You cannot just shrug your shoulders and say "well incomes are matching productivity so this is fine actually"
When you massively increase the supply of labor, you’re going to have downward pressure on wages.
What/who has financed productivity increases? Isn’t it tools and infrastructure etc. for the most part, paid for by asset owners? There are likely exceptions, but big picture.
The endgame isn't more employees or paying them more. It's paying fewer people, or no skilled people when possible.
That's a fairly massive disruption.
I think everything else you're saying is happening/has happened, but companies hiring less because of anticipated AI productivity gains is also occurring - like the scuttlebutt I hear about certain FAANGs requiring managers to have 9-10 direct reports now instead of 7
What you just said as a rebuttal was pretty much his point; you just didn't internalize what the productivity gains mean at the macro level, only looking at the select few who will continue to have a job
I remember, because the same type of people dooming about AI were also telling me, a university student at the time, that I shouldn't get into software development, because salaries would cap out at $50k/year due to competition with low-cost offshore developers in India and Bangladesh.
Where have I heard this before? The drawbacks of offshoring are well known by now and AI does not really mitigate them to any extent.
AI should improve code quality for these offshore teams. That leaves time zone issues, which may or may not be a problem. If it is, offshore to Latin America.
AI will crash the price of manufactured goods. Since all prices are relative, the price of rivalrous goods will rise. A car will be cheap. A lakeside cabin will be cheap. A cottage in the Hamptons will be expensive. Superbowl tickets will be a billion dollars each.
>meager universal basic income allotment
What does a middle class family spend its money on? You don't need a house within an easy commute of your job, because you won't have one. You don't need a house in a good school district, because there's no point in going to school. No need for the red queen's race of extracurriculars that look good on a college application, or to put money in a "college fund", because college won't exist either.
The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.
Quite the opposite, persistent inflation has been with us for a long time despite automation, it's not driven by labor cost (even mainstream econ knows it), it's driven by monopolization which corporate AI facilitates and shifts to overdrive.
> The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.
AI will blow up only what its controllers tell it to, that control is the crux of the problem. The AI-driven monopolization allows few controllers to keep the multitudes in their crosshairs and do whatever they want, with whomever they want, J. Huang will make sure they have the GPUs they need.
> You don't need a house within an easy commute of your job, because you won't have one.
Remote work has been a thing for quite some time, but remote housing is still rare anyway - a house provides access not only to jobs and school but also to medical care, supply lines, and social interaction. There are places in Montana and the Dakotas that see specialist doctors only once a week or month, because the doctors fly in weekly from places as far away as Florida.
> You don't need a house in a good school district, because there's no point in going to school... and college won't exist either.
What you're describing isn't a house, it's a barn! Can you lactate? Because if you can't, nobody is going to provide you with a stall in the glorious AI barn.
So I'm quite confident the future will be similar with AI. Yes, in theory, it could already replace perhaps 90% of the white collar work in the economy. But in practice? It will be a slow, decades-long transition as old-school / less tech savvy employers adopt the new processes and technologies.
Junior software engineers trying to break into high-paying tech jobs will be hit the hardest IMO, since employers are tech savvy, the supply of junior developers is as high as ever, and they simply take too long to add more value than using Claude unless you have a lot of money to burn on training them.
IMO Jensen and others don’t know where AI is going any more than the rest of us. Your imaginary dystopia is certainly possible, but I would warn against having complete conviction it is the only possible outcome.
Absent some form of meaningful redistribution of the economic and power gains that come from AI, the techno-feudalist dystopia becomes a more likely outcome (though not a certain outcome), based on a straightforward extrapolation of the last 40 years of increasing income and wealth inequality. That trend could be arrested (as it was just after WW2), but that probably won't happen by default.
We used to just call that lying.
> When AI finally does cause massive disruption to white collar work
It has to exist first. Currently you have a chat bot that requires terabytes of copyrighted data to function and has sublinear increases in performance for exponential increases in cost. These guys genuinely seem to be arguing over a dead end.
> what happens then?
What happened when gasoline engines removed the need for large pools of farm labor? It turns out people are far more clever than a "chat bot", and entire new economies were invented.
> that we see some form of swift and punitive backlash, politically or otherwise.
Or people just move onto the next thing. It's hilarious how small imaginations become when "AI" is being discussed.
Great, let's see an example!
> To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't a speculation at this point - it's a bold faced lie.
Except that innovation has led to more jobs, new industries, more prosperity, and fewer working hours. The stark example of this: you aren't a farmer: https://modernsurvivalblog.com/systemic-risk/98-percent-of-a...
Your shirts aren't a week's or a month's income: https://www.bookandsword.com/2017/12/09/how-much-did-a-shirt...
Go back to the 1960s, when automation was new. It was an expensive, long-running failure for GM to put in those first robotic arms. Today there are people who have CNC shops in their garage; the cost of starting that business is in the same price range as the pickup truck you might put in there. You no longer need accountants or payroll, and you're not spending as much time doing these things yourself - it's all software. You don't need a retail location or wholesale channels: build your website and app, leverage marketplaces and social media. The reality is that it is cheaper and easier than ever to be your own business... and lots of people are figuring this out and thriving.
> Do we really think that most of the American economy is just going to downshift
No, I think my fellow Americans are going to scream and cry and hold on to dying ways of life - see coal miners.
There isn't a line of unemployed draftsmen out there begging for change because we invented AutoCAD: https://azon.com/2023/02/16/rare-historical-photos/
What happened to all the switchboard operators?
How about the computers, the people who used to do math at desks with slide rules, before we replaced them with machines?
These are all white collar jobs that we replaced with "automation".
Amazon existed before - it was called Sears. It was a catalog, so pictures, printing, and mailing in checks; we replaced all of that with a website and CC processing.
Cursor, Windsurf, Roo Code / Cline, they're fine but nothing feels as thorough and useful to me as Claude Code.
The Codex CLI from OpenAI is not bad either, there's just something satisfying about the LLM straight up using the CLI
I know about the context window part and Cursor RAG-ing it, but isn't IDE integration a true force multiplier?
Or does Claude Code do something similar with "send to chat" / smart (Cursor's TAB feature) autocomplete etc.?
I fired it up but it seemed like just Claude in terminal with a lot more manual copy-pasting expected?
I tried all the usual suspects in AI-assisted programming, and Cursor's TAB is too good to give up vs Roo / Cline.
I do agree Claude's the best for programming so would love to use it full-featured version.
I currently use cursor with Claude 4 Sonnet (thinking) in agent mode and it is absolutely crushing it.
Last night i had it refactor some Django / react / vite / Postgres code for me to speed up data loading over websocket and it managed to:
- add binary websocket support via a custom hook
- added missing indexes to the model
- clean up the data structure of the payload
- add messagepack and gzip compression
- document everything it did
- add caching
- write tests
- write and use scripts while doing the optimizations to verify that the approaches it was attempting actually sped up the transfer
All entirely unattended. I just walked away for 10 minutes and had a sandwich.
The best part is that the code it wrote is concise, clean, and even stylistically similar to the existing codebase.
If claude code can improve on that I would love to know what I am missing!
Apple builds both the hardware and the software so it feels harmonious and well optimized.
Anthropic builds the model and the tool, and it just works. Sonnet 4 in Cursor is good too, but if you've got the $20 plan you're often crippled on context size (not sure if that's true with Sonnet 4 specifically).
I had actually heard about the OpenAI Codex CLI before Claude Code and had the same thought initially, not understanding the appeal.
Give it a shot and maybe you'll change your mind, I just tried because of the hype and the hype was right for once.
It completely blew my mind. I wrote maybe 10 lines of code manually. It's going to eliminate jobs.
That's the part I'm not sold on yet. It's a tool that allows you to do a year's work in a week - but every dev in every company will be able to use that tool, thus it will increase the productivity of each engineer by an equal amount. That means each company's products will get much better much faster - and it means that any company that cuts head count will be at risk of falling behind its competitors.
I could see it getting rid of some of the infosec analysts, I guess. Since it'll be easier to keep a codebase up to date, the folks who run a Nessus scan and cut tickets asking teams to upgrade their codebase will have less work available.
Exaggerations like this really don't help your credibility
Brings a crazy new meaning to "fail fast" though
You should never have to copy/paste something from Claude Code...?
I still use the Cursor auto complete but the rest is all Claude Code.
Even without the extension Claude is directly modifying and creating files so you never have to copy paste.
The more interesting question is whether this is true across the economy as a whole. In my view the answer is clearly no. Are we already operating at the limit of more software to add value at the margin? No.
So though any particular existing business might stop hiring or even cut staff, it won't matter if more businesses are created to do yet more things in the world with software. We might even end up in a place where across the economy, more dev jobs exist as a result of more people doing more with software in a kind of snowball effect.
More conservatively, though, you'd at least expect us to just reach equilibrium with current jobs if indeed there is new demand for software to soak up.
It absolutely aced an old take-home test I had though - https://jamesmcm.github.io/blog/claude-data-engineer/
But note the problems it got wrong are troubling, especially the off-by-one error the first time as that's the sort of thing a human might not be able to validate easily.
I’ve been avoiding LLM-coding conversations on popular websites because so many people tried it a little bit 3-6 months ago, spot something that doesn’t work right, and then write it off completely.
Everyone who uses LLM tools knows they're not perfect: they sometimes hallucinate, their solutions to some problems will be laughably bad, and all the other things that come with LLMs.
The difference is some people learn the limits and how to apply them effectively in their development loop. Other people go in looking for the first couple failures and then declare victory over the LLM.
There are also a lot of people frustrated with coworkers using LLMs to produce and submit junk, or angry about the vibe coding glorification they see on LinkedIn, or just feel that their careers are threatened. Taking the contrarian position that LLMs are entirely useless provides some comfort.
Then in the middle, there are those of us who realize their limits and use them to help here and there, but are neither vibe coding nor going full anti-LLM. I suspect that’s where most people will end up, but until then the public conversations on LLMs are rife with people either projecting doomsday scenarios or claiming LLMs are useless hype.
- It generates slop in high volume if not carefully managed. It's still working, tested code, but it's easily illogical. This tool scares me if put in the hands of someone who "just wants it to work".
- It has proven to be a great mental-block remover for me. A tactic I've often used in my career is to build the most obvious, worst implementation I can if I'm stuck, because I find it easier to find flaws in something and iterate than to build a perfect implementation right away. Claude makes it easy to straw-man a build and iterate on it.
- All the low-stakes projects I want to work on but am too tired to after real work have gotten new life. It's updated library usage (Bevy updates were always a slog for me), cleaned up tooling and system configs, etc.
- It seems incapable of seeing the larger picture of why classes of bugs happen. E.g., on a project I'm "vibing" on with Claude Code, it made a handful of design decisions that started to cause bugs. It will happily try to fix individual issues all day rather than re-architect to make a less error-prone API, despite being capable of actually fixing the API woes if prompted to. I'm still toying with the memory though, so perhaps I can get it to reconsider this behavior.
- Robust linting, formatting and testing tools for the language seem necessary. My pet peeve is how many spaces the LLM will add in. Thankfully cargo-fmt clears up most LLM gunk there.
If scaling holds up enough to make AGI possible in the next 5-10 years, slowing down China by even a few years is extremely valuable.
They’re going to do that anyway. They already are. The reason that they want to buy these cards in the first place is because developing these accelerators takes time. A lot of time.
Neither side is obviously right.
It's early days and nobody knows how things will go, but to me it looks like in the next century or so humans are going the way of the horse, at least when it comes to jobs. And if our society doesn't change radically, let's remember that the only way most people have of feeding and clothing themselves is to sell their labor.
I'm an AI pessimist-pragmatist. If the thing with AI gets really bad for wage slaves like me, I would prefer to have enough savings to put AIs to work in some profitable business of mine, or to do my healthcare when disease strikes.
How is it early days? AI has been talked about since at least the 50s, neural networks have been a thing since the 80s.
If you are worried about how technology will be in a century, why stop right here? Why not take the state of computers in the 60s and stop there?
Chances are, if the current wave does not achieve strong AI, then there will be another AI winter, and what people research in 30, 40, or 100 years is not something our current choices can affect.
Therefore the interesting question is what happens short-term not what happens long-term.
There's no comparing the AI we have today with what we had 5 years ago. There's a huge qualitative difference: the AI we had five years ago was reliable but uncreative. The one we have now is quite a bit unreliable but creative at a level comparable with a person. To me, it's just a matter of time before we finish putting the two things together--and we have already started. Another AI winter of the sort we had before seems to me highly unlikely.
You can't just judge humans in terms of economic value, given that the economy is something those humans made for themselves. It's not like there can be an "economy" without humankind.
The only problem is the current state where perhaps _some_ work disappears, creating serious problems for those holding those jobs.
As for being creative, we had GPT2 more than 5 years ago and it did produce stories.
And the current AI is nothing like a human being in terms of the quality of its output. Not even close. It's laughable, and to me it seems like ChatGPT specifically is getting worse and worse; they put more and more lipstick on the pig by making it more submissive and having it produce more emojis.
When you have exponential growth, it's always early days.
Other than that I'm not clear on what you're saying. What is in your mind the difference between how we should plan for the societal impact of AI in the short vs the long term?
The crowd claiming exponential growth have been at it for not quite a decade now. I have trouble separating fact from CEOs of AI companies shilling to attract that VC money. VCs desperately want to solve the expensive software engineer problem, you don't get that cash by claiming AI will be 3% better YoY
Let's take the development of CPUs, where for 30-40 years observable performance actually did grow exponentially (unlike the current AI boom, where it does not).
Was it always early days? Was it early days for computers in 2005?
I'm not sure. I think we can extrapolate that repetitive knowledge work will require much less labor. For actual AGI capable of applying rigor, I don't think it's clear that the computational requirements are achievable without a massive breakthrough. Also, for general-purpose physical tasks, humans are still pretty dang efficient at ~100 watts and self-maintaining.
We will manage. Hey, we can always eat the rich!
As long as they are not made out of silicon....
The biggest long term competitor to Anthropic isn't OpenAI, or Google... it's open source. That's the real target of Amodei's call for regulation.
These AI solutions are great, but I have yet to see any solution that makes me fear for my career. It just seems pretty clear that no LLM actually has a "mental model" of how things work that can avoid the obvious pitfalls amongst the reams of buggy C++ code.
Maybe this is different for JS and Python code?
Still, sometimes it can solve a problem like magic. But since it does not have a world model it is very unreliable, and you need to be able to fall back to real intelligence (i.e., yourself).
Yea, they still need to improve a bit - but I suspect there will be a point at which individual devs could be getting 1.5x more work done in aggregate. So if everyone is doing that much more work, it has the potential to "take the job" of someone else.
Yea, software is needed more and more, so perhaps it'll just make us that much more dependent on devs and software. But I do think it's important to remember that productivity always has the potential to replace devs, and LLMs IMO have huge potential for productivity.
At least for C++, I've found it does very mediocre at suggesting project code (because it has the tendency to drop in subtle bugs all over the place, you basically have to carefully review it instead of just writing it yourself), but asking things in copilot like "Is there any UB in this file?" (not that it will be perfect, but sometimes it'll point something out) or especially writing tests, I absolutely love it.
Now this isn't a viable way of working if you're paying for this token-by-token, but with the Claude Code $200 plan ... this thing can work for the entire day, and you will get a benefit from it. But you will have to hold its hand quite a bit.
This is the crux of an interview question I ask, and you'd be amazed how many experienced C++ devs require heavy hints to get it
Will the ability to use AI to write such a solution correctly be enough motivation to push C++ shops to adopt rust? (Or perhaps a new language that caters to the blindspots of AI somehow)
There will absolutely be a tipping point where the potential benefits outweigh the costs of such a migration.
This is flatly false for two reasons. One is that all LLMs are not equal; the models and capacities are quite different, by design. Second, a large amount of standardized LLM testing evaluates sequence-of-logic or other "reasoning" capacity. Stating the stochastic-parrots fallacy is basically proof of not looking at the battery of standardized tests that are common in LLM development.
And the testing does not always work. You can only be sure that 80% of the time it will be really, really correct, and that forces you to check everything. Of course, using LLMs makes you faster at some tasks, and the fact that they can do so much is super impressive, but that's it.
This is not an AI thing, plenty of "mid-level" C++ developers could have made that same mistake. New code should not be written in C++.
(I do wonder how Claude AI does when coding Rust, where at least you can be pretty sure that your code will work once it compiles successfully. Or Safe C++, if that ever becomes a thing.)
I'm able to use AI for Rust code a lot more now than 6 months ago, but it's still common to have it spit out something decent looking, but not quite there. Sometimes re-prompting fixes all the issues, but it's pretty frustrating when it doesn't.
No surprises here.
Right now we're betting on the S&P 500 going up, which is mostly backed by the belief that machines are going to replace us soon.
The only thing I'm certain of is that I would take advantage of this so-called "AI revolution". Maybe, just maybe, humans get replaced with humans + AI tools, for now at least.
C.f. openAI who released o3 and didn’t publish any model card or safety eval at all, their justification being “it’s not GA, only the top paid subscription can use it”. That’s not how safety works.
> that 50% of all entry-level white-collar jobs could be wiped out by artificial intelligence, causing unemployment to jump to 20% within the next five years
I'm not a betting woman but I feel extremely confident taking the other end of this bet.
I am curious to hear why you think that?
So far, I've seen jobs lost to tariffs. I've yet to see a job lost to AI. Observations are not evidence, but so far there is no evidence I see that shows AI to be a stronger macro economic force than say recessions, tariffs (trade wars) or actual wars.
It makes a lot of mistakes in its own code, trivial ones, like creating functions and calling them with the arguments reversed.
The idea that it's going to blackmail me somehow by showing me what /looks/ like an email, sounds laughable.
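The "arguments reversed" failure mode is easy to illustrate; this is a generic sketch with a hypothetical `transfer` function, not code any model actually produced. The call with swapped arguments still runs, it's just silently wrong:

```python
def transfer(amount, account_id):
    """Record a withdrawal of `amount` from `account_id` (hypothetical API)."""
    return f"withdraw {amount:.2f} from account {account_id}"

print(transfer(50.0, 12345))  # withdraw 50.00 from account 12345
print(transfer(12345, 50.0))  # arguments reversed: runs fine, nonsense result
```

Keyword arguments (`transfer(amount=50.0, account_id=12345)`) or a type checker would catch this class of mistake; positional calls won't.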
If AI becomes sufficiently advanced and cheap, productivity will go up, and companies will as a result need to hire more AI.
Let's be clear: if an AI is developed that is equal to a human in intelligence and cheaper than a human to employ, capitalism in general is impacted in a major way.
If that happens, just pray that robotics doesn't become sufficiently advanced that jobs requiring crafting or manual work get replaced as well.
Also to be clear I’m not advocating or saying whether or not any of these things will happen. I am simply saying that hypothetically if AI progresses in a certain way then the following consequence is inevitable.
The same happened with blue collar jobs.
It feels very akin to the Uber vs Lyft situation, two companies with very different perceptions pursuing identical business models
AFAICT this is a complete article of faith. Or insofar as it's true, it's true because doing it in the open allows other stakeholders to criticize and shape its direction - which is precisely the dialogue Jensen seems allergic to (makes sense given his incentives, of course).