I keep hearing this, and then I try using Windsurf with the latest Anthropic models to perform simple refactorings. There are often goofy mistakes, like hallucinations that mistakenly remove imports. Will check back next year…
lwo32k · 1d ago
Well, these are nonlinear emergence engines. No two will come up with the same solution the more ambiguous, dynamic, or complex problems get.
Just because your AI gives you a solution doesn't mean my AI will provide the same. Now scale that fact up to different teams and different firms. How are things going to work? Why will it reduce the number of people? Just like Jurassic Park (or working on Linux), once strange unpredictable things start happening, you need more and more people running around to clean things up. They don't know what the fuck they are doing, or how to do it well, because that's the nature of complex problems. So things spiral.
Most people are just defaulting to "oh, AI will have one answer to everything, and we will all agree to that solution." This will never happen, and therefore the predictions will all break.
andy99 · 1d ago
The market can remain irrational longer than you can remain solvent
It continues two trends dating back well before ChatGPT.
1. Companies' declining investment in "developing people to senior". It's been declining for decades.
2. Self-education becoming key to "finishing" your education. College can't reasonably provide a complete education given the complexity and pace of software. New frameworks, CI, git — all sorts of things aren't in curricula. University starts with Von Neumann, bubble sort, and big-O and has to proceed forward from there. Luckily today's kids have infinitely-patient LLMs! And insane amounts of content from youtubers! And infinite distribution! It's easier than ever to put your work out there and have it be seen. Kids can apply to jobs with links to their portfolios and their open-source work and show their chops that way, meaning companies need to lean less on interviews.
> "AI isn't stealing job categories outright — it's absorbing the lowest-skill tasks," Doshay said. "That shifts the burden to universities, boot camps, and candidates to level up faster."
Taking on interns and junior devs used to be part of the deal for tech companies that wanted the best talent. Now they can just look at kids' public portfolios and pluck the best ones.
It's a brave new world built on public personas where everyone is their own CEO and it's not for everyone. That's where the race comes in.
Once companies realize only so many AIs can be overseen by one person, they'll hire anyone and everyone who can babysit AIs to produce what the company needs - the more AI you can babysit the more valuable you are. Companies will become desperate for talent to put the compute to work. Jevons Paradox at full tilt.
Young guns WILL succeed in this environment. They'll learn on their own time and dime. It was never easier thanks to LLMs with infinite patience and youtubers providing deep explanations.
But it's not entry-level software engineering. It's seat-of-your-pants learning and moving fast, running and gunning to get a thing built. Quality guardrails like PRs, code review, and tests are more important than ever — installing and instilling them is where you, as a senior dev, can shine.
grues-dinner · 1d ago
> Jevons Paradox at full tilt.
Quite so. Unless AI can do literally everything, at which point all prognostication is worthless, you can get more done with more people. The entry-level jobs just might not be the same jobs that they are today. Which is actually not really much skin off the nose of the entrant, as they are by definition not locked into a skillset anyway.
There is an absolutely ridiculous amount of work to be done, always. You can 10x, 100x everyone with a pulse and we still only find more work uncovered. Companies shed staff when the money runs out; the work will never run out.
Even if every CRUD webapp in the world collapses to one bored guy overseeing a fleet of 50000 AIs, as a global society we have fucking loads of work to do. We have PWh of energy capacity to design and install, a million km of high speed rail, hundreds of thousands of square kilometres of hospitals and schools, literally billions of homes to renovate from shacks to houses, forests to replant, moon bases, asteroid mines, generation ships, it goes on and on. If we want it to.
The only way work as a concept runs out is if we as a species decide we want it to (e.g. by giving all the money, aka human time rental credits, to billionaires and refusing to pay for anything they don't personally want), everyone dies, is a slave in the mines, ascends or otherwise doesn't require work to sustain, or if AGI actually happens and happens at scale.
"AI will take the jobs" is a shareholder-fellating euphemism for "we want AI to do enough work to sustain the people who own the AIs without reference to the rest of humanity". Which they were already doing quite handily anyway. Whether they can keep doing it in safety in perpetuity remains to be seen.
dinfinity · 1d ago
All the remaining jobs you mention involve physical labor. Those obviously cannot be taken by AI alone. They will, however, be taken by robots.
The question is when. Robotics is in worse shape than AI when compared to humans, but the industry is now rapidly integrating modern AI into both the process and the actual products. It's hard to say, but there might be a 'ChatGPT moment' for robotics soon.
grues-dinner · 1d ago
They don't only require physical work. Only recently and quite briefly has any business been even potentially entirely non-physical. Maybe that's the aberration.
If you replace all people at all levels with robots (and the robots and their tasks don't require people to design, maintain or direct) then the "in safety" aspect of the final paragraph will probably become the important part.
If you can, say, design and build and run a railway network entirely automatically, then we're well into singularity territory and there's literally no point guessing. The result could equally be fully automated luxury space communism or all humans fed into the algae digestors.
dinfinity · 1d ago
You previously said this: "They only way work [for humans] as a concept runs out is if we as a species decide we want it to"
Do you now agree that that is false? Because I think I've shown it to be false. There is nothing unique about humans that precludes robots/AI/inorganics from doing every job a human can better and cheaper at some point in the future.
grues-dinner · 1d ago
Read the rest of the sentence. Pretty sure that would be covered by the last clause. Also the second sentence of the whole thing.
Also it can't infinitely be cheaper because money is fundamentally based on human time. If you can do everything without humans then the concept of money is fatally wounded. What that would mean is anyone's guess.
dinfinity · 22h ago
Fair point. My apologies, I did not read your initial post carefully enough.
returnInfinity · 1d ago
He is playing the CEO 101 game, trying to convince investors to pump more money into his AI company.
bicepjai · 16h ago
What would you expect from the CEO of a company that creates AI models, who figured out their models were better at coding? These were the people who wanted alignment :)
If AI lab leaders and researchers really believe this, and continue working on the products that they believe will make it happen, does that make them psychopaths?
I use Claude daily, and I am not saying this as a hot take, or burn. I have been genuinely thinking about how this works in one's brain.
immibis · 16h ago
All successful corporate leaders are psychopaths.
consumer451 · 14h ago
I can agree with that to some extent; however, this is an entirely new scale and level of immediacy, isn't it?
One comparison might be an oil exec who understands the science of what happens when you add CO2 to the atmosphere, yet continues working to produce more of it. However, that harm sits on a somewhat distant horizon, and humans are terrible at reasoning about distant consequences.
AI lab folks are the ones who think that the damage from their products will occur in just 1 to 5 years, and yet they still continue.
My other point here is that it's not just the C-suite, it's also the researchers.
bigyabai · 1d ago
In other words, it's getting really good at breaking your software!
firefoxd · 1d ago
Now that we all have stoves at home, all restaurants will be going out of business any day now.
pupppet · 1d ago
Better analogy is we now all have cooks in our homes, and yes the restaurants will go out of business.
000ooo000 · 1d ago
>"We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,"
Thank you for being so honest, Mr CEO. What a great guy. /s