I think a lot of people are feeling this, not just engineers. Engineering already has a high entry bar, and now with AI moving so fast, it’s honestly overwhelming. Feels like there's no way to avoid it—we end up embracing it, actively or passively, whether we like it or not.
Personally, I think this whole shift might actually be better for young people early in their careers. They can change direction more easily, and in a weird way, AI kind of puts everyone back at the starting line. Stuff that used to take years to master, you can now learn—or get help with—in minutes. I’ve had interns solve problems way faster and smarter than me just because they knew how to use AI tools better. That’s been a real wake-up call.
I’m doing my best to treat AI as a teammate. It really does help with productivity. But the world never stops changing, and that’s exhausting sometimes. I try to keep learning, keep practicing, and keep adapting. And yeah, if I ever lose my job because of AI... ok, fine, I’ll have to change and try to get another, maybe different, job. Easy to say, harder to do—but that mindset at least helps me not spiral.
Arainach · 5h ago
This is a rational take, which is why it is wrong.
I agree that we're not all about to be replaced with AGI, that there is still a need for junior engineers, and with several of these other points.
None of those arguments matter if the C-suite doesn't care and keeps doing crazy layoffs so they can buy more GPUs, and among that group, following the cargo cult is way more common than intelligence and rationality.
simonw · 5h ago
The C suite may make some dumb short-term mistakes - just like twenty years ago when they tried to outsource all software development to cheaper countries - but they'll either course-correct when they spot those mistakes or be out-competed by other, smarter companies.
reactordev · 5h ago
The market will always respond. When it happened before, younger, smarter, more efficient competitors emerged.
jimbob45 · 5h ago
HR can remain irrational longer than I can remain solvent.
lloeki · 1h ago
It's not just the C-suite.
I keep seeing and talking to people who are completely high, guzzling Kool-Aid straight from a pressure tap.
The level of confirmation bias is absolutely unhinged: "I told $AGENTICFOOLLM to do that and WOW, this PR is 90% AI!", _ignoring any previous failed attempts_ and the 10% of human work needed for what is ultimately a 10-line diff, and handwaving any counterpoint away as "you're holding it wrong".
I'm essentially seeing two groups emerging: those all aboard the high-velocity hype train, and those who are curious about the technology but absolutely appalled by the former's irrational attitude, which is tantamount to completely winging it: diving head first off a cliff, deeply convinced that of course the tide will rise at just the right time for skulls not to be cracked open upon collision with very big, sharp rocks.
zombot · 28m ago
I bet there are also a good number of paid shills among those. If you look at how much money goes into the tech, it's not too far-fetched to invest a tiny fraction of that in human "agents" to push the narrative. The endless repetition will produce some believers, even if the story is a complete lie.
guappa · 12m ago
Not necessarily paid shills, but it's a good way to get a promotion, and by the time it's revealed that it doesn't actually work, they'll already have gotten the promotion and will jump on the next hype to get the next one.
globalnode · 7m ago
Don't forget the useful idiots.
DanielHB · 56m ago
It takes years, if not decades, but that kind of attitude eventually leads to IBM.
paulddraper · 3h ago
Well that’s the nice thing about capitalism.
If it doesn’t work, it eventually dies.
zombot · 25m ago
The belief in a rational market approaches religion.
Arainach · 3h ago
And the not nice thing about capitalism is that it can keep not working longer than most of us can pay for rent and food.
guappa · 12m ago
Capitalism failed in 1929 but its corpse is still here…
billy99k · 6h ago
"What I see is a future where AI handles the grunt work, freeing us up to focus on the truly human part of creation: the next step, the novel idea, the new invention. If we don’t have to spend five years of our early careers doing repetitive tasks, that isn’t a threat, it’s a massive opportunity. It’s an acceleration of our potential."
The problem is that only a fraction of software developers have the ability/skills to work on the hard problems. A much larger percentage will only be able to work on things like CRUD apps and grunt work.
When these jobs are eliminated, many developers will be out of work.
anilgulecha · 5h ago
IMO, if this ends up occurring, it will follow how other practitioner roles have evolved. Take medicine and doctors, for example: there's a high bar to reach before you can perform specialist surgery, and until then you hone your skills and practice. Compensation-wise it isn't lucrative from the get-go, but can become so once you reach the specialist level. At that point they are liable for the work done, hence such roles are typically licensed (CAs, lawyers, etc.).
So if I have to make a few 5 year predictions:
1. The key human engineering skill will be taking liability for the output produced by agents. You will be responsible for the signoff, and any good/bad that comes from it.
2. Some engineering roles/areas will become a "licensed" play - the way Canada is for other engineering disciplines.
3. Compensation at the entry level will be lower, and the expected time to ramp up to a productive level will be longer.
4. Careers will meaningfully start only at the senior level. At the junior level, your focus is to learn enough of the fundamentals, patterns, and design principles so you reach the senior level and become a net positive in the team.
chii · 5h ago
> At the junior level, your focus is to learn enough of the fundamentals, patterns, and design principles so you reach the senior level and become a net positive in the team.
I suspect that juniors will not want to do this, because the end result of becoming a senior is not lucrative enough given the pace of LLM advancement.
turbofreak · 5h ago
Canada?? They can’t build a subway station in 5 years, never mind restructure a massive job sector like this lmao
csomar · 3h ago
This. There are millions of software developers. There are hundreds (thousands?) working on the cutting edge of things. Think of popular open source projects used by the masses: usually there is one developer, or a handful, doing most of the work. If the other side of the puzzle (integration) becomes automated, 95% or more of software developers are redundant.
chii · 5h ago
> A much larger percentage will only be able to work on things like CRUD apps and grunt work.
which is lower valued, and thus it is economically "correct" to replace them when an appropriate automation method is found.
> When these jobs are eliminated, many developers will be out of work.
like in the past, those developers will need to either move up the value chain, or move out into a different career. This has happened before, and will continue to happen until the day humanity reaches some sort of singularity or post-scarcity.
bugglebeetle · 4h ago
> which is lower valued, and thus it is economically "correct" to have them be replaced when an appropriate automation method is found.
Textbook example of why this “economic” form of analysis is naive, stupid, and short-sighted (as is almost always the case).
AI models will never obtain the ability to completely replace “low value work” (as that is not perfectly definable in advance for all cases), so in a scenario where all engineers devoted to these tasks are let go, what you would end up with is engineers higher up the value chain being tasked with resolving the problems that result when the AI fails, underperforms, or the assessment of a task’s value turns out to be incorrect. The cumulative effect would be a massive drain on the effectiveness of said engineers, as they’re now forced to context-switch from creative, high-value work to troubleshooting opaque AI code slop.
chii · 3h ago
> AI models will never obtain the ability to completely replace “low value work”
If this were truly the case, then companies that _didn't_ replace the "low value work" with AI and continued to use people would outperform and outcompete. My prediction is entirely predicated on the ability of the LLM to do the replacement.
A second alternative would be that the cost of the "sloppy" AI code is externalized, which is not ideal, but if past history has any bearing, externalization of costs is rampant in corporate profit struggles.
123yawaworht456 · 5h ago
90% of real grunt work is "stitch an extra appendage to this unholy abomination of God" or "*points at the screen* look at this shit, figure out why it's happening and fix it". LLMs are more or less useless for either of those things.
zeta0134 · 3h ago
An LLM can certainly try to consume an extremely poorly specified bug report, half in English and half in the user's native language, then consume the entire codebase and guess what the devil they're actually referring to. My guess is that humans are better at this, and I mostly speak from experience on two separate support floors that tried to add AI to that flow. It fails, miserably.
It's not really possible for an LLM to pick up on the hidden complexities of the app that real users and developers internalize through practice. Almost by definition, they're not documented! Users "just know" and thus there is no training data to ingest. I'd estimate though that the vast, vast majority of bugs I've been assigned originate from one of these vague reports by a client paying us enough to care.
csomar · 3h ago
LLMs are very good at fixing bugs. What they lack is broader context and tools to navigate the codebase/interface. That's why Claude Code was such a breakthrough despite using the very same models you use in the chat.
Or this, about MS pushing for more internal AI usage, and the resulting hilarity (or tragedy, depending on whether you are the one having to read the resulting code)? https://news.ycombinator.com/item?id=44404067
danielbln · 3h ago
I disagree, agentic LLMs are incredibly useful for both.
asimpletune · 13m ago
Software engineers should unionize. We’re not real engineers until we have professional standards that are enforced (as well as liability for what we make). Virtually every other profession has some mandatory license or other mechanism to bring credibility and standards to their fields. AI just further emphasizes the need for such an institution.
smallstepforman · 5h ago
Google search is giving us a taste of AI-summarised results, and for simple things it's passable, but ask a serious question and you get good-looking garbage. Yes, I know it's early days, but looking at the current output quality we have nothing to worry about. It will be used like calculators, to offload some menial repetitive tasks which can be automated, but the next gen of developers will still be tasked with solving complex problems.
999900000999 · 5h ago
Case in point.
I purchased a small electronic device from Japan recently. The language can be changed to English, but it’s a bit of a process.
Google’s AI just copied a Reddit comment that itself was probably AI generated. It made up instructions that are completely wrong.
I had to find an actual human-written web page.
The problem is that with more and more AI slop, fewer humans will be motivated to write. AGI, at least the first generation, is going to be an extremely confident entity that refuses to be wrong.
Eventually someone is going to lose a billion dollars trusting it, and it’ll set AI back by 20 years. The biggest issue with AI is that it must be right.
It’s impossible for anything to always be right since it’s impossible to know everything.
erentz · 4h ago
Google AI the other day told me that tinnitus is listed as a potential adverse reaction of Saphnelo.
Only it damn well isn’t. Anywhere. Not even patient reports.
The problem with AI is: if it’s right 90% of the time, but I have to do all the work anyway to make sure this isn’t one of the 10% of times it’s extremely confidently wrong, what use is it to me?
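To make that concrete, here's a back-of-the-envelope sketch with made-up numbers (the 0.8 verification cost is an assumption, not a measurement):

    // Expected cost of using the AI, assuming I verify everything
    // and redo the work myself whenever it's wrong.
    let pCorrect = 0.9   // assume the AI is right 90% of the time
    let tTask = 1.0      // time units to just do the task myself
    let tVerify = 0.8    // assumed time to check an answer I didn't produce
    let expectedWithAI = tVerify + (1.0 - pCorrect) * tTask
    print(expectedWithAI)  // ~0.9 -- barely cheaper than doing it myself

If verification is nearly as expensive as doing the task, the accuracy barely matters.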
cwalv · 3h ago
This problem has already gotten so much better. In my experience it's no longer 10% of the time (I'd estimate more like 1%). In the end, you still need to use judgement; maybe it doesn't matter if it's wrong, and maybe it really does. It could be citing papers, and even then you don't know if the results are reproducible.
simonw · 5h ago
Google's AI Overviews are the single worst AI-driven experience in widespread use today. It's a mistake to draw conclusions about how good AI has gotten based on that.
Have you tried search in ChatGPT with o4-mini or o3?
cwalv · 3h ago
I don't use it that much, but I have noticed these AI overviews still seem to hallucinate a lot compared to others. Meanwhile I hear that Gemini is catching up to or surpassing other models, so I wonder if I'm just unlucky (or just haven't used it enough to see how much better it is).
input_sh · 2h ago
One is their state-of-the-art model; the other is the best model they can run at the scale and speed people expect from a search engine.
RaftPeople · 4h ago
I had a couple of great examples of getting the exact opposite answer depending on how I worded my question; now I can't trust any of the answers.
cwalv · 3h ago
Maybe the answer to your question was subjective?
csomar · 3h ago
Google Search AI is the worst, and considering that AI is not a good alternative to search (the models are compressed data), I am not sure why Google has decided to use LLMs to answer questions.
readthenotes1 · 5h ago
Try asking Perplexity for a real taste. It works far better than Google's--good enough to make searching fun again.
Try coding with Claude instead of Gemini. Those who do tell me it is well beyond.
Look at the recent US jobs reports--the draw down was mostly in professional services. According to the chief economist of ADP "Though layoffs continue to be rare, a hesitancy to hire and a reluctance to replace departing workers led to job losses last month."
Of course, correlation is not causation, but every white-collar person I talk with is saying AI is making them far more productive. It's not a leap to figure that management sees that as well.
jbs789 · 4h ago
As this started with career advice, two points: the world values certain things (usually making people’s lives easier; one version of that is building useful tools), and the individual has a set of interests and skills. Finding the intersection of those should help guide you toward a career that the world values and that interests you (if that’s important to you).
I’m looking at this as the landscape of the tools is changing, so personally anyway, I just keep looking for ways to use those tools to solve problems / make peoples lives easier (by offering new products etc). It’s an enabler rather than a threat, once the perspective is broadened, I feel.
octo888 · 2h ago
I wasted 2 days using Cursor with the 3.7 thinking models to implement a relatively straightforward task (somewhat malicious compliance with being highly encouraged to use the tools, and because a coworker insisted I use their overly complex mini framework instead of just plain code for this task)
It went round in circles doubting itself. When I challenged it, it would redo or undo too much of its work instead of focussing on what I was asking about. It seemed desperate to please me, backing down whenever I challenged it.
I.e. depending on it turned me into a junior coder: overly complex code, jumping to code without enough thought, etc.
Yes yes I'm holding it wrong
The code these tools create seems to create a mess that is then also "solved" by AI. Huge sprawling messes. Funny, that. God help us if we need to clean up these messes should AI die down.
simonw · 5h ago
Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
chii · 5h ago
More like quitting blacksmithing due to the invention of CNC.
globular-toast · 2h ago
CNC produces something no blacksmith could. The same cannot be said of LLMs.
disambiguation · 4h ago
> We are all, collectively, building a giant, shared brain.
"Shared" as in shareholder?
"Shared" as in piracy?
"Shared" as in monthly subscription?
"Shared" as in sharing the wealth when you lose your job to AI?
zkmon · 2h ago
It's not just the grunt work going to AI. Actually, it is the opposite: the grunt work of dealing with the mess is the only thing left for humans. Think of legacy systems, archaic processes, meaningless workflows, dealing with other teams and people, the politics of work, negotiations, team calls, the history of technical issues... AI is a new recruit with massive general abilities, but no clue about the dingy layers of the corporate mess.
Animats · 5h ago
It's hard to see manual coding of web sites remaining a thing for much longer.
globular-toast · 2h ago
Most people younger than 30 probably haven't "manually coded a website" anyway.
lawgimenez · 5h ago
I just recently inherited a vibe-coded iOS project (fully created with ChatGPT), and it's not even close to working. This is annoying.
grogenaut · 4h ago
I helped my brother a few times with iOS apps he had folks from Upwork build. They also didn't work beyond the initial version. They always wanted to rebuild the app from scratch for each new requirement.
lawgimenez · 4h ago
Everything's a mess, a thousand commits and PRs from ChatGPT. Code won't compile because Codex, it seems, doesn't understand static variables.
And now this error: "The project is damaged and cannot be opened due to a parse error. Examine the project file for invalid edits or unresolved source control conflicts."
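For flavor, the kind of thing it kept getting wrong looked roughly like this (a minimal made-up example, not the actual project code):

    class SessionStore {
        static var shared = SessionStore()  // type-level (static) state
        var token: String? = nil            // per-instance state

        static func clear() {
            // The sort of static-variable confusion described above:
            // referencing instance state from a static context won't compile.
            // token = nil  // error: instance member 'token' cannot be used on type 'SessionStore'
            shared.token = nil  // works: go through an instance
        }
    }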
Apocryphon · 4h ago
AI is a red herring. The current malaise in the industry is caused by interest rates. Certainly, AI has the potential to disrupt things further later down the line, but the present has already been shaken enough by the whiplash between pandemic binging and post-ZIRP purging.
mirsadm · 3h ago
Initially I felt anxiety about AI and its potential to destroy my career. However, now I am much more concerned about what will happen after 5 or 10 years of widespread AI slop, when humans lose motivation to produce content and all we're left with is AI continually regenerating the same rubbish over and over again. I suspect there'll be a shortage of programmers in the future as people become hesitant to start a career in programming.