The uncertain future of coding careers and why I'm still hopeful

51 points by mooreds | 7/3/2025, 1:45:27 AM | jonmagic.com | 86 comments

Comments (86)

3D30497420 · 2h ago
> We are all, collectively, building a giant, shared brain. Every time you write a blog post, answer a question on a forum, or push a project to GitHub, you are contributing to this massive corpus of human knowledge.

I would be more excited about this concept if this shared brain wasn't owned by rich, powerful people who will most certainly deploy this power in ways that benefit themselves to the detriment of everyone else.

Arainach · 9h ago
This is a rational take, which is why it is wrong.

I agree that we're not about to be all replaced with AGI, that there is still a need for junior eng, and with several of these points.

None of those arguments matter if the C-suite doesn't care and keeps doing crazy layoffs so they can buy more GPUs; among that group, intelligence and rationality are far less common than cargo-cult following.

simonw · 9h ago
The C suite may make some dumb short-term mistakes - just like twenty years ago when they tried to outsource all software development to cheaper countries - but they'll either course-correct when they spot those mistakes or will be out-competed by other, smarter companies.
reactordev · 8h ago
The market will always respond. When it happened before, younger - smarter - more efficient competitors emerged.
jimbob45 · 8h ago
HR can remain irrational longer than I can remain solvent.
shikon7 · 46m ago
Until someone makes the idea fashionable to replace HR with AI.
owebmaster · 45m ago
Then it will be forever irrational.
fakedang · 1h ago
Except it makes sense in a lot of recent off-shoring cases. While there are still your usual stupid companies outsourcing to the Indian WITCH firms, a lot of midsize companies are also doing all right with nearshoring, either to LatAm or even just out of SF/Seattle/NY (in the case of US companies), and to Eastern Europe (in the case of EU companies). Even in biotech, for instance, preliminary research has been successfully outsourced either to academia or to Indian and Chinese CROs. The latter have done exceptionally well, even innovating new products on their own and licensing them to the US market. The myth of onsite worker efficiency was practically shattered with Covid.

Where the Western worker can really only shine is in advancing on the tech forefront and helping keep that tech within Western borders. Stuff like defence or cybersecurity or some advanced new product/tool development. Anything else is free to be arbitraged away.

DanielHB · 4h ago
It takes years, if not decades, but that kind of attitude eventually leads to IBM.
lloeki · 5h ago
It's not just the C-suite.

I keep seeing and talking to people who are completely high, guzzling Kool-Aid straight from a pressure tap.

The level of confirmation bias is absolutely unhinged: "I told $AGENTICFOOLLM to do that and WOW, this PR is 90% AI!", _ignoring any previous failed attempt_ and the 10% of humanness needed for what is ultimately a 10-line diff, and handwaving any counterpoint away as "you're holding it wrong".

I'm essentially seeing two groups emerging: those all aboard the high-velocity hype train, and those who are curious about the technology but absolutely appalled by the former's irrational attitude, which is tantamount to completely winging it: diving head-first off a cliff, deeply convinced that of course the tide will rise at just the right time for skulls not to be cracked open on the very big, sharp rocks below.

CuriouslyC · 20m ago
Sounds like you're pretty peeved about this. Is your manager on you because your peers are out-delivering you?

You realize that even if you're on board the agentic coding hype train, you don't have to just blithely paste tickets to the agent and let it rip. You can have a long conversation about design and architecture, have the agent write its own implementation plan based on that, then watch it tick items off the list, reviewing code for each change as it's completed while the agent forges ahead. A lot of the time you don't even need the long conversation: just write a readme that very clearly outlines what you're doing and how to do it, and the agent will read it and do just fine.
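For instance, such a readme might look something like this (a purely hypothetical sketch; every name in it is made up):

    # Notes for the agent
    Goal: add CSV export to the invoices page.
    Conventions:
    - TypeScript, strict mode; no new dependencies without asking.
    - New code lives under src/export/; follow the existing service patterns.
    - Add tests under tests/export/ and run `npm test` before declaring done.
    Plan:
    1. Add an ExportService with a toCsv(invoices) function.
    2. Wire a download button into InvoicesPage.
    3. Cover edge cases: empty list, unicode fields, embedded commas.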

pjc50 · 28m ago
> this PR is 90% AI!", _ignoring any previous failed attempt_

Someone pointed out that AI has a gacha / intermittent-reinforcement effect to it, which explains a lot of its power to capture minds. Sometimes it produces great results. The entire history of industrial process control is about trying to always produce good, or at least identical, results. These two are incompatible.

AI is right in the sweet spot of winning the demo but failing in mass production, and I think people are too optimistic about this being fixable.

ben_w · 4m ago
You may well be correct that people are too optimistic about it being fixable, but I would say the "why" is a little more complex than that.

It is also possible for artisans to produce functioning goods in low volumes, which is why "hand made" has often been seen as a positive quality sign rather than a negative one, and AI can do "artisanal" (same root as "artificial"). With mass production we want all three of "good", "cheap", and "fast", and we've been able to get all three.

But even the fact that AI can do artisanal work is not sufficient for AI to succeed, and likely still won't be sufficient even when AI is near the best you can hire in a human, because AI is a memetic monoculture: while the errors of many human artisans can be somewhat uncorrelated, and therefore the failure modes different, if everything comes from a single AI mind it all fails in the same general kind of way. Likewise the quotation about science advancing one funeral at a time: diversity beats even very smart geniuses.

This is what I think can't be fixed with current approaches, because current approaches need too much information for it to be possible to give each one (even from different providers) a statistically distinct training set to learn from.
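To make the monoculture point concrete, here is a toy simulation (a sketch with arbitrary numbers; numpy assumed): five reviewers who each independently miss 20% of defects let almost nothing through, while five copies of one mind whose misses are perfectly correlated let 20% through.

    import numpy as np

    rng = np.random.default_rng(0)
    trials, reviewers, p_miss = 100_000, 5, 0.2

    # Independent reviewers: a defect escapes only if ALL of them miss it.
    indep = (rng.random((trials, reviewers)) < p_miss).all(axis=1).mean()

    # Monoculture: every "reviewer" is the same mind, so one miss is five misses.
    mono = (rng.random(trials) < p_miss).mean()

    print(f"escape rate, independent ensemble: {indep:.5f}")  # ~0.2^5 = 0.00032
    print(f"escape rate, monoculture:          {mono:.5f}")   # ~0.2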

zombot · 4h ago
I bet there are also a good number of paid shills among them. If you look at how much money goes into the tech, it's not too far-fetched to invest a tiny fraction of that in human "agents" to push the narrative. The endless repetition will produce some believers, even if the story is a complete lie.
guappa · 3h ago
Not necessarily paid shills, but it's a good way to get a promotion; by the time it's revealed that it doesn't actually work, they'll already have the promotion and will jump on the next hype to get the next one.
globalnode · 3h ago
don't forget the useful idiots.
zombot · 55m ago
I don't exclude them and they have already been covered by my parent comment.
bluefirebrand · 1h ago
In tech we seem to have a tendency to believe people are rational and behave predictably, but the truth is there are still a ton of useful idiots, even in software and other "high intelligence" areas
paulddraper · 7h ago
Well that’s the nice thing about capitalism.

If it doesn’t work, it eventually dies.

hermitcrab · 3h ago
>If it doesn’t work, it eventually dies.

Unless it is a bank, in which case it gets bailed out by taxpayers.

hnthrow90348765 · 1h ago
Cannot overstate how absolutely enraging it is every time "basic economics", "supply and demand", or "basic capitalism" comes up as a thought-terminating response, despite everything government does to keep failing stuff going
AbstractH24 · 53m ago
Death and rot are inevitable, it’s the friends we make along the way that matter most.
zombot · 4h ago
The belief in a rational market approaches religion.
rightbyte · 3h ago
The dissolution of the Soviet Union gave the neoliberals hubris or something, and as they dwindle, the ones left seem to get crazier.
easyThrowaway · 20m ago
Mark Fisher's "Capitalist Realism" proved this wrong in 2009.
Arainach · 7h ago
And the not nice thing about capitalism is that it can keep not working longer than most of us can pay for rent and food.
asimovfan · 3h ago
Can you please define "does not work"? And give some examples of things that died because they didn't work?
the_real_cher · 3h ago
Unless it's a monopoly.
guappa · 3h ago
Capitalism failed in 1929 but its corpse is still here…
zombot · 3h ago
Depends on who you ask. Today's billionaires would disagree.
billy99k · 10h ago
"What I see is a future where AI handles the grunt work, freeing us up to focus on the truly human part of creation: the next step, the novel idea, the new invention. If we don’t have to spend five years of our early careers doing repetitive tasks, that isn’t a threat, it’s a massive opportunity. It’s an acceleration of our potential."

The problem is that only a fraction of software developers have the ability/skills to work on the hard problems. A much larger percentage will only be able to work on things like CRUD apps and grunt work.

When these jobs are eliminated, many developers will be out of work.

anilgulecha · 9h ago
IMO, if this ends up occurring, it will follow how other practitioner roles have evolved. Take medicine and doctors, for example: there's a high bar to reach before you can perform specialist surgery; until then you hone your skills and practice. Compensation-wise it isn't lucrative from the get-go, but it can become so once you reach the specialist level. At that point you are liable for the work done, hence such roles are typically licensed (CAs, lawyers, etc.).

So if I have to make a few 5 year predictions:

1. A key human engineering skill will be taking liability for the output produced by agents. You will be responsible for the signoff, and for any good/bad that comes from it.

2. Some engineering roles/areas will become a "licensed" play, the way Canada is for other engineering disciplines.

3. Compensation at the entry level will be lower, and the expected time to ramp up to a productive level will be longer.

4. Careers will meaningfully start only at the senior level. At the junior level, your focus is to learn enough of the fundamentals, patterns and design principles so you reach the senior level and be a net positive in the team.

AbstractH24 · 50m ago
There's a sweet spot to be in right now: early enough in your career to have gotten in the door, but young enough to be malleable and open to new ways.
chii · 8h ago
> At the junior level, your focus is to learn enough of the fundamentals, patterns and design principles so you reach the senior level and be a net positive in the team.

I suspect that juniors will not want to do this, because the end result of becoming a senior is not lucrative enough given the pace of LLM advancement.

turbofreak · 9h ago
Canada?? They can't build a subway station in 5 years, never mind restructuring a massive job sector like this lmao
bluefirebrand · 1h ago
You're being downvoted but you're actually spot on

Calgary was supposed to have a new train line; planning has been in motion for years. Back in 2019 when I bought my house, the new train was supposed to open in 2025. As far as I know, not a single piece of track has been laid yet. So... yes

csomar · 7h ago
This. There are millions of software developers, but only hundreds (thousands?) working on the cutting edge of things. Think of popular open source projects used by the masses: usually there is one developer, or a handful, doing most of the work. If the other side of the puzzle (integration) becomes automated, 95% or more of software developers are redundant.
chii · 9h ago
> A much larger percentage will only be able to work on things like CRUD apps and grunt work.

which is lower valued, and thus it is economically "correct" to have them be replaced when an appropriate automation method is found.

> When these jobs are eliminated, many developers will be out of work.

Like in the past, those developers will need to either move up the value chain or move out into a different career. This has happened before, and will continue to happen until the day humanity reaches some sort of singularity or post-scarcity.

AbstractH24 · 45m ago
> This has happened before

When do you think this is most comparable to?

Were there this many software developers around the peak of the dot-com era? I'm 35, so I'm old enough to remember the excess of the time and all the weird products, but nothing about how the sausage was being made.

mooreds · 3m ago
> When do you think this is most comparable to?

I don't think it has happened to the software industry. We've been growing as computers have become more and more capable since the 1960s.

One analogy is farm employment cratering in the first half of the 20th century.

It went from 11.77M in 1910 to 5.88M in 1950 in the USA[0], even as the population went from 92M to 151M[1]. That's going from about 13% of the population to 4%.

Another is travel agents, which went from 100k workers in the USA in 2001 to 30k in 2020[2] (though there has been a rebound since).

0: https://ourworldindata.org/employment-in-agriculture#all-cha...

1: https://www.census.gov/data/tables/time-series/dec/popchange...

2: https://fred.stlouisfed.org/series/LEU0254497900A

bugglebeetle · 7h ago
> which is lower valued, and thus it is economically "correct" to have them be replaced when an appropriate automation method is found.

Textbook example of why this "economic" form of analysis is naive, stupid, and short-sighted (as is almost always the case).

AI models will never obtain the ability to completely replace “low value work” (as that is not perfectly definable, nor able to be defined in advance for all cases), so in a scenario where all engineers devoted to these tasks are let go, what you would end up with is engineers higher up the value chain being tasked with resolving the problems that result when the AI fails, underperforms, or the assessment of a task's value turns out to be wrong. The cumulative effect would be a massive drain on the effectiveness of those engineers, as they're now forced to context-switch from creative, high-value work to troubleshooting opaque AI code slop.

chii · 7h ago
> AI models will never obtain the ability to completely replace “low value work”

If this were truly the case, then companies that _didn't_ replace the "low value work" with AI and continued to use people would outperform and outcompete. My prediction is entirely predicated on the LLM's ability to do the replacement.

A second alternative would be that the cost of the "sloppy" AI code is externalized, which is not ideal, but if past history is any guide, externalization of costs is rampant in corporate profit struggles.

blackbear_ · 2h ago
> AI models will never obtain the ability to completely replace “low value work"

Maybe, but this is not the meaning of replacement in this context, and it need not hold for the "economic" reasoning to work.

All that matters is that AI makes developers more productive, as measured by the number of (CRUD or whatever) apps per developer per unit of time. If this is true, then the current supply of apps can be provided by fewer developers, meaning that some of the current developers aren't needed anymore to sustain the current production level. In this scenario lower-level engineers still exist; they are just able to do more in the same time by using AI.
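A toy version of that arithmetic (made-up numbers, purely illustrative):

    # 100 developers each shipping 4 apps/year = 400 apps/year of supply.
    current_devs, apps_per_dev = 100, 4
    output = current_devs * apps_per_dev

    # Suppose AI makes each developer 25% more productive.
    gain = 0.25
    devs_needed = output / (apps_per_dev * (1 + gain))
    print(devs_needed)  # 80.0: the same 400 apps now need only 80 developers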

scarface_74 · 52m ago
I'm working on a system now where the hard part is the integration, user experience, business requirements, solving XY problems, etc.

Honestly this is true for most problems and has been forever for most developers.

But between all of the different Lambdas (yes, we had to use Lambda to do business logic; it's Amazon Connect), there are probably around 2000 lines of relatively straightforward code and around 1000 lines of infrastructure as code.

I didn't write a single line of code. I started by giving ChatGPT the diagram and very much did "vibe coding" across the code, the database design, and the IaC.

Otherwise I would have had to have at least one junior dev do the grunt work for me.

Before the gatekeeping starts: I started programming in assembly in 1986 and had an official title of "software engineer" or something similar until 2020.
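For readers who haven't used Amazon Connect: it invokes a Lambda with contact data and flow parameters wrapped under "Details", and expects a flat key/value map back (nested objects are not allowed). A minimal Python sketch of the kind of straightforward glue code being described; the lookup logic and field names here are invented, not from the comment:

    def lookup_customer(phone_number):
        # Stand-in for a real lookup (DynamoDB, a CRM, etc.).
        return {"name": "Jane Doe"} if phone_number.endswith("0123") else None

    def lambda_handler(event, context):
        contact = event["Details"]["ContactData"]
        params = event["Details"]["Parameters"]  # set in the contact flow

        caller = contact["CustomerEndpoint"]["Address"]  # e.g. "+15550100123"
        customer = lookup_customer(caller)

        # Connect requires a flat map of simple key/value pairs in the response.
        return {
            "isKnownCaller": "true" if customer else "false",
            "greetingName": customer["name"] if customer else "valued customer",
        }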

123yawaworht456 · 9h ago
90% of real grunt work is "stitch an extra appendage onto this unholy abomination of God" or "*points at the screen* look at this shit, figure out why it's happening, and fix it". LLMs are more or less useless for either of those things.
zeta0134 · 7h ago
An LLM can certainly try to consume an extremely poorly specified bug report, written half in English and half in the user's native language, then consume the entire codebase and guess what the devil they're actually referring to. My guess is that humans are better at this, and I mostly speak from experience on two separate support floors that tried to add AI to that flow. It fails, miserably.

It's not really possible for an LLM to pick up on the hidden complexities of the app that real users and developers internalize through practice. Almost by definition, they're not documented! Users "just know", and thus there is no training data to ingest. I'd estimate, though, that the vast, vast majority of bugs I've been assigned originate from one of these vague reports by a client paying us enough to care.

danielbln · 6h ago
I disagree, agentic LLMs are incredibly useful for both.
csomar · 7h ago
LLMs are very good at fixing bugs. On their own they lack broader context and tools to navigate the codebase/interface; that's why Claude Code was such a breakthrough despite using the very same models you run in the chat.
xarope · 5h ago
Very good at fixing bugs like these, which required a senior developer to address and prompt? https://news.ycombinator.com/item?id=44159166

Or this, about MS pushing for more internal AI usage and the resulting hilarity (or tragedy, depending on whether you are the one having to read the resulting code)? https://news.ycombinator.com/item?id=44404067

Fendy · 3h ago
I think a lot of people are feeling this, not just engineers. Engineering already has a high entry bar, and now with AI moving so fast, it's honestly overwhelming. It feels like there's no way to avoid it: we embrace it, actively or passively, whether we like it or not.

Personally, I think this whole shift might actually be better for young people early in their careers. They can change direction more easily, and in a weird way, AI kind of puts everyone back at the starting line. Stuff that used to take years to master can now be learned, or gotten help with, in minutes. I've had interns solve problems way faster and smarter than me just because they knew how to use AI tools better. That's been a real wake-up call.

I'm doing my best to treat AI as a teammate. It really does help with productivity. But the world never stops changing, and that's exhausting sometimes. I try to keep learning, keep practicing, and keep adapting. And yeah, if I ever lose my job because of AI... ok, fine, I'll have to change and try getting another, maybe different, job. Easy to say, harder to do, but that mindset at least helps me not spiral.

skydhash · 3h ago
The true value of experience comes in two forms: knowing when not to do something, and knowing the shortest path to the result you need.

More often, the result of juniors using LLMs is a Frankenstein ball of mud that is close to its implosion point. Individual features are part of a system and are judged by how they contribute to its goal, not by how correct they are individually.

smallstepforman · 9h ago
Google search is giving us a taste of AI-summarised results, and for simple things it's passable, but ask a serious question and you get good-looking garbage. Yes, I know it's early days, but looking at the current output quality, we have nothing to worry about. It will be used like calculators are: to offload menial repetitive tasks which can be automated. But the next gen of developers will still be tasked to solve complex problems.
999900000999 · 8h ago
Case in point.

I purchased a small electronic device from Japan recently. The language can be changed to English, but it’s a bit of a process.

Google's AI just copied a Reddit comment that was itself probably AI-generated. It made up instructions that are completely wrong.

I had to find an actual human written web page.

The problem is that with more and more AI slop, fewer humans will be motivated to write. AGI, at least the first generation, is going to be an extremely confident entity that refuses to be wrong.

Eventually someone is going to lose a billion dollars trusting it, and it'll set AI back by 20 years. The biggest issue with AI is that it must be right.

It’s impossible for anything to always be right since it’s impossible to know everything.

erentz · 7h ago
Google AI the other day told me that tinnitus is listed as a potential adverse reaction to Saphnelo.

Only it damn well isn't. Anywhere. Not even in patient reports.

The problem with AI is: if it's right 90% of the time, but I have to do all the work anyway to make sure this isn't one of the 10% of times it's extremely confidently wrong, what use is it to me?

cwalv · 7h ago
This problem has already gotten so much better. In my experience it's no longer 10% of the time (I'd estimate more like 1%). In the end, you still need to use judgement; maybe it doesn't matter if it's wrong, and maybe it really does. It could be citing papers, and even then you don't know if the results are reproducible.
bluefirebrand · 1h ago
Has it actually become that much better or have you let your standards and judgment lapse because you want to trust it?

How would you even know to evaluate that?

simonw · 9h ago
Google's AI Overviews is the single worst AI-driven experience in widespread use today. It's a mistake to draw conclusions about how good AI has gotten based on that.

Have you tried search in ChatGPT with o4-mini or o3?

cwalv · 7h ago
I don't use it that much, but I have noticed these AI Overviews still seem to hallucinate a lot compared to others. Meanwhile I hear that Gemini is catching up to or surpassing other models, so I wonder if I'm just unlucky (or just haven't used it enough to see how much better it is)
input_sh · 6h ago
One is their state-of-the-art model; the other is the best model they can run at the scale and speed people expect from a search engine.
RaftPeople · 8h ago
I had a couple of great examples of getting the exact opposite answer depending on how I worded my question; now I can't trust any of the answers.
cwalv · 7h ago
Maybe the answer to your question was subjective?
csomar · 6h ago
Google Search AI is the worst, and considering that AI is not a good alternative to search (the models are compressed data), I am not sure why Google decided to use LLMs to answer questions.
readthenotes1 · 9h ago
Try asking Perplexity for a real taste. It works far better than Google's--good enough to make searching fun again.

Try coding with Claude instead of Gemini. Those who do tell me it is well beyond Gemini.

Look at the recent US jobs reports--the drawdown was mostly in professional services. According to the chief economist of ADP, "Though layoffs continue to be rare, a hesitancy to hire and a reluctance to replace departing workers led to job losses last month."

Of course, correlation is not causation, but every white-collar person I talk with is saying AI is making them far more productive. It's not a leap to figure that management sees that as well.

disambiguation · 8h ago
> We are all, collectively, building a giant, shared brain.

"Shared" as in shareholder?

"Shared" as in piracy?

"Shared" as in monthly subscription?

"Shared" as in sharing the wealth when you lose your job to AI?

simonw · 9h ago
Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
chii · 9h ago
More like quitting blacksmithing due to the invention of CNC.
globular-toast · 6h ago
CNC produces something no blacksmith could. The same cannot be said of LLMs.
bluefirebrand · 1h ago
I agree, but it doesn't matter what we think

Execs are convinced LLMs produce something that no programmer ever could. Or at least the same thing, faster than any programmer ever could

So they will drive us off a cliff chasing it

imtringued · 2h ago
Forging and subtractive manufacturing are different techniques.
zkmon · 6h ago
It's not just the grunt work going to AI. Actually, it's the opposite: grunt work, dealing with the mess, is the only thing left for humans. Think of legacy systems, archaic processes, meaningless workflows, dealing with other teams and people, the politics of work, negotiations, team calls, the history of technical issues... AI is a new recruit with massive general abilities but no clue about the dingy layers of corporate mess.
jbs789 · 7h ago
As this started with career advice, two points: the world values certain things (usually making people's lives easier; one version of that is building useful tools), and the individual has a set of interests and skills. Finding the intersection of those should help guide you toward a career that the world values and that interests you (if that's important to you).

I'm looking at this as the landscape of tools changing, so personally, I just keep looking for ways to use those tools to solve problems / make people's lives easier (by offering new products, etc.). It's an enabler rather than a threat, once the perspective is broadened, I feel.

octo888 · 6h ago
I wasted 2 days using Cursor with the 3.7 thinking models to implement a relatively straightforward task (somewhat malicious compliance with being highly encouraged to use the tools, and because a coworker insisted I use their overly complex mini-framework instead of just plain code for this task).

It went round in circles doubting itself. When I challenged it, it would redo or undo too much of its work instead of focusing on what I was asking about. It seemed desperate to please me, backing down whenever I challenged it.

I.e., depending on it turned me into a junior coder: overly complex code, jumping to code without enough thought, etc.

Yes yes I'm holding it wrong

The code these tools create seems to make a mess that is then also "solved" by AI. Huge sprawling messes. Funny, that. God help us if we need to clean up these messes when AI dies down

bluefirebrand · 1h ago
If it helps any, this is not just you. I'm having the same kinds of problems, both with being pressured to use the tools and with being completely run around in circles when I try
asimpletune · 4h ago
Software engineers should unionize. We’re not real engineers until we have professional standards that are enforced (as well as liability for what we make). Virtually every other profession has some mandatory license or other mechanism to bring credibility and standards to their fields. AI just further emphasizes the need for such an institution.
lawgimenez · 9h ago
I just recently inherited a vibe-coded iOS project (fully created with ChatGPT), and it's not even close to working. This is annoying.
grogenaut · 8h ago
I helped my brother a few times with iOS apps he had folks from Upwork build. They also didn't work beyond the initial version. They always wanted to rebuild the app from scratch for each new requirement.
lawgimenez · 8h ago
Everything's a mess: a thousand commits and PRs from ChatGPT. The code won't compile because Codex, it seems, doesn't understand static variables.

And now this error: "The project is damaged and cannot be opened due to a parse error. Examine the project file for invalid edits or unresolved source control conflicts."

GardenLetter27 · 1h ago
I feel exactly the same way.

I just wish they were forced to publish open weights in return for using copyrighted materials in the "brain".

Animats · 9h ago
It's hard to see manual coding of web sites remaining a thing for much longer.
sublinear · 2h ago
If you're talking about personal websites I think that ship sailed almost 20 years ago with the rise of social media.

If you mean business websites, they are just about the most volatile code out there, with crazy amounts of work that never stops. It's still a form of publishing after all. Every marketing decision has to filter through design agencies, legal and compliance, SEO, etc. before it gets handed off to web devs. Then the web dev has to push back when a ton of details are still wrong. Many decisions are left unresolved after testing by the time a page goes live, and those pages still need maintenance until their expiry.

Smaller businesses also have these problems with their websites, but with less complexity until they get more attention from the public.

bluefirebrand · 1h ago
> I think that ship sailed almost 20 years ago with the rise of social media

Well, forget social media even. WordPress, and now Shopify, have definitely eaten the personal website

globular-toast · 6h ago
Most people younger than 30 probably haven't "manually coded a website" anyway.
mirsadm · 6h ago
Initially I felt anxiety about AI and its potential to destroy my career. Now, however, I am much more concerned about what will happen after 5 or 10 years of widespread AI slop, when humans lose the motivation to produce content and all we're left with is AI continually regenerating the same rubbish over and over again. I suspect there'll be a shortage of programmers in the future as people become hesitant to start a career in programming.
Apocryphon · 8h ago
AI is a red herring. The current malaise in the industry is caused by interest rates. Certainly, AI has the potential to disrupt things further later down the line, but the present has already been shaken enough by the whiplash between pandemic binging and post-ZIRP purging.
ifwinterco · 2h ago
This is my read as well.

Hard to say for sure, because ChatGPT came out at almost exactly the same time the post-COVID wheels were starting to fall off anyway, but I think it's fair to say that, as of right now, you can't really replace all (or even many) of your engineers with AI.

What you definitely can do, though, is fire 20+% of your engineers and get the same amount done, simply because more is not necessarily better

anovikov · 1h ago
The sad truth is that we probably aren't seeing any AI effects on hiring yet, or only minimally so. For now, this is just normal cyclic shit. The worst is yet to come.