The uncertain future of coding careers and why I'm still hopeful

65 points | by mooreds | 131 comments | 7/3/2025, 1:45:27 AM | jonmagic.com


cardanome · 53m ago
> What I see is a future where AI handles the grunt work, freeing us up to focus on the truly human part of creation: the next step, the novel idea, the new invention. If we don’t have to spend five years of our early careers doing repetitive tasks, that isn’t a threat, it’s a massive opportunity. It’s an acceleration of our potential.

Yeah, no, that is always promised but historically this has never been true. On the contrary. Every technical revolution ever has brought great short term suffering to the majority and in the long term served to alienate people from their work.

It has been creative people, translators, writers, editors, artists, and musicians who most fear losing their jobs to generative AI.

What is more fulfilling? Creating hand-crafted items or just being a cog in the assembly line? Writing your own code or micromanaging an AI?

Doesn't mean that progress is inherently bad, but that it is a political question. Will the productivity gains allow us to work less and enjoy life more, or will they just make rich people richer? Currently the rich are winning, but the wind can change.

an0malous · 19m ago
> historically this has never been true

What examples do you have besides generative AI? Also, generative AI has not replaced any creative work yet.

The last decades of the computing industry automated things like boilerplate tasks, resource management, and infra/scaling. Before that a lot of factory automation was removing the need for repetitive labor. Historically it’s been exactly like OP has said, automating repetitive tasks and freeing up time for creative ones.

samdoesnothing · 15m ago
> Every technical revolution ever has brought great short term suffering to the majority

This is absurdly incorrect.

cherryteastain · 44m ago
If we followed this logic every time we got a labour-saving innovation, we'd still be living in the Stone Age. Using a bow and arrow to kill an antelope from afar is more efficient, but nothing gives the artisanal flavour of meat developed by hours of persistence hunting.
cardanome · 36m ago
Yeah that is why I wrote:

> Doesn't mean that progress is inherently bad

Progress is inevitable. And while it leads to short-term suffering, it also leads to higher standards of living, or at least it can.

Still, the promise that tech will save us from the "mundane" tasks and let us focus more and more on creative and fun work is often made but never fulfilled.

We software devs especially seem to be complete suckers for that kind of thinking, always searching for the silver bullet: the hot new framework, the new language, the new way to deploy servers, the complete rewrite that will make everything better. No, it won't.

kiba · 31m ago
Progress isn't inevitable. It's the hard work of millions of people exerting effort every day.

Look at space. We suffered stagnation for a while. Even NewSpace companies aren't necessarily better, just new entrants. We've had widely divergent outcomes for "new" companies that are more than two decades old. At least it only takes one to have massive success.

thenthenthen · 40m ago
I think creative people don't worry so much. For repetitive tasks, like writing 200 lines of CSS, LLMs are great. For creative work, not so much.
OJFord · 6m ago
Have you never tried generating an image with one?
kiba · 34m ago
Socialism and Georgism are a response and reaction to the Industrial Revolution more than a hundred years ago. They're even more relevant today, even if you disagree with their proposed solutions.

It's not some inevitable fate or destiny that alienating people is all automation will ever do.

jajko · 21m ago
This push is nothing new; the need has been there probably since the 80s, definitely the 90s. Visual programming in its myriad implementations, various DSLs, anything to stay away from that ugly low-level code that does the grunt of the work. Managers and analysts becoming 'devs' when needed.

Suffice to say, the wheel has turned one more time. Now it's shinier than ever, and I can see some real uses for it, e.g. smarter embedded machines or home appliances, or spying on and ranking the whole global population more easily than ever, but very, very little on top of that. The huge hype wave that every second person keeps mentioning is out of touch with the reality people actually experience: mostly annoying or dysfunctional use cases (and their continuous costs).

Look, I get it: tons of eager folks here too, looking for ways to earn their first million before 30-35, or whatever inner goal they think will finally lead to life satisfaction (it won't).

I may be fired one day (in fact it could be any day; that's been part of this job since day 1), but in the next 10 years the reason won't be an LLM doing what I do better.

And a last point: that dream of working less is generally a proper pipe dream. You earn more, but raising kids is a nightmare, also from a cost point of view. Property prices are skyrocketing everywhere. This rat race has its own wheel that keeps spinning. The max most folks can dream of is either working 80%, 4 days a week, or something akin to what I currently have: a 90% contract, working 5 days a week, but with circa 50 paid vacation days yearly. The real financial hit is somewhere around 6-7% after taxes and social payments; I'd say a good deal (while it lasts).

3D30497420 · 4h ago
> We are all, collectively, building a giant, shared brain. Every time you write a blog post, answer a question on a forum, or push a project to GitHub, you are contributing to this massive corpus of human knowledge.

I would be more excited about this concept if this shared brain wasn't owned by rich, powerful people who will most certainly deploy this power in ways that benefit themselves to the detriment of everyone else.

gruez · 1h ago
>if this shared brain wasn't owned by rich, powerful people [...]

There are open weights models

moritzwarhier · 1h ago
But so far, the point stands, as far as I'm concerned, as these are not competitive (in my humble experience).

And the platforms where the valuable input comes in (StackOverflow, Reddit, AI tools themselves) all have licenses that would allow them to become rent-seekers on AI models and to challenge open models that can't provably show they didn't train on "their" data (the user contributions).

This giant clusterfuck could only be avoided if the major social media platforms and coding agents each used a CC0 license for the content they collect.

Take this with a huge grain of salt as I'm not a lawyer, not even a well-informed layman. But recent news seems to demonstrate that knowing the law is not important anyway; only having capital is.

More specifically, I still don't know what the current situation with "fair use" is around training data.

gruez · 54m ago
>But so far, the point stands, as far as I'm concerned, as these are not competitive (in my humble experience).

Source? The popular narrative, at least a few months ago, was that Chinese AI models like DeepSeek were not far behind models from OpenAI, hence the stock market crash.

eg. "DeepSeek’s models are much cheaper and almost as good as American rivals" https://archive.is/2yNzH

>And the platforms where the valuable input comes in (StackOverflow, Reddit, AI tools themselves) all have licenses that would allow them to become rent-seekers on AI models and to challenge open models that can't provably show they didn't train on "their" data (the user contributions).

Is there any evidence this actually caused open weights models to be worse? As much as you hate this "rent seeking", it's mostly a sideshow. If open weights models are competitive, then the proof is in the pudding; no need to fret about the morality of platforms "rent seeking" over UGC.

>This giant clusterfuck could only be avoided if the major social media platforms and coding agents each used a CC0 license for the content they collect.

This makes no sense. You can't take some copyrighted content you scraped (eg. some blog post), relicense it as "CC0", and then redistribute it. That's straightforwardly copyright infringement. Search engine and AI companies get away with this because they're not redistributing the raw data itself, but rather a transformed version (ie. snippets and AI models respectively), which helps it pass as "transformative" under the "Purpose and character of the use" part of the fair use test.

>More specifically, I still don't know what the current situation with "fair use" is around training data.

There was a recent court ruling in the US that basically ruled it's fine to train on copyrighted books without permission, but you can't pirate books to obtain them. This hasn't reached the supreme court yet, so it's not final.

moritzwarhier · 20m ago
And (I don't want to edit my long comment):

It's good if, in the end, the "shared human brain" ends up being open, without AI companies seeking to monetize things they didn't create (of course they deserve a margin for the training itself).

Maybe it's the best we can hope for. Personally, DeepSeek felt like a step in the right direction regarding this. Not sure, but we might agree there.

moritzwarhier · 30m ago
> Source? The popular narrative, at least a few months ago, was that Chinese AI models like DeepSeek were not far behind models from OpenAI, hence the stock market crash

Like I said, I was talking from my personal experience.

I was comparing the on-prem models I know with the large, proprietary cloud models.

This includes variants of DeepSeek, but not the largest one that they host online.

> Is there any evidence this actually caused open weights models to be worse? As much as you hate this "rent seeking", it's mostly a sideshow. If open weights models are competitive, then the proof is in the pudding; no need to fret about the morality of platforms "rent seeking" over UGC.

AI performance is famously hard to benchmark, so no, I don't have "evidence". But it is not unreasonable to assume some generality from my experiences with OpenAI's, Google's, and other cloud models vs. on-prem, self-hosted coding-agent models: that is, that the really large proprietary models perform best.

Also, I'm not "fretting about morality", I'm relating to the comment you replied to. The point was a monopolization of monetization for knowledge and skills by AI companies, when the data they build on was created for free by humans contributing to social media, or people paying to use cloud AI models.

It's possible for me to think about things beyond my control without claiming moral authority.

> This makes no sense. You can't take some copyrighted content you scraped (eg. some blog post), relicense it as "CC0", and then redistribute it.

You are distorting my point and building a strawman. Yes, people willingly give away their work for free, either

a) because they willingly accept CC0 or CC-SA (like with SO): https://stackoverflow.com/help/licensing Most people who participated in these communities did it for fun, reputation, etc, not anticipating someone monetizing it by interpolating public content using AI

or

b) because they use Social Media like Reddit and simply don't know or don't care

> That's straightforwardly copyright infringement

What about a coding agent being able to do something "in its own words" because it sucked up lots of GitHub projects solving this problem, where people spent a lot of time on proper documentation, etc.?

> There was a recent court ruling in the US that basically ruled it's fine to train on copyrighted books without permission, but you can't pirate books to obtain them. This hasn't reached the supreme court yet, so it's not final.

Wasn't this the same ruling that said it's enough to buy one copy of a book in order to basically sell paraphrased excerpts of said book?

Please correct me where I'm wrong, but I have not gleaned any new information from your comment, and it seems that you distorted some of the previous discussion to make your point, which I'd paraphrase as "current copyright covers AI and there is no threat that AI companies will monetize public content that was put out under licenses meant to prohibit monetizing derivative work".

Arainach · 11h ago
This is a rational take, which is why it is wrong.

I agree that we're not all about to be replaced with AGI, that there is still a need for junior eng, and with several of these points.

None of those arguments matter if the C suite doesn't care and keeps doing crazy layoffs so they can buy more GPUs; in that group, intelligence and rationality are far less common than following the cargo cult.

simonw · 10h ago
The C suite may make some dumb short-term mistakes - just like twenty years ago when they tried to outsource all software development to cheaper countries - but they'll either course-correct when they spot those mistakes or will be out-competed by other, smarter companies.
reactordev · 10h ago
The market will always respond. When it happened before, younger - smarter - more efficient competitors emerged.
ringeryless · 1h ago
this. i mean, look at xerox or big blue
cardanome · 1h ago
That assumes a free and efficient market. Good luck with that.

The big tech monopolists can literally buy off any emerging competition. They have massive political influence that they can leverage to shape the market based on their needs. Not that there would be any political appetite to challenge national "champions" in times of rising international tension anyway. They are too big to fail.

ringeryless · 1h ago
until they aren't.

don't discount the possibility of a total rout for those who put all eggs in such a shoddy basket.

interesting to note the limits of political influence now that the megabill stripped the ai moratorium nonsense.

i suspect that the brick and mortar politicos are a bit like "thanks for the money, now bug off, kid, before i beat you up"

it will be interesting to see what sort of revenge the world's richest man is cooking up for his uppity fixer dude.

jimbob45 · 10h ago
HR can remain irrational longer than I can remain solvent.
pseudocomposer · 1h ago
HR isn’t irrational; you’re just making the mistake of thinking their role is to serve you, the employee, as opposed to the C-suite and shareholders.
shikon7 · 2h ago
Until someone makes the idea fashionable to replace HR with AI.
owebmaster · 2h ago
Then it will be forever irrational.
fakedang · 3h ago
Except it makes sense in a lot of recent off-shoring cases. While there are still your usual stupid companies outsourcing to Indian WITCH, a lot of midsize companies are also doing alright with nearshoring to either LatAm or even just out of SF/Seattle/NY (in the case of US companies), and Eastern Europe (in the case of EU companies). Even in biotech for instance, preliminary research has been successfully outsourced to either academia or to Indian and Chinese CROs. The latter have done exceptionally well, even innovating new products on their own and licensing them to the US market. The myth of onsite worker efficiency was practically shattered with Covid.

Where the Western worker can really only shine is in advancing on the tech forefront and helping keep that tech within Western borders. Stuff like defence or cybersecurity or some advanced new product/tool development. Anything else is free to be arbitraged away.

simonh · 1h ago
It's getting there, slowly, because a lot of the initial bootstrapping problems have been worked through.

Let's say there are 1,000 decent engineers in a country and they get hired by various western companies and it's all a great success. They're very happy; obviously offshoring there works well, so other companies pile in and hire 10,000 engineers. OK, are there another 10,000 decent engineers in the country? Probably not, but maybe you can poach some of the 1,000 and use them to train up the other 10,000. But soon there are openings for 100,000 offshored jobs, then 500,000. How many well-trained engineers are there again?

That ramp up takes time, and it's not just a matter of smart people, it's relevant experience. If you have two countries with the same number of competent trained people, but in one of them 5% have directly relevant experience, and in the other 50% of them do, that's a massive advantage that cannot even be solved with money.

Then there's the fact that the most talented and experienced people move out of the country. So for every 10 engineers that get to top tier, 3-4 of them move to Europe or the US. So not only are these countries losing those people, but they're also losing out on all the juniors those people could lead and train up. Hiring there is like trying to fill a leaky bucket.

None of which is to say this is impossible, or that it isn't worth doing, or that it won't work. I've seen it work, and the company I'm at now has reaped huge benefits from offshoring. You just need to go into it clear eyed and with the understanding that it has to work for the people you're offshoring to, and that country as well, and you need to be adaptable as the situation changes.

DanielHB · 6h ago
It takes years, if not decades, but that kind of attitude eventually leads to IBM.
lloeki · 6h ago
It's not just the C-suite.

I keep seeing and talking to people that are completely high guzzling Kool-Aid straight from a pressure tap.

The level of confirmation bias is absolutely unhinged: "I told $AGENTICFOOLLM to do that and WOW this PR is 90% AI!", _ignoring any previous failed attempt_ and the 10% of humanness needed for what is ultimately a 10-line diff, and handwaving any counterpoint away as "you're holding it wrong".

I'm essentially seeing two groups emerging: those all aboard the high-velocity hype train, and those that are curious about the technology but absolutely appalled by the former's irrational attitude, which is tantamount to completely winging it: diving head first off a cliff, deeply convinced that of course the tide will rise at just the right time for skulls not to be cracked open upon collision with very big, sharp rocks.

pjc50 · 2h ago
> this PR is 90% AI!", _ignoring any previous failed attempt_

Someone pointed out that AI has a gacha / intermittent-reinforcement effect to it, which explains a lot of its power to capture minds. Sometimes it produces great results. The entire history of industrial process control is about trying to always produce good, or at least identical, results. These two are incompatible.

AI is right in the sweet spot of winning the demo but failing in mass production, and I think people are too optimistic about this being fixable.

ben_w · 1h ago
You may well be correct that people are too optimistic about it being fixable, but I would say the "why" is a little more complex than that.

It is also possible for artisans to produce functioning goods in low volumes, which is why "hand made" has often been seen as a positive quality sign rather than a negative one, and AI can do "artisanal" (same root as artificial). With mass production we want all three of "good", "cheap", and "fast", and we've been able to get them.

But even the fact that AI can do artisanal work is not sufficient for AI to succeed, and likely still won't be sufficient even when AI is near the best human you can hire, because AI is a memetic monoculture: while the errors of many human artisans can be somewhat uncorrelated and therefore the failure modes different, if everything comes from a single AI mind, it all fails in the same general kind of way. Likewise the quotation about science advancing one funeral at a time: diversity beats even very smart geniuses.

Memetic monoculture is what I think can't be fixed with current approaches, because current approaches need too much information for it to be possible to give each LLM* (even from different providers) a statistically distinct training set to learn from.

* The image generators do seem to be able to get meaningfully distinct output styles. At least superficially, I'm not an artist, so I may be missing some common thread they share beyond the "indistinct blurry background item" that I count as an error rather than a style. But in this topic, even that error would count as a common failure mode.
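To make the monoculture point concrete, here's a toy simulation (a minimal sketch assuming NumPy; all numbers are made up): averaging models whose errors are independent shrinks the ensemble's error, while averaging models that all share one error source gains nothing.

    import numpy as np

    rng = np.random.default_rng(0)
    n_models, n_tasks = 10, 100_000

    # Monoculture: every model inherits the same per-task error.
    shared = rng.normal(size=n_tasks)
    monoculture = np.tile(shared, (n_models, 1))

    # Diversity: each model fails in its own, uncorrelated way.
    diverse = rng.normal(size=(n_models, n_tasks))

    print(monoculture.mean(axis=0).std())  # ~1.0 - adding models buys nothing
    print(diverse.mean(axis=0).std())      # ~1/sqrt(10), i.e. ~0.32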

pjc50 · 1h ago
> if everything comes from a single AI mind, it all fails in the same general kind of way

I'm reminded of https://qntm.org/mmacevedo ; even if we don't get brain uploading, we have a similar sort of thing going on with synthetic "personalities" of LLMs.

> diversity beats even very smart geniuses.

Some of HN will get very mad about that statement, but it is often true. Of course, the best combination is a diverse collection of geniuses, which is why the mid-century, open-ish-borders US did so well.

> The image generators do seem to be able to get meaningfully distinct output styles

Generally they can plagiarize in the style of any human artist you can name whose work is on the internet. Artists are, not unreasonably, upset about this.

ben_w · 1m ago
> I'm reminded of https://qntm.org/mmacevedo

My inspiration was from Alastair Reynolds putting indoctrinal viruses into the Revelation Space series several years before that was published. At the time, I was thinking of how AI might enforce group-think, rather than write the content itself directly: https://benwheatley.github.io/blog/2019/12/30-18.46.50.html

> Generally they can plagiarize in the style of any human artist you can name whose work is on the internet. Artists are, not unreasonably, upset about this.

Indeed, but so can LLMs of text styles. I meant more of the "voice" seems to be different between them — there seems to be more of a difference between getting SDXL and SD 3.5 (or DALL•E 2 and whatever it's called now it's baked into ChatGPT) to mimic, say, Hieronymus Bosch, than there is between getting GPT-4o and Claude Sonnet 4 to do a poem in the style of a Shakespeare soliloquy.

zombot · 6h ago
I bet there are also a good number of paid shills among those. If you look at how much money goes into the tech it's not too far-fetched to invest a tiny fraction of that in human "agents" to push the narrative. The endless repetition will produce some believers, even if the story is a complete lie.
guappa · 5h ago
Not necessarily paid shills, but it's a good way to get a promotion, and then when it's revealed that it doesn't actually work, they'll already have the promotion and will jump on the next hype to get the next one.
globalnode · 5h ago
don't forget the useful idiots.
zombot · 2h ago
I don't exclude them and they have already been covered by my parent comment.
globalnode · 13m ago
this is true -- the 'believers'
bluefirebrand · 3h ago
In tech we seem to have a tendency to believe people are rational and behave predictably, but the truth is there are still truly just a ton of useful idiots even in software and other "high intelligence" areas
CuriouslyC · 2h ago
Sounds like you're pretty peeved about this. Is your manager on you because your peers are out-delivering you?

You realize that even if you're on board the agentic coding hype train, you don't have to just blithely paste tickets to the agent and let it rip. You can have a long conversation about design and architecture, have the agent write its own implementation plan based on that, then watch it tick items off the list and review the code for changes as they're completed while the agent forges ahead. A lot of the time you don't even need the long conversation: just write a readme that very clearly outlines what you're doing and how to do it, and the agent will read it and do just fine.

paulddraper · 9h ago
Well that’s the nice thing about capitalism.

If it doesn’t work, it eventually dies.

hermitcrab · 5h ago
>If it doesn’t work, it eventually dies.

Unless it is a bank, in which case it gets bailed out by tax payers.

gruez · 1h ago
...and that's fine, because banks are highly regulated.
hermitcrab · 5m ago
Are they though? There seems to be a lot of talk about relaxing various regulations brought in after the 2008 crash. Did we learn nothing?
hnthrow90348765 · 3h ago
Cannot overstate how absolutely enraging it is every time "basic economics", "supply and demand", or "basic capitalism" comes up as a thought-terminating response, despite everything government does to keep failing stuff going
hermitcrab · 1h ago
Capitalism for the poor. Socialism for the rich.
AbstractH24 · 2h ago
Death and rot are inevitable, it’s the friends we make along the way that matter most.
zombot · 6h ago
The belief in a rational market approaches religion.
rightbyte · 5h ago
The dissolution of the Soviet Union gave the neoliberals hubris or something, and as they dwindle, the ones left seem to get crazier.
Arainach · 8h ago
And the not nice thing about capitalism is that it can keep not working longer than most of us can pay for rent and food.
asimovfan · 5h ago
can you please define "does not work" and give some examples of things that died because they didn't work?
gruez · 1h ago
Literally any company that went bankrupt? Before the iPhone there was the BlackBerry; now the company is basically defunct.
the_real_cher · 5h ago
Unless it's a monopoly.
easyThrowaway · 2h ago
Mark Fisher's "Capitalist Realism" proved this wrong in 2009.
guappa · 5h ago
Capitalism failed in 1929 but its corpse is still here…
zombot · 5h ago
Depends on who you ask. Today's billionaires would disagree.
pclowes · 55m ago
Why are software engineers paid well to date? Successful software uniquely benefits from operational and financial leverage, which is what gives us the term "software margins".

Financial leverage: It is cheap to write code, and experiments are inexpensive. A million dollars in funding for a B2B SaaS buys many more shots on goal than a million dollars in funding for drug research or manufacturing. This increases the probability of ROI and permits aggressive investment.
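A rough sketch of that "shots on goal" arithmetic (the per-experiment success rate is an assumption, purely illustrative): if each experiment independently succeeds with probability p, the chance of at least one hit is 1 - (1 - p)^n, so cheap experiments that raise n dominate.

    # P(at least one success) = 1 - (1 - p)**n over n independent experiments.
    p = 0.05                  # assumed per-experiment success rate
    for n in (1, 5, 20):      # how many experiments the same budget buys
        print(n, round(1 - (1 - p) ** n, 2))
    # 1 -> 0.05, 5 -> 0.23, 20 -> 0.64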

Operational leverage: Scaling code is cheap as well. It is free to copy. Solving one problem with software well often enables solving immediately adjacent problems very cheaply.

Do LLMs decrease or increase the leverage here?

Writing code is cheaper, a single engineer can now do much more. Does that endanger engineers? Yes, if their job is "take requirements and implement to spec". No, if their job is "solve important business problems at scale". The former are already typically not valued or paid exceptionally highly. The latter are likely to be valued and paid even more than they already are.

Or, put another way, if software engineers are going to be hurt by LLMs, who is going to benefit? This assumes a zero-sum game, which I would disagree with here. But if not software engineers, then who is better positioned to wield LLMs effectively?

Workaccount2 · 45m ago
What is being missed here, and the actual blind spot around LLMs, is that software devs will not be the sole gatekeepers of computer utilization in the future.

I cannot overstate this enough. It is the main thing to watch for. Even if LLM progress halts today, there will be 5-10 years of fundamental changes in how end users interact with computers.

dboreham · 40m ago
Meanwhile my phone still can't autocorrect my typing and my email provider still can't identify spam.
jajko · 36m ago
It's cheap to write some code, but big software projects still cost massive amounts of cash, take forever, and are often a failure. There are visible bugs, or at least big missteps in design, in literally all the software I use: Windows, Office, browsers, specific sites like e-banking, desktop apps. I dare say making software work well consistently is not a skill that's easy to acquire.

Also "solve important business problems at scale" - thats an extremely narrow mostly SV-based view on how most software out there is built, no company I ever worked for in past 20 years across 3 countries ever needed that from me nor anybody around. I'd say a junior view but that would ruffle some feathers. Seniors talk about team and client and budget communication, processes, delivery pipelines, support etc.

Let's put it this way: the real world is a tad more complex than your simplification makes it look.

freedomben · 15m ago
> Look closely at every major breakthrough, even those in AI-driven medicine. It’s still humans pointing the AI down the right paths. Human creativity is the spark.

In the short term, yes, but we're already seeing nearly autonomous agents get impressive results. It won't be very long until the average person can be that guiding hand, rather than a software engineer who knows how to code by hand and design software. This is good for the world, terrible for the software dev.

d3ckard · 12m ago
I call bullshit. This is like saying an average person can be the guiding hand for legal documents or medical diagnosis.

The whole point is that as a specialist you vouch for what has been created. Yes, your time moves away from writing code to reviewing it, but it still requires competence to figure out whether what the code is doing is exactly what it is supposed to be doing.

billy99k · 12h ago
"What I see is a future where AI handles the grunt work, freeing us up to focus on the truly human part of creation: the next step, the novel idea, the new invention. If we don’t have to spend five years of our early careers doing repetitive tasks, that isn’t a threat, it’s a massive opportunity. It’s an acceleration of our potential."

The problem is that only a fraction of software developers have the ability/skills to work on the hard problems. A much larger percentage will only be able to work on things like CRUD apps and grunt work.

When these jobs are eliminated, many developers will be out of work.

anilgulecha · 11h ago
IMO, if this ends up occurring, it will follow how other practitioner roles have evolved. Take medicine and doctors, e.g.: there's a high bar to reach before you can do specialist surgery; until then you hone your skills and practice. Compensation-wise it isn't lucrative from the get-go, but it can become so once you reach the specialist level. At that point they are liable for the work done; hence such roles are typically licensed (CAs, lawyers, etc.).

So if I have to make a few 5 year predictions:

1. A key human engineer skill will be taking liability for the output produced by agents. You will be responsible for the sign-off, and any good/bad that comes of it.

2. Some engineering roles/areas will become a "licensed" play - the way Canada is for other engineering disciplines.

3. Compensation at the entry level will be lower, and the expected time to ramp up to a productive level will be longer.

4. Careers will meaningfully start only at the senior level. At the junior level, your focus is to learn enough of the fundamentals, patterns and design principles so you reach the senior level and be a net positive in the team.

gitgud · 39m ago
1. Already true, no company will make the AI agent liable for its output, it’s always the programmer

2. Unlikely, as most software won’t result in death/injury… whereas a structural engineering project is much more life threatening.

3. I actually think entry level engineers will be expected to ramp up to productive levels much much quicker due to the help of AI

4. Already true

AbstractH24 · 2h ago
There’s a sweet spot right now to be in. Early enfough career to have gotten in the door, but young enfough to be mailable and open to new ways.
fn-mote · 55m ago
Had to laugh at #4… that’s where I thought we were now.
chii · 10h ago
> At the junior level, your focus is to learn enough of the fundamentals, patterns and design principles so you reach the senior level and be a net positive in the team.

I suspect that juniors will not want to do this, because the end result of becoming a senior is not lucrative enough given the pace of LLM advancement.

turbofreak · 11h ago
Canada?? They can’t build a subway station in 5 years nevermind restructure a massive job sector like this lmao
bluefirebrand · 3h ago
You're being downvoted but you're actually spot on

Calgary was supposed to have a new train line; planning has been in motion for years. Back in 2019 when I bought my house, the new train was supposed to open in 2025. As far as I know, not a single piece of track has been laid yet. So... yes.

chii · 10h ago
> A much larger percentage will only be able to work on things like CRUD apps and grunt work.

which is lower valued, and thus it is economically "correct" to have them be replaced when an appropriate automation method is found.

> When these jobs are eliminated, many developers will be out of work.

like in the past, those developers will need to either move up the value chain, or move out into a different career. This has happened before, and will continue to happen until the day humanity reaches some sort of singularity or post-scarcity.

AbstractH24 · 2h ago
> This has happened before

When do you think this is most comparable to?

Were there this many software developers around the peak of the dot-com era? I'm 35, so I'm old enough to remember the excess of the time and all the weird products, but nothing about how the sausage was being made.

mooreds · 1h ago
> When do you think this is most comparable to?

I don't think it has happened to the software industry. We've been growing as computers have become more and more capable since the 1960s.

One analogy is farm employment cratering in the first half of the 20th century.

Went from 11.77M in 1910 to 5.88M in 1950 in the USA[0], even as the population went from 92M to 151M[1]. That's going from 12% of the population to 4%.

Another is travel agents, which went from 100k workers in the USA in 2001 to 30k in 2020[2] (though there has been a rebound since).

0: https://ourworldindata.org/employment-in-agriculture#all-cha...

1: https://www.census.gov/data/tables/time-series/dec/popchange...

2: https://fred.stlouisfed.org/series/LEU0254497900A

bugglebeetle · 9h ago
> which is lower valued, and thus it is economically "correct" to have them be replaced when an appropriate automation method is found.

Textbook example of why this "economic" form of analysis is naive, stupid, and short-sighted (as is almost always the case).

AI models will never obtain the ability to completely replace "low value work" (as that is not perfectly definable, or able to be defined in advance for all cases), so in a scenario where all engineers devoted to these tasks are let go, what you would end up with is engineers higher up the value chain being tasked with resolving the problems that result when the AI fails, underperforms, or the assessment of a task's value was incorrect. The cumulative effect of this would be a massive drain on the effectiveness of said engineers, as they're now tasked with context switching from creative, high-value work to troubleshooting opaque AI code slop.

chii · 9h ago
> AI models will never obtain the ability to completely replace “low value work”

if this were truly the case, then companies that _didn't_ replace the "low value work" with AI and continued to use people would outperform and outcompete. My prediction is entirely predicated on the LLM's ability to do the replacement.

A second alternative would be that the cost of the "sloppy" AI code is externalized, which is not ideal, but if past history has any bearing, externalization of costs is rampant in corporate profit struggles.

blackbear_ · 4h ago
> AI models will never obtain the ability to completely replace “low value work"

Maybe, but this is not the meaning of replacement in this context and it need not hold for the "economic" reasoning to work.

All that matters is that AI makes developers more productive, as measured by the number of (CRUD or whatever) apps per developer per unit of time. If this is true, then the current supply of apps can be provided by fewer developers, meaning that some of the current developers aren't needed anymore to sustain the current production level. In this scenario lower-level engineers still exist; they are just able to do more in the same time by using AI.
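A toy version of that argument, with entirely made-up numbers: hold demand fixed, and the headcount needed scales down by the productivity multiplier.

    apps_demanded = 1200       # assumed fixed yearly demand (arbitrary units)
    apps_per_dev = 4           # assumed baseline output per developer
    for multiplier in (1.0, 1.5, 2.0):   # AI productivity gain
        devs_needed = apps_demanded / (apps_per_dev * multiplier)
        print(multiplier, devs_needed)   # 300.0, then 200.0, then 150.0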

scarface_74 · 2h ago
I’m working on a system now where the hard part is the integration, user experience, business requirements, solving XY problems, etc.

Honestly this is true for most problems and has been forever for most developers.

But between all of the different Lambdas (yes, we had to use Lambda for the business logic; it’s Amazon Connect), there are probably around 2,000 lines of relatively straightforward code and around 1,000 lines of infrastructure as code.

I didn’t write a single line of code, I started by giving ChatGPT the diagram and very much did “vibe coding” between the code, the database design and the IAC.

I would have had to have at least one junior dev do the grunt work for me.

Before the gate keeping starts, I started programming in assembly in 1986 and had an official title of “software engineer” or something similar until 2020.

csomar · 9h ago
This. There are millions of software developers. There are hundreds (thousands?) who are working on the cutting edge of things. Think of popular open-source projects used by the masses: usually there is one developer, or a handful, doing most of the work. If the other side of the puzzle (integration) becomes automated, 95% or more of software developers are redundant.
123yawaworht456 · 11h ago
90% of real grunt work is "stitch an extra appendage to this unholy abomination of God" or "*points at the screen* look at this shit, figure out why it is happening and fix it". LLMs are more or less useless for either of those things.
zeta0134 · 8h ago
An LLM can certainly try to consume an extremely poorly specified bug report, in half English, half the user's native language, then consume the entire codebase and guess what the devil they're actually referring to. My guess is that humans are better at this, and I mostly speak from experience on two separate support floors that tried to add AI to that flow. It fails, miserably.

It's not really possible for an LLM to pick up on the hidden complexities of the app that real users and developers internalize through practice. Almost by definition, they're not documented! Users "just know" and thus there is no training data to ingest. I'd estimate though that the vast, vast majority of bugs I've been assigned originate from one of these vague reports by a client paying us enough to care.

danielbln · 8h ago
I disagree, agentic LLMs are incredibly useful for both.
csomar · 8h ago
LLMs are very good at fixing bugs. They do lack the broader context and the tools to navigate the codebase/interface. That's why Claude Code was such a breakthrough despite using the very same models you use in the chat.
xarope · 7h ago
very good at fixing bugs like these, which require a senior developer to address and prompt? https://news.ycombinator.com/item?id=44159166

or this about MS pushing for more internal AI usage, and the resulting hilarity (or tragedy, depending if you are the one having to read the resulting code)? https://news.ycombinator.com/item?id=44404067

Fendy · 5h ago
I think a lot of people are feeling this, not just engineers. Engineering already has a high entry bar, and now with AI moving so fast, it’s honestly overwhelming. Feels like there's no way to avoid it—we either embrace it, actively or passively, whether we like it or not.

Personally, I think this whole shift might actually be better for young people early in their careers. They can change direction more easily, and in a weird way, AI kind of puts everyone back at the starting line. Stuff that used to take years to master, you can now learn—or get help with—in minutes. I’ve had interns solve problems way faster and smarter than me just because they knew how to use AI tools better. That’s been a real wake-up call.

I’m doing my best to treat AI as a teammate. It really does help with productivity. But the world never stops changing, and that’s exhausting sometimes. I try to keep learning, keep practicing, and keep adapting. And yeah, if I ever lose my job because of AI... ok, fine, I’ll have to change and try getting another, maybe different, job. Easy to say, harder to do—but that mindset at least helps me not spiral.

skydhash · 5h ago
The true value of experience comes in two ways: knowing when not to do something, and knowing the shortest path to produce the result needed.

More often, the result of juniors using LLMs is a Frankenstein ball of mud that is close to its implosion point. Individual features are part of a system and are judged on how they contribute to its goal, not on whether they are individually correct.

Fendy · 26m ago
great point
smallstepforman · 11h ago
Google search is giving us a taste of AI-summarised results, and for simple things it's passable, but ask a serious question and you get good-looking garbage. Yes, I know it's early days, but looking at the current output quality we have nothing to worry about. It will be used like calculators: offloading some menial, repetitive tasks that can be automated. But the next gen of developers will still be tasked with solving complex problems.
erentz · 9h ago
Google AI the other day told me that tinnitus is listed as a potential adverse reaction of Saphnelo.

Only it damn well isn’t. Anywhere. Not even patient reports.

The problem with AI is if it’s right 90% of the time but I have to do all the work anyway to make sure it’s not one of the 10% of times it’s extremely confidently wrong, what use is it to me?

cwalv · 9h ago
This problem has already gotten so much better. In my experience it's no longer 10% of the time (I'd estimate more like 1%). In the end, you still need to use judgement; maybe it doesn't matter if it's wrong, and maybe it really does. It could be citing papers, and even then you don't know if the results are reproducible.
bluefirebrand · 3h ago
Has it actually become that much better or have you let your standards and judgment lapse because you want to trust it?

How would you even know to evaluate that?

alectroem · 32m ago
Ya, I've had basically this question for a while. My assumption is that most of the time people search the internet to answer questions they DON'T know the answer to.

If an LLM gives you a response to that question, how do you know if it's right or wrong without already knowing the answer or verifying it some other way? Is everyone just assuming the AI answers are right a majority of the time? Has there been large-scale verification of a wide variety of questions that I'm not aware of?

simonw · 10h ago
Google's AI Overviews is the single worst AI-driven experience in widespread use today. It's a mistake to draw conclusions about how good AI has got based on that.

Have you tried search in ChatGPT with o4-mini or o3?

cwalv · 9h ago
I don't use it that much, but I have noticed these AI overviews still seem to hallucinate a lot, compared to others. Meanwhile I hear that Gemini is catching up to or surpassing other models, so I wonder if I'm just unlucky (or just haven't used it enough to see how much better it is).
input_sh · 8h ago
One is their state-of-the-art model; the other one's the best model they can run at the scale and speed people expect from a search engine.
999900000999 · 10h ago
Case in point.

I purchased a small electronic device from Japan recently. The language can be changed to English, but it’s a bit of a process.

Google’s AI just copied a Reddit comment that itself was probably AI generated. It made up instructions that are completely wrong.

I had to find an actual human written web page.

The problem is that with more and more AI slop, fewer humans will be motivated to write. AGI, at least the first generation, is going to be an extremely confident entity that refuses to be wrong.

Eventually someone is going to lose a billion dollars trusting it, and it’ll set back AI by 20 years. The biggest issue with AI is it must be right.

It’s impossible for anything to always be right since it’s impossible to know everything.

RaftPeople · 10h ago
I had a couple of great examples of getting the exact opposite answer depending on how I worded my question; now I can't trust any of the answers.
cwalv · 9h ago
Maybe the answer to your question was subjective?
RaftPeople · 40m ago
The first set of questions was about code review timing in a workflow (background: was in a discussion about different styles of workflows):

Q1.1: "do most developers do code reviews before testing"

A1.1: essentially "yes most before..."

Q1.2: "do most developers do code reviews after testing"

A1.2: answer was essentially "yes most after..."

The 2nd set of questions was related to building a retaining wall. I've never used the type of wall block with a center notch and groove, only the type with a lip on the back. The center type creates a wall straight up but lip in back leans back a little. I was curious if one was more stable than the other:

Q2.1: "is retaining wall block with center groove more stable"

A2.1: essentially "yes, block with center groove is more stable..."

Q2.2: "is retaining wall block with back lip more stable..."

A2.2: essentially "yes, block with lip on back is more stable..."

When I finally found an engineering website with real details, the answer was that the lip on the back was a little more stable due to the resulting angle of the wall. It also emphasized that the lip and the center groove are not factored into stability calcs at all; they are only for alignment. Gravity and the friction of the blocks are what hold the wall in place.
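You can run this kind of leading-question test yourself. A minimal sketch, assuming the OpenAI Python SDK and an API key in the environment (the model name is a placeholder): ask the same question framed in opposite directions and compare the answers.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # The same underlying question, framed in opposite directions.
    framings = [
        "do most developers do code reviews before testing",
        "do most developers do code reviews after testing",
    ]

    for question in framings:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": question}],
        )
        print(question, "->", resp.choices[0].message.content[:200])
    # If both framings come back "yes, most do...", the answers tell you
    # more about the question's wording than about reality.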

csomar · 8h ago
Google Search AI is the worst, and considering that AI is not a good alternative to search (the models are compressed data), I am not sure why Google has decided to use LLMs to answer questions.
readthenotes1 · 11h ago
Try asking Perplexity for a real taste. It works far better than Google's--good enough to make searching fun again.

Try coding with Claude instead of Gemini. Those who do tell me it is well beyond.

Look at the recent US jobs reports--the draw down was mostly in professional services. According to the chief economist of ADP "Though layoffs continue to be rare, a hesitancy to hire and a reluctance to replace departing workers led to job losses last month."

Of course, correlation is not causation, but every white-collar person I talk with is saying AI is making them far more productive. It's not a leap to figure out that management sees that as well.

disambiguation · 9h ago
> We are all, collectively, building a giant, shared brain.

"Shared" as in shareholder?

"Shared" as in piracy?

"Shared" as in monthly subscription?

"Shared" as in sharing the wealth when you lose your job to AI?

simonw · 10h ago
Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
chii · 10h ago
More like quitting blacksmithing due to the invention of CNC.
globular-toast · 8h ago
CNC produces something no blacksmith could. The same cannot be said of LLMs.
bluefirebrand · 3h ago
I agree, but it doesn't matter what we think

Execs are convinced LLMs produce something that no programmer ever could. Or at least the same thing, faster than any programmer ever could.

So they will drive us off a cliff chasing it

globular-toast · 1h ago
Luckily people who know better are allowed to found companies and compete with them. I look forward to large companies with several levels of clueless execs going under, especially because I'll still be here to pick up the pieces in exchange for a tidy sum.
imtringued · 4h ago
Forging and subtractive manufacturing are different techniques.
globular-toast · 1h ago
This comment seems especially good considering it's written by one of the authors of Django. I started my career making websites in raw HTML, CSS and PHP. Stuff like Django, Bootstrap, React etc. came along and made a lot of the stuff I used to do trivial. Not once did I fear for my career; rather, I got excited because now I could do so much more.

I wonder what it is about LLMs that has got people scared rather than excited.

jbs789 · 9h ago
As this started with career advice, two points: the world values certain things (usually making people's lives easier; one version of that is building useful tools), and the individual has a set of interests and skills. Finding the intersection of those for you should help guide you toward a career that the world values and that interests you (if that's important to you).

I’m looking at this as the landscape of the tools is changing, so personally anyway, I just keep looking for ways to use those tools to solve problems / make peoples lives easier (by offering new products etc). It’s an enabler rather than a threat, once the perspective is broadened, I feel.

lordnacho · 19m ago
Coding was always a temporary profession.

Back 200 years ago, most people were illiterate. If you could read and write, there was work. You could become part of the machinery of the beginning Industrial Revolution, actually quite an important cog. Just by knowing how to read and write, you had skills someone would need to coordinate stuff, mostly mundane. But it meant you had a way into a business. You might convert your clerkship into accountancy or law, or you would become a manager, knowledgeable about whatever business you were working in.

As time passed, everyone became literate. Knowing how to read and write stopped being the only thing you needed to get on the career ladder.

When I started working, my boss had no degree. He had energy, and he could do arithmetic. This got him a job as a young man running around with slips of paper in the LIFFE pit. Eventually he learned how option trading worked.

He got older and hired me. By this time, you could easily find a highly numerate graduate, and only such people were considered for finance roles. It was enough to have an Oxbridge degree and just sort of be smart enough to figure out coding on the job.

Now, when I look at the new grads, they blow me away. They can already code quite well. They already have internships in the business. They already have an idea of what alpha is, and how to find it. They are well on their way to just being quantitative trading professionals.

We are in an interim period similar to the expansion of literacy. The school system has not ramped up computer literacy in the way it successfully got most kids to be able to read and write.

Until there are lots of people able to code, there will be lots of programming jobs. That is, jobs where the person is in the seat because they can code. Much as in 1825, there were clerking jobs for guys who could read and write.

Or so we thought.

Now there is a tool that allows the business side to make code. It's not even that terrible code, in my opinion, and it will only get better. It's here, and if you know what the business needs, you can use it to further the business goals.

The great divide that will open up is that developers who got into business because they could code are now in a bit of a wonderland. They not only know what code is needed, they can implement it without their friends who are further down the chain.

People who are just finishing a course in how to code, well, they face a bit of a struggle. On one hand, it's an important skill. On the other hand, for that skill to pay, you need to jump the gap that was once a stable existence. It might not be its own skill any longer; you might need the domain knowledge at a whole higher level.

zkmon · 7h ago
It's not just the grunt work going to AI. Actually, it is the opposite: the grunt work of dealing with mess is the only thing left for humans. Think of legacy systems, archaic processes, meaningless workflows, dealing with other teams and people, the politics of work, negotiations, team calls, the history of technical issues... AI is a new recruit with massive general abilities, but no clue about the dingy layers of the corporate mess.
octo888 · 8h ago
I wasted 2 days using Cursor with the 3.7 thinking models to implement a relatively straightforward task (partly out of malicious compliance with being highly encouraged to use the tools, and because a coworker insisted I use their overly complex mini-framework instead of just plain code for this task).

It went round in circles doubting itself. When I challenged it, it would redo or undo too much of its work instead of focussing on what I was asking it about. It seemed desperate to please me, backing down whenever I challenged it.

I.e., depending on it turned me into a junior coder: overly complex code, jumping to code without enough thought, etc.

Yes yes I'm holding it wrong

The code these tools create seems to produce a mess that is then also solved by AI. Huge, sprawling messes. Funny, that. God help us if we need to clean up these messes when AI dies down.

bluefirebrand · 3h ago
If it helps any, this is not just you. I'm having the same kinds of problems, both with "being pressured to use the tools" and also being completely run around in circles when I try
asimpletune · 5h ago
Software engineers should unionize. We’re not real engineers until we have professional standards that are enforced (as well as liability for what we make). Virtually every other profession has some mandatory license or other mechanism to bring credibility and standards to their fields. AI just further emphasizes the need for such an institution.
bdcravens · 37m ago
A substantial portion of our field would then be unemployable. A significant number of those left would be complaining about the new standards.
lawgimenez · 11h ago
I just recently inherited a vibe-coded iOS project (fully created with ChatGPT), and it's not even close to working. This is annoying.
grogenaut · 10h ago
I helped my brother a few times with iOS apps he had folks from Upwork build. They also didn't work beyond the initial version. They always wanted to rebuild the app from scratch for each new requirement.
lawgimenez · 10h ago
Everything's a mess: a thousand commits and PRs from ChatGPT. The code won't compile because Codex, it seems, doesn't understand static variables.

And now this error: "The project is damaged and cannot be opened due to a parse error. Examine the project file for invalid edits or unresolved source control conflicts."

GardenLetter27 · 3h ago
I feel exactly the same way.

I just wish they were forced to publish open weights in return for using copyrighted materials in the "brain".

Animats · 10h ago
It's hard to see manual coding of web sites remaining a thing for much longer.
bdcravens · 28m ago
I was building on Geocities in 1996, writing HTML in Notepad and uploading via FTP. I don't think I've manually coded a "web site" in many years.

Even including the broader definition of applications, your average application is very little manual code, being primarily third-party framework and library code with glue and business logic.

sublinear · 4h ago
If you're talking about personal websites I think that ship sailed almost 20 years ago with the rise of social media.

If you mean business websites, they are just about the most volatile code out there with crazy amounts of work that never stops. It's still a form of publishing after all. Every marketing decision has to filter through design agencies, legal and compliance, SEO, etc. before it gets handed off to web devs. Then the web dev has to push back when a ton of details are still wrong. Many decisions after testing are left unresolved by the time it goes live and those pages still need maintenance until its expiry.

Smaller businesses also have these problems with their websites, but with less complexity until they get more attention from the public.

bluefirebrand · 3h ago
> I think that ship sailed almost 20 years ago with the rise of social media

Well, forget social media even. WordPress, and now Shopify, have definitely eaten the personal website.

sublinear · 46m ago
The best part about a personal website is that you can do whatever you want.

A part of my mind is more convinced by the day that people don't want all this complexity and the AI nonsense might be the final straw.

By comparison, just writing the damn HTML yourself is a breeze. I get that most people don't really want to write code, but I'd compare it to cooking. You don't have to be a chef just to eat a decent meal made the way you like.

No, what's really wrong here is a lack of simple build tools and collections of stable libraries for web that don't try to lock you in. This is what pushes people to use crapware instead.

I can't find a decent pre-made pizza crust at the grocery store either, so sometimes I just order delivery instead of kneading dough.

globular-toast · 8h ago
Most people younger than 30 probably haven't "manually coded a website" anyway.
throwaway0123_5 · 20m ago
I doubt there's any age bracket where most people have manually coded a website tbh.
Apocryphon · 10h ago
AI is a red herring. The current malaise in the industry is caused by interest rates. Certainly, AI has the potential to disrupt things further later down the line, but the present has already been shaken enough by the whiplash between pandemic binging and post-ZIRP purging.
ifwinterco · 4h ago
This is my read as well.

Hard to say for sure, because ChatGPT came out at almost exactly the same time the post-COVID wheels were starting to fall off anyway, but I think it's fair to say that as of right now you can't really replace all (or even many) of your engineers with AI.

What you definitely can do, though, is fire 20+% of your engineers and get the same amount done, simply because more is not necessarily better.

mirsadm · 8h ago
Initially I felt anxiety about AI and its potential to destroy my career. However, now I am much more concerned about what will happen after 5 or 10 years of widespread AI slop, when humans lose motivation to produce content and all we're left with is AI continually regenerating the same rubbish over and over again. I suspect there'll be a shortage of programmers in the future, as people become hesitant to start a career in programming.
anovikov · 3h ago
The sad truth is that we probably aren't seeing any AI effects on hiring yet, or only minimally so. For now, this is just normal cyclic shit. The worst is yet to come.