'It is a better programmer than me': The reality of being laid off due to AI

22 points by roboboffin | 24 comments | 6/16/2025, 9:18:45 AM | independent.co.uk

Comments (24)

BrouteMinou · 2h ago
Hey ChatGPT, make an article from the following story, at least 500 words: >> "Someone said that a guy said his cousin talked to an AI," said the person we interviewed on the phone.

He added, "AI is good".

Be afraid, Doom and Gloom. <<

Thank you,

-- a reporter who still has a job, apparently.

ramon156 · 9h ago
I'm sure some people can be replaced by AI. I'm also sure a lot of these stories are just marketing for someone's crappy GPT wrapper.
roboboffin · 8h ago
I think workplaces will have to allow people time to adapt, so that if your particular skill set is replaced by AI, you have the ability to retrain for one that isn't.

Ultimately, a large part of many jobs is repetitive and can be replaced by pattern matching. The other side, creating new patterns, is hard and takes time. So, employers will have to take this into account: there may be long periods of “unproductive” time, or riskier experiments to try new ideas.

izacus · 8h ago
Who's doing all the prompting for the people being laid off, though? What does that transition look like?

Does the middle manager who previously bugged people to do the work now write the prompts, commit the code, and file the documents themselves?

roboboffin · 8h ago
I’m not saying people will be laid off, although that is what the article is about. So, I think people will still be prompting, but if you can prompt an agent and it can happily code away, what are you supposed to be doing? Watching it do its work? The only option is to constantly generate ideas for new work to drive value. That already happens gradually now, but as implementation becomes quicker, idea generation will have to accelerate.
izacus · 8h ago
> So, I think people will still be prompting, but if you can prompt an agent and it can happily code away, what are you supposed to be doing? Watching it do its work?

Well, what do managers do once they prompt junior developers? ;)

Also, "tell a prompt and wait for it to finish without intervention" is not something that happens even with magical Claud Code.

I'd really like to see someone actually lay out the roles (and positions, and people) that can be laid off due to AI, and then who actually runs the LLMs in the company after the layoffs, and how. I've never been in a company where new work wouldn't fill up the available developer capacity, so I'm really interested in what the new world would look like.

roboboffin · 7h ago
I just had a thought. Complex C++ systems used to take so long to compile that developers would go and have a coffee while they waited. This was before distributed compilation.

Maybe the job will return to that: a lot of waiting around, and “free” time.

fragmede · 7h ago
> Also, "tell a prompt and wait for it to finish without intervention" is not something that happens even with magical Claud Code.

That is how you interact with OpenAI's Codex though.

shishy · 3h ago
I agree with you but there's something ironic about seeing that comment here, especially considering how many jobs tech has replaced in the last few decades without people having the time to retrain.
joshstrange · 8h ago
It’s the responsibility of individuals to continue learning. Choosing to stop learning, and to be clear, it is a choice, can have dire consequences.

We are now a few years into LLMs being widely available/used, and if someone’s chosen to stick their head in the sand and ignore what’s happening around them, then that’s on them.

> I think workplaces will have to allow people time to adapt.

This feels like a very outdated view to me. Maybe we are worse off for that being the case, but by and large it will not happen. The people who take initiative and learn will advance, while the people who refuse to learn anything new, or to change how they’ve been doing the job for XX years, will be pushed out.

bluefirebrand · 7h ago
> It’s the responsibility of individuals to continue learning

Using AI is the opposite of learning.

I'm not just trying to be snarky and dismissive, either

That's the whole selling point of AI tools. "You can do this without learning it, because the AI knows how"

joshstrange · 7h ago
> That's the whole selling point of AI tools. "You can do this without learning it, because the AI knows how"

I'm sure we are veering into "No true Scotsman" territory, but that's not the type of learning/tools I'm suggesting. "Vibe coding" is a scourge for anything more than a one-off POC, but LLMs themselves are very helpful for pinpointing errors and writing common blocks of code (Copilot auto-complete style), and even tools like Aider/Claude Code can be used in a good way, if and only if you review _all_ the code they generate.

As soon as you disconnect yourself from the code it's game over. If you find yourself saying "Well it does what I want, commit/ship it" then you're doing it wrong.

On the other hand, there are some people who refuse to use LLMs, for reasons ranging from silly to absurd. Those people will be passed by and will have no one to blame but themselves. LLMs are simply another tool in the toolbox.

I am not a horse cart driver, I am a transportation expert. If the means of transport changes/advances then so will I. I will not get bogged down in "I've been driving horses for XX years and that's what I want to do till the day I die", that's just silly. You have to change with the times.

bluefirebrand · 6h ago
> As soon as you disconnect yourself from the code it's game over

We agree on this

The only difference is that I view using LLM generated code as already a significant disconnect from the code, and you seem to think some LLM usage is possible without disconnecting from the code

Maybe you're right but I have been trying to use them this way and so far I find it makes me completely detached from what I'm building

joshstrange · 5h ago
> The only difference is that I view using LLM generated code as already a significant disconnect from the code, and you seem to think some LLM usage is possible without disconnecting from the code

It's a gray area for sure, and almost no one online means the same thing when they say "LLM Tools", "LLM", "Vibe Coding", "AI", etc., which makes it even harder to have conversations. It's probably a lot like the joke "Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?".

For myself, I'm fine with Github Copilot auto-completions (up to ~10 lines max) and I review every line it wrote. Most often I enjoy it for boilerplate-ish things where an abstraction would be premature but I still have to type out a bunch of boilerplate. Being able to write 1-2 examples and have it extrapolate the rest is quite nice.
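To make that concrete (a contrived TypeScript sketch; the shapes and names are made up, not from any real project): I type the first field mapping by hand, the completion usually extrapolates the rest of the pattern, and I read each suggested line before accepting it.

    // Contrived boilerplate: mapping a snake_case API shape to a camelCase one,
    // where an abstraction would be premature but the typing is repetitive.
    interface ApiUser {
      first_name: string;
      last_name: string;
      email_address: string;
    }

    interface User {
      firstName: string;
      lastName: string;
      emailAddress: string;
    }

    function toUser(api: ApiUser): User {
      return {
        firstName: api.first_name,       // typed by hand
        lastName: api.last_name,         // auto-completed after the first line
        emailAddress: api.email_address, // auto-completed, reviewed before accepting
      };
    }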

I've used Aider/Claude Code [0] as well and had success, but I don't love the workflow of asking it to do something, then waiting for it to spit out a bunch of code I need to review. I expect this will improve, and I have seen some improvement already. For some tasks it has me beat (speed of writing UI), but for most logic-type things I have been unable to prompt it well enough, or give it enough of the right context, to solve the problem. Because of this I mainly use these tools for one-offs, POCs, or just screwing around.

I also find them useful for things like explaining an error or tracking down its root cause.

I am very much _not_ a fan of "Vibe Coding" or anything that pretends it can be "no code"/"low code". I don't know if I'll ever be comfortable not reviewing the code directly, but we will see. I'm sure assembly developers swore they'd never use C, then C developers swore they'd never use C++, who in turn swore they'd never use Python, and so on and so forth. It's not clear to me whether LLM-generated code is another step up the ladder or just a tool for the current level; I'm leaning heavily towards it just being a tool. I don't think "prompt engineer" is going to be a thing.

[0] And Continue.dev, Cursor, Windsurf, Codeium, Tabnine, Junie, Jetbrains AI, and more

bluefirebrand · 4h ago
> For myself, I'm fine with Github Copilot auto-completions (up to ~10 lines max) and I review every line it wrote

This is what I would like to use it for, but I have been struggling quite a bit with it

If I have a rough idea of what a 10-line function might look like and Cursor makes an autocomplete suggestion, it is nice when it is basically what I had in mind and I can just accept it. This happens very rarely for me, though.

More often I find the suggestion is just wrong enough that I want to change it, so I don't accept it. But this also shoves the idea I had in my head right out of my brain and now I'm in a worse position, having to reconstruct the idea I already had

This happened to me enough that I wound up entirely turning off these suggestions. It was ruining my ability to achieve any kind of flow

> Because of this I mainly use these tools for one-off, POC, or just screwing around.

Yeah... My company is making these tools mandatory and I suspect they are collecting metrics to see who is using them and how much

It's been very stressful and overall an extremely negative experience for me, which is made worse when I read the constant cheerleading online, and the "You're just using it wrong" criticisms of my negative experience

joshstrange · 4h ago
> Yeah... My company is making these tools mandatory and I suspect they are collecting metrics to see who is using them and how much

I'm sorry to hear this. I have encouraged the developers I manage to try out the tools, but we're nowhere close to "forcing" anyone to use them. It hasn't come up yet, but I'll be pushing back hard on any code that is clearly LLM-generated, especially if the developer who "wrote" it can't explain what's happening. Understanding and _owning_ the code the LLMs generate is part of it; "ChatGPT said..." or "Cursor wrote..." are not valid answers to a question like "Why did you do it this way?". LLM-washing (or whatever you want to call it) will not be tolerated: if you commit it, you are responsible for it.

> It's been very stressful and overall an extremely negative experience for me, which is made worse when I read the constant cheerleading online, and the "You're just using it wrong" criticisms of my negative experience

I hate hearing this, because there are plenty of people writing blog posts or making YouTube videos about how they are 10000x-ing their workflow. I think most of those people are completely full of it. I do believe it can be done (managing multiple Claude Code or similar instances running in parallel), but it turns you into a code reviewer, and because you've already ceded so much control to the LLM it's easy to fall into the trap of thinking "one more back-and-forth and the LLM will get it" (cut to 10+ back-and-forths later, when you need to pull the ripcord and reset back to the start).

Copilot and short suggestions (no prompting from me, just it suggesting the next few lines in-line) are the sweet spot for me. I fear many people are incorrectly extrapolating LLM capability. "Because I prompted my way to a POC, clearly an LLM would have no problem adding a simple feature to my existing code base" - not so, not by a long shot.

bluefirebrand · 3h ago
> it turns you into a code reviewer

Yes. Not only is this the least enjoyable part of the job for me in general, I think it is a task that a lot of devs, even pretty diligent ones, wind up half-assing.

I personally don't mind reviewing coworkers' code, because I think it is an opportunity to mentor and learn, but that is not really the case with LLM-generated code. That review becomes purely "does this do what I want, and does it match the style guide?"

I would much rather LLMs review my code than the other way around. Unfortunately even that workflow is more annoying than anything, because the LLM is often not a good reviewer either

I think I just expect more reliability out of the tools I use.

joshstrange · 3h ago
I completely agree. I’m not anti-code-review, but it’s by far the least enjoyable part of my job. It’s never going to give you the same understanding that getting into the code will.

That’s acceptable when there is another human who _does_ understand the code (they wrote it) and someone who can learn and grow via the code review process.

None of that applies to LLM-generated code.

In many cases, if it fails at the task, it’s much easier for me to just do it myself than to go a couple more rounds with the LLM (because it’s almost never as easy as a normal code review; you have to prompt better and be more explicit).

That’s my biggest annoyance with Claude Code/Aider: always feeling like I’m one prompt away from everything slotting into place. When, in reality, each time I get back on the merry-go-round it might fix one thing and then break another. Or its “fix” might be absurd (“I’ll just cast this so it’s the right type” :facepalm:).
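To show the kind of “fix” I mean (a contrived TypeScript sketch; the names are invented): the compiler flags a type mismatch, and instead of tracing where the wrong value comes from, the model just silences the type system.

    type UserId = number;

    function loadUser(id: UserId): void {
      console.log(`loading user ${id}`);
    }

    const rawId: string = "42"; // e.g. straight from a query parameter

    // The absurd "fix": cast it so it's "the right type".
    // It compiles, but at runtime the id is still a string.
    loadUser(rawId as unknown as UserId);

    // The actual fix: parse and validate the value.
    const parsed = Number.parseInt(rawId, 10);
    if (!Number.isNaN(parsed)) {
      loadUser(parsed);
    }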

ryandrake · 3h ago
> Yeah... My company is making these tools mandatory and I suspect they are collecting metrics to see who is using them and how much

This I just don't get. If the tool is actually useful and makes you more productive, then developers will be banging down management's door to let them use it, not the other way around. If the company has to resort to forcing their employees to use a tool, what does that say about the tool?

bluefirebrand · 3h ago
I don't think the tool's effectiveness matters at all, honestly, which is a big part of the problem.

What matters is that they can report some AI-usage numbers to shareholders who are convinced this is the future, no matter what.

roboboffin · 8h ago
The difference for me is that things are changing too fast to keep up. For example, if a large part of your job is taken away seemingly overnight by a new model, your whole job could change in a heartbeat.

What preparation are you supposed to do for this? Previously, change was relatively slow and it was reasonable to keep up in your own time. I believe that is no longer possible.

harimau777 · 7h ago
I'm not sure that many people have time to continue learning. Certainly when one is young it is easy, but at some point people are spending their time outside of work building a life, raising a family, etc.
hulitu · 8h ago
> So that if your particular skill set is replaced by AI

Is AI capable of any skill? I mean, Microsoft or Google software looks like it was written by AI, but it has looked that way for 20 years.

adrian_b · 3h ago
My opinion is that the so-called "AI", when applied to programming, is just a clever trick for avoiding the copyright laws.

Despite the fact that half a century ago there was a lot of talk about "software reuse", it has never happened at the expected scale, and not for technical reasons. It has never happened because copyright laws have prevented it.

During the early years of electronic computers, there were computer user groups where programs of general interest were shared rather freely between different companies, in order to avoid duplicating programming work. This changed sharply after the appearance of software as a product that could be sold and bought separately from computer hardware.

Even when an open-source solution exists, it frequently cannot be incorporated into the program you are writing for a company, due to incompatible licenses. Therefore much of the work of programming consists of rewriting, with minor variations, programs that have already been written countless times before, but at another company or by some individual.

The "AI" that "writes" a program for you just makes a search instead of you for one of the existing solutions, with the additional advantage that it has searched code bases that you would have been forbidden to search, and it produces a program source that has been detached from its original copyright, allowing it to be inserted in a proprietary program of your company.

A program "written" by an AI will have the highest quality when it almost matches some program that has been present in the training set. Whenever the generated program is more distant from a verbatim reproduction, being a combination of several programs or having some random changes, there is a high probability that the AI has introduced some errors, which must be corrected by a competent programmer in order to get a valid programming solution.

A human could have done exactly the same thing as the "AI", replacing most of the programming with searching, then copying and pasting, with a similar increase in productivity; but that would have been punishable under existing law, while when the AI does it, it is legal.

If software reuse had been possible without copyright restrictions, then indeed fewer programming jobs would have been needed. So with the "AI" workaround against those laws, a reduction in the number of jobs in this field really is to be expected.