I think a lot of people on HN would disagree with this article, but I’ve yet to see many people say AI has made their coworkers more productive. That is, do people feel like they’re getting better, more reviewable PRs?
Personally, I’ve been seeing the number of changes for a PR starting to reach into the mid-hundreds now. And fundamentally the developers who make them don’t understand how they work. They often think they do, but then I’ll ask them something about the design and they’ll reply: “IDK Claude did that.”
By no means am I down on AI, but I think proper procedures need to be put into place unless we want a giant bomb in our code base.
aantix · 2h ago
I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.
AI may be multi-threaded, but there's still a human, global interpreter lock in place. :D
If you put the code up for review, regardless of the source, you should fundamentally understand how it works.
This raises a broader point about AI and productivity: while AI promises parallelism, there's still the human in the middle who is responsible for the code.
The promise of "parallelism" is overstated.
Hundreds of PRs should not be trusted. Or at least not without the C-suite understanding such risks. Maybe you're a small startup looking to get out the door as quickly as possible, so... YOLO.
But it's going to be a hot mess. A "clean up in aisle nine" level mess.
j-bos · 2h ago
> I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.
I strongly agree; however, managers^x do not, and they want to see reports of the massive "productivity" gains.
bcrosby95 · 2h ago
This.
I know many who have it from on high that they must use AI. One place even has bonuses tied not to productivity, but to how much they use AI.
Meanwhile managers ask: if AI is writing so much code, why aren't they seeing it in topline productivity numbers?
dingnuts · 2h ago
Sane CTOs think "Claude did that" is invalid. I assure you: those leaders exist. Refuse to work for idiots who think bots can be held accountable. You must understand every line of code yourself.
"Claude did that" is functionally equivalent to "idk I copied that from r/programming" and is totally unacceptable for a professional
derf_ · 1h ago
> You must understand every line of code yourself.
I have never seen this standard reached for any real codebase of any size.
Even in projects with a reputation for a strong review culture, people know who the "easy" reviewers are and target them for the dicey stuff (they are often the most overloaded... which only causes them to get more overloaded). I've seen people explicitly state they are just "rubber stamping" PRs. Literally no one reviews every line of third-party dependencies, and especially not when they are updated routinely. I've seen over a million lines of security-sensitive third-party code integrated and pushed out to hundreds of millions of users by a handful of developers in a matter of months. I've seen developers write their new green-field code as a third-party library to circumvent the review process that would have been applied if it had been developed as a series of first-party PRs. None of that had anything to do with AI. It all predated AI coding tools. That is how humans behave.
Does this create ticking time-bombs? It absolutely does. You do the best you can. You triage and deal with the most important things according to your best judgment, and circle back to the rest as time and attention allow. If your judgment is good, it's mostly okay. Some day it might not be. But I do not think that you can argue that the optimal level of risk is zero, outside of a few specialized contexts like space shuttles and nuclear reactors.
I know. It hurts my soul, too. But reality isn't pretty, and worse is better.
jaredcwhite · 6m ago
I think the "you" in the quote is referring to the programmer of the PR, not the reviewer. I agree that it's probably unrealistic to expect reviewers to understand every line of code in a PR. That's why it's crucial that the programmers of said PRs themselves understand every line of code. I'll go one step further:
If you submit a PR and you yourself can not personally vouch for every line of code as a professional…then you are not a professional. You are a hack.
That is why these code generation tools are so dangerous. Sure, it's theoretically possible that a programmer can rely on them for offering suggestions of new code and then "write" that code for a PR such that full human understanding is maintained and true craft is preserved. The reality is, that's not what's happening. At all. And it's a full-blown crisis.
Izikiel43 · 2h ago
You tell them that Clippy's Revengeance's PR caused an outage worth millions of dollars because of the push for productivity, and that they shouldn't bother you for a couple of months.
corytheboyd · 2h ago
> The promise of "parallelism" is overstated.
100% my takeaway after trying to parallelize using worktrees. While Claude has no problem managing more than one context instance, I sure as hell do. It’s exhausting, to the point of slowing me down.
vouaobrasil · 1h ago
That's an intended effect. It doesn't matter to those in power who know what AI is really for. Once you get so exhausted that you can't work any more, there will be a hundred bright-eyed naïve programmers who will step into your place and who think they can do better. Until they burn out in a few years' time.
corytheboyd · 18m ago
I have been wondering when I would start to feel aged out of the tech industry… gosh is it here already?
vouaobrasil · 9m ago
I don't know if it is but I'm certainly glad I left tech a long time ago...
SoftTalker · 2h ago
As someone who doesn't use AI for writing code, why can't you just ask Claude to write up an explanation of each change for code review? Then at least you can look at whether the explanation seems sane.
zahlman · 2h ago
Because the explanations will often not be sane; when they are sane, they will focus on irrelevant details and be maddeningly padded out unless you put inordinate effort into trying to control the AI's writing style.
Ask pretty much any FOSS developer who has received AI-generated (both code and explanations) PRs on GitHub (and when you complain about these, the author will almost always use the same AI to generate responses) about their experiences. It's a huge time sink if you don't cut them off. There are plenty of projects out there now that have explicit policy documentation against such submissions and even boilerplate messages for rejecting them.
threetonesun · 2h ago
It will fairly confidently state changes are "correct" for whatever reason it makes up. This becomes more of an issue with things that might be edge cases or vague requirements, in which case it's better to have AI write tests instead of the code.
ahoef · 2h ago
Claude also doesn't know, because Claude dreamt up changes that didn't work, then "fixed" them, "fixed" them again, and in the process left swathes of code that are never reached.
thegeomaster · 2h ago
This can be dangerous, because Claude doesn't truly understand why it did something. Whatever it writes is a post-hoc justification, which may or may not be accurate to the "intent". This is because these are still autoregressive models --- they have only the context to go on, not prior intent.
zahlman · 2h ago
Indeed. Watching it (well, Anthropic, really) cheat at Baba Is You and then try to give a rationalization for how it came up with the solution (qv. https://news.ycombinator.com/item?id=44473615) is quite instructive.
bakuninsbart · 1h ago
I've been experimenting with Claude, and feel like it works quite well if I micromanage it. I will ask it: "Ok, but why this way and not the simpler way?" And it will go "You are absolutely right" and implement the changes exactly how I want them. At least I think it does. Repeatedly, I've looked at a PR I created (and reviewed myself, as I'm not using it "on production"), and found some pretty useless stuff mixed into otherwise solid PRs. These things are so easily missed.
That said, the models, or to be more precise, the tools surrounding them and the craft of interacting with them, are still improving at a pace where I now believe we will get to a point where "hand-crafted" code is the exception in a matter of years.
bcrosby95 · 2h ago
AI is not a human. If it understands things at all, it doesn't understand them the way you or I do. This means it can misunderstand things in ways we can't understand.
sundaeofshock · 1h ago
AI is not sentient, so it does not “understand” anything. I don’t expect the autocomplete of my messenger app to understand its output, so why should I expect Claude to understand its output?
kibwen · 2h ago
> I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.
That will work, but only until the people filing these PRs go crying to their managers that you refuse to merge any of their code, at which point you'll be given a stern reprimand from your betters to stop being so picky. Have fun vibe-reviewing.
pkdpic · 2h ago
Yeah, in a perfect world I absolutely agree. But the reality I'm observing is that everything continues to be behind schedule (not a new phenomenon), and if anything expectations from project leads / management are just getting less realistic. That leaves Jr devs in even more of a position of "no time to be curious or learn deeply, just get it done", and team leads reviewing PRs in a maybe even worse position of "no time for a deep review / mentorship session, just figure out if it breaks anything or not". And ultimately clients are still living in fantasy land, both in terms of build-out time for basic features / patches and in terms of how much AI razzle-dazzle they expect in new project proposals.
Nothing can move fast enough to keep up with these hype-fueled TED talk expectations all the way up the chain.
I don't know if there's any solution and I'm sure it's not like this everywhere but I'm also sure I'm not alone. At this point I'm just trying to keep my feet wet on "AI" related projects until the hype dust settles so I can reassess what this industry even is anymore. Maybe it's not too late to get a single subject credential and go teach math or finger painting or something.
ryandrake · 2h ago
It's insane that any company would just be OK with "IDK Claude did that" any more than a 2010 version of that company would be OK with "IDK I copy pasted from StackOverflow." Have engineering managers actually drunk this Kool-Aid to the point where they're OK with their direct reports just chucking PRs over the wall that they don't even understand?
Imustaskforhelp · 2h ago
It's even funnier when you realize that Claude and all AI models are trained on data that includes Stack Overflow.
So I guess if you asked Claude why it did that, the truth of it might be "IDK I copy pasted from StackOverflow"
The same stuff pasted with a different sticker. Looks good to me.
jaredcwhite · 4m ago
Ha, I genuinely laughed at that. Thank you!
f1shy · 2h ago
This is exactly how I see it. It's not about the tool, it's how it is used. In 1990 that would have been “IDK I got it from a BBS” and in 1980 “got it from a magazine”. It doesn't matter how you get there, you have to understand it. BTW I had a similar problem when I was a manager in HW development, where the value of a resistor had no documented calculation. I would ask: where did it come from? If the answer was “I tried and it worked”, or “tested in lab until I found it”, or in the 2000s “I ran many simulations and this was the best value”, I would reject it and ask for proper calculations, with WCA (worst-case analysis).
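(To make “proper calculations, with WCA” concrete, a minimal sketch with made-up values: a 3.3 V divider built from ±1% resistors. The documented answer is the worst-case output band, not the one value that happened to work on the bench.)

    // Hypothetical worst-case analysis (WCA) of a resistor divider; all values are made up.
    const vin = 3.3;       // supply voltage [V]
    const r1 = 10_000;     // top resistor [ohm], nominal
    const r2 = 10_000;     // bottom resistor [ohm], nominal
    const tol = 0.01;      // +/-1% tolerance

    const vOut = (r1x: number, r2x: number): number => vin * r2x / (r1x + r2x);

    const nominal = vOut(r1, r2);                       // 1.650 V
    const low = vOut(r1 * (1 + tol), r2 * (1 - tol));   // ~1.634 V (r1 high, r2 low)
    const high = vOut(r1 * (1 - tol), r2 * (1 + tol));  // ~1.667 V (r1 low, r2 high)

    console.log(`nominal ${nominal.toFixed(3)} V, worst case ${low.toFixed(3)} to ${high.toFixed(3)} V`);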
Andrex · 2h ago
As vibe coding becomes more commonplace you'll see these historical safeguards erode. That is the danger IMO.
You're right, saying you got something off SO would get you laughed out of programming circles back in the day. We should be applying the same shame to people who vibe code, not encourage it, if we want human-parseable and maintainable software.
vkou · 2h ago
> That is the danger IMO.
For whom is this a danger?
If we're paid to dig ditches and fill them, who are we to question our supreme leaders? They control the purse strings, so of course they know best.
joseda-hg · 2h ago
I don't think it's common, but I've definitely seen it
I've also seen "Ask ChatGPT if you're doing X right?", followed by basically signing off on whatever it recommends without checking.
At this point I'm pretty confident I could trojan horse whatever decision I want from certain people by sending enough screenshots of ChatGPT agreeing with me
calebkaiser · 2h ago
I don't think this is an AI specific thing. I work in the field, and so I'm around some of the most enthusiastic adopters of LLMs, and from what I see, engineering cultures surrounding LLM usage typically match the org's previous general engineering culture.
So, for example, by and large the orgs I've seen chucking Claude PRs over the wall with little review were previously chucking 100% human written PRs over the wall with little review.
Similarly, the teams I see effectively using test suites to guide their code generation are the same teams that effectively use test suites to guide their general software engineering workflows.
throwanem · 2h ago
How long are you spending with a given team, and where, exactly, in their "AI lifecycle"? I would expect (for example) a sales engineer to see this differently than a support engineer, if support engineers still existed.
throwanem · 2h ago
"Look, the build is green and CI belongs to another team, how perfectionist do you need us to be about this?" is the sort of response I would generally expect, and also in the case where AI was used.
siva7 · 2h ago
If it pushes some nice velocity metric, most managers would be ok. Though you have to word it a bit differently of course.
liveoneggs · 2h ago
Of course they are okay with it. They changed the job function to be just that with forced(!) AI adoption.
danielmarkbruce · 1h ago
What about "the compiler did that" ?
throwawaysleep · 2h ago
Depends on your incentives. People anecdotally seem far more impressed with buggy stuff shipped fast than good stuff shipped slowly.
Lots of companies just accept bugs as something that happens.
dontlikeyoueith · 2h ago
Depends on your problem space too.
Calendar app for local social clubs? Ship it and fix it later.
B2B payments software that triggers funds transfers? JFC I hope you PIP people for that.
j45 · 2h ago
The same developers submitting Claude-generated PRs could take 1-2 minutes to ask for an explanation of what they're submitting and how it works. They might even learn something.
Stack Overflow at least gave some provenance to what was copied and pasted. Models may not. Provenance remains a thing; without it, you add risk to the code.
f1shy · 2h ago
At least I would not accept that from my team. It's borderline infuriating. And I would promptly insinuate that if that is the answer, next time I don't need you: I'll ask Claude directly, and you can stay home!
Herring · 2h ago
Maybe check what's their workload otherwise. Most engineers I've worked with want to do a good job and ship something useful. It's possible they're offloading work to the LLM because they're under a lot of pressure. (And in this case you can't make them stay home)
NegativeLatency · 2h ago
> I don't think "IDK Claude did that" is a valid excuse.
It's not, and yet I have seen that offered as an excuse several times.
grogenaut · 2h ago
Did you push back?
ToucanLoucan · 2h ago
> If you put the code up for review, regardless of the source, you should fundamentally understand how it works.
Inb4 the chorus of whining from AI hypists accusing you of being a coastal elitist intellectual jerk for daring to suggest that they might want to LEARN something.
I am so over this anti-intellectual garbage. It's gotten to such a ridiculous place in our society and is literally going to get tons of people killed.
zahlman · 2h ago
I understand and agree with your frustration, but this is not what discourse here is supposed to look like.
whstl · 2h ago
> That is, do people feel like they’re getting better, more reviewable PRs?
No, like you I’m getting more PRs that are less reviewable.
It multiplies what you’re capable of. So you’ll get a LOT of low quality code from devs who aren’t much into quality.
rafaelmn · 2h ago
I've seen the worst impact on mid/junior level devs. Where they would have had to struggle through a problem before, AI is like a magic shortcut to something that looks like it works. And then they submit that crap and I can't trust them again - they will give me AI code without even fully understanding what it does. It robbed them of the learning process and made them even less useful, while making it seem to them that they were achieving something. I'm seeing these kinds of people getting removed from the workforce fast - you can probably prompt the AI better on your own, have one less layer of indirection, and it will be faster.
danielmarkbruce · 1h ago
Exactly this. In the right hands the outputs from LLMs are amazing. Not always right, but can be checked. In the wrong hands they are worse than useless because they cannot be checked.
SatvikBeri · 1h ago
A new coworker solved a problem that we'd basically been unable to fix for 2 years, and we were relying on workarounds. This was especially impressive because it was totally outside his expertise. When I asked how he did it, a big part of his process was using LLMs.
For the most part, I've noticed my LLM-using coworkers producing better PRs, but I haven't noticed anyone producing more.
uludag · 2h ago
I've been at the same company both before and after the AI revolution. I've felt something similar. People seem to be more detached, more aloof in their work. I feel like we're discussing our code less and are less able to have coherent big-picture plans concerning the code at-large.
vouaobrasil · 1h ago
The ultimate purpose of AI is indeed to remove cognitive autonomy. Little pieces here and there may seem empowering, but added all up, it pretty much takes control away from people.
nlawalker · 2h ago
> I’ll ask them something about the design and they’ll reply: “IDK Claude did that.”
"Then your job is to go ask Claude and get back to me. On that note, if that's what I'm paying you for now, I might be paying you too much..."
I'm really interested to see how the intersection of AI and accountability develops over the next few years. It seems like everyone's primary job will effectively be taking accountability for the AI they're driving, and the pay will be a function of how much and what kind of accountability you're taking and the overall stakes.
moritzwarhier · 1h ago
I hope so. In reality, people who review code are not always the superiors of the other party :)
Not meaning to put on a holier-than-thou hat here, but I've hated this many times already. E.g. having to approve code that contains useless comments that were clearly generated by AI, while knowing it would come across as nitpicking to bring it up during review...
If my colleagues did that more often and for more parts of the code, that's when I'd really start to be inclined to look for a new job.
chocolatemario · 2h ago
This just sounds like a low quality bar, or PRs that are too large in scope for AI. In any case, it sounds like taking a single stab at it with an LLM and calling it good. I'm not really an AI-for-everything type of dev, but there are some tasks that AI excels at. If you're doing it for feature work on a high-churn product without a tight grip on the reins, I fear for your product's future.
pfisherman · 2h ago
> Personally, I’ve been seeing the number of changes for a PR starting to reach into the mid-hundreds now. And fundamentally the developers who make them don’t understand how they work.
Could this be fixed by adjusting how tickets are scoped?
ratelimitsteve · 2h ago
>I’ll ask them something about the design and they’ll reply: “IDK Claude did that.”
I would want someone entirely off of my team if they did that. Anyone who pushes code they don't understand at least well enough to answer "What does that do?" and "Why did you do it that way?" deserves for their PR to be flat out rejected in whole, not just altered.
josefritzishere · 46m ago
I too have seen anecdotes that people "feel" more productive. But science says that they're 19% less productive. I'm going to go with science on this one.
vouaobrasil · 12m ago
Further, the problem with the word "productive" is that it's often measured in quantity and speed, not quality.
qsort · 2h ago
It's extremely hard to measure productivity correctly and self-reports are worthless. I don't think AI tools are a net negative in the average case (people are definitely indexing too much on that goddamn METR article) but "i'm 10x more productive, source trust me bro" is equally nonsense.
Using AI tooling means, at least in part, betting on the future.
didericis · 2h ago
> Using AI tooling means, at least in part, betting on the future.
It means betting on a particular LLM centric vision of the future.
I’m still agnostic on that. I think LLMs allow for the creation of a lot of one off scripts and things for people that wouldn’t otherwise be coding, but I have yet to be convinced that more AI usage in a sufficiently senior software development team is more valuable than the traditional way of doing things.
I think there’s a fundamental necessity for a human to articulate what a given piece of software should do with a high level of specificity that can’t ever be avoided. The best you can do is piggy back off of higher level language and abstractions that guess what the specifics should be, but I don’t think it’s realistic to think all combinations of all business logic and ui can be boiled down to common patterns that an LLM could infer. And even if that were true, people get bored/like novelty enough that they’ll always want new human created stuff to shove into the training set.
danielmarkbruce · 1h ago
An LLM is not a tool to allow you to add a layer of abstraction. It's a worker.
alexander2002 · 2h ago
AI is like a giant calculator: you need a formula to make it work for your use case.
liveoneggs · 2h ago
Calculators are mostly deterministic and AI is explicitly not.
When computers give different answers to the same questions it's a fundamental shift in how we work with them.
asciii · 2h ago
That's a great analogy. I recently read about integrating AI similarly to the use of calculators in math class -- learn how to do the basic operations (+, -, /, *) first, and then use a calculator to scale, so you get some theoretical grounding.
SamInTheShell · 2h ago
Accurate. It's autocomplete on steroids.
throwawaysleep · 2h ago
A lot depends heavily on how you define productive.
At one of my jobs, the PRs are far less reviewable, but now the devs write tests when they didn't use to bother. I've always viewed reviewing PRs as work without recognition, so I never did much of it anyway, but now that there are passing tests, I often approve without more than a cursory skim.
So yes, it has made their work more productive for me to get off my plate.
skydhash · 2h ago
But are the tests actually useful? You can have a test suite that is actually harmful if it's not ensuring business rules or domain correctness. Anything else makes for a brittle dev ecosystem.
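(To make the distinction concrete, a tiny hypothetical sketch: the first test passes for all sorts of wrong reasons; the second encodes the actual business rule and fails the moment the rule regresses.)

    // Hypothetical domain rule: orders under $50 pay a flat $5 shipping fee.
    import { describe, expect, test } from "vitest";

    function totalWithShipping(subtotal: number): number {
      return subtotal < 50 ? subtotal + 5 : subtotal;
    }

    describe("shipping fee", () => {
      // Brittle: only checks that "something happened"; a broken rule still passes.
      test("adds some fee", () => {
        expect(totalWithShipping(30)).toBeGreaterThan(30);
      });

      // Useful: pins the business rule itself.
      test("orders under $50 pay $5 shipping; $50 and over ship free", () => {
        expect(totalWithShipping(30)).toBe(35);
        expect(totalWithShipping(50)).toBe(50);
      });
    });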
vasco · 2h ago
I've created hundreds of small scripts that I wouldn't have bothered with before; I'd either have done the checks manually or just not had the information. On "small script" productivity alone it has already saved me a lot of time.
The problem is people trying to make the models do things that are too close to their limit. You should use LLMs for things they can ace already, not waste time trying to get them to invent some new algorithm. If I don't 0-3 shot a problem, I will just either do it manually or not do it.
Similar to giving up on a Google search when you've tried a few times and nothing useful comes back in the first few attempts. You don't keep at it the whole afternoon.
gspencley · 2h ago
Yeah I'm not an AI naysayer but was pretty skeptical that AI could actually do anything for my workflow.
Tedious, repetitive stuff that you'd typically write a throwaway script for is one good use case for AI for me. This doesn't come up very often for me, however. But this morning I needed to update a bunch of import statements in TypeScript files to use relative paths instead of referencing a library name and Cursor did that for me quickly and easily without me needing to write a script.
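(For illustration, with hypothetical module names, this is the kind of mechanical rewrite I mean:)

    // Before: the import referenced the library name
    // import { formatDate } from "@acme/ui-lib";

    // After: same symbol, now resolved via a relative path into the local source tree
    import { formatDate } from "../../ui-lib/src/formatDate";

    console.log(formatDate); // placeholder use so the import isn't flagged as unused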
I've also found that if you're unsure how to solve a problem within an existing codebase, or you're about to go down a rabbit hole of studying documentation to figure out how to wire something up, an LLM can get you up and running with code examples a bit quicker.
But if I already know, high level, how I'm going to solve something then the whole "needing to review the AI code" part of the workflow more than eats up any time savings. And the parallelism thing is just a ton of context shifting to me. Each time I need to go back to an agent to review what it did, it takes me away from the deep focus on a domain problem that I was focused on, and there's a good 20 - 30 minutes of productivity lost by switching back and forth between just two tasks.
timeinput · 2h ago
Agreed, and after you've worked with them for a bit you can start to predict where they are going to fail and do something silly like delete your code base, and where they'll have no trouble, succeed, and add that feature you were after.
A good code review and edit to remove the excess verbosity, and you got a feature done real fast.
Ask it for something at or above its limit and the code is very difficult to debug, difficult to understand, has potentially misleading comments, and more. Knowing how to work with these overly confident coworkers is definitely a skill. I feel it varies significantly from model to model as well.
It's often difficult to task other programmers with work at or above their limits too.
lucianbr · 2h ago
> The problem is people trying to make the models do things that are too close to their limit.
As advertised or at least strongly implied by the companies that own the models.
vasco · 2h ago
The shampoo company also tells you to apply twice, you don't have to believe them.
paulcole · 2h ago
I'm in a similar boat to you.
I'm not a programmer but from time to time would make automations or small scripts to make my job easier.
LLMs have made much more complex automations and scripts possible while making it dead simple to create the scripts I used to make.
bko · 2h ago
I don't get these articles. First the author claims that AI made us more productive but now the time we saved is spent on more work (!)
> But here’s the kicker: we were told AI would free us up for “higher-level work.” What actually happened? We just found more work to fill the space. That two-hour block AI created by automating your morning reports? It’s now packed with three new meetings.
But then a few sentences later, she argues that tools made us less productive.
> when developers used AI tools, they took 19% longer to complete tasks than without AI. Even more telling: the developers estimated they were 20% faster with AI—they were completely wrong about their own productivity.
Then she switches back to saying it saves us time but cognitive debt!
> If an AI tool saves you 30 minutes but leaves you mentally drained and second-guessing everything, that’s not productivity—that’s cognitive debt.
I think you have to pick one, or just admit you don't like AI because it makes you feel icky for whatever reason so you're going to throw every type of argument you can against it.
vouaobrasil · 2h ago
> I don't get these articles. First the author claims that AI made us more productive but now the time we saved is spent on more work (!)
Well, that is a general phenomenon of technology. Technology does indeed often save time at first (in the short-term) but then compensates in a bad way and gives us more work in the long term. Pretty much every technology past a certain point of sophistication does this:
- Smartphones immediately can help us organize stuff, but then they also make it easier for people to call you when you don't want, you get more spam, etc.
- Cars make it easier to get from A to B, but in the long run we now have to spend countless years of work cleaning up the climate/environment
- Computers make typing and storing information more efficient but now we have to spend countless hours on securing them
The bottom line is that efficiency increases from a technological creation point of view but life simplicity decreases, in general.
bityard · 2h ago
There was also no advice for the main problem posited in the first couple of paragraphs. That is, being asked to do more with less time.
The right answer to this is: speak up for yourself. Dumping your feelings into HN or Reddit or your blog can be a temporary coping mechanism but it doesn't solve the problem. If you are legitimately working your tail off and not able to keep up with your workload, tell your manager or clients. Tactfully, of course. If they won't listen, then it's time to move on. There are reasonable people/companies to work with out there, but they sometimes take some effort to find.
bsenftner · 2h ago
It's sloppy click bait writing.
pfisherman · 2h ago
> But here’s the kicker: we were told AI would free us up for “higher-level work.” What actually happened? We just found more work to fill the space. That two-hour block AI created by automating your morning reports? It’s now packed with three new meetings. The 30 minutes you saved on data analysis? You’re using it to manage two more AI tools and review their outputs.
This is basically the definition of increased productivity and efficiency. Doing more stuff in the same amount of time. What I tell people who are anxious about whether their job might be automated away by AI is this:
We will never run out of problems that need solving. The types of problems you spend your time solving will change. The key is to adapt your process to allocate your time to solving the right kinds of problems. You don't want to be the person taking an hour to do arithmetic by hand when you have access to spreadsheets.
marknutter · 2h ago
> We will never run out of problems that need solving. The types of problems you spend your time solving will change. The key is to adapt your process to allocate your time to solving the right kinds of problems. You don't want to be the person taking an hour to do arithmetic by hand when you have access to spreadsheets.
And this has always been the case throughout all of human history.
kebman · 2h ago
Here's the kicker: AI was supposed to automate the boring parts so we could “focus on high-leverage, strategic, needle-moving, synergistic core competencies.” Instead, we’re stuck in a recursive loop of prompt engineering, hallucination triage, output validation, re-prompting, Slack channel FOMO, and productivity theater. We’ve basically replaced “doing the work” with “managing the tool that kinda tries to do the work but needs babysitting.” Congrats—we’ve invented Jira for thought. And here's the kicker.
vouaobrasil · 2h ago
AI was never supposed to automate the boring parts; that's just what it was advertised to do. Sort of like how those "as seen on TV" weight loss pills are "supposed" to help you lose weight.
The purpose of AI is to *make a few people richer*! Not to take away the boredom from tasks. That's only a side effect, used to sell it.
throwawaysleep · 2h ago
Or we are all just dev leads now managing junior dev swarms.
leptons · 2h ago
Those "junior dev swarms" will never become seniors, so you're perpetually handholding and always getting junior-dev results. It isn't a step forward in any way.
yzhong94 · 8m ago
The time saved just goes to TikTok / Reddit / Instagram etc. There's plenty of time wasted waiting for the AI to finish working
firefoxd · 2h ago
In my team, it's making one dev more prolific, and everybody else work harder.
The most junior dev on my team was tasked with setting up a repo for a new service. The service is not due for many many months so this was an opportunity to learn. What we got was a giant PR with hundreds of new configurations no one has heard of. It's not bad, it's just that we don't know what each conf does. Naturally we asked him to explain or give an overview, he couldn't. Well because he fed the whole thing to an LLM and it spat out the repo. He even had fixes for bugs we didn't know we had in other repos. He didn't know either. But it took the rest of the team digging in to figure out what's going on.
I'm not against using LLM, but now I've added a new step in the process. If anyone makes giant PRs, they'll also have to make a presentation to give everyone an overview. With that in mind, it forces devs to actually read through the code they generate and understand it.
vkou · 2h ago
I think the simpler expectation here is the same one you should have for non-AI code.
Don't allow giant PRs without a damn good reason for them. Incremental steps, with testing (automated or human) to verify their correctness, that take you from a known-good-to-known-good state.
ctoth · 3h ago
Bit of a dunk but... it must have saved you whatever time you were going to spend on this article, because I detect a bunch of LLMisms and very little content.
I'm not sure that Claude saves me time -- I just spent my weekend working on a Claude Code Audio hook with Claude which I obviously wouldn't have worked on elsewise, and that's hardly the gardening I intended to do ... but man it was fun and now my CC sessions are a lot easier to track by ear!
amirhirsch · 2h ago
The blogosphere (am I dating myself?) keeps bringing up the METR study (https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...) without really understanding the result. The guy with experience had a huge boost. You are reading the results wrong if your conclusion is the one this blog draws.
And that was before Claude Code.
georgeburdell · 2h ago
The article itself mentions the J-shaped curve (sacrifice productivity now while you learn the tools, then gain that and more later on). It’s really just poor (or perhaps AI) writing
snitzr · 2h ago
I feel like the only thing propping up the US economy right now is AI hype.
leptons · 2h ago
"AI" is sucking up all the investment money, so unless you are working in "AI", you aren't likely to get funding. It's hurting the economy more than helping.
xwowsersx · 1h ago
Well, a few things to consider:
Investment capital isn't zero-sum...if/when AI generates outsized/large returns, it actually brings more money into the entire VC ecosystem. LPs who 10x their AI investments don't exactly hoard that cash, they reinvest and often diversify into other sectors.
Every major tech wave creates huge downstream opportunities. The internet "bubble" didn't just benefit search engines...it spawned e-com, SaaS, fintech, etc. AI is doing the same thing with robotics, new semiconductors, data infrastructure, and I'm sure other categories that don't even exist yet.
Also investors know they need portfolio diversification. Even AI-focused VCs are actively looking for contrarian bets in undervalued sectors precisely because there's less competition there right now.
Plus, AI advancement should (yes, I know there's hype and theses that may not play out) accelerate innovation everywhere else. Example: A biotech startup today has access to AI tools that were Google-only five years ago. This makes non-AI startups more attractive, not less.
We saw identical complaints during the dotcom era about "real businesses" getting ignored, but that period actually coincided with growth across enterprise software, telco, and tons of other sectors.
hintymad · 1h ago
> But here’s the kicker: we were told AI would free us up for “higher-level work.” What actually happened? We just found more work to fill the space. That two-hour block AI created by automating your morning reports? It’s now packed with three new meetings. The 30 minutes you saved on data analysis? You’re using it to manage two more AI tools and review their outputs
I'm not sure if this is caused by AI, or by the nature of business or the culture of many companies. If you replaced AI with any other automation, wouldn't the result be the same?
vouaobrasil · 11m ago
It's a two-way street: technologies tend to create power imbalances and that in turn creates culture over a long term, especially if innovation also becomes central to culture.
randysalami · 2h ago
Work harder, create datapoints, democratize knowledge. Except that knowledge will be confined eventually and doom the futures of many people. Use AI now to get ahead of your peers by feeding it questions and evaluating responses. Then in 10 years, Insert Field Here will be dominated by models trained by yesterday’s experts. New members of the field will not be able to compete with the collective knowledge of 1000s of their predecessors. Selling the futures of our youth for short-term gains. It’s quite sad and it is what’s happening.
It’s a shame too because it really could have been something so much more amazing. I’d imagine higher education would shift to how it used to be: a pastime for bored elites. We would probably see a large reduction in the middle class and its eventual destruction. First they went for manufacturing with its strong unions; now they go for the white-collar worker, who has little solidarity with his common man (see the lack of unions and ethics in our STEM field; most likely because we thought we could never be made redundant). Field by field the middle class will be destroyed, and the lower class held in thrall to addictive social media, substances, and the illusion of selection into the influencer petty-elite (which remains compliant because they don’t offer value proportional to the bribes they receive). The elites will have recreated the dynamic that existed for most of human history. Final point: see the obsession of current elites with using artificial insemination to create a reliable and durable contingent of heirs. Something previous rulers could only dream about in history.
It disgusts me and pisses me off so much.
kachapopopow · 2h ago
AI has allowed me to work on projects I was simply too lazy or haven't attached any importance to. It also allows me to skip the entire process of contacting a designer or doing design work myself (which might actually be a bad thing). The thing I know for sure is that none of my smart home additions I had sitting in a box would be finished if AI didn't exist.
SoftTalker · 2h ago
If they were so unimportant why do them at all?
It's like buying a trinket just because it's cheap. It's still ultimately wasteful.
bityard · 2h ago
I'm not who you are responding to, but I do similar things with AI.
Think about it this way. We all make decisions like this pretty much every day, but I am especially careful with them in my personal life where time is limited and sacred: "What amount of time or money (X) am I willing to spend to get something (Y)?"
There have been many times that X has been "too much," until I later discover some new tool, library, or technique (or simply a price drop) that reduces X below the threshold of pulling the trigger. AI is that new tool for a lot of people and contexts.
If my barrier to some cool new one-off home automation feature is something like, "I would need to know Ruby but I don't know it and don't have time or desire to learn it," then I can have an LLM do the heavy lifting in a tiny fraction of the time it would take me to learn. Of course the feature needs to be something straightforward enough for the LLM to handle, and you have to be able to test it. And it goes without saying that since I can't properly review the code, I wouldn't use it for something that could cause a lot of damage or a security issue. But there are lots of tools/areas where that is not applicable. (Not all code needs to be bullet-proof and in reality, almost none of it is, even when it should be.)
kachapopopow · 2h ago
Because, why not? Just because I don't attach importance to them doesn't mean they don't make my life, or the lives of those around me, more convenient. It's just a good motivation tool in general.
Also if you buy an ultimately useless trinket, well that's just life. Everything we do can be considered 'ultimately' useless.
vouaobrasil · 2h ago
> Also if you buy an ultimately useless trinket, well that's just life. Everything we do can be considered 'ultimately' useless.
Dumb philosophy. Some things in life are at least worthwhile, like spending time with friends and making the lives of others better. But increasing efficiency for intellectual stimulation in a narrow domain is truly useless and shows how pathological we have become.
Fact is, there is some level of meaning in life if you accept there to be any meaning at all, and making mindless diversions certainly isn't within that domain.
bityard · 1h ago
> like spending time with friends and making the lives of others better
Those are good things, but they are not the only things. Life is short, but there is room for "mindless diversions," as you phrase it. Without this, there would be no creativity, no craftsmanship, no art. Is it not also important to enrich oneself with hobbies and side-projects? Further, there is no authority who can make the judgement call on what is "worthwhile" to spend time on and what is not.
Your comment reeks of projection.
vouaobrasil · 1h ago
> Without this, there would be no creativity, no craftsmanship, no art. Is it not also important to enrich oneself with hobbies and side-projects? Further, there is no authority who can make the judgement call on what is "worthwhile" to spend time on and what is not.
It's not binary. There is nothing wrong with mindless diversions, but there is a healthy proportion of them. And when it becomes pathological (i.e. when AI allows spending a disproportionate amount of time on them), then it's a serious problem.
kachapopopow · 41m ago
AI is a tool that has to be learned, and the research is completely flawed in my opinion. For me AI is a sort of colleague that is always there on demand and helps me see projects through to the finish (this is really the fault of either a variant of ADHD or some other disorder). I don't know if it's healthy seeing AI this way, but I know for sure that I wouldn't have come close to the amount of progress I've made on projects that I have been pushing off for years. (Replied to the wrong thread, but oh well.)
vouaobrasil · 17m ago
> but I know for sure that I wouldn't have come close to the amount of progress I've made on projects that I have been pushing off for years.
Of course, at its inception, AI will mainly be seen as something that can help improve efficiency. Same with the smartphone: it was an entirely optional tool that was mainly beneficial. But after this initial inception, technology tends to grow, and now smartphones for example are often mandatory due to 2FA, or at least difficult to avoid. And they constantly find new ways to bother people.
So for now, AI can be helpful for some, but it will grow, become more entrenched and insidious, and in lots of cases entirely replace people or at least be annoying and difficult to get rid of.
It doesn't make sense to argue for AI by stating its benefits in its nascent stages. Babies are all basically innocent creatures that bring emotional benefits but some can grow up to be killers, and this is what will happen with AI.
6gvONxR4sf7o · 2h ago
We've had technological progress that rapidly shifts the number of person-hours per <output> for generations. We don't have to guess. We've seen this play out many times already.
At first, we spend our time one way (say eight hours, just to pick a number). Then we get the tools to do all of that in six hours. Then when job seeking and hiring, we get one worker willing to work six hours and another willing to work eight, so the eight-hour worker gets the job, all else equal. Labor is a marketplace, so we work as much as we're willing to in aggregate, which is roughly constant over time, so efficiency will never free up individuals' time.
In the context of TFA, it means we just shift our time to "harder" work (in the sense of work that AI can't do yet).
Oras · 2h ago
My experience with AI coding is that it might be slower to develop with in the short term, but it’s saving a ton of time in the long term.
Here is an example.
I decided to create a new app, so I write down a brief of what it should do, ask AI to create a longer readme file about the platform along with design, sequence diagram, and suggested technologies.
I review that document, see if there is anything I can amend, then ask AI for the implementation plan.
Up until this point, this has probably increased the time I usually spend describing the platform in writing. But realistically, designing and thinking about systems was never that fast. I would have to think about use cases, imagine workflows in my mind, and do pen-and-paper diagrams, which I don’t think any of the productivity reports are covering.
nathan_compton · 2h ago
This isn't surprising to me. My experience is that AI is best suited for a single person to get started rapidly with a new project or, if used artfully, to quickly orchestrate refactoring that the user has planned. I've personally found AI to be good for my productivity (haven't measured, however, my work is not conducive to that kind of thing) but I've also found I use AI primarily to look up documentation and to type code out. I still think about software design as much as I ever have, whether its the initial design step or refactoring.
tcrow · 2h ago
This is admittedly much easier with greenfield projects, but if you can keep the AI focused on tight, modular development, meeting service specs, and not have the AI try to address cross-cutting concerns, you get much better outcomes. It does put more responsibility on humans for proper design and specification, but if you are willing to do that work, the AI can really assist in the raw development aspect during implementation.
pftburger · 2h ago
AI efficiency gains don’t benefit employees, they benefit _employers_, who get more output from the same salary.
When you’re salaried, you’re selling 8 hours of time, not units of work. AI that makes you 20% faster doesn’t mean you work 20% fewer hours or get a 20% raise. It means your employer gets 20% more value from the same labor cost.
Marx: workers sell their capacity to work for a fixed period, and any productivity improvements within that time become surplus value captured by capital.
AI tools are just the latest mechanism for extracting more output from the same wage.
The real issue isn’t the technology—it’s that employees can’t capture gains from their own efficiency improvements. Until compensation models shift from time-based to outcome-based, every productivity breakthrough just makes us more profitable to employ, not more prosperous ourselves.
It’s the Industrial Revolution all over again and we’re the Luddites
kinghajj · 2h ago
> any productivity improvements within that time become surplus value captured by capital.
Not quite right. Total Value remains the same before and after increase in productivity, assuming the labor force remains constant. But more use-value is created in the same period of time.
At the beginning, this is good for the employer, because the new socially necessary labor time has not been internalized, so the output can be sold for a price corresponding to its old Value. Maybe a bit less, to undercut competitors.
Eventually though, as competition adopts the new technique, everyone attempts to undercut each other’s prices, adjusting until prices correspond to the new Value.
dragonwriter · 2h ago
> AI efficiency gains don’t benefit employees—they benefit employers who get more output from the same salary.
So, they also benefit developers that become solopreneurs.
So they increase the next-best alternative for developers compared to work as employees.
What happens when you improve the next-best alternative?
> AI tools are just the latest mechanism for extracting more output from the same wage.
The whole history of software development has been rapid introduction of additional automation (because no field has been more the focus of software development than itself), and looking at the history of developer salaries, that has not been a process of "extracting more output from the same wage". Yes, output per $ wage has gone up, but real wages per hour or day worked for developers have also gone up, and done so faster than wages across the economy generally. It is true and problematic that the degree of capitalism in the structure of the modern mixed economy means that the gains of productivity go disproportionately to capital, but it is simply false to say that they go exclusively to capital across the board, and it is particularly easy to see that this has specifically been false in the case of productivity gains from further automation in software development.
Herring · 2h ago
American workers don't care about "socialism". Look who they elected president. He won the popular vote.
Eventually they will be forced to care when things get bad enough -- and it's definitely trending that way fast [1]. But not today and not tomorrow.
Just like with "automatic checkout systems" at a grocery store. Passing the labor onto the individual, vs. the expert. We don't even get the same infrastructure professionals get. A PLU for a piece of fruit is a mind-blurring whisk of hands over a dial pad for a pro, and a mind-numbingly arduous, piss-poor series of taps for the ill-positioned entrant.
darth_avocado · 2h ago
AI is bringing efficiency, and it is making us work harder. It’s because AI marginally improves certain workflows but the management is using that to fire employees and offload their work on the remaining ones. You get 1.2x improvement in efficiency but get 2x the work.
soiltype · 2h ago
sic semper operarius.
you will never be given your time back by an employer. you have to take it. you might be able to ask for it, but it won't be freely given, whether or not you become more efficient. LLM chatbots and agents are, in this sense, just another tool that changes our relationship to the work we do (but never our relationship to work).
esafak · 2h ago
Come on, what did you think was going to happen? The historical record has consistently shown that humans have not worked less when given tools that increased productivity; they simply produced more.
The economist Keynes predicted a century ago that the workweek would drop to 15 hours due to rising productivity. It has not happened, for social reasons.
I don't know what's going to happen when humans become redundant; that's an incipient issue we'll have to grapple with.
SoftTalker · 2h ago
Humans will probably be cheaper than machines for a long time for some things. They reproduce themselves, they self-repair (up to a point), they are quite dexterous, they can be powered fairly cheaply by stuff that grows out of the earth, and they learn by example without explicit programming. They will do the menial but deceptively demanding, high-dexterity tasks that need to be done. Laundry, dishes, housecleaning. New construction will largely be done by machines but repairs are more unpredictable, so human mechanics, plumbers, etc. will continue to be in demand.
Software development as a career will evaporate in the next decade, as will most "knowledge" work such as general medicine, law, and teaching. Surgeons and dentists will continue a bit longer.
Bottom line, most of us will be doing chores while the machines do all the production and creative work.
esafak · 2h ago
I don't think people in democratic societies will stand for being reduced to errand runners; they will vote for governments that promise them relief. And even those chores will be done by robots. A new social contract will be demanded.
mmanfrin · 2h ago
Modern cotton gin.
FrustratedMonky · 2h ago
From AI
"While the cotton gin made the process of cleaning cotton significantly faster, it ultimately led to an increase in the demand for enslaved labor, not a decrease."
A lot of automation leads to more work for someone. Reduces some jobs, and piles on others. Maybe 12 union jobs are removed, but then keeping the robots running falls on 1-2 IT staff.
downrightmike · 2h ago
A doctor can review 50 X-rays a day; AI comes in and flags one for re-review, and now the doctor can only do 49 reviews a day.
turnsout · 2h ago
It was ever thus. In each technological revolution, from the Industrial Revolution to the personal computing revolution, the promise is that the technology will make work so efficient and productive that we can all work less.
Unfortunately, it is always a deliberate lie by the people who stand to gain from the new technology. Anyone who has thought about it for five seconds knows that this is not how capitalism works. Productivity gains are almost immediately absorbed and become the new normal. Firms that operate at the old level of productivity get washed out.
I simply can't believe that we're still falling for this. But let's hold out hope. Maybe AGI is just around the corner, and literally everyone in the world will spend our time sipping margaritas on the beach while we count our UBI. Certainly AI could never accelerate wealth concentration and inequality, right? RIGHT?
GaggiX · 2h ago
This is another article based on the study that had a population of 16 people, most of whom had never used the tool before.
Edit: "amirhirsch" user probably explained this better than me in an above comment.
isoprophlex · 2h ago
At least AI gave the author an easy think piece without having to write anything in their own style.
"But here's the kicker"
"It's not x. It's y."
"The companies that foo? They bar."
Em-dashes galore.
I'm either hypersensitized, seeing ghosts, or this article got the "yo claude make it pop" treatment. It's sad, but anything overly polished immediately triggers some "is this slop or an original thought actually worth my time" response.
nathan_compton · 2h ago
I find music from the late 80s and 90s almost unlistenable, as digital audio processing created a very samey sound for everything. I think it's instructive that even the iconoclastic Devo produced absolutely dogshit-sounding recordings from this era.
New technology often homogenizes and makes things boring for a while.
I know. It hurts my soul, too. But reality isn't pretty, and worse is better.
If you submit a PR and you yourself cannot personally vouch for every line of code as a professional…then you are not a professional. You are a hack.
That is why these code generation tools are so dangerous. Sure, it's theoretically possible that a programmer can rely on them for offering suggestions of new code and then "write" that code for a PR such that full human understanding is maintained and true craft is preserved. The reality is, that's not what's happening. At all. And it's a full-blown crisis.
100% my takeaway after trying to parallelize using worktrees. While Claude has no problem managing more than one context instance, I sure as hell do. It’s exhausting, to the point of slowing me down.
Ask pretty much any FOSS developer who has received AI-generated (both code and explanations) PRs on GitHub (and when you complain about these, the author will almost always use the same AI to generate responses) about their experiences. It's a huge time sink if you don't cut them off. There are plenty of projects out there now that have explicit policy documentation against such submissions and even boilerplate messages for rejecting them.
That said, the models, or to be more precise the tools surrounding them and the craft of interacting with them, are still improving at a pace where I now believe we will get to a point where "hand-crafted" code is the exception within a matter of years.
That will work, but only until the people filing these PRs go crying to their managers that you refuse to merge any of their code, at which point you'll be given a stern reprimand from your betters to stop being so picky. Have fun vibe-reviewing.
Nothing can move fast enough to keep up with these hype-fueled TED talk expectations all the way up the chain.
I don't know if there's any solution, and I'm sure it's not like this everywhere, but I'm also sure I'm not alone. At this point I'm just trying to keep my feet wet on "AI"-related projects until the hype dust settles, so I can reassess what this industry even is anymore. Maybe it's not too late to get a single-subject credential and go teach math or finger painting or something.
So I guess if you asked Claude why it did that, the truth of it might be "IDK I copy pasted from StackOverflow"
The same stuff pasted with a different sticker. Looks good to me.
You're right, saying you got something off SO would get you laughed out of programming circles back in the day. We should be applying the same shame to people who vibe code, not encourage it, if we want human-parseable and maintainable software.
Who is this a danger for?
If we're paid to dig ditches and fill them, who are we to question our supreme leaders? They control the purse strings, so of course they know best.
I've also seen "Ask ChatGPT whether you're doing X right", followed by people basically signing off on whatever it recommends without checking.
At this point I'm pretty confident I could trojan horse whatever decision I want from certain people by sending enough screenshots of ChatGPT agreeing with me
So, for example, by and large the orgs I've seen chucking Claude PRs over the wall with little review were previously chucking 100% human-written PRs over the wall with little review.
Similarly, the teams I see effectively using test suites to guide their code generation are the same teams that effectively use test suites to guide their general software engineering workflows.
Lots of companies just accept bugs as something that happens.
Calendar app for local social clubs? Ship it and fix it later.
B2B payments software that triggers funds transfers? JFC I hope you PIP people for that.
Stack Overflow at least gave you some provenance for what you copied and pasted. Models may not. Provenance is still a thing, and without it the code carries extra risk.
It's not, and yet I have seen that offered as an excuse several times.
Inb4 the chorus of whining from AI hypists accusing you of being a coastal elitist intellectual jerk for daring to ask that they might want to LEARN something.
I am so over this anti-intellectual garbage. It's gotten to such a ridiculous place in our society and is literally going to get tons of people killed.
No, like you I’m getting more PRs that are less reviewable.
It multiplies what you’re capable of. So you’ll get a LOT of low quality code from devs who aren’t much into quality.
For the most part, I've noticed my LLM-using coworkers producing better PRs, but I haven't noticed anyone producing more.
"Then your job is to go ask Claude and get back to me. On that note, if that's what I'm paying you for now, I might be paying you too much..."
I'm really interested to see how the intersection of AI and accountability develops over the next few years. It seems like everyone's primary job will effectively be taking accountability for the AI they're driving, and the pay will be a function of how much and what kind of accountability you're taking and the overall stakes.
Not meaning to put on a holier-than-thou hat here, but I have hated this many times already. E.g. having to approve code that contains useless comments that were clearly generated by AI, while on the other hand it feels like nitpicking to bring that up during review...
If my colleagues did that more often, and for more parts of the code, that's when I'd really start getting inclined to look for a new job.
Could this be fixed by adjusting how tickets are scoped?
I would want someone entirely off of my team if they did that. Anyone who pushes code they don't understand at least well enough to answer "What does that do?" and "Why did you do it that way?" deserves for their PR to be flat out rejected in whole, not just altered.
Using AI tooling means, at least in part, betting on the future.
It means betting on a particular LLM centric vision of the future.
I’m still agnostic on that. I think LLMs allow for the creation of a lot of one off scripts and things for people that wouldn’t otherwise be coding, but I have yet to be convinced that more AI usage in a sufficiently senior software development team is more valuable than the traditional way of doing things.
I think there’s a fundamental necessity, which can’t ever be avoided, for a human to articulate what a given piece of software should do with a high level of specificity. The best you can do is piggyback off higher-level languages and abstractions that guess what the specifics should be, but I don’t think it’s realistic to expect all combinations of business logic and UI to boil down to common patterns an LLM could infer. And even if that were true, people get bored/like novelty enough that they’ll always want new human-created stuff to shove into the training set.
When computers give different answers to the same questions it's a fundamental shift in how we work with them.
At one of my jobs, the PRs are far less reviewable, but now the devs write tests when they didn’t use to bother. I’ve always viewed reviewing PRs as work without recognition, so I never did much of it anyway, but now that there are passing tests, I often approve with no more than a cursory skim.
So yes, it has made their work more productive and easier for me to get off my plate.
The problem is people trying to make the models do things that are too close to their limit. You should use LLMs for things they can already ace, not waste time trying to get them to invent some new algorithm. If I don't 0-3-shot a problem, I will just either do it manually or not do it.
It's like giving up on a Google search when you try a few times and nothing useful comes up in the first few queries. You don't keep at it the whole afternoon.
Tedious, repetitive stuff that you'd typically write a throwaway script for is one good use case for AI for me. This doesn't come up very often for me, however. But this morning I needed to update a bunch of import statements in TypeScript files to use relative paths instead of referencing a library name and Cursor did that for me quickly and easily without me needing to write a script.
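For concreteness, the rewrite was of roughly this shape in each file (the module names here are made up for illustration, not the actual ones):

  // before: importing through the library/package name
  import { formatDate } from "my-lib/utils/date";

  // after: relative path within the repo
  import { formatDate } from "../utils/date";

Mechanical, easy to verify by eye, and exactly the sort of thing I'd otherwise have written a one-off codemod script for.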
I've also found that if you're unsure how to solve a problem within an existing codebase, or you're about to go down a rabbit hole of studying documentation to figure out how to wire something up, an LLM can get you up and running with code examples a bit quicker.
But if I already know, high level, how I'm going to solve something then the whole "needing to review the AI code" part of the workflow more than eats up any time savings. And the parallelism thing is just a ton of context shifting to me. Each time I need to go back to an agent to review what it did, it takes me away from the deep focus on a domain problem that I was focused on, and there's a good 20 - 30 minutes of productivity lost by switching back and forth between just two tasks.
A good code review and edit to remove the excess verbosity, and you got a feature done real fast.
Ask it for something at or above its limit and the code is very difficult to debug, difficult to understand, has potentially misleading comments, and more. Knowing how to work with these overly confident coworkers is definitely a skill. I feel it varies significantly from model to model as well.
It's often difficult to task other programmers with work at or above their limits, too.
As advertised, or at least strongly implied, by the companies that own the models.
I'm not a programmer but from time to time would make automations or small scripts to make my job easier.
LLMs have made much more complex automations and scripts possible while making it dead simple to create the scripts I used to make.
> But here’s the kicker: we were told AI would free us up for “higher-level work.” What actually happened? We just found more work to fill the space. That two-hour block AI created by automating your morning reports? It’s now packed with three new meetings.
But then a few sentences later, she argues that tools made us less productive.
> when developers used AI tools, they took 19% longer to complete tasks than without AI. Even more telling: the developers estimated they were 20% faster with AI—they were completely wrong about their own productivity.
Then she switches back to saying it saves us time but creates cognitive debt!
> If an AI tool saves you 30 minutes but leaves you mentally drained and second-guessing everything, that’s not productivity—that’s cognitive debt.
I think you have to pick one, or just admit you don't like AI because it makes you feel icky for whatever reason so you're going to throw every type of argument you can against it.
Well, that is a general phenomenon of technology. Technology often does save time at first (in the short term), but then it compensates in a bad way and gives us more work in the long term. Pretty much every technology past a certain point of sophistication does this:
- Smartphones immediately can help us organize stuff, but then they also make it easier for people to call you when you don't want, you get more spam, etc.
- Cars make it easier to get from A to B, but in the long run we now have to spend countless years of work cleaning up the climate/environment
- Computers make typing and storing information more efficient but now we have to spend countless hours on securing them
The bottom line is that efficiency increases from a technological creation point of view but life simplicity decreases, in general.
The right answer to this is: speak up for yourself. Dumping your feelings into HN or Reddit or your blog can be a temporary coping mechanism, but it doesn't solve the problem. If you are legitimately working your tail off and not able to keep up with your workload, tell your manager or clients. Tactfully, of course. If they won't listen, then it's time to move on. There are reasonable people/companies to work with out there, but they sometimes take some effort to find.
This is basically the definition of increased productivity and efficiency. Doing more stuff in the same amount of time. What I tell people who are anxious about whether their job might be automated away by AI is this:
We will never run out of problems that need solving. The types of problems you spend your time solving will change. The key is to adapt your process to allocate your time to solving the right kinds of problems. You don’t want to be the person taking an hour to do arithmetic by hand when you have access to spreadsheets.
And this has always been the case throughout all of human history.
The purpose of AI is to *make a few people richer*! Not to take away the boredom from tasks. That's only a side effect, used to sell it.
The most junior dev on my team was tasked with setting up a repo for a new service. The service is not due for many, many months, so this was an opportunity to learn. What we got was a giant PR with hundreds of new configurations no one had heard of. It's not that it's bad; it's just that we don't know what each conf does. Naturally we asked him to explain or give an overview, and he couldn't, because he had fed the whole thing to an LLM and it spat out the repo. He even had fixes for bugs we didn't know we had in other repos. He didn't know about them either. It took the rest of the team digging in to figure out what was going on.
I'm not against using LLM, but now I've added a new step in the process. If anyone makes giant PRs, they'll also have to make a presentation to give everyone an overview. With that in mind, it forces devs to actually read through the code they generate and understand it.
Don't allow giant PRs without a damn good reason for them. Incremental steps, with testing (automated or human) to verify their correctness, that take you from one known-good state to the next.
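If you want that to be a mechanical rule rather than a matter of reviewer stamina, a tiny CI gate can enforce it. This is only a sketch; the 400-line limit, the origin/main base branch, and the file name are all assumptions to adapt to your repo:

  // check-pr-size.ts -- fail CI when a PR's diff is too large to review properly (hypothetical helper)
  import { execSync } from "node:child_process";

  const MAX_CHANGED_LINES = 400; // assumption: whatever your team can actually review

  // Compare the PR branch against its merge base with main.
  // --shortstat prints e.g. " 3 files changed, 120 insertions(+), 45 deletions(-)"
  const stat = execSync("git diff --shortstat origin/main...HEAD", { encoding: "utf8" });
  const insertions = Number(/(\d+) insertion/.exec(stat)?.[1] ?? 0);
  const deletions = Number(/(\d+) deletion/.exec(stat)?.[1] ?? 0);
  const total = insertions + deletions;

  if (total > MAX_CHANGED_LINES) {
    console.error(`PR changes ${total} lines (limit ${MAX_CHANGED_LINES}); split it into known-good steps.`);
    process.exit(1);
  }
  console.log(`PR changes ${total} lines; within the limit.`);

It won't stop someone determined to dump a vibe-coded monolith on you, but it makes "this is too big to review" the default answer instead of a fight.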
I'm not sure that Claude saves me time -- I just spent my weekend working on a Claude Code Audio hook with Claude which I obviously wouldn't have worked on elsewise, and that's hardly the gardening I intended to do ... but man it was fun and now my CC sessions are a lot easier to track by ear!
And that was before Claude Code.
Investment capital isn't zero-sum...if/when AI generates outsized returns, it actually brings more money into the entire VC ecosystem. LPs who 10x their AI investments don't exactly hoard that cash; they reinvest and often diversify into other sectors.
Every major tech wave creates huge downstream opportunities. The internet "bubble" didn't just benefit search engines...it spawned e-com, SaaS, fintech, etc. AI is doing the same thing with robotics, new semiconductors, data infrastructure, and I'm sure other categories that don't even exist yet.
Also investors know they need portfolio diversification. Even AI-focused VCs are actively looking for contrarian bets in undervalued sectors precisely because there's less competition there right now.
Plus, AI advancement should (yes, I know there's hype and theses that may not play out) accelerate innovation everywhere else. Example: A biotech startup today has access to AI tools that were Google-only five years ago. This makes non-AI startups more attractive, not less.
We saw identical complaints during the dotcom era about "real businesses" getting ignored, but that period actually coincided with growth across enterprise software, telco, and tons of other sectors.
I'm not sure if this is caused by AI or by the nature of business and the culture of many companies. If you replaced AI with any other kind of automation, wouldn't the result be the same?
It’s a shame too, because it really could have been something so much more amazing. I’d imagine higher education would shift back to what it used to be: a pastime for bored elites. We would probably see a large reduction in the middle class and its eventual destruction. First they went for manufacturing with its strong unions; now they go for the white-collar worker, who has little solidarity with his common man (see the lack of unions and ethics in our STEM field, most likely because we thought we could never be made redundant). Field by field the middle class will be destroyed, while the lower class is held in thrall to addictive social media, substances, and the illusion of selection into the influencer petty-elite (who remain compliant because they don’t offer value proportional to the bribes they receive). The elites will have recreated the dynamic that existed for most of human history. Final point: see the obsession of current elites with using artificial insemination to create a reliable and durable contingent of heirs, something previous rulers could only dream about.
It disgusts me and pisses me off so much.
It's like buying a trinket just because it's cheap. It's still ultimately wasteful.
Think about it this way. We all make decisions like this pretty much every day, but I am especially careful with them in my personal life where time is limited and sacred: "What amount of time or money (X) am I willing to spend to get something (Y)?"
There have been many times that X has been "too much," until I later discover some new tool, library, or technique (or simply a price drop) that reduces X below the threshold of pulling the trigger. AI is that new tool for a lot of people and contexts.
If my barrier to some cool new one-off home automation feature is something like, "I would need to know Ruby but I don't know it and don't have time or desire to learn it," then I can have an LLM do the heavy lifting in a tiny fraction of the time it would take me to learn. Of course the feature needs to be something straightforward enough for the LLM to handle, and you have to be able to test it. And it goes without saying that since I can't properly review the code, I wouldn't use it for something that could cause a lot of damage or a security issue. But there are lots of tools/areas where that is not applicable. (Not all code needs to be bullet-proof and in reality, almost none of it is, even when it should be.)
Also if you buy an ultimately useless trinket, well that's just life. Everything we do can be considered 'ultimately' useless.
Dumb philosophy. Some things in life are at least worthwhile, like spending time with friends and making the lives of others better. But increasing efficiency for intellectual stimulation in a narrow domain is truly useless and shows how pathological we have become.
Fact is, there is some level of meaning in life if you accept there to be any meaning at all, and making mindless diversions certainly isn't within that domain.
Those are good things, but they are not the only things. Life is short, but there is room for "mindless diversions," as you phrase it. Without this, there would be no creativity, no craftsmanship, no art. Is it not also important to enrich oneself with hobbies and side-projects? Further, there is no authority who can make the judgement call on what is "worthwhile" to spend time on and what is not.
Your comment reeks of projection.
It's not binary. There is nothing wrong with mindless diversions, but there is a healthy proportion of them. And when it becomes pathological (i.e. when AI allows spending a disproportionate amount of time on them), then it's a serious problem.
Of course, at its inception, AI will mainly be seen as something that can help improve efficiency. Same with the smartphone: it started as an entirely optional tool that was mainly beneficial. But after this initial period, technology tends to grow, and now smartphones, for example, are often mandatory due to 2FA, or at least difficult to avoid. And they constantly find new ways to bother people.
So for now, AI can be helpful for some, but it will grow, become more entrenched and insidious, and in lots of cases entirely replace people or at least be annoying and difficult to get rid of.
It doesn't make sense to argue for AI by stating its benefits in its nascent stages. Babies are all basically innocent creatures that bring emotional benefits but some can grow up to be killers, and this is what will happen with AI.
At first, we spend our time one way (say eight hours, just to pick a number). Then we get the tools to do all of that in six hours. Then when job seeking and hiring, we get one worker willing to work six hours and another willing to work eight, so the eight-hour worker gets the job, all else equal. Labor is a marketplace, so we work as much as we're willing to in aggregate, which is roughly constant over time, so efficiency will never free up individuals' time.
In the context of TFA, it means we just shift our time to "harder" work (in the sense of work that AI can't do yet).
Here is an example.
I decided to create a new app, so I write down a brief of what it should do, ask AI to create a longer readme file about the platform along with design, sequence diagram, and suggested technologies.
I review that document, see if there is anything I can amend, then ask AI for the implementation plan.
Up until this point, this has probably increased the time I usually spend describing the platform in writing. But realistically, designing and thinking about systems was never that fast. I would have to think about use cases, imagine workflows in my mind, and do pen-and-paper diagrams, none of which I think any of the productivity reports are covering.
Marx: workers sell their capacity to work for a fixed period, and any productivity improvements within that time become surplus value captured by capital.
AI tools are just the latest mechanism for extracting more output from the same wage. The real issue isn’t the technology—it’s that employees can’t capture gains from their own efficiency improvements. Until compensation models shift from time-based to outcome-based, every productivity breakthrough just makes us more profitable to employ, not more prosperous ourselves.
It’s the Industrial Revolution all over again and we’re the Luddites
Not quite right. Total Value remains the same before and after increase in productivity, assuming the labor force remains constant. But more use-value is created in the same period of time.
At the beginning, this is good for the employer, because the new socially necessary labor time has not been internalized, so the output can be sold for a price corresponding to its old Value. Maybe a bit less, to undercut competitors.
Eventually though, as competition adopts the new technique, everyone attempts to undercut each other’s prices, adjusting until prices correspond to the new Value.
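A toy worked example of that, with made-up numbers (my own illustration of the standard labor-theory-of-value bookkeeping, not something from the article):

  \begin{align*}
  \text{Labor expended per day (total value)} &: T = 8 \text{ hours, unchanged} \\
  \text{Before: } q_0 = 8 \text{ units} &\Rightarrow v_0 = T/q_0 = 1 \text{ hour per unit} \\
  \text{After: } q_1 = 16 \text{ units} &\Rightarrow v_1 = T/q_1 = 0.5 \text{ hours per unit} \\
  \text{Early adopter's edge} &\approx q_1(v_0 - v_1) = 16 \times 0.5 = 8 \text{ hours' worth per day,} \\
  &\quad \text{which erodes as competitors adopt and prices fall toward } v_1 .
  \end{align*}

More use-values, same total value; the windfall lasts only until the new productivity level becomes the norm.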
So they also benefit developers who become solopreneurs.
So they improve the next-best alternative for developers, compared to working as employees.
What happens when you improve the next-best alternative?
> AI tools are just the latest mechanism for extracting more output from the same wage.
The whole history of software development has been rapid introduction of additional automation (because no field has been more the focus of software development than itself), and looking at the history of developer salaries, that has not been a process of "extracting more output from the same wage". Yes, output per $ wage has gone up, but real wages per hour or day worked for developers have also gone up, and done so faster than wages across the economy generally. It is true and problematic that the degree of capitalism in the structure of the modern mixed economy means that the gains of productivity go disproportionately to capital, but it is simply false to say that they go exclusively to capital across the board, and it is particularly easy to see that this has specifically been false in the case of productivity gains from further automation in software development.
Eventually they will be forced to care when things get bad enough -- and it's definitely trending that way fast [1]. But not today and not tomorrow.
[1] https://data.worldhappiness.report/chart
you will never be given your time back by an employer. you have to take it. you might be able to ask for it, but it won't be freely given, whether or not you become more efficient. LLM chatbots and agents are, in this sense, just another tool that changes our relationship to the work we do (but never our relationship to work).
This is what the whole four-day workweek movement is about; to reclaim some of that productivity increase as personal time. https://en.wikipedia.org/wiki/Four-day_workweek
The economist John Maynard Keynes predicted a century ago that the workweek would drop to 15 hours due to rising productivity. It has not happened, for social reasons.
I don't know what's going to happen when humans become redundant; that's an incipient issue we'll have to grapple with.
Software development as a career will evaporate in the next decade, as will most "knowledge" work such as general medicine, law, and teaching. Surgeons and dentists will continue a bit longer.
Bottom line, most of us will be doing chores while the machines do all the production and creative work.
"While the cotton gin made the process of cleaning cotton significantly faster, it ultimately led to an increase in the demand for enslaved labor, not a decrease."
A lot of automation leads to more work for someone. Reduces some jobs, and piles on others. Maybe 12 union jobs are removed, but then keeping the robots running falls on 1-2 IT staff.
Unfortunately, it is always a deliberate lie by the people who stand to gain from the new technology. Anyone who has thought about it for five seconds knows that this is not how capitalism works. Productivity gains are almost immediately absorbed and become the new normal. Firms that operate at the old level of productivity get washed out.
I simply can't believe that we're still falling for this. But let's hold out hope. Maybe AGI is just around the corner, and literally everyone in the world will spend our time sipping margaritas on the beach while we count our UBI. Certainly AI could never accelerate wealth concentration and inequality, right? RIGHT?
Edit: "amirhirsch" user probably explained this better than me in an above comment.
"But here's the kicker"
"It's not x. It's y."
"The companies that foo? They bar."
Em-dashes galore.
I'm either hypersensitized, seeing ghosts, or this article got the "yo claude make it pop" treatment. It's sad, but anything overly polished immediately triggers some "is this slop or an original thought actually worth my time" response.
New technology often homogenizes and makes things boring for awhile.