Survey: a third of senior developers say over half their code is AI-generated

82 points | Brajeshwar | 120 comments | 8/31/2025, 2:55:56 PM | fastly.com ↗

Comments (120)

yodsanklai · 3h ago
Seems about right for me (older developer at a big tech company). But we need to define what it means for code to be AI-generated. In my case, I typically know how I want the code to look, and I write a prompt telling the agent to produce it. The AI doesn't solve any problem; it just does the typing and helps with syntax. I'm not even sure I'm ultimately more productive.
danielvaughn · 2h ago
Yeah I’m still not more productive. Maybe 10% more. But it alleviates a lot of mental energy, which is very nice at the age of 40.
cmrdporcupine · 53m ago
Strangely I've found myself more exhausted at the end of the week and I think it's because of the constant supervision necessary to stop Claude from colouring outside the lines when I don't watch it like a hawk.

Also I tend to get more done at a time, it makes it easier to get started on "gruntwork" tasks that I would have procrastinated on. Which in turn can lead to burnout quite quickly.

I think in the end it's just as much "work", just a different kind of work and with more quantity as a result.

theonething · 55m ago
> it alleviates a lot of mental energy

For me, this is the biggest benefit of AI coding. And it's energy saved that I can use to focus on higher level problems e.g. architecture thereby increasing my productivity.

ojosilva · 1h ago
I didn't see much mention of tab completions in the survey or the comments here. To me that's the bulk of the coding AI is doing at my end, even though it seems to pass unnoticed nowadays. It's massive LOC (and comments!), and that's where I find AI immensely productive.
StrandedKitty · 53m ago
Does it even fall into "AI-generated" category? GitHub Copilot has been around for years, I certainly remember using it long before the recent AI boom, and at that time it wasn't even thought of as any kind of a breakthrough.

And at this point it's not just a productivity booster, it's as essential as using a good IDE. I feel extremely uncomfortable and slow writing any code without auto-completion.

sramam · 2m ago
I think there is a difference between type system or Language Server completions and AI generated completion.

When the AI tab completion fills in full functions based on the function definition you have half-typed, or completes a full test case the moment you start typing, mock data values and all, it just feels mind-reading magical.

ottah · 1h ago
Don't you have to keep dismissing incorrect auto-complete? For me I have a particular idea in mind, and I find auto-complete to be incredibly annoying.

It breaks flow. It has no idea of my intention, but very eagerly provides suggestions I have to stop and swat away.

apt-apt-apt-apt · 58m ago
Yeah, [autocomplete: I totally agree]

it's so [great to have auto-complete]

annoying to constantly [have to type]

have tons of text dumped into your text area. Sometimes it looks plausibly right, but with subtle little issues. And you have to carefully analyze whatever it output for correctness (like constant code review).

shiandow · 49m ago
That we're trying to replace entry/mediocre/expert level code writing with entry/mediocre/expert level code reading is one of the strangest aspects of this whole AI paradigm.

There's literally no way I can see that resulting in better quality, so either that is not what is happening or we're in for a rude awakening at some point.

gnerd00 · 14m ago
there is no "we", or at least not one sufficiently differentiated. Another layer is inserted into... everything? Think MSFT Teams. Your manager's manager is being empowered; you become a truck driver who must stay on the route, on schedule, or be replaced.
verdverm · 1h ago
fwiw, VS Code has a snooze auto complete button. Each press is 5m, a decently designed feature imo
another_twist · 1h ago
Tab completions simply hit the bottleneck problem. I don't want to press tab on every line; it makes no sense. I would rather have AI generate a function block and then integrate it back. That saves me the typing hassle, and I can focus on design and business logic.
spunker540 · 2h ago
I’m not yet up to half (because my corporate code base is a mess that doesn’t lend itself well to AI)

But your approach sounds familiar to me. I find sometimes it may be slower and lower quality to use AI, but it requires less mental bandwidth from me, which is sometimes a worthwhile trade off.

lpapez · 3h ago
This article goes completely against my experience so far.

I teach at an internship program and the main problem with interns since 2023 has been their over reliance on AI tools. I feel like I have to teach them to stop using AI for everything and think through the problem so that they don't get stuck.

Meanwhile many of the seniors around me are stuck in their ways, refusing to adopt interactive debuggers to replace their printf() debug habits, let alone AI tooling...

lordnacho · 3h ago
> Meanwhile many of the seniors around me are stuck in their ways, refusing to adopt interactive debuggers to replace their printf() debug habits, let alone AI tooling...

When I was new to the business, I used interactive debugging a lot. The more experienced I got, the less I used it. printf() is surprisingly useful, especially if you upgrade it a little bit to a log-level aware framework. Then you can leave your debugging lines in the code and switch it on or off with loglevel = TRACE or INFO, something like that.
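The upgrade described above, leaving debug lines in permanently and gating them by log level, might look like this minimal Python sketch (the logger name and messages are illustrative, not from any particular codebase):

```python
import logging

# printf debugging upgraded to a log-level aware framework: the debug
# lines stay in the code and are switched on/off by the configured level.
logging.basicConfig(format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("orders")
log.setLevel(logging.INFO)

def process_order(order_id, items):
    log.debug("processing %s with %d items", order_id, len(items))  # silent at INFO
    total = sum(price for _, price in items)
    log.debug("computed total %.2f for %s", total, order_id)
    return total

process_order("A-17", [("widget", 600.0), ("gadget", 500.0)])  # quiet run

# "Switch on" the trace without touching the function body:
log.setLevel(logging.DEBUG)
process_order("B-42", [("widget", 5.0)])  # now emits the debug lines
```

The same idea scales to a TRACE level or per-module loggers; the point is that the instrumentation is permanent and the verbosity is configuration.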

cbanek · 2h ago
This is absolutely true. If anything, interactive debuggers are a crutch; actual logging is the real way of debugging. You can't debug all sorts of things in an interactive debugger: timing issues, thread problems, and certainly not the actual hard bugs that live in running services in production (you know, where the bugs actually happen and are found), or on other people's machines where you can't just attach a debugger. You need good logging, with a good logging library that doesn't affect performance too much when it's turned off. Those messages also provide very useful context for what's going on, often as good as or better than a comment, because at least log messages are compiled in and type-checked, whereas comments easily go stale.
TheRoque · 1h ago
Both are valid. If your code is slightly complex, it's invaluable to run it at least once with a debugger to verify that your logic is all good, and using logs for this is highly inefficient, e.g. if you have huge data structures that are a pain to print, or if after starting the program you notice you forgot to add a print somewhere it was needed.

And obviously when you can't hook the debugger, logs are mandatory. Doesn't have to be one or the other.

marssaxman · 8m ago
That's funny. I remember using interactive debuggers all the time back in the '90s, but it's been a long time since I've bothered. Logging, reading, and thinking is just... easier.
VectorLock · 1h ago
Interactive debuggers and printf() are both completely valid and have separate use-cases with some overlap. If you're trying to use, or trying to get people to use, exclusively one, you've got some things to think about.
another_twist · 1h ago
Nitpicking a bit here, but there's nothing wrong with printf debugging. It's immensely helpful for debugging concurrent programs, where stopping one part would mess up the state and maybe even make the bug you were trying to reproduce vanish.

As for tooling, I really love AI coding. My workflow is pasting interfaces in ChatGPT and then just copy pasting stuff back. I usually write the glue code by hand. I also define the test cases and have AI take over those laborious bits. I love solving problems and I genuinely hate typing :)

Gigachad · 1h ago
I've tried the interactive debuggers but I'm yet to find a situation where they worked better than just printing. I use an interactive console to test what stuff does, but inline in the app I've never had anything that printing wasn't the straightforward fast solution.
davemp · 1h ago
I've only found them to be useful in gargantuan OOP piles, where the context is really hard to keep in your head and getting to any given point in execution can take minutes. In those cases interactive debugging has been invaluable.
Gigachad · 42m ago
I guess that’s the difference. I do rails dev mostly and it’s just put a print statement in, then run the unit test. It’s a fast feedback loop.
unconed · 33m ago
The old fogeys don't rely on printf because they can't use a debugger, but because a debugger stops the entire program and requires you to go step by step.

Printf gives you an entire trace or log you can glance at, giving you a bird's eye view of entire processes.

INTPenis · 7h ago
I wouldn't say I'm old, but I suddenly fell into the coding agent rabbit hole when I had to write some Python automations against Google APIs.

Found myself having 3-4 different sites open for documentation, context switching between 3 different libraries. It was a lot to take in.

So I said, why not give AI a whirl. It helped me a lot! And since then I have published at least 6 different projects with the help of AI.

It refactors stuff for me, it writes boilerplate for me, most importantly it's great at context switching between different topics. My work is pretty broadly around DevOps, automation, system integration, so the topics can be very wide range.

So no I don't mind it at all, but I'm not old. The most important lesson I learned is that you never trust the AI. I can't tell you how often it has hallucinated things for me. It makes up entire libraries or modules that don't even exist.

It's a very good tool if you already know the topic you have it work on.

But it also hit me that I might be training my replacement. Every time I correct its mistakes I "teach" the database how to become a better AI and eventually it won't even need me. Thankfully I'm very old and will have retired by then.

baq · 4h ago
I love the split personality vibe here.
JadeNB · 3h ago
Or perhaps the commenter just aged a lot while writing the post.
johnfn · 1h ago
First line: "I wouldn't say I'm old"

Last line: "Thankfully I'm very old"

Hmm.....

LoganDark · 1h ago
You probably jest, but I'm sure some HN users do actually have split personalities. (or dissociative identities, as they're called nowadays)
another_twist · 1h ago
When it comes to dealing with shitty platforms, AI really is the best thing ever. I have had the misfortune of writing automations for Atlassian with their weird handling of refresh keys, and had AI not pointed out that Atlassian had the genius idea of invalidating refresh keys after single use, I would have wasted a lot more of my time. For this sort of manual labour, AI is the best tool there is.
verdverm · 57m ago
One-time-use refresh keys are not all that uncommon, probably more common than not, but lots of clients handle the rotation for you.
theonething · 51m ago
> invalidating refresh keys after single use

That's called refresh token rotation and is a valid security practice.
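For readers unfamiliar with rotation: each refresh response carries a brand-new refresh token and invalidates the old one, so a client that fails to persist the new token locks itself out. A toy sketch of the failure mode (the token endpoint here is faked in-process; real providers, Atlassian included, have their own URLs and payloads):

```python
# Toy model of OAuth-style refresh token rotation: each refresh
# invalidates the old refresh token, so the client MUST save the new one.
_valid = {"rt-0"}   # refresh tokens the fake provider still accepts
_counter = 0

def fake_token_endpoint(payload):
    global _counter
    rt = payload["refresh_token"]
    if rt not in _valid:
        raise RuntimeError("invalid_grant: refresh token already used")
    _valid.discard(rt)          # single use: the old token dies here
    _counter += 1
    new_rt = f"rt-{_counter}"
    _valid.add(new_rt)
    return {"access_token": f"at-{_counter}", "refresh_token": new_rt}

class TokenStore:
    def __init__(self, access_token, refresh_token):
        self.access_token = access_token
        self.refresh_token = refresh_token

def refresh(store, post=fake_token_endpoint):
    resp = post({"grant_type": "refresh_token",
                 "refresh_token": store.refresh_token})
    # The crucial step with rotation: persist the NEW refresh token
    # before anything else can fail.
    store.access_token = resp["access_token"]
    store.refresh_token = resp["refresh_token"]
    return store.access_token

store = TokenStore("at-0", "rt-0")
refresh(store)   # works; store now holds a rotated token
refresh(store)   # works again only because the new token was saved
```

A client that kept reusing "rt-0" would get `invalid_grant` on the second call, which is exactly the debugging trap described above.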

marcyb5st · 3h ago
In terms of LOCs, maybe; in terms of importance, I think it's much less. At least that's how I use LLMs.

While I understand that <Enter model here> might produce the meaty bits as well, I believe that having a truck factor of basically 0 (since no one REALLY understands the code) is a recipe for disaster, and I dare say for poor long-term maintainability of a code base.

I feel that any team needs someone with that level of understanding to fix non-trivial issues.

However, by all means, I use the LLM to create all the scaffolding, test fixtures, ... because that is mental energy that I can use elsewhere.

epicureanideal · 3h ago
Agreed. If I use an LLM to generate fairly exhaustive unit tests of a trivial function just because I can, that doesn’t mean those lines are as useful as core complex business logic that it would almost certainly make subtle mistakes in.
andsoitis · 3h ago
> If I … generate fairly exhaustive unit tests of a trivial function

… then you are not a senior software engineer

triyambakam · 3h ago
Neither are you if that's your understanding of a senior engineer
mgh95 · 3h ago
I think the parent commenter's point was that it is nearly trivial to generate variations on unit tests in most (if not all) unit test frameworks. For example:

Java: https://docs.parasoft.com/display/JTEST20232/Creating+a+Para...

C# (nunit, but xunit has this too): https://docs.nunit.org/articles/nunit/technical-notes/usage/...

Python: https://docs.pytest.org/en/stable/example/parametrize.html

cpp: https://google.github.io/googletest/advanced.html

A belief that the ability of LLMs to generate parameterizations is intrinsically helpful to a degree which cannot be trivially achieved in most mainstream programming languages/test frameworks may be an indicator that an individual has not achieved a substantial depth of experience.
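The links above cover framework-specific parameterization (e.g. `@pytest.mark.parametrize`); the same table-of-cases pattern is available in the Python standard library via `unittest`'s `subTest`, which is roughly what "exhaustive tests for a trivial function, no LLM required" looks like (the function under test is a stand-in):

```python
import unittest

def reverse(s):
    """Trivial function under test (illustrative stand-in)."""
    return s[::-1]

class TestReverse(unittest.TestCase):
    # Adding a case is one line of data, which is why parameterized
    # tests make exhaustive coverage of trivial functions cheap.
    cases = [
        ("", ""),
        ("a", "a"),
        ("abc", "cba"),
        ("racecar", "racecar"),
    ]

    def test_reverse(self):
        for s, expected in self.cases:
            with self.subTest(s=s):
                self.assertEqual(reverse(s), expected)

result = unittest.TestResult()
TestReverse("test_reverse").run(result)
```

Each failing case is reported individually thanks to `subTest`, so the table can grow without obscuring which variant broke.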

com2kid · 2h ago
The useful part is generating the mocks. The various auto-mocking frameworks are so hit or miss that I end up having to make mocks manually, which is time-consuming and boring. LLMs help out dramatically and save literally hours of boring, error-prone work.
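For what it's worth, Python's stdlib `unittest.mock` with a `spec` already guards against the worst mock-drift failure mode, though it still involves the manual wiring complained about above. A sketch with a hypothetical `PaymentClient` interface (not from any real codebase):

```python
from unittest.mock import Mock

class PaymentClient:
    """Hypothetical external dependency we don't want to hit in tests."""
    def charge(self, amount_cents: int) -> str:
        raise NotImplementedError("talks to the network")

def checkout(client, amount_cents):
    receipt_id = client.charge(amount_cents)
    return f"receipt:{receipt_id}"

# spec=PaymentClient makes the mock reject calls to methods the real
# class doesn't have, catching mock-vs-interface drift early.
mock_client = Mock(spec=PaymentClient)
mock_client.charge.return_value = "abc123"

assert checkout(mock_client, 500) == "receipt:abc123"
mock_client.charge.assert_called_once_with(500)
```

The tedious part an LLM can take over is writing dozens of these stubs and their return-value fixtures, not the mocking mechanism itself.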
mgh95 · 1h ago
Why mock at all? Spend the time making integration tests fast. There is little reason a database, queue, etc. can't be set up in a per-test group basis and be made fast. Reliable software is built upon (mostly) reliable foundations.
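As a sketch of the "real dependency per test, no mocks" idea with the cheapest possible database, stdlib `sqlite3` in memory (a real Postgres or queue would need per-test-group containers or template databases instead, as the comment suggests):

```python
import sqlite3

def make_test_db():
    # Fresh in-memory database per test: real SQL, no mocks, and setup
    # cost measured in microseconds.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

def create_user(conn, name):
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def get_user(conn, user_id):
    row = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return row[0] if row else None

def test_create_and_fetch():
    conn = make_test_db()          # each test gets its own database
    uid = create_user(conn, "ada")
    assert get_user(conn, uid) == "ada"
    assert get_user(conn, 999) is None
    conn.close()

test_create_and_fetch()
```

Because every test builds its own database, tests stay independent and parallelizable, which is most of what makes this style fast enough to replace mocks.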
lanstin · 1h ago
hmmmm. I do like integration tests, but I often tell people the art of modern software is to make reliable systems on top of unreliable components. And the integration tests should 100% include times when the network flakes out and drops 1/2 of replies and corrupts msgs and the like.
mgh95 · 1h ago
> I do like integration tests, but I often tell people the art of modern software is to make reliable systems on top of unreliable components.

There is a dramatic difference between unreliable in the sense of S3 or other services and unreliable as in "we get different sets of logical outputs when we provide the same input to a LLM". In the first, you can prepare for what are logical outcomes -- network failures, durability loss, etc. In the latter, unless you know the total space of outputs for a LLM you cannot prepare. In the operational sense, LLMs are not a system component, they are a system builder. And a rather poor one, at that.

> And the integration tests should 100% include times when the network flakes out and drops 1/2 of replies and corrupts msgs and the like.

Yeah, it's not that hard to include that in modern testing.

cornel_io · 1h ago
There are thousands of projects out there that use mocks for various reasons, some good, some bad, some ugly. But it doesn't matter: most engineers on those projects do not have the option to go another direction, they have to push forward.
mgh95 · 51m ago
In this context, why not refactor, and have your LLM of choice write and optimize the integration tests for you? If the crux of the argument for LLMs is that they can produce software of sufficient quality at dramatically reduced cost, why not have them rewrite the tests?
VectorLock · 1h ago
Parameterized tests are good, but I think he might be talking about exercising all the corner cases in the logic of your function, which to my knowledge almost no languages can auto-generate for but LLMs can sorta-ish figure it out.
mgh95 · 1h ago
We are talking about basic computing for CRUD apps. When you start needing "sorta-ish" to describe the efficacy of a tool for such a straightforward and deterministic use case, it may be an indicator that you need to rethink your approach.
goosejuice · 2h ago
We're not a licensed profession with universally defined roles. It's whatever the speaker wants it to be given how wildly it varies.
izacus · 2h ago
So how many developers in that survey are those?

They surveyed 791 developers (:D), and "a third of senior developers" do that. That's... generously, what... 20 people?

It's amazing how everyone can massage numbers when they're trying to sell something.

thegrim33 · 2h ago
The other thing they do is conveniently not mention all the negative stuff about AI that the source article mentions, they only report on the portion of content from the source that's in any way positive of AI.

And of course, it's an article based on a source article based on a survey (of a single company), with the source article written by a "content marketing manager", and the raw data of the survey isn't released/published, only a marketing summary of what the results (supposedly) were. Very trustworthy.

philip1209 · 8h ago
I looked at our anthropic bill this week. Saw that one of our best engineers was spending $300/day on Claude. Leadership was psyched about it.
pydry · 3h ago
I was told that I wasn't using it enough by one arm of the company and that I was spending too much by another.

Meanwhile, try as I might, I couldn't prevent it from being useless.

I know of no better metaphor than that of what it's like being a developer in 2025.

merlincorey · 3h ago
Claude is making $72k a year for a consistent $300/day spend.
PhantomHour · 3h ago
Bear in mind those are revenue figures; they're costing Claude hundreds a day to serve.

One imagines Leadership won't be so pleased after the inevitable price hike (which, given typical software margins, is going to land in the one-to-three-thousand-a-day range) and the hype wears off enough for them to realize they're spending a full salary automating a partial FTE.

ojosilva · 2h ago
But, by the looks of things, models will be more efficient by then and a cheaper-to-run model will produce comparable output. At least that's how it's been with OSS models, and with the OpenAI API models. So the inevitable price hike (or rate limiting) may just lead to switching models/providers, with results that are just as good.
manoDev · 9h ago
“AI” is great for coding in the small, it’s like having a powerful semantic code editor, or pairing with a junior developer who can lookup some info online quickly. The hardest part of the job was never typing or figuring out some API bullshit anyway.

But trying to use it like “please write this entire feature for me” (what vibe coding is supposed to mean) is the wrong way to handle the tool IMO. It turns into a specification problem.

Gigachad · 1h ago
I find this half state kind of useless. If I have to know and understand the code being generated, it's easier to just write it myself. The AI tools can just spit out function names and tools I don't know off the top of my head, and the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid.

Feels like a similar situation to self driving where companies want to insist that you should be fully aware and ready to take over in an instant when things go wrong. That's just not how your brain works. You either want to fully disengage, or be actively doing the work.

platevoltage · 28m ago
> The AI tools can just spit out function names and tools I don't know off the top of my head, and the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid.

This is exactly my experience, but I guess generating code with deprecated methods is useful for some people.

dboreham · 8h ago
Yes, but in my experience actually no. At least not with the bleeding edge models today. I've been able to get LLMs to write whole features to the point that I'm quite surprised at the result. Perhaps I'm talking to it right (the new "holding it right"?). I tend to begin asking for an empty application with the characteristics I want (CLI, has subcommands, ...) then I ask it to add a simple feature. Get that working then ask it to enhance functionality progressively, testing as we go. Then when functionality is working I ask for a refactor (often it puts 1500 loc in one file, for example), doc, improve help text, and so on. Basically the same way you'd manage a human.

I've also been close to astonished at the capability LLMs have to draw conclusions from very large complex codebases. For example I wanted to understand the details of a distributed replication mechanism in a project that is enormous. Pre-LLM I'd spent a couple of days crawling through the code using grep and perhaps IDE tools, making notes on paper. I'd probably have to run the code or instrument it with logging then look at the results in a test deployment. But I've found I can ask the LLM to take a look at the p2p code and tell me how it works. Then ask it how the peer set is managed. I can ask it if all reachable peers are known at all nodes. It's almost better than me at this, and it's what I've done for a living for 30 years. Certainly it's very good for very low cost and effort. While it's chugging I can think about higher order things.

I say all this as a massive AI skeptic dating back to the 1980s.

svachalek · 1h ago
All the hype is about asking an LLM to start with an empty project and loose requirements. Asking it to work on a million lines of legacy code (inadequately tested, as all legacy code is) with ancient and complex contracts is a completely different experience.
manoDev · 7h ago
> I tend to begin asking for an empty application with the characteristics I want (CLI, has subcommands, ...) then I ask it to add a simple feature.

That makes sense, as you're breaking the task into smaller achievable tasks. But it takes an already experienced developer to think like this.

Instead, a lot of people in the hype train are pretending an AI can work an idea to production from a "CEO level" of detail – that probably ain't happening.

dingnuts · 3h ago
> you're breaking the task into smaller achievable tasks.

this is the part that I would describe as engineering in the first place. This is the part that separates a script kiddie or someone who "knows" one language and can be somewhat dangerous with it, from someone who commands a $200k/year salary, and it is the important part

and so far there is no indication that language models can do this part at. all.

for someone who CAN do the part of breaking down a problem into smaller abstractions, though, some of these models can save you a little time, sometimes, in cases where it's less effort to type an explanation to the problem than it is to type the code directly..

which is to say.. sometimes.

goosejuice · 2h ago
This is self reported unless I missed something. I bet that skews these results quite a bit. Many are very hesitant to say they use AI, and I suspect that's much more likely to be the case when you are new to the field.

Also, green coding? That's new to me. I guess we'll see optional carbon offset purchasing in our subs soon.

matt3210 · 57m ago
Brute-forcing a problem by writing more lines with an LLM instead of designing better code is a step in the wrong direction.
mcv · 1h ago
I would certainly hope junior developers don't rely too much on AI; they need the opportunity to learn to do this stuff themselves.
binarymax · 9h ago
I guess I’m an older developer.

But I’ve come full circle and have gone back to hand coding after a couple years of fighting LLMs. I’m tired of coaxing their style and fixing their bugs - some of which are just really dumb and some are devious.

Artisanal hand craft for me!

Gigachad · 1h ago
I've also just turned off copilot now. I had several cases where bugs in the generated code slipped through and ended up deployed. Bugs I never would have written myself. Reviewing code properly is so much harder than writing it from scratch.
baq · 9h ago
By all means, if my goal is actually crafting anything.

Usually it isn't, though - I just want to pump out code changes ASAP (but not sooner).

binarymax · 9h ago
Even then I’ve mostly given up. I’ve seen LLMs change from snake case to camel case for a single method and leave the rest untouched. I’ve seen them completely fabricate APIs to non existent libraries. I’ve seen them get mathematical formulae completely wrong. I’ve seen it make entire methods for things that are builtins of a library I’m already using.

It’s just not worth it anymore for anything that is part of an actual product.

Occasionally I will still churn out little scripts or methods from scratch that are low risk - but anything that gets to prod is pretty much hand coded again.

gardnr · 7h ago
This changed my experience significantly:

https://github.com/BeehiveInnovations/zen-mcp-server/blob/ma...

It basically uses multiple different LLMs from different providers to debate a change or code review. Opus 4.1, Gemini 2.5 Pro, and GPT-5 all have a go at it before it writes out plans or makes changes.

crowbahr · 9h ago
The article is saying older devs vibe code: I think you misunderstood
dang · 4h ago
(Article was https://www.theregister.com/2025/08/28/older_developers_ai_c... when this was posted; we've since changed it)
binarymax · 8h ago
I didn’t misunderstand. I tried to vibe code, and now I don’t. Not sure how you misinterpreted that.
oasisaimlessly · 9h ago
Key word: "But"
smusamashah · 3h ago
The article did not say what kinds of languages/applications those 791 developers were working on. I work on a legacy Java code base (which looks more like C than Java, thankfully) and I can't imagine AI doing any of it. It can do small, isolated, well-formulated chunks (functions that do a very specific task), but even that requires very verbose explanation.

I just can't fathom shipping a big percentage of work using LLMs.

gerdesj · 1h ago
Really?

I'm not a coder but a sysadmin. 35 years or so. I'm conversant with Perl, Python, (nods to C), BASIC, shell, PowerShell, AutoIt (inter alia).

I muck about with CAD - OpenSCAD, FreeCAD, and 3D printing.

I'm not a senior developer - I pay them.

LLMs are handy in the same way I still have my slide rules and calculators (OK kids I use a calc app) but I do still have my slide rules.

ChatGPT does quite well with the basics for a simple OpenSCAD effort but invents functions within libraries. That is to be expected: it's a next-token prediction function, not a real AI.

I find it handy for basics, very basic.

platevoltage · 22m ago
I just got back into OpenSCAD after recently getting my first new 3D Printer in 10 years, so I basically had to relearn it. ChatGPT got the syntax wrong for the most basic of operations.
kachapopopow · 3h ago
I think at this point it's about who can get the most useful work out of AI, which is actually really hard given its 'incomplete' state. Finding uses that require very little user input is going to be the next big thing, in my opinion, since LLMs currently seem to have hit a wall that will require technical advances to overcome.
LarryMade2 · 8h ago
I tried it - didn't like it. Had an LLM work on a backup script since I don't use Bash very often. Took a bunch of learning the quirks of bash to get the code working properly.

While I'll say it got me started, it wasn't a snap of the fingers and a quick debug to get something done. It took me quite a while to figure out why something appeared to work but really didn't (the LLM used command-line commands whose results Bash doesn't interpret the same way).

If it's something I know, I probably won't use an LLM (as it doesn't do my style). If it's something I don't know, I might use it to get me started, but I expect that's all I'll use it for.

dboreham · 8h ago
Can I ask which agent/model you used? I'm similarly irritated with shell script coding, but find I have to make scripts fairly often. My experience using various models but latterly Claude Code has been quite different -- it churned out pretty much what I was looking for. Also old, fwiw. I'm older than all shells.
hotpotat · 3h ago
Claude writes 99% of my code, I’m just a manager and architect and QC now.
calibas · 8h ago
I think they're being really loose with the term "vibe coding", and what they really mean is AI-assisted coding.

Older devs are not letting the AI do everything for them. Assuming they're like me, the planning is mostly done by a human, while the coding is largely done by the AI, but in small sections with the human giving specific instructions.

Then there's debugging, which I don't really trust the AI to do very well. Too many times I've seen it miss the real problem, then try to rewrite large sections of the code unnecessarily. I do most of the debugging myself, with some assistance from the AI.

kmoser · 7h ago
They're also being really loose with the term "older developers" by describing it as anybody with more than ten years of experience.
9rx · 8h ago
> Assuming they're like me, the planning is mostly done by a human, while the coding is largely done by the AI

I've largely settled on the opposite. AI has become very good at planning what to do and explaining it in plain English, but its command of programming languages still leaves a lot to be desired.

calibas · 8h ago
It's good at checking plans, and helping with plans, but I've seen it make really really bad choices. I don't think it can replace a human architect.
9rx · 6h ago
Yes, much like many of the humans I have worked with, sometimes bad choices are introduced. But those bad choices are caught during the writing of the code, so that's not really that big of a deal when it does happen. It is still a boon to have it do most of the work.

And it remains markedly better than when AI makes bad choices while writing code. Those are much harder to catch and require poring over the code with a fine-tooth comb, to the point that you may as well have just written it yourself, negating all the potential benefits of using it to generate code in the first place.

bluefirebrand · 8h ago
It can't replace a human anything, yet, but that doesn't seem to be stopping anyone from trying unfortunately:(
WalterSear · 4h ago
When debugging, I'll coax the AI to determine what went wrong first - to my satisfaction - and have it go from there. Otherwise it's a descent into madness.
matula · 7h ago
I've been at this for many years. If I want to implement a new feature that ties together various systems and delivers an expected output, I know the general steps that I need to take. About 80% of those steps are creating and stubbing out new files with the general methods and objects I know will be needed, and all the test cases. So... I could either spend the next 4 hours doing that, or spend 3 minutes filling out a CLAUDE.md with the specs and 5 minutes having Claude do it (and fairly well).

I feel no shame in doing the latter. I've also learned enough about LLMs that I know how to write that CLAUDE.md so it sticks to best practices. YMMV.

blast · 4h ago
> I've also learned enough about LLMs that I know how to write that CLAUDE.md so it sticks to best practices.

Could you share some examples / tips about this?

bob1029 · 7h ago
I often use LLMs for method level implementation work. Anything beyond the scope of a single function call I have very little confidence in. This is OK though, since everything is a function and I can perfectly control the blast radius as long as I keep my hands on the steering wheel. I don't ever let the LLM define method signatures for me.

If I don't know how to structure functions around a problem, I will also use the LLM, but I am asking it to write zero code in this case. I am just having a conversation about what would be good paths to consider.

zwilliamson · 7h ago
Of course. We are the most well equipped to run with it. Others will quickly create a sloppy mess while wise developers can keep the beast tame.
csbrooks · 9h ago
Is "vibe coding" synonymous with using AI code-generation tools now?

I thought vibe coding meant very little direct interaction with the code, mostly telling the LLM what you want and iterating using the LLM. Which is fun and worth trying, but probably not a valid professional tool.

crazygringo · 8h ago
I think what happened is that a lot of people started dismissing all LLM code creation as "vibe coding" because those people were anti-LLM, and so the term itself became an easy umbrella pejorative.

And then, more people saw these critics using "vibe coding" to refer to all LLM code creation, and naturally understood it to mean exactly that. Hence the recent articles we've seen about how good vibe coding starts with a requirements file, then tests that fail, then tests that pass, etc.

Like so many terms that started out being used pejoratively, vibe coding got reclaimed. And it just sounds cool.

Also because we don't really have any other good memorable term for describing code built entirely with LLMs from the ground up, separate from mere autocomplete AI or using LLMs to work on established codebases.

actsasbuffoon · 8h ago
“Agentic coding” is probably more accurate, though many people (fairly) find the term “Agentic” to be buzz-wordy and obnoxious.

I’m willing to vibe code a spike project. That is to say, I want to see how well some new tool or library works, so I’ll tell the LLM to build a proof of concept, and then I’ll study that and see how I feel about it. Then I throw it away and build the real version with more care and attention.

drooby · 8h ago
I have "vibe coded" a few internal tools now that are very low risk in terms of negative business impact but nonetheless valuable for our team's efficiency.

E.g one tool packages a debug build of an iOS simulator app with various metadata and uploads it to a specified location.

Another tool spits out my team's github velocity metrics.

These were relatively small scripting apps, that yes, I code reviewed and checked for security issues.

I don't see why this wouldn't be a valid professional tool? It's working well, saves me time, is fun, and safe (assuming proper code review, and LLM tool usage).

With these little scripts it's actually pretty quick to validate their safety and efficacy. They're like NP problems: hard to produce, but quick to verify.

actsasbuffoon · 7h ago
The original definition of vibe coding meant that you just let the agent write everything, and if it works then you commit it. Your code review and security check turned this from vibe coding into something else.

This is complicated by the fact that some people use “vibe coding” to mean any kind of LLM-assisted coding.

ladyprestor · 9h ago
Yeah, for some reason the term has been used interchangeably for a while, which is making it very hard to have a conversation about it since many people think vibe coding is just using AI to assist you.

From Karpathy's original post I understood it to be what you're describing. It is getting confusing.

bonoboTP · 6h ago
The term sounds funny and quirky, so got overused. Also simply the term pushes emotional buttons on a lot of people so it's good for clickbait.
biglyburrito · 7h ago
My personal definition of "vibe coding" is when a developer delegates -- abdicates, really -- responsibility for understanding & testing what AI-generated code is doing and/or how that result is achieved. I consider it something that's separate from & inferior to using AI as a development tool.
flashgordon · 8h ago
I think there is actually pressure to show that you are using AI (stories of CEOs firing employees who supposedly did not "embrace" AI). So people are over-attributing their work to AI. Though vibe coding originally meant infinite-monkey-style button smashing, people now claim it just to stay out of the crosshairs.
mr90210 · 3h ago
> survey of 791 developers

We have got to stop. In a universe of well over 25 million programmers a sample of 791 is not significant enough to justify such headlines.

We’ve got to do better than this, whatever this is.

spmurrayzzz · 11m ago
I generally agree with this from a perspective of personal sentiment; it does feel wrong.

But statistically speaking, at a 95% confidence level you'd be within a +/- 3.5% margin of error given the 791 sample size, irrespective of whether the population is 30k or 30M.
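The arithmetic behind that +/- 3.5% figure can be checked in a few lines (a sketch using the standard normal approximation, with the worst-case p = 0.5 that maximizes the variance):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for an estimated proportion at 95%
    confidence (z = 1.96), using the normal approximation."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 791 respondents -> roughly +/- 3.5 percentage points
print(round(margin_of_error(791), 3))  # 0.035
```

Note the population size never appears in the formula; only n does.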

recursive · 2h ago
Validity of the sample size is not determined by its fraction of the whole population. I don't know the formulas and I'm not a statistician. Maybe someone can drop some citations.
oasisaimlessly · 2h ago
You should read more about statistical significance. Under some reasonable assumptions, you can confidently deduce things from small sample sizes.

From another perspective: we've deduced a lot of things about how atoms work without any given experiment inspecting more than an insignificant fraction of all atoms.

TL;DR: The population size (25e6 total devs, 1e80 atoms in observable universe) is almost entirely irrelevant to hypothesis testing.
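To make that concrete: the finite population correction is the one place population size could enter the standard error, and a quick sketch (assuming simple random sampling) shows it's negligible whenever the sample is a small fraction of the population:

```python
import math

def fpc(N, n):
    """Finite population correction factor. Multiplies the standard
    error; it only shrinks the error meaningfully when the sample n
    is a large fraction of the population N."""
    return math.sqrt((N - n) / (N - 1))

# With n = 791, the correction barely differs from 1.0 whether the
# population is 30 thousand or 25 million developers:
for N in (30_000, 25_000_000):
    print(N, round(fpc(N, 791), 4))
```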

daft_pink · 9h ago
Developers are lazy. Anything that makes development faster or easier is going to be welcomed by a good developer.

If you find it is quicker not to use it then you might hate it, but I think it is probably better in some cases and worse in other cases.

invl · 9h ago
as a developer my first priority is whether the software works, not whether it is fast or easy to develop
dang · 4h ago
I think we can assume that what daft_pink means by "development" includes that the software works.

("Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize." - https://news.ycombinator.com/newsguidelines.html)

globnomulous · 9h ago
> Anything that makes development faster or easier is going to be welcomed by a good developer.

I strongly disagree. Struggling with a problem creates expertise. Struggle is slow, and it's hard. Good developers welcome it.

jasonjmcghee · 8h ago
Indeed. This is my biggest fear for engineers as a whole. LLMs can be a great productivity boost in the very short term, but can so easily be abused. If you build a product with it, suddenly everyone is an engineering manager and no one is an expert on it. And growth as an engineer is stunted. It reminds me of abusing energy drinks or grinding to the point of burnout... But worse.

I think we'll find a middle ground though. I just think it hasn't happened yet. I'm cautiously optimistic.

morkalork · 3h ago
Sounds about right in my experience. Not every piece of code has to be elite, John Carmack-tier quality.
mihaitodor · 1h ago
LMFTFY: a third of senior developers who answer surveys say over half of their code is AI-generated
platevoltage · 18m ago
Haha right. I would imagine a "Senior Developer" who is super into AI assisted coding would be more likely to come across this survey and want to participate.
jmull · 8h ago
Apparently vibe coding now just means ai assisted coding beyond immediate code completion?

For me, success with LLM-assisted coding comes when I have a clear idea of what I want to accomplish and can express it clearly in a prompt. The relevant key business and technical concerns come into play, including complexities like balancing somewhat conflicting shorter and longer term concerns.

Juniors are probably all going to have to be learning this kind of stuff at an accelerated rate now (we don't need em cranking out REST endpoints or whatever anymore), but at this point this takes a senior perspective and senior skills.

Anyone can get an LLM and agentic tool to crank out code now. But you really need to have them crank out code to do something useful.

pydry · 7h ago
I'm not sure I believe this. It's the exact opposite in my experience: the young'uns are all over vibe coding.
dfxm12 · 9h ago
> around a third of senior developers with more than a decade of experience are using AI code-generation tools such as Copilot, Claude, and Gemini to produce over half of their finished software, compared to 13 percent for those devs who've only been on the job for up to two years.

A third? I would expect at least a majority based on the headline and tone of the article... Isn't this saying 66% are down on vibe coding?

dang · 4h ago
(Article was https://www.theregister.com/2025/08/28/older_developers_ai_c... when this was posted; we've since changed it. We also changed the title.)
asveikau · 9h ago
I guess the article was vibe coded.
biglyburrito · 8h ago
"This one developer was down with the vibe coding."
cooloo · 8h ago
So many words to say nothing. Maybe it was generated by an AI tool?