Ask HN: What is your average acceptance rate for code suggestions from an agent?

3 points by srameshc | 9/4/2025, 12:10:41 PM | 10 comments
I'm a developer and often use AI coding assistants, but I find their suggestions for production code aren't quite right. It usually takes me several attempts to get the solution I need. Is this a common experience, or am I missing a key technique to get better results?

Comments (10)

jf22 · 9h ago
100% if I spend time cultivating context and giving it a good prompt.

I'd say 30% if I wing it and just give it broad ideas about what I'm trying to do.

Ar__Aj · 1d ago
Yep, that's completely normal. Your experience is exactly how most developers use AI coding assistants. You're not missing anything.

Think of the AI as a super-eager junior developer, not a senior architect. It's great for getting a first draft on the page, but it needs your guidance and context to get the code production-ready. It doesn't know your team's specific style guide or the complex business logic you have in your head.

The key is to treat it like a conversation. The first suggestion is just the starting point. The real magic is in the follow-up prompts where you tell it what to refine, fix, or add. You're using it the right way.
ryanchants · 9h ago
This is where I feel like I'm doing something wrong whenever I try these tools. Everyone says things like "just keep telling it what to fix, it might take a few rounds". But that ends up taking just as long as, if not longer than, doing it myself.
al_borland · 1d ago
I found a dashboard my company has from Copilot. Last I saw, it was around 20%. Of course, just because a suggestion was accepted doesn’t mean it was used. I will sometimes accept a suggestion that's garbage for my needs, just as a reference so I can pull out one tiny idea.
2rsf · 1d ago
The industry standard (according to some research, articles and my own company experience) is around 30%.

This is as reported by the agent or by the research, so I'm not sure exactly what it counts (did you scroll through some options, or have to re-request one?), but it points to a low rate of acceptance.

markus_zhang · 1d ago
Yesterday I had an issue with a package so I threw the command and the error message into Cursor. It managed to figure everything out in one shot and did every step for me in its console. I could then scroll up and save the process.
breckenedge · 1d ago
You mean short suggestions like Copilot, or something like Claude Code that spits out diffs?
srameshc · 1d ago
More like Claude Code diffs.
incomingpain · 1d ago
>Is this a common experience,

Yes, when you're trying to do things that are too big or your prompt engineering skills need improvement.

>or am I missing a key technique to get better results?

You must be specific about what you want. Think of yourself as a senior dev giving specific instructions to a junior dev who doesn't know better, who WILL misinterpret and do the very least to achieve what you asked for.

Almost like a jinn/genie, they'll seemingly try to do the wrong thing if you aren't clear and specific in your prompt.

Worse yet, sometimes the AI will run off into left field doing all kinds of dumb stuff, badly breaking the app, and then you cry and start reverting hours of git changes. Or worse: HARD RESET.

paulcole · 22h ago
100%, and it replaced the work of 1 or 2 junior people that I would’ve had to hire otherwise. But most of the time I do spend a few minutes fiddling with it, asking for updates to make it better after the first go-round.