Ask HN: What will humans do when AI writes and reviews code?

2 points by pomarie | 6/2/2025, 5:31:50 PM | 4 comments
I’m noticing more and more teams using cubic (an AI code reviewer – I’m the founder) to review PRs written by AIs (e.g. Devin, Codex…).

Some teams already run an entirely AI-driven pipeline where AI writes the code, reviews it, and humans just click Merge at the end.
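
For a concrete picture, here is a rough sketch of the shape of that loop. It is purely illustrative – generate_patch, run_tests, and ai_review are placeholder names, not the actual API of cubic, Devin, or Codex.

    # Illustrative sketch of an AI-write / AI-review pipeline.
    # generate_patch, run_tests, and ai_review are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class Patch:
        diff: str
        description: str

    def generate_patch(ticket: str) -> Patch:
        # Placeholder: a code-writing agent turns a ticket into a candidate diff.
        return Patch(diff="...", description=f"Implements: {ticket}")

    def run_tests(patch: Patch) -> bool:
        # Placeholder: CI runs the test suite against the candidate diff.
        return True

    def ai_review(patch: Patch) -> list[str]:
        # Placeholder: an AI reviewer returns a list of blocking issues.
        return []

    def pipeline(ticket: str) -> None:
        patch = generate_patch(ticket)
        if not run_tests(patch):
            print("Tests failed; back to the code-writing agent.")
            return
        issues = ai_review(patch)
        if issues:
            print(f"AI reviewer flagged {len(issues)} issue(s); revising.")
            return
        # The only human step left: a final look and the Merge button.
        print("Ready for human merge:", patch.description)

    pipeline("Add rate limiting to the public API")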

Interestingly, the AI reviewer often finds errors in AI-generated code. For now, AIs can review other AIs’ code with OK results – probably better than a junior-to-mid-level dev.

It got me thinking of the future…

1. Junior developer growth: Traditionally, code reviews are how junior developers learn. Have you noticed changes in how they learn since codegen has become a thing?
2. Society expects self-driving cars to be dramatically safer (e.g. 10x better) before accepting them. Do you expect similar standards from AI code reviewers, or is slightly better than human good enough?
3. As AI surpasses humans at writing and reviewing code (at least for technical correctness), how do you see the role of code reviews changing?
4. Philosophically, do you think code-generation AIs will ever become so effective that specialized AI code reviewers won’t find any issues?

Comments (4)

TheMongoose · 14h ago
Charge a lot more to come in and re-write the spaghetti mess that the vibe coder left the business with into a product that actually works.
pomarie · 14h ago
Circle of life!
sebnado · 14h ago
We’ve been running an AI-first dev loop in production for ~2 years (disclaimer: I help build Ze1 and Sandscape, both AI-driven products). A few things we’ve learned:

Instead of cranking out boilerplate, juniors spend Day 1 reviewing AI diffs. We pair them with a senior for a “why did the agent choose this?” teardown.

What matters is mean time to rollback. If the agent plus test harness catches breakage faster than a human pair can, 2× better is already good economics. Reliability engineering beats perfection thresholds.
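
A toy back-of-the-envelope makes that concrete – every figure below is invented for illustration, not a real measurement.

    # Toy model: monthly breakage exposure = bugs shipped per month x minutes to roll back.
    # All numbers are invented for illustration.
    def expected_breakage_minutes(bugs_per_month: float, minutes_to_rollback: float) -> float:
        return bugs_per_month * minutes_to_rollback

    human_pair = expected_breakage_minutes(bugs_per_month=4, minutes_to_rollback=60)
    agent_loop = expected_breakage_minutes(bugs_per_month=2, minutes_to_rollback=10)  # "2x better" plus fast rollback
    print(human_pair, agent_loop)  # 240.0 vs 20.0 minutes of exposure per month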

Syntax, style, and most unit-level bugs are now linted or auto-fixed. Humans zoom out to architecture, data contracts, threat models, perf budgets, and “does this change make sense for the product?”. So even juniors are now much more involved in the subjective elements of development.

So I think that as the easy bugs vanish, new failure modes show up: latency cliffs, subtle privacy leaks, energy use, fairness. The goalposts move.

coffeecoders · 14h ago
Just like IDEs with autocomplete, code generators (getters/setters, etc.), CI/CD pipelines, and Docker changed how we build software, LLM code generators and reviewers will be another tool that transforms our role and boosts productivity. They will handle the repetitive parts so we can focus on higher-level thinking.