Ask HN: What will humans do when AI writes and reviews code?
Some teams already run an entirely AI-driven pipeline where AI writes the code, reviews it, and humans just click Merge at the end.
Interestingly, the AI reviewer often finds errors in AI-generated code. For now, AIs can review AI-written code with OK results – probably better than a junior-to-mid-level dev.
It got me thinking about the future…
1. Junior developer growth: Traditionally, code reviews are how junior developers learn. Have you noticed changes in how they learn since codegen became a thing?
2. Society expects self-driving cars to be dramatically safer (e.g. 10x better) before accepting them. Do you expect similar standards from AI code reviewers, or is "slightly better than human" good enough?
3. As AI surpasses humans at writing and reviewing code (at least for technical correctness), how do you see the role of code reviews changing?
4. Philosophically, do you think code generation AIs will ever become so effective that specialized AI code reviewers won't find any issues?
Instead of cranking out boilerplate, our juniors now spend Day 1 reviewing AI diffs. We pair them with a senior for a “why did the agent choose this?” teardown.
What matters is mean-time-to-rollback. If the agent plus test harness catches breakage faster than a human pair can, 2x better is already good economics. Reliability engineering beats perfection thresholds.
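To make that concrete, here is a toy back-of-the-envelope model. Every number in it (change volume, escape rates, rollback times, cost per minute) is a made-up assumption for illustration, not data from any real pipeline:

    # Toy expected-cost model for "reliability beats perfection thresholds".
    # All numbers are invented for illustration, not measurements.

    def expected_incident_cost(risky_changes_per_month, escape_rate,
                               mean_time_to_rollback_min, cost_per_min):
        """Expected monthly cost of bugs that slip past review and sit in prod."""
        escaped_bugs = risky_changes_per_month * escape_rate
        return escaped_bugs * mean_time_to_rollback_min * cost_per_min

    # Hypothetical team: 50 risky changes a month, breakage costs $200/min.
    human_pair = expected_incident_cost(50, escape_rate=0.10,
                                        mean_time_to_rollback_min=90,  # humans notice and revert slowly
                                        cost_per_min=200)
    agent_loop = expected_incident_cost(50, escape_rate=0.05,          # misses half as many bugs ("2x better")
                                        mean_time_to_rollback_min=10,  # harness auto-rolls back in minutes
                                        cost_per_min=200)

    print(f"human pair: ${human_pair:,.0f}/mo")  # $90,000/mo
    print(f"agent loop: ${agent_loop:,.0f}/mo")  # $5,000/mo

With these invented numbers, a reviewer that only misses half as many bugs, combined with a ten-minute rollback, beats the slower human loop by more than an order of magnitude – no 10x safety margin needed before the economics work.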
Syntax, style, and most unit-level bugs are now linted or auto-fixed. Humans zoom out to architecture, data contracts, threat models, perf budgets, and “does this change make sense for the product?”. So even juniors are now a lot more involved in the subjective elements of development.
So I think that as the easy bugs vanish, new failure modes show up: latency cliffs, subtle privacy leaks, energy use, fairness. The goalposts move.