Ask HN: How Do You Review Your Code in the Age of AI?
Traditionally, code reviews have been about readability, maintainability, performance, and correctness. But now with AI-generated code (from GitHub Copilot, ChatGPT, etc.), I'm noticing new patterns and challenges:
The code "works," but the logic is sometimes unfamiliar or subtly incorrect (a small made-up example after this list).
It's harder to gauge intent when the code is AI-assisted.
Reviewers often assume the AI got it right, which is dangerous.
AI can write very clever code, but should it?
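To make the first point concrete, here's a made-up example of the kind of subtle miss I mean (the function and scenario are invented for illustration, not from any real review):

    def ranges_overlap(a_start, a_end, b_start, b_end):
        """Return True if [a_start, a_end] and [b_start, b_end] overlap."""
        # Plausible-looking suggestion: passes the obvious spot checks, but
        # returns False when B fully contains A, e.g. A=[2, 3], B=[1, 10].
        return a_start <= b_start <= a_end or a_start <= b_end <= a_end

    # The correct (and simpler) check:
    #     return a_start <= b_end and b_start <= a_end

Code like this sails through a casual review precisely because it "works" on the happy path.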
So, I'm curious:
How do you personally approach reviewing code in this AI-assisted era?
Do you or your team have specific rules or red flags for AI-generated code?
Are there tools or processes you're using to catch silent failures, hallucinations, or overly complex solutions? (A sketch of the kind of thing I mean follows this list.)
Do you still prioritize peer review or rely more on automated/static analysis tools?
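For the tooling question, here's a sketch of one process that would have caught the overlap bug above: a property-based test that compares the suspect function against a trivially correct oracle. This assumes pytest plus the Hypothesis library; the oracle and test name are mine, and I'm not claiming this alone is sufficient:

    from hypothesis import given, strategies as st

    # ranges_overlap is the (buggy) function from the example above.
    @given(st.integers(), st.integers(), st.integers(), st.integers())
    def test_overlap_matches_oracle(a1, a2, b1, b2):
        a_start, a_end = sorted((a1, a2))  # ensure each range is valid
        b_start, b_end = sorted((b1, b2))
        # Oracle: two valid ranges overlap iff the later start is at or
        # before the earlier end.
        expected = max(a_start, b_start) <= min(a_end, b_end)
        assert ranges_overlap(a_start, a_end, b_start, b_end) == expected

On something this simple, Hypothesis should find the containment case quickly and shrink it to a minimal counterexample, which is exactly the kind of "silent failure" a human reviewer skims past.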
Looking to learn how others are adapting, especially for production-grade systems.