What bothered me for a long time about code reviews is that almost all the useful things they catch (i.e. not nit-picking about subjective minor things that don't really matter) come much too late in the process. Not rarely, the only useful outcome of a review (if there is one) is that everything has to be redone from scratch with a completely new design, or that the change is abandoned because it turns out it should never have been done at all.
It always seems as if the code review is the only time when all stakeholders really get involved and start thinking about a change. There may be some discussion earlier on in a Jira ticket or a meeting, and with some luck someone even wrote a design spec, but there will still often be someone from a different team or a distant part of the organization who only hears about the change when they see the code review. This includes me. I often only notice that some other team implemented something stupid because I suddenly get a notification that someone posted a code review for a part of the code that I watch for changes.
Not that I know how to fix that. You can't have everyone in the entire company spend time looking at every possible thing that might be developed in the near future. Or can you? I don't know. It doesn't seem to ever happen anyway. At university in the 1990s, in a course about development processes, there weren't only code reviews but also design reviews. That isn't something I ever encountered in the wild (in any formal sense), but I don't know if even a design review process would be able to catch all the things you would want to catch BEFORE starting to implement something.
epolanski · 34m ago
> and that isn't something I ever encountered in the wild (in any formal sense)
Because in the software engineering world there is very little engineering involved.
That being said, I also think the industry is unwilling to accept the slowness of a proper engineering process, for various reasons, including the non-criticality of most software and the possibility of amending bugs and errors on the fly.
Other engineering fields enjoy no such luxuries: the bridge either holds the train or it doesn't; you either nailed the manufacturing plant or there's little room for fixing it; the plane's engine either works or it doesn't.
Different stakes and patching opportunities lead to different practices.
DixieDev · 18m ago
Where I work we tend to write RFCs for fundamental design decisions. Deciding what counts as a "fundamental design decision" is sometimes self-moderated in the moment but we also account for it when making long term plans. For example when initially creating epics in Jira we might find it hard to flesh out as we don't really know how we're going to approach it, so we just start it off with a task to write an RFC.
These can be written either for just our team or for the eyes of all other software teams. In the latter case we put these forward as RFCs for discussion in a fortnightly meeting, which is announced well in advance so people can read them, leave comments beforehand, and only need to attend the meeting if there's an RFC of interest to them up for discussion.
This has gone pretty well for us! It can feel like a pain to write some of these, and at times I think we overuse them somewhat, but I much prefer our approach to any other place I've worked where we didn't have any sort of collaborative design process in place at all.
teiferer · 20m ago
Seems like your organization is lacking structure and communication.
Where I work, the structure is such that most parts of the codebase have a team that is responsible for them and does the vast majority of changes there. If any "outsider" plans a change, they come talk to the team and coordinate.
And we also have strong intra-team communication. It's clear who is working on what and we have design reviews to agree on the "how" within the team.
It's rare that what you describe happens. 95% of the code reviews I do are without comments or only with minor suggestions for improvement. Mainly because we have developed a culture of talking to each other about major things beforehand and writing the code is really just the last step in the process. We also have developed a somewhat consistent style within the teams. Not necessarily across the teams, but that's ok.
TL;DR: It's certainly possible to do things better than what you are experiencing. It's a matter of structure, communication and culture.
maunke · 3m ago
Very nice to read.
Sourcehut is missing from the list; it's built on the classical concept of sending patches / issues / bugs / discussion threads via email, and it integrates this concept into its mailing lists and CI solution, which also sends back the CI status / log via email.
Drew DeVault published helpful resources for sending and reviewing patches via email on git-send-email.io and git-am.io.
tomasreimers · 4h ago
Just taking a step back, it is SO COOL to me to be reading about stacked pull requests on HN.
When we started graphite.dev years ago that was a workflow most developers had never heard of unless they had previously been at FB / Google.
Fun to see how fast code review can change over 3-4yrs :)
benreesman · 3h ago
I'm a pre-mercurial arcanist refugee who tends to promote Graphite in teams that are still struggling with mega-PRs and merge commits and other own goal GitHub-isms. Big fan in general even with the somewhat rocky scaling road we've been on :)
And I very much appreciate both the ambition and the results that come from making it interop with PRs; it's a nightmare problem and it's pretty damned amazing it works at all, let alone most of the time.
I would strongly lobby for a prescriptive mode where Graphite initializes a repository with hardcore settings that would allow it to make more assumptions about the underlying repo (merge commits, you know the list better than I do).
I think that's what could let it be bulletproof.
tomasreimers · 3h ago
We've talked about a "safe mode" where we initialize it similar to JJ - such that you can no longer directly run git commands without funneling them thru graphite, but which would make it bulletproof. Would that be interesting?
benreesman · 2h ago
I think Jujutsu is interesting in its own right!
It seems non-obvious that you would have to prohibit git commands in general, they're already "buyer beware" with the current tool (and arcanist for that matter). Certainly a "strict mode" where only well-behaved trees could interact with the tool creates scope for all kinds of performance and robustness optimizations (and with reflog bisecting it could even tell you where you went off script).
I was more referring to how the compromises that gt has to make to cope with arbitrary GitHub PRs seem a lot more fiddly than directly invoking git, but that's your area of expertise and my anecdote!
Broad strokes I'm excited for the inevitable decoupling of gt from GitHub per se, it was clearly existential for zero to one, but you folks are a first order surface in 2025.
Keep it up!
foota · 4h ago
I miss the fig workflow :-(
kyrra · 2h ago
Try `jj`, as others have mentioned. It's being built by the team that built/maintains fig, and they are porting all their learnings into it.
vinnymac · 35m ago
Just signed up, looks like everything I ever wanted
jacobegold · 4h ago
hell yeah
paulddraper · 2h ago
Dude, I love Graphite.
Best AI code review, hands down. (And I’ve tried a few.)
Areibman · 2h ago
The biggest grip I have with Github is the app is painfully slow. And by slow, I mean browser tab might freeze level slow.
Shockingly, the best code review tool I've ever used was Azure DevOps.
wenc · 1h ago
When I worked at a Microsoft shop, I used Azure DevOps. To be honest, it's actually not bad for .NET stuff. It fits the .NET development life cycle like Visual Studio fits C#.
benrutter · 1h ago
What did you like so much about DevOps?
I use it every day and don't have any issues with the review system, but to me it's very similar to github. If anything, I miss being able to suggest changes and have people click a button to integrate them as commits.
Areibman · 48m ago
Commenting feels so much better. You can comment on entire files, and you can leave review comments that actually "block" (rather than just get appended to the conversation)
opium_tea · 57m ago
That suggestion feature actually exists on ADO. It was introduced in the last year or so.
awesome_dude · 2h ago
nit: gripe, not grip :-P
echelon · 2h ago
> The biggest grip I have with Github is the app is painfully slow. And by slow, I mean browser tab might freeze level slow.
Javascript at scale combined with teams that have to move fast and ship features is a recipe for this.
At least it's not Atlassian.
lmm · 1h ago
Stash (now BitBucket Server) had the best code review going, head and shoulders above GitHub to the point I thought GitHub would obviously adopt their approach. But I imagine Atlassian has now made it slow and useless like they do with all their products and acquisitions.
dotancohen · 39m ago
Bit Bucket had a git-related tool called Stash? I love Bit Bucket, but I'm glad I did not know about that.
ivanjermakov · 5h ago
I find the idea of using git for code reviews directly quite compelling. Working with the change locally as if you were the one who made it is very convenient, compared to the clunky read-only web UI.
I didn't get why they stick with the requirement that a review is a single commit. To keep the git-review implementation simple?
I wonder if approach where every reviewer commits their comments/fixes to the PR branch directly would work as well as I think it would. One might not even need any additional tools to make it convenient to work with. This idea seems like a hybrid of traditional github flow and a way Linux development is organized via mailing lists and patches.
teiferer · 6m ago
> I didn't get why stick with the requirement that review is a single commit
Yeah that is pretty weird. If 5 people review my code, do they all mangle the same review commit? We don't do that with code either, feels like it's defeating the point.
Reviews would need to be commits on top of the reviewed commit. If there are 5 reviews of the same commit, they all branch out from that commit, and to address them there is another commit that also lives beside them. Each change under review becomes a branch, with stacked commits being branches chained on top of one another. Each commit in those chained branches then has comment commits attached. Those comment commits could even form chains if a discussion is happening. Then, when everybody is happy, each branch gets squashed into a single commit and those get rebased onto the main branch.
You likely want to make new commits for that though to preserve the discussions for a while. And that's the crux: That data lives outside the main branch, but needs to live somewhere.
spike021 · 3h ago
is github's PR considered read-only?
i've had team members edit a correction as a "suggestion" comment and i can approve it to be added as a commit on my branch.
koolba · 4h ago
> When I review code, I like to pull the source branch locally. Then I soft-reset the code to the merge base, so that the code looks as if it was written by me.
This is eerily similar to how I review large changes that do not have a clear set of commits. The real problem is working with people who don't realize that if you don't break work down into small self-contained units, everybody else is going to have to do it individually. Nobody can honestly say they can review tons of diffs to a ton of files and truly understand what they've reviewed.
The whole is more than just the sum of the parts.
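For the curious, the soft-reset-to-merge-base trick described above is only a couple of commands. Here's a self-contained sketch (the repo, file, and branch names are made up for the demo; it builds a throwaway repo first, and `git init -b` needs git >= 2.28):

```shell
set -e
# --- throwaway demo repo with a main branch and a feature branch ---
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.email reviewer@example.com
git config user.name reviewer
echo base > file.txt && git add . && git commit -qm "base"
git checkout -qb feature
echo change >> file.txt && git commit -qam "feature work"

# --- the review trick: stage the whole branch's diff, as if freshly written ---
base=$(git merge-base main HEAD)
git reset --soft "$base"
git diff --cached --stat   # the branch's full diff, staged for review
```

From here the staged diff can be browsed in any editor or diff tool, regardless of how the commits were originally sliced.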
stitched2gethr · 2h ago
For those who want an easy button, here ya go.
```
review () {
    if [[ -n $(git status -s) ]]; then
        echo 'must start with clean tree!'
        return 1
    fi
    git checkout pristine           # a branch that I never commit to
    git rebase origin/master
    branch="$1"
    git branch -D "$branch"         # drop any stale local copy
    git checkout "$branch"          # re-create it from the remote
    git rebase origin/master
    git reset --soft origin/master  # stage the branch's entire diff...
    git reset                       # ...then unstage it, leaving working-tree edits
    nvim -c ':G'                    # neovim with the fugitive plugin - replace with your favorite editor
    git reset --hard                # clean up: discard the reviewed edits
    git status -s | awk '{ print $2 }' | xargs rm  # remove files the branch added
    git checkout pristine
    git branch -D "$branch"
}
```
aitchnyu · 41m ago
We were a team with highly parallelizable data science tasks, so we checked out three copies of the repo: one for our own branch, and two for branches where we were the reviewer.
cedws · 1h ago
Crafting good commits, and good PRs out of those commits is a skill just like how writing good code is. Unfortunately, too many people suck at the former.
Maxion · 59m ago
This also ties in directly with tickets and the overall workflow the team has. I find this to have a huge effect on how manageable PRs are. I feel the majority of devs are quite oblivious to the code they produce; they simply keep coding until they fulfill the acceptance criteria, no matter if the result is 200 lines in 1 file or 1,000 lines in 30 files.
brabel · 23m ago
Has the author not tried the IDE plugins for GitHub / Bitbucket reviews!? We review code directly in IntelliJ with full support for commenting, navigation, even merging, without leaving the IDE. That solves all the problems the author has.
jbmsf · 1h ago
Recently, I've been wondering about the point of code review as a whole.
When I started my career, no one did code review. I'm old.
At some point, my first company grew; we hired new people and started to offshore. Suddenly, you couldn't rely on developers having good judgement... or at least being responsible for fixing their own mess.
Code review was a tool I discovered and made mandatory.
A few years later, everyone converged on GitHub, PRs, and code review. What we were already doing now became the default.
Many, many years later, I work with a 100% remote team that is mostly experienced, and 75% or more of our work is writing code that looks like code we've already written. Most code review is low value. Yes, we do catch issues in review, especially with newer hires, but it's not obviously worth the delay of a review cycle.
Our current policy is to trust the author to opt-in for review. So far, this approach works, but I doubt it will scale.
My point? We have a lot of posts about code review and related tools and not enough about whether to review and how to make reviews useful.
Feeble · 1h ago
I am very much in the same position right now. My dev team has introduced mandatory code reviews for every change and I can see their output plummeting. It also seems that most of the reviews done are mostly about syntax and code format; no one actually seems to run the code or look at the actual logic to see if it makes sense.
I think it's easy to add processes under the good intention of "making the code more robust and clean", but I never hear anyone discuss what such a process costs the team in efficiency.
mgaunard · 29m ago
You need to validate things like syntax upfront so that such things don't make it to review to begin with.
I'm not a fan of automatic syntax formatting but you can have some degree of pre-commit checks.
benrutter · 1h ago
Interesting take! Personally I'd never throw out code review, for a couple reasons.
1. It's easy to optimise for talented, motivated people in your team. You obviously want this, and it should be the standard, but you also want it to be the case that somebody who doesn't care about their work can't trash the codebase.
2. I find that even people just leaving 'lgtm'-style reviews for simple things does a lot to make sure folks keep up with changes. Even if nothing is caught, you still want to make sure there aren't changes that only one person knows about. That's how you wind up with stuff like the same utility function written 10 times.
mgaunard · 32m ago
The way I like to do it, is that some projects may have an owner (can only be a single person).
The owner is allowed to make changes without review.
_kidlike · 47m ago
My team and I have been doing code reviews purely within IntelliJ for something like 6 years. We started doing it "by hand", by checking out the branch and comparing with master, then using GitHub for comments.
Now there's official support and tooling for reviews (at least in IDEA, but probably in the others too), where you also get in-line highlighting of changed lines, comments, status checks, etc...
I feel sorry for anyone still using GitHub itself (or GitLab or whatever). It's horrible for anything more than a few lines of changes here and there.
germandiago · 26m ago
I am happy with Gerrit but I am sure I do not know even how to use 20% of its capacity.
The patchsets get stacked up, and you know where you left off if there are different changes, and that is very cool.
jacobegold · 4h ago
It's so cool that Git is considering first class change IDs!! That's huge! This sounds similar to what we had at Facebook to track revisions in Phabricator diffs. Curious if anyone knows the best place to read about this?
3036e4 · 34m ago
The fundamental problem is that git doesn't track branches in any sane way. Maybe it would be better to fix that? Fossil remembers what branch a commit was committed on, so the task branch itself is a change ID. That might be tricky to solve while also allowing git commands to mess with history of course. Fossil doesn't have that problem.
pjmlp · 56m ago
I never did proper code review, other than when I was lucky enough to get a team of top devs on specific projects.
More often than not, it either doesn't exist, or it turns into a kind of architecture fetishism that the lead devs/architects bring back from conferences, or spaceship enterprise architecture.
Even without that garbage, it feels so much better than arguing about SOLID, clean code, hexagonal architecture, member functions being prefixed with an underscore, explicit types or not, ...
aitchnyu · 36m ago
Tangential: long ago I wanted to use a repo-backed backend for issues (IIRC tied to Mercurial). The issues were also flat files. We had put too many plugins into Redmine and it died frequently.
hydroxideOH- · 6h ago
I use the GitHub Pull Request extension in VSCode to do the same thing (reviewing code locally in my editor). It works pretty well, and you can add/review comments directly in the editor.
ivanjermakov · 5h ago
It's better, but still quite deep vendor lock-in (in both GitHub and VSCode).
hydroxideOH- · 5h ago
Well, my employer chooses to use GitHub, so I don't have a choice there. And it's vendor lock-in to VSCode, but that's already my primary editor, so there's no need to learn another tool just for code review.
NortySpock · 4h ago
GitHub may be dominant, but it's not like it doesn't have competitors nipping at its heels (GitLab, BitBucket come to mind).
VSCode is open source, and there are plenty of IDEs...
I guess I'm just focused on different lock-in concerns than you are.
cyberax · 4h ago
JetBrains IDEs can do the same.
plonq · 3h ago
Unfortunately it’s not feature complete - you can’t paste images in review comments, for example.
Still very useful for large PRs though.
cebert · 6h ago
I use this a lot too. Also, if you open a PR on the GitHub website and press the “.” key, it opens the review in VSCode, which I consider a much better web experience.
reilly3000 · 5h ago
TIL thanks.
MutedEstate45 · 5h ago
Agree with your pain points. One thing I'd add is that GitHub makes you reapprove every PR after each push. As an OSS contributor it's exhausting to chase re-approvals for minor tweaks.
irjustin · 5h ago
mmmm this is up to each repo/maintainer's settings.
To be fair, you don't know if a one-line change is going to absolutely compromise a flow. OSS needs to maintain a level of disconnect to be safe rather than fast.
o11c · 1h ago
Adding fixup commits (specifying the specific commit they will be squashed into), to be squashed by the bot before merge, handles that.
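The fixup mechanics, for anyone unfamiliar, look roughly like this (a throwaway-repo sketch; the commit messages and file names are made up):

```shell
set -e
# --- throwaway demo repo with one reviewed commit ---
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo v1 > feature.txt && git add . && git commit -qm "add feature"
target=$(git rev-parse HEAD)

# review feedback arrives; record the fix as a fixup of the target commit
echo v2 > feature.txt
git commit -qa --fixup "$target"   # subject becomes "fixup! add feature"

# what the merge bot (or you) runs before merging: squash fixups away
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root
git log --oneline                  # back to a single clean commit
```

The reviewer can inspect just the fixup commit, while the final history stays tidy after the autosquash.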
MutedEstate45 · 4h ago
Good to know! Never been a maintainer before so I thought that was required.
pie_flavor · 5h ago
This is a security setting that the author has chosen to enable.
Ar-Curunir · 5h ago
Hm that’s not the case for my repositories? Maybe you have a setting enabled for that?
6LLvveMx2koXfwn · 1h ago
> But modifying code under review turned out to be tricky.
GitLab enables this: you make the suggestion in-line, and the original dev can either accept or decline it.
shayief · 2h ago
Gitpatch attempts to solve this. Supports versioned patches and patch stacks (aka stacked PRs). Also handles force-pushes in stacks correctly even without Change-IDs using heuristics based on title, author date etc. It should also be unusually fast. Disclosure: I'm the author.
I'm not convinced that review comments as commits make things easier, but I think storing them in git in some way is a good idea (e.g. git annotations, or in commit messages after merge).
faangguyindia · 5h ago
Essentially, you are turning fork/branch-induced changes into a "precommit"-review-like workflow, which is great.
I was on the lookout for the best "precommit" review tool and zeroed in on Magit, gitui, and Sublime Merge.
I am not an Emacs user, so I'll have to learn this.
xeonmc · 4h ago
In theory this functionality would be best suited as a git subcommand.
I suggest `git-precom` for conciseness.
faangguyindia · 4h ago
Git already has `git add -p`, but it demands a lot from the user.
I've used Reviewboard and Phabricator and both seem "fine" to me. Superior to Github (at the time, anyway).
cbryant91 · 23m ago
good fine
godelski · 3h ago
While I like the post and agree with everything the author talked about, I find that this is not my problem, despite having a similar workflow (classic vim user). The problem I have, and I think a lot of others have too, is that review just doesn't actually exist. LGTMs are not reviews, yet they're so common.
I'm not sure there's even a tech solution to this class of problems; it comes down to culture. LGTMs exist because they satisfy the "letter of the law" but not the spirit. It's a classic bureaucracy problem combined with classic engineer problems. It feels like there are simple solutions, but LGTMs are a hack: you try to solve this by requiring reviews, and LGTMs are just a hack around that. Fundamentally, you just can't measure the quality of a review[0].

Us techie types and bureaucrats have a similar failure mode: we like measurements. But a measurement of any kind is meaningless without context. Part of the problem is that businesses treat reviewing as a second-class citizen. It's not "actual work" so it shouldn't be given preference, which excuses the LGTM-style reviews. Us engineers are used to looking at metrics without context and get lulled into a false sense of security, or convince ourselves that we can find a tech solution to this stuff.

I'm sure someone's going to propose an LLM reviewer, and hey, it might help, but it won't address the root problems. The only way to get good code reviews is for them to be done by someone capable of writing the code in the first place. Until LLMs can do all the coding, they won't make this problem go away, even if they can improve upon the LGTM bar. But that's barely a bar; it's sitting on the floor.
The problem is cultural. Code reviews are just as essential to the process as writing the code itself, and you'll notice that companies that do good code review already treat them that way. Then it becomes about making reviews easier to do! Reducing friction is something that should happen and that we should work on, but you could make it all trivial and it still wouldn't make code reviews better if they aren't treated as first-class citizens.
So while I like the post and think the tech here is cool, you can't engineer your way out of a social problem. I'm not saying "don't solve engineering problems that exist in the same space", but I'm making this comment because I think it is easy to ignore the social problem by focusing on the engineering problem(s). I mean, the engineering problems are orders of magnitude easier lol. But let's be real: avoiding this problem, and similar ones, only adds debt. I don't know what the solution is[1], but I think we need to talk about it.
[0] Then there's the dual of LGTM! Code reviews that exist and are detailed, but petty and overly nitpicky. This is also hacky, but in a very different way: it is a misunderstanding of what review (or quality control) is. There's always room for criticism, as nothing you do, ever, will be perfect. But finding problems is the easy part. The hard part is figuring out which problems are important and how to properly triage them. It doesn't take a genius to complain, but it does take an expert to critique.

That's why the dual can be even more harmful: it slows progress needlessly and encourages the classic nerdy petty bickering over inconsequential nuances or over unknowns (as opposed to important nuances and known unknowns). If QC sees its job as finding problems, and/or their bosses measure their performance by how many problems they find, then there's a steady state where the devs write code with intentional errors that QC can pick up on, fulfilling the metric of finding issues while also being easy to fix. This also matches the letter but not the spirit. This is why AI won't be able to step in without having the capacity to write the code in the first place, which would solve the entire problem by making it go away (even if agents are doing this process).
[1] Nothing said here actually presents a solution. Yes, I say "treat them as first-class citizens", but that's not a solution. Anyone claiming that this, or anything similar, is a solution is refusing to look at all the complexities that exist. It's as obtuse as saying "creating a search engine is easy: all you need to do is index all (or most) of the sites across the web." There's so much more to the problem. It's easy to oversimplify these types of issues, which is a big part of why they still exist.
jrowen · 2h ago
> Part of the problem is that businesses treat reviewing as a second class citizen. It's not "actual work" so shouldn't be given preference, which excuses the LGTM style reviews.
I've been out of the industry for a while but I felt this way years ago. As long as everybody on the team has coding tasks, their review tasks will be deprioritized. I think the solution is to make Code Reviewer a job and hire and pay for it, and if it's that valuable the industry will catch on.
I would guess that testing/QA followed a similar trajectory where it had to be explicitly invested in and made into a job to compete for or it wouldn't happen.
sfink · 1h ago
I don't see a lot of value in generic code reviewers. I want the reviewers to be actively engaged in writing somewhat related code themselves, otherwise the value of their opinions will decline over time.
As for prioritization... isn't it enough knowing that other people are blocked on your review? That's what incentivizes me to get to the reviews quickly.
I guess it's always going to depend a lot on your coworkers and your organization. If the culture is more about closing tickets than achieving some shared goal, I don't know what you could do to make things work.
godelski · 2h ago
I could be totally wrong, but I feel like that's a thing that sounds better on paper. I'm sure there are ways to do it correctly, but every instance I've seen has created division and paid testers/QC less, and I'd say the lower pay is a strong signal of the role being considered second class. Has anyone seen this work successfully?
I also think there's benefits to review being done by devs. They're already deep in the code and review does have a side benefit of broadening that scope. Helping people know what others are doing. Can even help serve as a way to learn and improve your development. I guess the question is how valuable these things are?
shmerl · 3h ago
I was recently looking for something in neovim that at least presents a nice diff resembling the one in a code review.
On the branch that you are reviewing, you can do something like this:
:DiffviewOpen origin/HEAD...HEAD
kissgyorgy · 2h ago
Putting the review into git notes might have worked better. It's not attached to the lines directly, but to the commit, and it can stay as part of the repo.
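A minimal sketch of what that could look like with git's built-in notes support (the `review` notes-ref name and the comment text are arbitrary; throwaway demo repo):

```shell
set -e
# --- throwaway demo repo with one commit under review ---
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.email reviewer@example.com
git config user.name reviewer
echo x > f.txt && git add . && git commit -qm "change under review"

# attach a review comment to the commit under a dedicated notes ref;
# notes live in the repo and can be pushed/fetched like any other ref
git notes --ref=review add -m "nit: f.txt needs a better name" HEAD
git notes --ref=review show HEAD
```

To share the comments, the notes ref can be pushed like a branch, e.g. `git push origin refs/notes/review`.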
moonlion_eth · 4h ago
ersc.io
citizenpaul · 4h ago
I got into using Jujutsu this year. I'm liking it so far. Is beta access in the works?
loeg · 4h ago
Say more.
kjgkjhfkjf · 3h ago
If you want to remain relevant in the AI-enabled software engineering future, you MUST get very good at reviewing code that you did not write.
AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.
Educational establishments MUST prioritize teaching code review skills, and other high-level leadership skills.
ZYbCRq22HbJ2y7 · 3h ago
> AI can already write very good code
Debatable. With the same experience, I'd say it depends on the language, existing patterns, the code base, the base prompts, and the complexity of the task.
netghost · 3h ago
How about: AI can write large amounts of code that might look good out of context.
ZYbCRq22HbJ2y7 · 3h ago
Yeah, LLMs can do that very well, IMO. As an experienced reviewer, the "shape" of the code shouldn't inform correctness, but it can be easy to fall into this pattern when you review code. In my experience, LLMs tend to conflate shape and correctness.
dragonwriter · 53m ago
> As an experienced reviewer, the "shape" of the code shouldn't inform correctness, but it can be easy to fall into this pattern when you review code.
For human written code, shape correlates somewhat with correctness, largely because the shape and the correctness are both driven by the human thought patterns generating the code.
LLMs are trained very well at reproducing the shape of expected outputs, but the mechanism is different than humans and not represented the same way in the shape of the outputs. So the correlation is, at best, weaker with the LLMs, if it is present at all.
This is also much the same effect that makes LLMs convincing purveyors of BS in natural language, but magnified for code because people are more used to people bluffing with shape using natural language, but churning out high-volume, well-shaped, crappy substance code is not a particularly useful skill for humans to develop, and so not a frequently encountered skill. And so, prior to AI code, reviewers weren't faced with it a lot.
h4ny · 2h ago
> you MUST get very good at reviewing code that you did not write.
I find that interesting. That has always been the case at most places my friends and I have worked at that have proper software engineering practices, companies both very large and very small.
> AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.
I echo @ZYbCRq22HbJ2y7's opinion. For well defined refactoring and expanding on existing code in limited scope they do well, but I have not seen that for any substantial features especially full-stack ones, which is what most senior engineers I know are finding.
If you are really seeing that, then I would worry either about the quality of those senior+ software engineers or about the metrics you are using to assess the efficacy of AI vs. senior+ engineers. You don't have to show us any code: just tell us how you objectively came to that conclusion and what framework you used to compare them.
> Educational establishments MUST prioritize teaching code review skills
Perhaps more is needed but I don't know about "prioritizing"? Code review isn't something you can teach as a self-contained skill.
> and other high-level leadership skills.
Not everyone needs to be a leader and not everyone wants to be a leader. What are leadership skills anyway? If you look around the world today, it looks like many people we call "leaders" are people accelerating us towards a dystopia.
No comments yet
nop_slide · 3h ago
I'm considered one of the stronger code reviewers on the team. What grinds my gears is seeing large, obviously AI-heavy PRs and finding a ton of dumb things wrong with them, like totally different patterns and even bugs. I've lost trust that the person putting up the PR has even self-reviewed their own code and verified it does what they intend.
If you’re going to use AI you have to be even more diligent and self-review your code; otherwise you’re being a shitty teammate.
kubectl_h · 2h ago
Same. I work at a place that has gone pretty hard into AI coding, including onboarding managers into using it to get them into the dev lifecycle, and it definitely puts an inordinate amount of pressure on senior engineers to scrutinize PRs much more closely. This includes much more thorough reviews of tests as well since AI writes both the implementation and tests.
It's also caused an uptick in inbound to dev tooling and CI teams since AI can break things in strange ways since it lacks common sense.
faangguyindia · 2h ago
If you are seeing that, it just means they are not using the tool properly or are using the wrong tool.
AI assisted commits on my team are "precise".
abenga · 47m ago
No True AI …
jonahx · 3h ago
There is no reason to think that code review will magically be spared by the AI onslaught while code writing falls, especially as devs themselves lean more on the AI and have less and less experience coding every day.
There just haven't been as many resources poured into improving AI code review as there have been into writing code.
And in the end the whole paradigm itself may change.
gf000 · 1h ago
> AI can write better code than most of them can at this point
So where are your 3 startups?
wfhrto · 2h ago
AI can review code. No need for human involvement.
gf000 · 56m ago
For styling and trivial issues, sure. And if it's free, do make use of it.
But it is just as unable to reason properly about anything slightly more complex as it is when writing code.
Other engineering fields enjoy no such luxuries: the bridge either holds the train or it doesn't, you either nailed the manufacturing plant or there's little room for fixing, and the plane's engine either works or it doesn't.
Different stakes and patching opportunities lend to different practices.
These can be written either for just our team or for the eyes of all other software teams. In the latter case we put these forward as RFCs for discussion in a fortnightly meeting, which is announced well in advance so people can read them, leave comments beforehand, and only need to attend the meeting if there's an RFC of interest to them up for discussion.
This has gone pretty well for us! It can feel like a pain to write some of these, and at times I think we overuse them somewhat, but I much prefer our approach to any other place I've worked where we didn't have any sort of collaborative design process in place at all.
Where I work, the structure is such that most parts of the codebase have a team that is responsible for it and does the vast majority of changes there. If any "outsider" plans a change, they come talk to the team and coordinates.
And we also have strong intra-team communication. It's clear who is working on what and we have design reviews to agree on the "how" within the team.
It's rare that what you describe happens. 95% of the code reviews I do are without comments or only with minor suggestions for improvement. Mainly because we have developed a culture of talking to each other about major things beforehand and writing the code is really just the last step in the process. We also have developed a somewhat consistent style within the teams. Not necessarily across the teams, but that's ok.
TL;DR: It's certainly possible to do things better than what you are experiencing. It's a matter of structure, communication, and culture.
Sourcehut is missing from the list; it's built on the classic workflow of sending patches / issues / bugs / discussion threads via email, and it integrates this into its mailing lists and CI solution, which also sends the CI status / log back via email.
Drew Devault published helpful resources for sending and reviewing patches via email on git-send-email.io and git-am.io
When we started graphite.dev years ago that was a workflow most developers had never heard of unless they had previously been at FB / Google.
Fun to see how fast code review can change over 3-4yrs :)
And I very much appreciate both the ambition and the results that come from making it interoperate with PRs; it's a nightmare problem and it's pretty damned amazing it works at all, let alone most of the time.
I would strongly lobby for a prescriptive mode where Graphite initializes a repository with hardcore settings that would allow it to make more assumptions about the underlying repo (merge commits, you know the list better than I do).
I think that's what could let it be bulletproof.
It seems non-obvious that you would have to prohibit git commands in general, they're already "buyer beware" with the current tool (and arcanist for that matter). Certainly a "strict mode" where only well-behaved trees could interact with the tool creates scope for all kinds of performance and robustness optimizations (and with reflog bisecting it could even tell you where you went off script).
I was more referring to how the compromises gt has to make to cope with arbitrary GitHub PRs seem a lot more fiddly than directly invoking git, but that's your area of expertise and my anecdote!
Broad strokes I'm excited for the inevitable decoupling of gt from GitHub per se, it was clearly existential for zero to one, but you folks are a first order surface in 2025.
Keep it up!
Best AI code review, hands down. (And I’ve tried a few.)
Shockingly, the best code review tool I've ever used was Azure DevOps.
I use it every day and don't have any issues with the review system, but to me it's very similar to github. If anything, I miss being able to suggest changes and have people click a button to integrate them as commits.
Javascript at scale combined with teams that have to move fast and ship features is a recipe for this.
At least it's not Atlassian.
I didn't get why stick with the requirement that a review is a single commit. To keep the git-review implementation simple?
I wonder if an approach where every reviewer commits their comments/fixes to the PR branch directly would work as well as I think it would. One might not even need any additional tools to make it convenient to work with. This idea seems like a hybrid of the traditional GitHub flow and the way Linux development is organized via mailing lists and patches.
Yeah that is pretty weird. If 5 people review my code, do they all mangle the same review commit? We don't do that with code either, feels like it's defeating the point.
Review comments would need to be commits on top of the reviewed commit. If there are 5 reviews of the same commit, they all branch out from that commit, and to address them there is another commit that also lives beside them. Each commit's change process becomes a branch, with stacked commits being branches chained on top of one another. Each commit in those chained branches then has comment commits attached, and those comment commits could even form chains if a discussion is happening. When everybody is happy, each branch gets squashed into a single commit and those get rebased onto the main branch.
You likely want to make new commits for that though to preserve the discussions for a while. And that's the crux: That data lives outside the main branch, but needs to live somewhere.
I've had team members edit a correction as a "suggestion" comment, and I can approve it to be added as a commit on my branch.
This is eerily similar to how I review large changes that do not have a clear set of commits. The real problem is working with people that don’t realize that if you don’t break work down into small self contained units, everybody else is going to have to do it individually. Nobody can honestly say they can review tons of diffs to a ton of files and truly understand what they’ve reviewed.
The whole is more than just the sum of the parts.
```
review () {
  if [[ -n $(git status -s) ]]; then
    echo 'must start with clean tree!'
    return 1
  fi
}
```

When I started my career, no one did code review. I'm old.
At some point, my first company grew; we hired new people and started to offshore. Suddenly, you couldn't rely on developers having good judgement... or at least being responsible for fixing their own mess.
Code review was a tool I discovered and made mandatory.
A few years later, everyone converged on GitHub, PRs, and code review. What we were already doing now became the default.
Many, many years later, I work with a 100% remote team that is mostly experienced, and 75% or more of our work is writing code that looks like code we've already written. Most code review is low value. Yes, we do catch issues in review, especially with newer hires, but it's not obviously worth the delay of a review cycle.
Our current policy is to trust the author to opt-in for review. So far, this approach works, but I doubt it will scale.
My point? We have a lot of posts about code review and related tools and not enough about whether to review and how to make reviews useful.
I think it's easy to add processes under the good intention of "making the code more robust and clean", but I have never heard anyone discuss the cost such a process has on the team's efficiency.
I'm not a fan of automatic syntax formatting but you can have some degree of pre-commit checks.
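For the pre-commit idea, plain git already supports local hooks; here's a minimal sketch in a throwaway repo (the marker string, file contents, and user identity are all made up for illustration):

```shell
set -eu
# Demo in a throwaway repo: a pre-commit hook that rejects commits
# while a leftover marker is still staged (marker name is invented)
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email reviewer@example.com
git config user.name reviewer
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# -G matches commits whose staged diff contains the regex
if git diff --cached -G'TODO-BEFORE-MERGE' --name-only | grep -q .; then
  echo 'commit blocked: TODO-BEFORE-MERGE marker still staged' >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
echo 'x = 1  # TODO-BEFORE-MERGE' > f
git add f
git commit -m wip || echo 'hook rejected the commit'
```

Hooks are per-clone and unversioned, which is one reason teams usually layer a hook manager on top or repeat the same checks in CI.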
1. It's easy to optimise for talented, motivated people in your team. You obviously want this, and it should be the standard, but you also want it to be the case that somebody who doesn't care about their work can't trash the codebase.
2. I find even people just leaving 'lgtm' style reviews for simple things, does a lot to make sure folks keep up with changes. Even if there's nothing caught, you still want to make sure there aren't changes that only one person knows about. That's how you wind up with stuff like, the same utility functions written 10 times.
The owner is allowed to make changes without review.
Now there's official support and tooling for reviews (at least in IDEA, but probably in the others too), where you also get in-line highlighting of changed lines, comments, status checks, etc...
I feel sorry for anyone still using GitHub itself (or GitLab or whatever). It's horrible for anything more than a few lines of changes here and there.
The patchsets get stacked up and you know where you left off if there are different changes and that is very cool.
More often than not, it either doesn't exist, or turns into a kind of architecture fetishism that the lead devs/architects pick up from conferences, or spaceship enterprise architecture.
Already without this garbage it feels so much better than arguing about SOLID, clean code, hexagonal architecture, member functions being prefixed with an underscore, explicit types or not, ...
VSCode is open source, and there are plenty of IDEs...
I guess I'm just focused on different lock-in concerns than you are.
To be fair, you don't know whether a one-line change is going to completely compromise a flow. OSS needs to maintain a level of disconnect to be safe rather than fast.
GitLab enables this - make the suggestion in-line which the original dev can either accept or decline.
I'm not convinced that review comments as commits make thing easier, but I think storing them in git in some way is a good idea (i.e. git annotations or in commit messages after merge etc)
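One existing mechanism for the "store review comments in git" idea is `git notes`, which attaches text to existing commits under a separate ref without rewriting history. A small self-contained sketch (the note text, ref name, and user identity are arbitrary choices here):

```shell
set -eu
# Throwaway repo so the example is self-contained
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email reviewer@example.com
git config user.name reviewer
git commit -q --allow-empty -m 'feat: add retry logic'
# Attach a review comment to HEAD under a dedicated notes ref
git notes --ref=review add -m 'LGTM, but consider extracting the retry loop'
# Read it back; the commit itself is untouched
git notes --ref=review show HEAD
```

Notes live under `refs/notes/review` and can be pushed and fetched explicitly (e.g. `git push origin refs/notes/review`), though mainstream forges generally don't surface them in their review UIs.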
I was on a lookout for best "precommit" review tool and zeroed on Magit, gitui, Sublime Merge.
I am not an Emacs user, so I'll have to learn this.
I suggest `git-precom` for conciseness.
https://youtu.be/Qscq3l0g0B8
I'm not sure there's even a tech solution to this class of problems; it comes down to culture. LGTMs exist because they satisfy the "letter of the law" but not the spirit. It's a classic bureaucracy problem combined with classic engineer problems: it feels like there are simple solutions, but LGTMs are just a hack. You try to solve this by requiring reviews, and LGTMs are the hack around that requirement. Fundamentally you just can't measure the quality of a review[0].

Us techie types and bureaucrats have a similar failure mode: we like measurements. But a measurement of any kind is meaningless without context. Part of the problem is that businesses treat reviewing as a second-class citizen. It's not "actual work", so it shouldn't be given preference, which excuses the LGTM-style reviews. Us engineers are used to looking at metrics without context and get lulled into a false sense of security, or convince ourselves that we can find a tech solution to this stuff. I'm sure someone's going to propose an LLM reviewer, and hey, it might help, but it won't address the root problems.

The only way to get good code reviews is for them to be done by someone capable of writing the code in the first place. Until the LLMs can do all the coding, they won't make this problem go away, even if they can improve upon the LGTM bar. But that's barely a bar; it's sitting on the floor.
The problem is cultural. The problem is that code reviews are just as essential to the process as writing the code itself. You'll notice that companies that do good code review already do this. Then it is about making this easier to do! Reducing friction is something that should happen and we should work on, but you could make it all trivial and it wouldn't make code reviews better if they aren't treated as first class citizens.
So while I like the post and think the tech here is cool, you can't engineer your way out of a social problem. I'm not saying "don't solve engineering problems that exist in the same space" but I'm making the comment because I think it is easy to ignore the social problem by focusing on the engineering problem(s). I mean the engineering problems are magnitudes easier lol. But let's be real, avoiding addressing this, and similar, problems only adds debt. I don't know what the solution is[1], but I think we need to talk about it.
[0] Then there's the dual to LGTM: reviews that are detailed but petty and overly nitpicky. This is also hacky, but in a very different way; it's a misunderstanding of what review (or quality control) is. There's always room for criticism, since nothing you do will ever be perfect, but finding problems is the easy part. The hard part is figuring out which problems are important and how to properly triage them. It doesn't take a genius to complain, but it does take an expert to critique. That's why the dual can be even more harmful: it slows progress needlessly and encourages the classic nerdy petty bickering over inconsequential nuances or unknowns (as opposed to important nuances and known unknowns). If QC sees its job as finding problems, and/or their bosses measure their performance by how many problems they find, then there's a steady state where devs write code with intentional errors that QC can pick up on, fulfilling the metric while also being easy to fix. This also matches the letter but not the spirit. It's why AI won't be able to step in without the capacity to write the code in the first place, which would solve the entire problem by making it go away (even if agents end up doing the whole process).
[1] Nothing said here actually presents a solution. Yes, I say "treat them as first-class citizens", but that's not a solution; anyone claiming that this, or anything similar, is a solution is refusing to look at all the complexities that exist. It's as obtuse as saying "creating a search engine is easy: all you need to do is index all (or most) of the sites across the web." There's so much more to the problem. It's easy to oversimplify these types of issues, which is a big part of why they still exist.
I've been out of the industry for a while but I felt this way years ago. As long as everybody on the team has coding tasks, their review tasks will be deprioritized. I think the solution is to make Code Reviewer a job and hire and pay for it, and if it's that valuable the industry will catch on.
I would guess that testing/QA followed a similar trajectory where it had to be explicitly invested in and made into a job to compete for or it wouldn't happen.
As for prioritization... isn't it enough knowing that other people are blocked on your review? That's what incentivizes me to get to the reviews quickly.
I guess it's always going to depend a lot on your coworkers and your organization. If the culture is more about closing tickets than achieving some shared goal, I don't know what you could do to make things work.
I also think there's benefits to review being done by devs. They're already deep in the code and review does have a side benefit of broadening that scope. Helping people know what others are doing. Can even help serve as a way to learn and improve your development. I guess the question is how valuable these things are?
This is a pretty cool tool for it: https://github.com/sindrets/diffview.nvim
On the branch that you are reviewing, you can do something like this:
:DiffviewOpen origin/HEAD...HEAD
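Worth noting that the three-dot range is plain git syntax rather than a diffview extension: `A...B` diffs `B` against the merge-base of `A` and `B`, so only the branch's own changes show up. A throwaway-repo demonstration (branch names and file contents are invented; `git init -b` needs git 2.28+):

```shell
set -eu
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main
git config user.email reviewer@example.com
git config user.name reviewer
echo base > f; git add f; git commit -qm base
git switch -qc feature
echo feat >> f; git commit -qam feature
# Mainline moves on independently
git switch -q main
echo unrelated > g; git add g; git commit -qm mainline
git switch -q feature
# Two dots compare the tips, so main's new file would show as a removal;
# three dots compare against the merge-base, showing only feature's work
git diff --name-only main...HEAD
```

`git diff --stat main...HEAD` gives the same merge-base-relative view as a per-file summary, which is handy as a review index.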
AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.
Educational establishments MUST prioritize teaching code review skills, and other high-level leadership skills.
Debatable. With the same experience, it depends on the language, existing patterns, code base, base prompts, and the complexity of the task.
For human written code, shape correlates somewhat with correctness, largely because the shape and the correctness are both driven by the human thought patterns generating the code.
LLMs are trained very well at reproducing the shape of expected outputs, but the mechanism is different than humans and not represented the same way in the shape of the outputs. So the correlation is, at best, weaker with the LLMs, if it is present at all.