Writing Code Is Easy. Reading It Isn't

32 points by jnord on 9/8/2025, 12:29:12 PM | idiallo.com

Comments (14)

ppeetteerr · 32m ago
This is not unique to the age of LLMs. PR reviews are often shallow because the reviewer is not giving the contribution the amount of attention and understanding it deserves.

With LLMs, the volume of code has only gotten larger, but those same LLMs can help review the code being written. Current code-review agents are surprisingly good at catching errors, better than most human reviewers.

We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.

gyomu · 21m ago
> We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.

The real breakthrough would be finding a way to not even do things that don’t need to be done in the first place.

90% of what management thinks it wants gets discarded/completely upended a few days/weeks/months later anyway, so we should have AI agents that just say “nah, actually you won’t need that” to 90% of our requests.

foxfired · 24m ago
One thing to take into account is that PR reviews aren't there just for catching errors in the code. They also ensure that the business logic is correct. For example, you can have code that passes all the tests and looks good, but doesn't align with the business logic.
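
A toy illustration of that gap (hypothetical names, in Python): the function below is tidy and its test passes, but if the actual business rule is "a customer gets only the single best discount, never stacked," both the code and the test are wrong, and nothing at the code level flags it.

```python
def apply_discounts(price: float, discounts: list[float]) -> float:
    """Apply percentage discounts to a price."""
    # Stacks every discount multiplicatively; looks reasonable,
    # but may contradict the (unstated) business rule.
    for d in discounts:
        price *= (1 - d)
    return price


def test_apply_discounts():
    # Passes: 100 with 10% then 20% off -> 72.0.
    # If the business rule is "best single discount only",
    # the correct answer is 80.0 and this test encodes the bug.
    assert round(apply_discounts(100.0, [0.10, 0.20]), 2) == 72.0
```
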
HuwFulcher · 7h ago
This is a challenge which I don't think AI tools like Cursor have cracked yet. They're great for laying "fresh pavement" but it's akin to being a project manager contracting the work out.

Even if I use Cursor (or some equivalent) and review the code, I find my mental model of the system is much weaker. It actually had a net negative effect on my productivity, because it gave me anxiety about going back to the codebase.

If an AI tool could help a user interactively learn the mental model, I think that would be a great step in the right direction.

ivape · 1h ago
> but it's akin to being a project manager contracting the work out.

And that's probably the difference between those who are okay with vibe coding and those who aren't. A leader of a company that doesn't care about code quality (elegant code, good tradeoffs, etc.) would never have cared whether 10 monkeys output the code pre-AI or 10 robot monkeys output it with AI. It's only a developer, of a certain type, who would care to say "pause" in either of those situations.

Out of principle, I would not share or build coding tools for these people. They literally did not care about code quality all these years, and the last thing I want to do is enable them on any level.

catigula · 1h ago
An AI tool can both navigate a legacy codebase and explain it to you successfully, right now, if you're doing it correctly.

I've contracted some of this understanding/intellectual work out to Claude Code many, many times, successfully.

HuwFulcher · 53m ago
Yes, it’s definitely possible now. My point was that people need to move past “vibe coding” to using the AI as what it should be: an assistant.

tptacek · 39m ago
This article makes points that are valid in general but not apposite to code generation with agents.

It is indeed difficult to verify a piece of code that is either going to ship as-is or with the specific modifications your verification identifies. In cryptography engineering the rule of thumb (never followed in practice) is 10x verification cost for every x of implementation cost. Verification is hard and expensive.

But qualifying agent-generated code isn't the verification problem, in the same way that validating an eBPF program in the kernel isn't solving the halting problem.

That's because the agentic scenario gives us an additional outcome: we can allow the code as-is, we can make modifications to the code, or we can throw out the code and re-prompt --- discarding (many) probably-valid programs in a search for the subset of programs that are easy to validate.

In practice, most of what people generate is boring and easy to validate: you know within a couple minutes whether it's the right shape, whether anything sticks out the wrong way, the way a veteran chess player can quickly pattern-match a whole chessboard. When it isn't boring, you read carefully (and expensively), or you just say "no, try again, give me something more boring", or you break your LLM generation into smaller steps that are easier to pattern match and recurse.

What professionals generally don't (I think) do with LLMs is generate large gnarly PRs, all at once, and then do a close-reading of those gnarly PRs. They read PRs that are easy (not gnarly). They reject the gnarly ones, and compensate for gnarliness by approaching the problem at a smaller level of granularity. Or, you know, just write those bits by hand!

vivzkestrel · 1h ago
I am really bad at reading code, to be honest (especially other people's code). Any tips on how I can go about getting good at this, starting from baby steps?

Night_Thastus · 1h ago
Practice, context, and domain-specific knowledge.

#1 is easy, #2 requires some investigation, #3 requires studying.

If you're looking at, say, banking code, but you know nothing about finance, you may struggle to understand what it's doing. You may want to gain some domain expertise. Being a subject-matter expert (SME) makes reading the related code a heck of a lot easier.

Context comes down to learning the code base. What code calls the part you're looking at? What user actions trigger it? Look at the comments and commit messages: what was the intention? This just takes time and a lot of trawling around, looking for patterns and common elements. User manuals and documentation can also help. This part can't be rushed; it just comes from passing over it again and again and again. If you have access to people very familiar with the code, ask them! They may be able to kick-start your intro.

#1 will come naturally with time.

aeturnum · 46m ago
I find it useful to open the code in an editor and make running notes in the comments about what I think the state should be. As long as the code has good tests, you can use debugging statements to confirm your understanding.

As a bonus you can just send that whole block of code - notes and all - to a colleague if you get stuck. They can read through the code and your thoughts and give feedback.
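
A minimal sketch of what those running notes can look like (Python, with a hypothetical function under study): the NOTE comments record guesses about state, and throwaway asserts/prints confirm or refute them when the existing tests run.

```python
# Annotated copy of a function I'm trying to understand.
# The NOTE comments are my running guesses; the assert and
# print are temporary checks run under the existing test suite.

def rebalance(queue: list[dict]) -> list[dict]:
    # NOTE: I think `queue` arrives pre-sorted by priority,
    # highest first. Verify that assumption cheaply:
    assert all(a["priority"] >= b["priority"]
               for a, b in zip(queue, queue[1:])), "not pre-sorted?"

    active = [job for job in queue if not job.get("done")]
    # NOTE: so finished jobs are silently dropped here, not archived.
    print(f"rebalance: dropped {len(queue) - len(active)} finished jobs")

    return active
```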

gerad · 1h ago
When you're debugging issues, read the code for the libraries you're using before going to their documentation. It's a great way to get exposed to other people's code.

hashbig · 1h ago
Like everything else: practice. I like to clone the repositories of open-source tools I use and try to understand how a particular feature is built end to end. I find that reading code aimlessly is not that helpful; try to read it with a goal in mind. When starting out, pick a tool or application that is very simple, and lean on LLMs to explain only the bits you don't understand.

jameskilton · 1h ago
Software's job is to tell other people what the computer is doing.