Writing code is easy, reading it isn't

50 points by jnord | 9/8/2025, 12:29:12 PM | 32 comments | idiallo.com ↗

Comments (32)

s_Hogg · 26m ago
You need to be twice as smart to debug code as you need to be to write it. So if you write the smartest code you can, then by definition you are too dumb to debug it.

Just write simple code

lukan · 4m ago
Thank you, I will be using this.

Not sure if there is actually data behind the assumption that debugging requires twice the smarts of writing code, but it sounds about right.

And if you write the smartest code you can while you are at your peak, you also won't be able to read it when you are just a bit tired.

So yes, yes, yes. Just write simple code.

(I was also initially messed up a bit by teachers filling me with the idea that I should aim for clever code.)

EGreg · 25m ago
What exactly is "the smartest code you can"? :)
brudgers · 18m ago
Actual quote:

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

havefunbesafe · 14m ago
In this case, it sounds like "The code that will most impress your parents"
ppeetteerr · 1h ago
This is not unique to the age of LLMs. PR reviews are often shallow because the reviewer is not giving the contribution the amount of attention and understanding it deserves.

With LLMs, the volume of code has only gotten larger but those same LLMs can help review the code being written. The current code review agents are surprisingly good at catching errors. Better than most reviewers.

We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.

gyomu · 1h ago
> We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.

The real breakthrough would be finding a way to not even do things that don’t need to be done in the first place.

90% of what management thinks it wants gets discarded/completely upended a few days/weeks/months later anyway, so we should have AI agents that just say “nah, actually you won’t need that” to 90% of our requests.

prybeng · 27m ago
I wonder if the paradigm shift is the adoption of a higher-level language, akin to what Python did by black-boxing C libraries.
foxfired · 1h ago
One thing to take into account is that PR reviews aren't there just for catching errors in the code. They also ensure that the business logic is correct. For example, you can have code that passes all tests and looks good, but doesn't align with the business logic.
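A tiny hypothetical sketch of what I mean (the pricing rule and all names are invented):

```python
def apply_discounts(price: float, discounts: list[float]) -> float:
    """Apply every discount in sequence."""
    for d in discounts:
        price *= 1 - d
    return round(price, 2)

# Unit tests: each discount works on its own, so the PR "looks good".
assert apply_discounts(100.0, [0.10]) == 90.0
assert apply_discounts(100.0, [0.25]) == 75.0

# But suppose the business rule is "only the largest discount applies".
# Stacking 10% and 25% into an effective 32.5% off passes every test
# above while silently violating the spec -- and no reviewer catches it
# without knowing the business.
assert apply_discounts(100.0, [0.10, 0.25]) == 67.5  # "works", but wrong per the rule
```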
HuwFulcher · 8h ago
This is a challenge which I don't think AI tools like Cursor have cracked yet. They're great for laying "fresh pavement" but it's akin to being a project manager contracting the work out.

Even if I use Cursor (or some other equivalent) and review the code, I find my mental model of the system is much more lacking. It actually had a net negative effect on my productivity, as it gave me anxiety about going back to the codebase.

If an AI tool could help a user interactively learn the mental model I think that would be a great step in the right direction.

ivape · 2h ago
> but it's akin to being a project manager contracting the work out.

And that's probably the difference between those who are okay with vibe coding and those who aren't. A leader of a company that doesn't care about code quality (elegant code, good tradeoffs, etc) would never have cared if 10 monkeys outputted the code pre-AI or if 10 robot monkeys outputted the code with AI. It's only a developer, of a certain type, that would care to say "pause" in either of those situations.

Out of principle I would not share or build coding tools for these people. They literally did not care all these years about code quality, and the last thing I want to do is enable them on any level.

marcosdumay · 40m ago
Well, now they can have their way without a pesky developer second-guessing their every decision.

I don't want to participate in it either, but I'm glad they'll have the chance to make things their way, with all the consequences it brings, unfiltered.

catigula · 2h ago
An AI tool can both navigate a legacy codebase and successfully explain it to you, right now, if you're doing it correctly.

I've successfully contracted this kind of understanding/intellectual work out to Claude Code many, many times.

HuwFulcher · 1h ago
Yes, it’s definitely possible now. My point was that people need to move past “vibe coding” to using the AI as what it should be: an assistant.
lukaslalinsky · 21m ago
I don't think people are realistically "vibe coding" production apps. You see a lot of hype, and I'm sure there is a growing number of people without software development skills using Claude Code, but those are not the people writing important software. Even the people from Anthropic encourage programmers to first use CC as an analytics/research tool, not for writing code. Once you get a feel for what it can do, you will feel comfortable feeding it chunks of work you want to delegate, but realistically never more than you can review. Yes, you can ask it to build the next Instagram, but you will very quickly find out it's not going to work.
freed0mdox · 1h ago
I have the opposite experience. After years in appsec and pentesting, I can read any codebase and quickly understand its parts, but I wouldn’t be able to write anything of production quality. LLMs speed the comprehension process up for me even further. I guess it comes down to practice: if you practice reading code, you get good at reading code.
dingnuts · 57m ago
Reading production code that is known to work can be done with faith and skimming. You don't have to understand every function call, because each one has been tested and battle-hardened, so it's easy to get an overview of what is happening.

LLM code is NOT like this at all. It's like a skilled liar writing something that LOOKS plausible; that's what they're trained to do.

People like you do not have the ability to evaluate the LLM output; it's not the same as reading code that was carefully written at ALL. If you think it's the same, that is only evidence that you can't tell the difference between working code and misleading buggy code.

What you've learned to do is read the intent of code. That's fine when it's been written and tested by a person. It's useless when it comes to evaluating LLM slop.

You're being gaslit.

danielmarkbruce · 54m ago
You are being gaslit if you think "production code that is known to work" covers any reasonable proportion of code in production.
dingnuts · 45m ago
Well played, but of course inevitably whatever it's doing in production (whether to spec or not) is "working" for somebody.

Obligatory XKCD "Workflow" reference: https://xkcd.com/1172/

vivzkestrel · 2h ago
I am really bad at reading code, to be honest (especially other people's code). Any tips on how I can go about getting good at this, starting from baby steps?
lukaslalinsky · 18m ago
Read code, read code, read code. You will get better.

When looking at a piece of code, keep asking questions like: what does this return, what are the side effects, what can go wrong, what happens if this goes wrong, where do we exit, can this get stuck, where do we close/save/commit this, what's the input, what if the input is wrong/missing, where are we checking if the input is OK, can this number underflow/overflow, etc

All these questions are there to complete the picture, so that instead of function calls and loops, you are looking at the graph of interconnected "things". It will become natural after some time.

It helps if you read the code with some interest, e.g. if you want to find a bug in an open source project that you have never seen the code for.
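For example, that running commentary might look like this on a tiny made-up snippet (everything below is hypothetical):

```python
import json

def load_retries(path):                  # What's the input? Can path be empty or None?
    with open(path) as f:                # What if the file doesn't exist? Who handles that?
        config = json.load(f)            # What if it isn't valid JSON? Where do we exit?
    retries = config.get("retries", 3)   # What if "retries" is negative, huge, or a string?
    return retries                       # What do we return when the key is missing? (3)
```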

Night_Thastus · 2h ago
Practice, context and domain-specific knowledge.

#1 is easy, #2 requires some investigation, #3 requires studying.

If you're looking at, say, banking code - but you know nothing about finance - you may struggle to understand what it's doing. You may want to gain some domain expertise. Being an SME (subject-matter expert) makes reading the related code a heck of a lot easier.

Context comes down to learning the code base. What code calls the part you're looking at? What user actions trigger it? Look at the comments and commit messages - what was the intention? This just takes time and a lot of trawling around, looking for patterns and common elements. User manuals and documentation can also help. This part can't be rushed - it just comes from passing over it again and again and again. If you have access to people very familiar with the code - ask them! They may be able to kick-start your intro.

#1 will come naturally with time.

__alias · 22m ago
It's far easier to read diagrams than it is to read code.

To get a good mental model, I'll often get an LLM to generate a few Mermaid diagrams of how everything pieces together.
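For example, the kind of diagram I mean (a made-up login flow; all the names are invented):

```mermaid
sequenceDiagram
    participant Client
    participant API
    participant Auth
    participant DB
    Client->>API: POST /login
    API->>Auth: validate(credentials)
    Auth->>DB: fetch user record
    DB-->>Auth: user row
    Auth-->>API: session token
    API-->>Client: 200 OK + token
```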

aeturnum · 1h ago
I find it useful to open the code in an editor and make running notes in the comments about what I think the state should be. As long as the code has good tests you can use debugging statements to confirm your understanding.

As a bonus you can just send that whole block of code - notes and all - to a colleague if you get stuck. They can read through the code and your thoughts and give feedback.
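Concretely, that might look like this (the function and the notes are made up):

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: int

def rebalance(accounts):
    total = sum(a.balance for a in accounts)
    # NOTE(me): I believe total must be non-negative here; the tests
    # never cover an overdrawn account. Confirm with a debug check.
    assert total >= 0, f"expected non-negative total, got {total}"

    share = total / len(accounts)
    # NOTE(me): share is a float but balances are ints -- is the
    # truncation below intentional? Worth asking a colleague.
    print(f"DEBUG total={total} share={share}")
    for a in accounts:
        a.balance = int(share)
    return accounts

rebalance([Account(100), Account(50)])  # prints: DEBUG total=150 share=75.0
```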

danielmarkbruce · 58m ago
Use an LLM.

It's not a joke answer. This entire article is silly. LLMs are great for helping you understand code.

__alias · 20m ago
Agreed!

Getting them to generate sequence/flow charts especially is, I find, a great hack for figuring out how everything fits together.

Claude Code is fantastic at quickly tracing through code and building visualisations of how it all works together.

gerad · 2h ago
When you're debugging issues, read the code for the libraries you're using before going to their documentation. It's a great way to get exposed to other people's code.
maxverse · 56m ago
You're not alone!
hashbig · 2h ago
Like everything else, practice. I like to clone repositories of open source tools I use and try to understand how a particular feature is built end to end. I find that reading code aimlessly is not that helpful. Try to read it with a goal in mind. When starting out, pick a tool/application that is very simple and lean on LLMs to explain only the bits you don't understand.
m3kw9 · 20m ago
With LLMs you need to read well; they can introduce “later bugs”.
tptacek · 1h ago
This article makes points that are valid in general but not apposite to code generation with agents.

It is indeed difficult to verify a piece of code that is either going to ship as-is, or with the specific modifications your verification identifies. In cryptography engineering the rule of thumb (never followed in practice) is 10x verification cost to x implementation cost. Verification is hard and expensive.

But qualifying agent-generated code isn't the verification problem, in the same way that validating an eBPF program in the kernel isn't solving the halting problem.

That's because the agentic scenario gives us an additional outcome: we can allow the code as-is, we can make modifications to the code, or we can throw out the code and re-prompt --- discarding (many) probably-valid programs in a search for the subset of programs that are easy to validate.

In practice, most of what people generate is boring and easy to validate: you know within a couple minutes whether it's the right shape, whether anything sticks out the wrong way, the way a veteran chess player can quickly pattern-match a whole chessboard. When it isn't boring, you read carefully (and expensively), or you just say "no, try again, give me something more boring", or you break your LLM generation into smaller steps that are easier to pattern match and recurse.

What professionals generally don't (I think) do with LLMs is generate large gnarly PRs, all at once, and then do a close-reading of those gnarly PRs. They read PRs that are easy (not gnarly). They reject the gnarly ones, and compensate for gnarliness by approaching the problem at a smaller level of granularity. Or, you know, just write those bits by hand!

jameskilton · 2h ago
Software's job is to tell other people what the computer is doing.