You need to be twice as smart to debug code as you need to be to write it. So if you write the smartest code you can, then you are, by definition, too dumb to debug it.
Just write simple code
Ferret7446 · 57m ago
This is not just about writing code, but about designing systems. If you design the smartest (most complex) system you can, you won't be able to debug/fix/extend/maintain it.
tracker1 · 3h ago
I hold KISS above most "Enterprise Patterns", with YAGNI a close second. I think abstractions should be used to reduce the complexity of code, not to make it harder to reason about. If a pattern increases the complexity of understanding, then it should pay off in more cases than not.
I'm also a fan of feature-oriented project structures. I want the unit test file in or next to the code it's testing. For UI projects it's similar: with React, it's about the component or feature, not the type of thing. For APIs, I will put request handlers with the feature, along with the models and other abstractions that go together, based on what they fulfill, not the type of class they are.
I consider this practice more intuitively discoverable. You go into a directory for "Users" and you will see functionality related to users... this can be profile CRUD or the endpoint handlers. Security may or may not be a different feature depending on how you grow your app (Users, Roles, Permissions, etc). For that matter, I'd more often rather curate a single app that does what it needs vs. dozens of apps in a singular larger project. I've seen .NET web projects strewn across 60+ applications in two different solutions before. It took literally weeks to do what should take half a day at most.
All for one website/app to get published. WHY?!? I'm not opposed to smaller/micro services where they make sense either. But keep it all as simple as you possibly can. Try to make what you create/use/consume/produce as simple as you can too. Can you easily use/consume/interact with what you make from a system in $NewLanguage without too much headache? I don't like to have to rely on special libraries being available everywhere.
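To sketch what I mean (hypothetical names, a Python-flavored project purely for illustration):

    app/
      users/
        handlers.py       # request handlers for the Users feature
        models.py         # models owned by this feature
        service.py        # profile CRUD and related logic
        test_users.py     # unit tests live next to the code they test
      roles/
        handlers.py
        models.py
        test_roles.py

versus the type-oriented alternative (controllers/, models/, tests/ at the top level), where one feature's code is smeared across the whole tree.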
thomasikzelf · 7h ago
Writing simple code is also much harder than writing complicated code. If you write complicated code at the limit of your mental capabilities, you cannot debug it; but you might also not be smart enough to write the simple code.
I guess this means that one should solve problems appropriate to one's skill level.
api · 6h ago
My #1 belief about engineering, and one I harp on constantly, is that simplicity is harder than complexity.
I use it as a heuristic. If my work is getting more complex, it's a warning sign that I might be doing something wrong or using the wrong approach. If it's getting simpler it means I might be headed in the right direction.
vjvjvjvjghv · 4h ago
“Just” writing simple code is not well defined. Sometimes it’s about avoiding abstractions, sometimes it’s about creating the right abstractions.
I guess it’s best to take a look at the code once something works and then see if it can be simplified. A lot of people seem to skip that step.
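A toy example of that second pass in Python (made-up data shape, purely to illustrate):

    from functools import reduce

    orders = [{"qty": 2, "price": 5.0, "paid": True},
              {"qty": 1, "price": 9.0, "paid": False}]

    # First draft: "clever", and it works
    total = reduce(lambda acc, o: acc + (o["qty"] * o["price"] if o["paid"] else 0), orders, 0)

    # After the simplification pass: same result, readable when tired
    total = 0
    for order in orders:
        if order["paid"]:
            total += order["qty"] * order["price"]

Both are correct; only one of them is pleasant to debug.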
flykespice · 5h ago
This resonates so much with my upbringing.
When I was a kid learning programming, I would skim through the whole book teaching Python and type code using as many keywords as I had learned each day, just to boast to my parents and my non-programmer peers about the obfuscated mess that came out.
As I grew, I started contributing to other open-source projects and came across every kind of unmaintainable spaghetti code; sometimes I just gave up contributing to a project because of it. That is when I became conscious of, and zealous about, keeping code as simple as possible, so that the next person who comes along to change it doesn't have as much trouble understanding it, even when that person is me revisiting the code later.
That altruistic mindset of caring how others will read your code isn't acquired easily, unless you have experienced firsthand what your predecessors felt.
lukan · 7h ago
Thank you, I will be using this.
Not sure if there is actually data behind the assumption that debugging takes twice the smarts of writing the code, but it sounds about right.
And also: if you write the smartest code you can while you are at your peak, you won't be able to read it when you are just a bit tired.
So yes, yes, yes. Just write simple code.
(I was also initially messed up a bit by teachers filling me with the idea to aim for clever code.)
EGreg · 7h ago
what exactly is "the smartest code you can" :)
brudgers · 7h ago
Actual quote:
Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
OnionBlender · 7h ago
Known as Kernighan's Law.
havefunbesafe · 7h ago
In this case, it sounds like "The code that will most impress your parents"
Has anyone read The Programmer's Brain (https://www.manning.com/books/the-programmers-brain) and have an opinion about it? I'd like to improve my ability to read and understand code and was thinking about reading it.
This is not unique to the age of LLMs. PR reviews are often shallow because the reviewer is not giving the contribution the amount of attention and understanding it deserves.
With LLMs, the volume of code has only gotten larger, but those same LLMs can help review the code being written. The current code review agents are surprisingly good at catching errors. Better than most reviewers.
We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.
gyomu · 8h ago
> We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.
The real breakthrough would be finding a way to not even do things that don’t need to be done in the first place.
90% of what management thinks it wants gets discarded/completely upended a few days/weeks/months later anyway, so we should have AI agents that just say “nah, actually you won’t need that” to 90% of our requests.
Bukhmanizer · 4h ago
> We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.
This seems silly to me. In most cases, the least amount of work you can possibly do is logically describe the process you want and the boundaries, and run that logic over the input data. In other words, coding.
The idea that, to avoid coding or reading code, we should come up with a whole new process to keep generated code on track would almost certainly take more effort than just getting the logical incantations correct the first time.
foxfired · 8h ago
One thing to take into account is that PR reviews aren't there just to catch errors in the code. They also ensure that the business logic is correct. For example, you can have code that passes all the tests and looks good, but doesn't align with the business logic.
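A miniature, made-up example of that gap, with a hypothetical business rule that a price can never go negative:

    def apply_discount(price, discount):
        return price - discount

    def test_apply_discount():
        assert apply_discount(100, 20) == 80  # green

    test_apply_discount()
    print(apply_discount(10, 50))  # -40: no test failed, but the business rule did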
vjvjvjvjghv · 4h ago
“ With LLMs, the volume of code has only gotten larger ”
It’s even worse with offshore devs. They produce a ton of code you have to review every morning.
prybeng · 7h ago
I wonder if the paradigm shift is the adoption of a higher-level language, akin to what Python did by black-boxing C libraries.
fzeroracer · 4h ago
Can you define what an "error" is?
ppeetteerr · 3h ago
Logic error, for instance
fzeroracer · 3h ago
Well, it depends on the logic error doesn't it? And it depends on how the system is intended to behave. A method that does 2+2=5 is a logic error, but it could be a load-bearing method in the system that blows up when changed to be correct.
Something like blowing up the stack or going out of bounds is more obviously a bug, but detecting those will often require inferences from how the code behaves at runtime. LLMs might work for detecting the most basic cases, because those appear most often in their data set, but whenever I see people suggest that they're good at reviewing, I think it comes from people who don't deeply review code.
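In sketch form, an invented example of such a load-bearing logic error:

    def batches_needed(n_items):
        # Logic error: off by one whenever n_items is an exact multiple of 10
        return n_items // 10 + 1

    # ...except three callers now rely on the spare batch for headroom, so
    # "fixing" this to math.ceil(n_items / 10) is correct arithmetic and an
    # incorrect change to the system.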
HuwFulcher · 15h ago
This is a challenge which I don't think AI tools like Cursor have cracked yet. They're great for laying "fresh pavement" but it's akin to being a project manager contracting the work out.
Even if I use Cursor (or some other equivalent) and review the code, I find my mental model of the system is much weaker. It actually had a net negative effect on my productivity, because it gave me anxiety about going back to the codebase.
If an AI tool could help a user interactively learn the mental model I think that would be a great step in the right direction.
catigula · 10h ago
An AI tool can both navigate a legacy codebase and help explain it to you successfully, right now, if you're doing it correctly.
I've contracted this kind of understanding of pieces/intellectual work out to Claude Code many, many times, successfully.
HuwFulcher · 9h ago
Yes, it's definitely possible now. My point was that people need to move past "vibe coding" to using the AI as what it should be: an assistant.
lukaslalinsky · 7h ago
I don't think people are realistically "vibe coding" production apps. You see a lot of hype, and I'm sure there is a growing number of people without software development skills using Claude Code, but those are not the people writing important software. Even the people from Anthropic encourage programmers to first use CC as an analytics/research tool, not for writing code. Once you get a feel for what it can do, you will feel comfortable feeding it chunks of work you want to delegate, but realistically never more than you can review. Yes, you can ask it to build the next Instagram, but you will very quickly find out that it's not going to work.
ivape · 9h ago
> but it's akin to being a project manager contracting the work out.
And that's probably the difference between those who are okay with vibe coding and those who aren't. A leader of a company that doesn't care about code quality (elegant code, good tradeoffs, etc.) would never have cared whether 10 monkeys output the code pre-AI or 10 robot monkeys output the code with AI. It's only a developer, of a certain type, who would care to say "pause" in either of those situations.
Out of principle I would not share or build coding tools for these people. They literally did not care all these years about code quality, and the last thing I want to do is enable them on any level.
woah · 4h ago
Or maybe if you are good at delegating and reviewing code and stepping in to do a deep dive by hand when needed to maintain understanding, then you can use LLMs to greatly increase your speed.
marcosdumay · 7h ago
Well, now they can have their way without a pesky developer second-guessing their every decision.
I don't want to participate in it either, but I'm glad they'll have the chance to make things their way, with all the consequences that brings, unfiltered.
ottaborra · 4h ago
I think the same could be said of anything resembling technical writing. As an example aside from code, I think more than half of the machine learning papers out there are horribly written, in the sense that they rush a point or give no rhyme or reason for certain parts.
And the best part: most people shallow-read all of them and decide the details are needless, until they are forced to deal with the details and their understanding falls apart in front of them.
freed0mdox · 8h ago
I have the opposite experience. After years in appsec and pentesting, I can read any codebase and quickly understand its parts, but I wouldn't be able to write anything of production quality. LLMs speed the comprehension process up for me even further. I guess it comes down to practice: if you practice reading code, you get good at reading code.
GuB-42 · 5h ago
Maybe you are used to reading high-quality code. I suspect that the simple fact that you are auditing some code means that someone actually cares, making it higher quality than average.
High quality code is generally hard to write and easy to read.
dingnuts · 8h ago
Reading production code that is known to work can be done with faith and skimming. You don't have to understand every function call, because each one has been tested and battle-hardened, so it's easy to get an overview of what is happening.
LLM code is NOT like this at all; it's like a skilled liar writing something that LOOKS plausible, because that's what they're trained to do.
People like you do not have the ability to evaluate the LLM output; it's not the same as reading code that was carefully written at ALL. If you think it's the same, that is only evidence that you can't tell the difference between working code and misleading buggy code.
What you've learned to do is read the intent of code. That's fine when it's been written and tested by a person. It's useless when it comes to evaluating LLM slop.
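A contrived example of what "looks plausible" means in practice; this reads like validation, skims like validation, and rejects nothing:

    def is_valid_username(name):
        # Skims as: length check, then character whitelist
        if len(name) < 3 or len(name) > 20:
            return False
        for ch in name:
            if ch.isalnum() or ch == "_":
                continue
        # The loop never acts on a bad character, so ANY string of
        # length 3-20 passes, spaces and punctuation included
        return True

Reading for intent tells you it validates usernames. Reading for behavior tells you it doesn't.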
You're being gaslit.
freed0mdox · 6h ago
Code is code; it's not a piece of art where we can all have different perspectives about what it means or does. So from an appsec perspective it doesn't matter who wrote it, just what it does. Also, you seem to be interpreting "reading" as one would read a novel, but here "reading" is about finding and exploiting security flaws. So yeah, dunno what you are on about.
danielmarkbruce · 8h ago
You are being gaslit if you think "production code that is known to work" covers any reasonable proportion of code in production.
dingnuts · 8h ago
Well played, but of course, inevitably, whatever it's doing in production (whether to spec or not) is "working" for somebody.
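Obligatory XKCD https://xkcd.com/1172/ "Workflow" reference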
I am really bad at reading code, to be honest (especially other people's code). Any tips on how I can go about becoming good at this, starting from baby steps?
Night_Thastus · 9h ago
Practice, context and domain-specific knowledge.
#1 is easy, #2 requires some investigation, #3 requires studying.
If you're looking at, say, banking code, but you know nothing about finance, you may struggle to understand what it's doing. You may want to gain some domain expertise; being an SME makes reading the related code a heck of a lot easier.
Context comes down to learning the code base. What code calls the part you're looking at? What user actions trigger it? Look at the comments and commit messages: what was the intention? This just takes time and a lot of trawling around, looking for patterns and common elements. User manuals and documentation can also help. This part can't be rushed; it just comes from passing over it again and again and again. If you have access to people very familiar with the code, ask them! They may be able to kick-start your intro.
#1 will come naturally with time.
rramadass · 1h ago
Very good advice!
To add to the above: IME, #3 comes first. Study the domain to understand the concepts and their relationships. Read some books/articles, watch some videos, read documentation, etc. to come up to speed on the terminology/jargon and the general concepts/ideas. Then, in order to understand their mapping to the specific application at hand, sit with the local "guru" (there is always at least one in every group) and pick his/her brain for a few sessions (getting them brown-bag lunches works great for this) on the overall architecture of the system. Next, sit with testing and use the app as an end user to understand use-case scenarios, which brings all of the above together.
During all the above stages, take copious notes; draw diagrams/graphs/etc.; use source-code analysis/documentation/browsing tools (e.g. doxygen/cscope/opengrok) to navigate the codebase and cement understanding. Note also that the above stages are to be done both iteratively and in parallel, until you are somewhat comfortable; you don't necessarily need to know/understand everything.
With the above in hand, pick one use-case scenario, preferably the most complicated, critical, and important one, and walk through the code from beginning to end for that path. Remember that you are trying to get the overall picture, and hence treat all irrelevant details as black-box abstractions during the initial phases. Over time, as you iterate and review the code again and again, you can slowly add in the details for a more comprehensive understanding.
Finally, there is no shortcut to the above; it takes time and self-effort. We humans are natural-born, trial-and-error, continuous-learning problem solvers, so trust your intelligence and common sense to find a path forward when stuck at something.
lukaslalinsky · 7h ago
Read code, read code, read code. You will get better.
When looking at a piece of code, keep asking questions like: what does this return, what are the side effects, what can go wrong, what happens if this goes wrong, where do we exit, can this get stuck, where do we close/save/commit this, what's the input, what if the input is wrong/missing, where are we checking if the input is OK, can this number underflow/overflow, etc
All these questions are there to complete the picture, so that instead of function calls and loops, you are looking at the graph of interconnected "things". It will become natural after some time.
It helps if you read the code with some interest, e.g. if you want to find a bug in an open source project that you have never seen the code for.
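For instance, running a few of those questions over a trivial, made-up snippet:

    import json

    def load_timeout(path):
        f = open(path)          # what if the file is missing? where is f closed?
        data = json.load(f)     # what if it's malformed JSON? where do we exit?
        return data["timeout"]  # what if the key is absent? what type comes back?

Three lines of code, half a dozen questions; answering them is what turns the snippet into part of the graph.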
aeturnum · 9h ago
I find it useful to open the code in an editor and make running notes in the comments about what I think the state should be. As long as the code has good tests you can use debugging statements to confirm your understanding.
As a bonus you can just send that whole block of code - notes and all - to a colleague if you get stuck. They can read through the code and your thoughts and give feedback.
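Something like this; the function is made up, the notes are the point:

    def dedupe(events):
        # NOTE(me): I believe `events` is sorted by "ts" when we get here
        assert all(a["ts"] <= b["ts"] for a, b in zip(events, events[1:]))  # confirm it
        out = []
        for e in events:
            # NOTE(me): guessing duplicates share an "id"; the tests should confirm
            if not out or out[-1]["id"] != e["id"]:
                out.append(e)
        return out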
danielmarkbruce · 8h ago
Use an LLM.
It's not a joke answer. This entire article is silly. LLMs are great for helping you understand code.
__alias · 7h ago
Agreed!
Especially getting them to generate sequence/flow charts; I find that's a hack for figuring out how everything fits together.
Claude Code is fantastic at quickly tracing through code and building visualisations of how code works together.
ivanjermakov · 7h ago
For me the trickiest part is getting how code is interconnected (class composition, abstraction with functions, module dependencies, etc.).
Code navigation should be instant and effortless. Get good tooling and train muscle memory for it.
gerad · 9h ago
When you're debugging issues, read the code for the libraries you're using before going to their documentation. It's a great way to get exposed to other people's code.
__alias · 7h ago
It's far easier to read diagrams than it is to read code.
To build a good mental model, I'll often get an LLM to generate a few Mermaid diagrams of how everything pieces together.
maxverse · 8h ago
You're not alone!
hashbig · 9h ago
Like everything else: practice. I like to clone the repositories of open-source tools I use and try to understand how a particular feature is built end to end. I find that reading code aimlessly is not that helpful; try to read it with a goal in mind. When starting out, pick a tool/application that is very simple, and lean on LLMs to explain only the bits you don't understand.
vjvjvjvjghv · 4h ago
I find reading code mentally much more draining than writing it. I admire open-source maintainers who mostly handle pull requests from others; this must be very hard. Linux comes to mind here. I assume Torvalds and the other maintainers don't get to write much code themselves.
bcrosby95 · 4h ago
Basically, when you read code you're mentally doing the same thing as when you write code, but your mental model is more likely to be wrong when you're reading than when you're producing the code.
jmsfltchruk · 3h ago
Struggling once to build an algorithm inside a big architecture, I felt the same way: I just couldn't keep the current state of the code in my head and imagine the new solution at the same time. Ever since, I have been working on http://etchpad.dev to solve this pain (shameless plug). We're trying to make an interface that works with an engineering brain, so we can think in terms of blueprints and wiring; just something more tangible than hallucinating wildly in a multidimensional space like we have to right now.
I really agree with the article. Ultimately the typing and thinking speed issues can be solved with AI, but trusting it and auditing what it does seems like a job for humans for the foreseeable future; you know, so maybe we avert a classic sci-fi AI apocalypse and whatnot.
tptacek · 9h ago
This article makes points that are valid in general but not apposite to code generation with agents.
It is indeed difficult to verify a piece of code that is either going to ship as-is, or with the specific modifications your verification identifies. In cryptography engineering the rule of thumb (never followed in practice) is 10x verification cost to x implementation cost. Verification is hard and expensive.
But qualifying agent-generated code isn't the verification problem, in the same way that validating an eBPF program in the kernel isn't solving the halting problem.
That's because the agentic scenario gives us an additional outcome: we can allow the code as-is, we can make modifications to the code, or we can throw out the code and re-prompt --- discarding (many) probably-valid programs in a search for the subset of programs that are easy to validate.
In practice, most of what people generate is boring and easy to validate: you know within a couple minutes whether it's the right shape, whether anything sticks out the wrong way, the way a veteran chess player can quickly pattern-match a whole chessboard. When it isn't boring, you read carefully (and expensively), or you just say "no, try again, give me something more boring", or you break your LLM generation into smaller steps that are easier to pattern match and recurse.
What professionals generally don't (I think) do with LLMs is generate large gnarly PRs, all at once, and then do a close-reading of those gnarly PRs. They read PRs that are easy (not gnarly). They reject the gnarly ones, and compensate for gnarliness by approaching the problem at a smaller level of granularity. Or, you know, just write those bits by hand!
jameskilton · 9h ago
Software's job is to tell other people what the computer is doing.
ivanjermakov · 7h ago
Great saying. When I got into the industry I was surprised by how programmers are just a small fraction of a company's head count.
jongjong · 5h ago
Makes sense. Once you become really good at writing code, it becomes increasingly obvious that the real challenge of software development is the social problem of pointing out subtle contradictions in requirements and suggesting resolutions/trade-offs in a way which earns you respect instead of hatred.
With some stakeholders, this is an almost impossible problem; sometimes this is because they lack vision and so their requirements are littered with impossible contradictions; other times, their ego is too big to accommodate any kind of push-back; even if you try to drip-feed the suggestions as gently as possible, they begin to resent you because they start to associate you with negative feelings such as self-doubt.
Schopenhauer explained this phenomenon succinctly:
"A man must be still a greenhorn in the ways of the world, if he imagines that he can make himself popular in society by exhibiting intelligence and discernment. With the immense majority of people, such qualities excite hatred and resentment, which are rendered all the harder to bear by the fact that people are obliged to suppress — even from themselves — the real reason of their anger. What actually takes place is this. A man feels and perceives that the person with whom he is conversing is intellectually very much his superior. He thereupon secretly and half unconsciously concludes that his interlocutor must form a proportionately low and limited estimate of his abilities. That is a method of reasoning — an enthymeme — which rouses the bitterest feelings of sullen and rancorous hatred."
This is a really big problem because people who attain management positions are often very good at understanding and then manipulating what other people think about them; this is how they were able to rise to their current ranks. They are exactly the kinds of people who build these reflective mental maps/models of who thinks what about them; and they are good at plotting against those people who they believe may harbor negative thoughts about them.
m3kw9 · 7h ago
With LLMs you need to read well; they can introduce "later bugs".
flykespice · 6h ago
"ChatGPT write me some unit tests for this piece of code you just generated"
yoyohello13 · 5h ago
"Proceeds to write tests that pass, but don't actually test the underlying functionality."