Cognitive load is what matters

1147 nromiun 436 8/30/2025, 12:58:48 PM github.com ↗

Comments (436)

defanor · 3h ago
I think most programmers agree that simpler solutions (generally matching "lower cognitive load") are preferred, but the disagreements start about which ones are simpler: often a lower cognitive load comes with approaches one is more used to, or familiar with; when the mental models one has match those in the code.

For instance, the article itself suggests to use early/premature returns, while they are sometimes compared to "goto", making the control flow less obvious/predictable (as paxcoder mentioned here). Intermediate variables, just as small functions, can easily complicate reading of the code (in the example from the article, one would have to look up what "isSecure" means, while "(condition4 && !condition5)" would have shown it at once, and an "is secure" comment could be used to assist skimming). As for HTTP codes, those are standardized and not dependent on the content, unlike custom JSON codes: most developers working with HTTP would recognize those without additional documentation. And it goes on and on: people view different things as good practices and being simpler, depending (at least in part) on their backgrounds. If one considers simplicity, perhaps it is best to also consider it as subjective, taking into account to whom it is supposed to look simple. I think sometimes we try to view "simple" as something more objective than "easy", but unless it is actually measured with something like Kolmogorov complexity, the objectivity does not seem to be there.

Cthulhu_ · 15m ago
Likewise, some people prefer ternary statements for short checks; I want to agree because ternaries are one of the first things you learn after if/else/while/for, but at the same time... they're a shorthand, and shorthand is short but not necessarily more readable.

For one-off things like value = condition ? a : b I don't mind much, but I will make an issue as soon as it spans more than one line or if it's nested.

whilenot-dev · 1h ago
> one would have to look up what "isSecure" means, while "(condition4 && !condition5)" would have shown it at once

You would feel the need to look up a variable called isSecure, but would not need to look up condition4 or condition5? I think the point TFA was making is that one could read isSecure and assume what kind of implementation to expect, whereas with condition4 I wouldn't even know what to look for, or I'd even struggle to hold any assumption.

  /* this one needs to make sense in the end */
  isSecure = user.role == 'admin'
  
  /* these two do not */
  condition4 = user.id <= 4
  condition5 = session.licenseId == 5
> and an "is secure" comment could be used to assist skimming

Those are exactly the kind of comments I'd rather see written out as intermediate variables. Such comments are not explaining to you any of the Why?s anyway, and I also tend to trust the executing code much more than any non-type-related annotating code, as comments are rarely helpful and sometimes even describe wishful thinking by straight-up lying.

Intermediate variables assist in skimming too.

defanor · 1h ago
> You would feel the need to look up a variable called isSecure, but would not need to look up condition4 or condition5?

I assume that those "conditions" are placeholders, not to be read literally in the example (since the example is not about poorly named variables, but about complex conditions), so I did not mean them literally, either. Supposedly those would be more informative names, such as "channel_encrypted", "checksum_verified".

> [...] describe wishful thinking by straight-up lying

This was what I had in mind upon seeing that "isSecure" bit, too: could easily be a lie (or understood differently by different people). But taking a little more effort to check then, and/or having to remember what those variables actually mean. It is a commonly debatable topic though, where the good balance is, similarly to splitting code into small functions: people tend to balance between spaghetti code and extreme splitting.

My point though is not to argue with those particular points here, but that we have no such practices/rules universally considered simple and formally stated/verifiable.

brucehoult · 2h ago
> For instance, the article itself suggests to use early/premature returns

I like premature returns and think they reduce complexity, but as exclipy writes (I think quoting Ousterhout) 'complexity is defined as "how difficult is it to make changes to it"'.

If premature returns are the only premature exit your language has then they add complexity in that you can't then add code (in just one place) that is always executed before returning.

A good language will also have "break" from any block of code, such that the break can also carry a return value, AND the break can be from any number of nested blocks, which would generally mean that blocks can be labelled / named. And also of course that any block can have a return value.

So you don't actually need a distinguished "return" but only a "break" that can be from the main block of the function.

A nice way to do this is the "exit function", especially if the exit function is a first class value and can also exit from called functions. (of course they need to be in a nested scope or have the exit function passed to them somehow).

It is also nice to allow each block to have a "cleanup" section, so that adding an action to happen on every exit doesn't require wrapping the block in another block, but this is just a convenience, not a necessity.

Note that this is quite different to exception handling try / catch / finally (Java terms) though it can be used to implement exception handling.

derf_ · 2h ago
> A good language will also have "break" from any block of code, such that the break can also carry a return value, AND the break can be from any number of nested blocks, which would generally mean that blocks can be labelled / named. And also of course that any block can have a return value.

Even in a language that is not "good" by your definition... you have basically just described a function. A wrapper function around a sub-function that has early returns does everything you want. I use this pattern in C all of the time.

brucehoult · 1h ago
It is more inconvenient to make a wrapper function for a function than to make a wrapper block for a block, especially in C where you can't lexically nest functions (not counting GNU extensions).

Naturally all programming languages are equivalent, but some are more convenient than others. See the title of this post "Cognitive load is what matters".

ppsreejith · 1h ago
A lot of comments mention John Ousterhout's book Philosophy of software design and it's definition of complexity of a system being cognitive load (I.e the number of disparate things one has to keep in mind when making a change). However IIRC from the book, complexity of a system = Cognitive load * Frequency of change.

The second component, frequency of change is equally important as when faced with tradeoffs, we can push high cognitive load to components edited less frequently (eg: lower down the stack) in exchange for lower cognitive load in the most frequently edited components.

exclipy · 14h ago
This was my main takeaway from A Philosophy Of Software Design by John Ousterhout. It is the best book on this subject and I recommend it to every software developer.

Basically, you should aim to minimise complexity in software design, but importantly, complexity is defined as "how difficult is it to make changes to it". "How difficult" is largely determined by the amount of cognitive load necessary to understand it.

YZF · 11h ago
The problem is no set of rules can replace taste, judgement, experience and intuition. Every rule can be used to argue anything.

You can't win architecture arguments.

I like the article but the people who need it won't understand it and the people who don't need it already know this. As we say, it's not a technical problem, it's always a people and culture problem. Architecture just follows people and culture. If you have Rob Pike and Google you'll get Go. You can't read some book and make Go. (whether you like it or not is a different question).

safety1st · 7h ago
The approach that I am trialing with my team now, so far to good results, is as follows.

* Our coding standards require that functions have a fairly low cyclomatic complexity. The goal is to ensure that we never have a a function which is really hard to understand.

* We also require a properly descriptive header comment for each function and one of the main emphases in our code reviews is to evaluate the legibility and sensibility of each function signature very carefully. My thinking is the comment sort of describes "developer's intent" whereas the naming of everything in the signature should give you a strong indication of what the function really does.

Now is this going to buy you good architecture for free, of course not.

But what it does seem to do is keep the cognitive load manageable, pretty much all of the time these rules are followed. Understanding a particular bit of the codebase means reading one simple function, and perhaps 1-2 that are related to it.

Granted we are building websites and web applications which are at most medium fancy, not solving NASA problems, but I can say from working with certain parts of the codebase before and after these standards, it's like night and day.

One "sin" this set of rules encourages is that when the logic is unavoidably complex, people are forced to write a function which calls several other functions that are not used anywhere else; it's basically do_thing_a(); do_thing_b(); do_thing_c();. I actually find this to be great because it's easy to notice and tells us what parts of the code are sufficiently complex or awkward as to merit more careful review. Plus, I don't really care that people will say "that's not the right purpose for functions," the reality is that with proper signatures it reads like an easy "cliffs notes" in fairly plain English of exactly what's about to happen, making the code even easier to understand.

hakunin · 4h ago
I found this type of approach (where you try to meet subjective readability goals with objective/statistical metrics) to not produce clear code in practice. Instead, I suggest this one weird trick: if your colleagues are confused in code review, then rewrite and comment the code until they aren't confused anymore. Don't just explain it to them ad-hoc, make the code+comments become the explanation. There is no better linter than subjective reading by your colleagues. Nothing else works nearly as well. Optimize to your team's understanding, that's it. Somehow, this tends to keep working great even as the team changes.
necovek · 4h ago
It's one message I struggle to convey to people I do code reviews for: don't make me understand it, make it more self explanatory so every reader does. (And, yes, I ask for it explicitly too)

(I sometimes "ask" questions for something it took me a few back and forths through code to get so they'd think about how it could be made clearer)

Unfortunately, most people focus on explaining their frame of mind (insecurity?) instead of thinking how can they be the best "teacher".

hakunin · 3h ago
Yeah, not easy, but it helps to build some rapport first, so people learn what you’re after. The way I tend to do that is by leaving a review comment with an example code snippet that makes me understand it better, and a question “what do you think about this version? I tried to clarify a few things here.”. + Explain what was clarified. I find the effort usually pays off.
necovek · 1h ago
I found that to be a double edged sword: some copy and paste it verbatim without thinking it through and adjusting at all.

It's a delicate balance we need to keep in mind between many of:

- maintainable code

- getting things done

- feeling of accomplishment

- feedback loop speed

- coaching

- motivation and emotional state ("why are they pestering me, the code works, I just want to feel productive and useful: this was hard enough to get right as it is")

...and more!

At the same time, some do get the point, but getting readable code is really an art/craft in itself, and nothing but experience and learning to look at it from outside is the main driver to learning.

radiator · 2h ago
But this might require too much effort from the reviewer
cncjchsue7 · 5h ago
This sounds like hell to me.

Not everything is complicated, most functions don't need comments, why require it? Just fix complexity when it arises. Don't mandate that you can't make any complexity.

bornfreddy · 4h ago
Agreed. If you need a comment to tell you what the function does, you should think deep about naming, and if this fails, consider if this is the correct abstraction. Comments are a way to kick the can down the road - "I was unable to make this code clear enough, so here is the hint to help you".

Edit: sometimes the comments are the best of all evils, and you should use them to explain the constraints that led to this code - they just shouldn't be mandatory.

cco · 4h ago
What is a function supposed to do and why?
serpix · 1h ago
These points are about organising code and workflow. Even if you have organised your functions to the lowest possible unit of work you can still have a mess of async queue microservice hell which is the actual architecture.

Architecture is another topic entirely and the scope is higher abstractions across multiple systems.

awesome_dude · 5h ago
> Our coding standards require that functions have a fairly low cyclomatic complexity. The goal is to ensure that we never have a a function which is really hard to understand.

https://github.com/fzipp/gocyclo

> * We also require a properly descriptive header comment for each function and one of the main emphases in our code reviews is to evaluate the legibility and sensibility of each function signature very carefully. My thinking is the comment sort of describes "developer's intent" whereas the naming of everything in the signature should give you a strong indication of what the function really does.

https://github.com/mgechev/revive

> Now is this going to buy you good architecture for free, of course not.

It's not architecture to tell people to comment on their functions.

Also FTR, people confuse cyclomatic complexity for automagically making code confusing to the weirdest example I have ever had to deal with - a team had unilaterally decided that the 'else' keyword could never be used in code.

arbol · 4h ago
I can understand why else is sometimes not needed. JS linters will remove unnecessary else statements by default.

https://eslint.org/docs/latest/rules/no-else-return#rule-det...

But never using it is crazy.

awesome_dude · 4h ago
In a similar vein to how I just responded to the other person, maybe eventually we'll abstract `else` away so that it's use is hidden, and the abstraction ensures that it's only being used where we all collectively decide it can/should be used.
jonahx · 5h ago
> he weirdest example I have ever had to deal with - a team had unilaterally decided that the 'else' keyword could never be used in code.

Not weird at all:

https://medium.com/@matryer/line-of-sight-in-code-186dd7cdea...

awesome_dude · 4h ago
Well, I found it weird - the else keyword has been a stalwart of programming for... several decades now.

Maybe one day we will abstract it away like the goto keyword (goto is a keyword in Go, and other languages still, but I have only seen it used in the wild once or twice in my 7 or 8 years of writing Go)

Goto is still used in almost every language, but it's abstracted away, hidden in loops, and conditionals (which Djikstra said was a perfectly acceptable use of goto), presumably to discourage its direct use to jump to arbitrary points in the code

bogdanoff_2 · 8h ago
>the people who need it won't understand it

That's not true. There's plenty of beginner programmers who will benefit from this.

mnsc · 3h ago
> You can't win architecture arguments.

I feel this in my soul. But I'm starting to understand this and accept it. Acceptance seem to lessen my frustration on discussing with architects that seemingly always take the opposite stance to me. There is no right or wrong, just always different trade offs depending on what rule or constraint you are prioritizing in your mind.

berkes · 3h ago
I've found that listening and asking questions is the key to accepting other people's architectural choices.

Why do they insist on A over B? What trade offs were considered? Why are these trade offs less threatening than other trade offs? What previous failures or difficulties led them to put such weight on this problem over others?

Sometimes it's just ego or stubbornness or routine¹. That can and should be dismissed IMO. Even if through these misguided reasons they choose the "right" architecture, even if the outcome turns out good, that way of working is toxic and bad for any long term project.

More often, there are good, solid reasons behind choices, though. Backed with data or science even. Things I didn't know, or see different, or have data and scientific papers for that "prove" the exact opposite. But it doesn't matter that much, as long as we all understand what we are prioritizing, what the trade offs are and how we mitigate the risks of those trade offs, it's fine.

¹ The worst, IMO, is the "we've always done it like this" trench. An ego can be softened or taken off the team. But unwillingness to learn and change, instilled in team culture is an almost guaranteed recipe for disaster

KronisLV · 1h ago
> Acceptance seem to lessen my frustration on discussing with architects that seemingly always take the opposite stance to me. There is no right or wrong, just always different trade offs depending on what rule or constraint you are prioritizing in your mind.

That’s a stance of acceptance, however I’d say that there are people who are absolutely wrong by most metrics sometimes and also stubborn to the point that you’ll never convince them. Ergo, the frustration is inevitable when faced with them.

ruraljuror · 7h ago
Software developers don’t arrive fully formed. Rob Pike benefitted from reading a book or two.
YZF · 5h ago
Fair enough. But most of the forming is by doing. Someone gave an analogy to music. You can't become a great musician by reading books. Some great musicians have never read a book about music. But yes, reading can be (a great!) part of the learning process. My point was more about rules. The article says things like replacing complex conditionals with intermediate variables. The idea that a certain construct always have higher cognitive load and should be replaced with another is too simplistic IMO.

In order to get a sense of what code is harder to understand you will do better to read code and have others read your code. A good takeaway is to keep this in mind (amongst many other factors) and to understand code needs to be maintained, extended, adapted etc.

The ideas are still useful. The danger is blindly applying rules. As long as the reader knows not to apply any of the suggestions if they don't understand why and have relevant experience ;)

braebo · 11h ago
I’m accustomed to this principle as a musician, so it’s been interesting to see it withstand my journey into software.
dlivingston · 10h ago
Can you expand on this?
hakunin · 8h ago
Not the commenter, but also had experience with making music and writing software. I think the same applies to any creative endeavor. It’s super hard to consume what you produce as “someone else” (I.e. read what you write, listen to what you compose with fresh perspective). Usually it takes time to forget and disassociate from your work, because you get too used to it while producing it. Coming back to it another day can work sometimes, but very quickly you’ll get used to it again. I think this is one of the most effective ways to achieve quality tasteful results in anything. If you can train yourself to read your own code with fresh eyes almost as soon as you write it, you’d be unlocking a powerful shortcut, a cheat code to life. It’ll make the biggest impact on your code’s (and any other creative work’s) quality. This is also why sometimes you can spend hours painstakingly trying to design something, and it comes out terrible, nobody likes it. And you can do something in 20 minutes just improvising your way through, and it comes out an elegant masterpiece. That’s because you never gave yourself time to “get used to” your work such that you couldn’t perceive the problems with it anymore. You maintained that fresh impatient perspective the entire time.
teiferer · 4h ago
> If you can train yourself to read your own code with fresh eyes almost as soon as you write it, you’d be unlocking a powerful shortcut, a cheat code to life.

This is really a key takeaway here: Always keep your audience in mind. When programming, you have two audiences: the machine executing the code, and fellow programmers maintaining the code. Both are important, but the latter is often neglected and is what the article is about. Optimize for your human audience. What will make it easier for the next person to understand this? Do that.

Like public speaking or writing an article. A great talk or a article happen when the speaker/author knew exactly how the audience would perceive them.

hakunin · 4h ago
Agreed, I wrote more in depth about it a few years ago: https://max.engineer/maintainable-code
bb88 · 8h ago
> Every rule can be used to argue anything.

Unless it's a rule prohibiting complexity by removing technologies. Here's a set of rules I have in my head.

1. No multithreading. (See Mozilla's "You must be this high" sign)

2. No visitor pattern. (See grug oriented development)

3. No observer pattern. (See django when signals need to run in a particular order)

4. No custom DSL's. (I need to add a new operator, damnit, and I can't parse your badly written LALR(1) schema).

5. No XML. (Fight me, I have battle scars.)

ferguess_k · 7h ago
> 2. No visitor pattern. (See grug oriented development)

This one is my particular pet-peeve. But I often think that the reason is because I suck. I'm going to read "grug".

I also hate one-liner functions.

bb88 · 7h ago
The real geniuses of our times can convert complexity into simplicity. The subgeniuses use complexity to flex over the common developer.

Sometimes things need to be complex -- well that's okay. The real trick is to not put complexity into places it doesn't belong.

cyberax · 5h ago
Visitor pattern is extremely useful in some areas, such as compiler development.
brabel · 2h ago
That’s only true in languages that do not have Algebraic Data Types and pattern matching, which nowadays is a minority of languages (even Java has it).
cyberax · 2h ago
Visitors additionally allow you to decouple graph traversal from the processing. It is still needed even in the languages with pattern matching.

There's also the question of exhaustiveness checking. With visitors, you can typically opt-in to either checking that you handle everything. Or use the default no-ops for anything that you're not interested in.

So if you look at compilers for languages with pattern matching (e.g. Rust), you still see... visitors! E.g.: https://github.com/rust-lang/rust/blob/64a99db105f45ea330473...

zakirullin · 10h ago
> I like the article but the people who need it won't understand it

That's true. One doesn't change his mindset just after reading. Even after some mentorship the results are far from satisfying. Engineers can completely agree with you on the topic, only to go and do just the opposite.

It seems like the hardest thing to do is to build a feedback loop - "what decisions I made in past -> what it led to". Usually that loop takes a few years to complete, and most people forget that their architecture decisions led to a disaster. Or they just disassociate themselves.

wreath · 3m ago
In an industry where most people stay for around 2 years (at least pre 2022), people arent even there to see the results of their decisions.
lll-o-lll · 7h ago
One of the big troubles is that if you join a big org you won’t get to do any architecture until you are at least “senior” or “lead”. Maybe that’s not true everywhere, but I have seen a fair bit of it. You need several iterations of “I built a thing” “oh, the thing evolved in horrible ways”, before the instincts for good architecture are developed.

I think Big Orgs need to develop younger promising talent by letting them build small green fields projects. Essentially fostering startups inside the organisation proper. Let them build and learn from mistakes (while providing the necessary knowledge; you can actually learn most of this from books, but experience is the ultimate teacher). Otherwise you end up with 5 year experienced people who cannot design themselves out of a paper bag.

lokar · 11h ago
I found the book helpful as a way to organize and express what I already knew
bsenftner · 13h ago
Which is why I consider DRY (Don't Repeat Yourself) to be an anti-rule until an application is fairly well understood and multiple versions exist. DO repeat yourself, and do not create some smart version of what you think the problem is before you're attempting the 3rd version. Version 1 is how you figure out the problem space, version 2 is how you figure out your solution as a maintainable dynamic thing within a changing tech landscape, and version 3 is when DRY is look at for the first time for that application.
zahlman · 13h ago
DRY isn't about not reimplementing things; it's about not literally copying and pasting code. Which I have seen all the time, and which some might find easier now but will definitely make the system harder to change (correctly) at some point later on.
ryeats · 12h ago
This is a trap junior devs fall into DRY isn't free it can be premature optimization since in order to avoid copying code you often add both an abstraction AND couple components together that are logically separate. The issues are at some point they may have slightly different requirements and if done repeatedly you can get to a point that you have all these small layers of abstraction that are cross cutting concerns and making changes have a bigger blast radius than you can intuit easily.
zahlman · 12h ago
If you notice that two parts of the code look similar, but have a good reason not to merge or refactor, that deserves a signpost comment.

If you're copying and pasting something, there probably isn't a good reason for that. (The best common reason I can think of is "the language / framework demands so much boilerplate to reuse this little bit of code that it's a net loss" — which is still a bad feeling.)

If you rewrite something without noticing that you're doing so, something has definitely gone wrong.

If a client's requirements change to the point where you can't accommodate them in the nicely refactored function (or to the point where doing so would create an abomination) — then you can make the separate, similar looking version.

chipsrafferty · 8h ago
I don't think it's as cut and dry as that. In my team we require 100% test coverage. Every file requires an accompanying test file, and every test file is set up with a bunch of mocks.

Sure, we could take the Foo, Bar, and Baz tables that share 80-90% of common logic and have them inherit from a common, shared, abstract component. We've discussed it in the past. Maybe it's the better solution, maybe not. But it would mean that instead of maintaining 3 component files and 3 test file, which are very similar, and when we need to change something it is often a copy-paste job, instead we'd have to maintain 2 additional files for the shared component, and when that has to change, it would require more work as we then have to add more to the other 3 files.

Such setups can often cause a cascade of tests that need updated and PRs with dozens of files changed.

Also, there are many parts of our project where things could be done much better if we were making them from scratch. But, 6 years of changing requirements and new features and this is what we have - and at this point, I'm not sure that having a shared component would actually make things easier unless we rewrite a huge amount of the codebase, for which there is no business reason.

matijsvzuijlen · 5h ago
I can understand requiring 100% test coverage, but it seems to me that requiring a test file for every file is preventing your team from doing useful refactoring.

What made your team decide on that rule? Could your team decide to drop it since it hinders improving the design of your code?

smallnamespace · 11h ago
> If you're copying and pasting something, there probably isn't a good reason for that.

I would embrace copying and pasting for functionality that I want to be identical in two places right now, but I’m not sure ought to be identical in the future.

fenomas · 9h ago
All my younger colleagues have heard my catchphrase:

Copy-paste is free; abstractions are expensive.

wilkystyle · 8h ago
One of the many great takeaways from Sandi Metz's talk at Railsconf 2014: "Duplication is far cheaper than the wrong abstraction."

https://www.youtube.com/watch?v=8bZh5LMaSmE

Worth watching in its entirety, but the quote is from ~13:59 in that video.

drivers99 · 6h ago
The related blog post (I just found thanks to watching that and then searching for her site) is great too: https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction

It explains so much of what has been bothering me about what I work on at work, and now I understand why and some of what to do about it.

rkomorn · 12h ago
The reverse of that is people introducing bugs because code that wasn't DRY enough was only changed in some of the places that needed to be changed instead of all the places.

To me, it's the things that are specifically intended to behave the same should be kept DRY.

pkolaczk · 3h ago
An obvious example of that is defining named constants and referring them by name instead of repeating the same value in N places. This is also DRY and good kind of DRY.
sroerick · 8h ago
This is the correct take - if you're getting this type of bug, it's now past time for DRY
seadan83 · 3h ago
Indeed a trap. I'd say DRY is all about not duplicating logical components. Just because two pieces of code look similar, does not mean they need to be combined.

As an analogy, when writing a book, it's the difference of not repeating the opening plot of the story multiple times vs replacing every instance of the with a new symbol.

kragen · 6h ago
DRY isn't an optimization of any kind, so it can't be a premature optimization. "Premature optimization" is a specific failure mode of programmers, not just a meaningless term you can use to attack anything you don't like. "Optimization" is refactoring to reduce the use of resources (which are specifically cycles and bytes) and it's "premature" when you don't yet know that you're doing it where it matters.

Otherwise I mostly agree.

ori_b · 10h ago
Not copy pasting code also makes it harder to change the system correctly at some point later on, because you transformed a local decision ("does this code do what the caller needs?") onto a global one ("does this code do what any possible caller needs, including across code maintained by other teams?")

There's no one rule. It takes experience and taste to make good guesses, and you'll often be wrong even so.

n4r9 · 3h ago
It depends greatly on the situation. If you have five different methods for fetching WidgetInfo from the database and a requirement comes in to add TextProperty to Widget in all views, you're more likely to accidentally miss one of the places that needed a change.

Likewise if someone notices a bug in the method, you then have to go through and figure out which copies have the same bug, and fix each one, and QA test each one separately.

The proper approach is to make a judgement call based on how naturally generic the method is, and whether or not the existing use cases require custom behaviours of it (now or in the near future).

YZF · 11h ago
But sometimes you should copy and paste code because those difference pieces of code can evolve independently. Knowing when to do this and when not to do this is what we do and no rule can blindly say one way or the other.

Even the most obvious of functions like sin() and cos() may in some circumstances warrant a specialized implementation. Sure, for most stuff you should not have 10 copies of those all over the place. But sometimes you might.

DRY is a bad rule. The more appropriate rule is avoid duplicating code when not doing so results something better. I.e. judgement always trumps rules.

hansvm · 7h ago
A subtlety still exists there. Copy-pasting is fine. What you're trying to prevent with DRY is two physical locations in your codebase referring to the same semantic context (i.e., when you should change "the thing" you have to remember to change "all the places").

Somewhat off-topic, that's one usual failure mode of "DRY" code. Code is de-duplicated at a visual level rather than in terms of relevant semantics, so that changes which should only affect one path either affect both or are very complicated to reason about because of the unnecessary coupling.

nicoburns · 12h ago
Yeah, I've seen codebases where you have several hundred line components copy-pasted multiple times with say 10-20 lines changed, and you literally have to diff the files to find out why there are several.

This is unhelpful even if the design is a complete mess.

stevage · 11h ago
Copying and pasting code is often fine, particularly when you make a change to one of the copies.

Over time I have come to prefer having two near copies that are each more concretely expressive of their task than a more abstract version that caters to both.

MrDarcy · 9h ago
I’ll bite. We’re expanding into Europe. I literally copied and pasted our entire infrastructure into a new folder named “Europe”

Now there’s a new requirement that only applies to Europe and nowhere else and it’s super easy and straight forward to change the infrastructure.

I don’t see how it was a poor choice to literally copy and paste configs that result in hundreds of thousands of lines of yaml and I have 25 yoe.

tasuki · 2h ago
> I don’t see how it was a poor choice to literally copy and paste configs that result in hundreds of thousands of lines of yaml

Perhaps one day you will. I'm a dev who worked with infra people who had your philosophy: many copy pasted config files.

Sometimes I needed to add an env var to a service. Expressing "default to false and only set it to true in these three environments" took changing about 30 files. I always made mistakes (usually of omission), and the infra people only ever caught them at deployment time. It was hell.

chipsrafferty · 8h ago
I think the most common way to approach that problem would be to have a "default config", and overrides. Could you go into more detail about why you didn't do this instead?

Downsides with your approach is:

1. Now whenever you want to change something both in Europe and (assuming) USA you have to do it in 2 places. If the change is the same for both, in my system, you could just update the default/shared config. If the change is different for both it's equally easy, but faster, since the overrides are smaller files.

2. It's not clear what the difference is between Europe and USA if there is 1 line different amongst thousands. If there are more differences in the future, it becomes increasingly difficult to tell the difference easily.

3. If in the future you also need to add Africa, you just compounded the problems of 1. and 2.

MrDarcy · 7h ago
I don’t do this because with a complete copy I get progressive rollouts across regions without the complexity of if statements and feature flags. That is to say, making the change twice is a feature not a bug when the changes are staggered in time.

From an operational perspective it’s much more important to ensure the code is clear and readable during an incident.

Overrides are like inheritance. They are themselves complex and add unnecessary cognitive load.

Composition is better for the common pieces that never change across regions. Think of an import statement of a common package into both the Europe and North America folders.

I easily see the one line diff among hundreds of thousands using… diff.

Regarding Africa, we’ve established 1 is a feature and 2 is a non issue, so I’d copy it again.

This approach scales both as the team scales and as the infrastructure scales. Teammates can read and comprehend much more easily than hierarchies of overrides, and changes are naturally scoped to pieces of the whole.

seadan83 · 2h ago
The "rules" for config are different. Code, test code, and config are different, their complexity scales in different ways of course.

By way of analogy for why the two configs are different, for example Two beaches are not the same because they both have very similar sand.

You really have two different configs.. You also have one set of configs. You didn't set up an application that also fetches some config that is already provided. It would be like having a test flag in both config and database, sane flag - two places.

Where config duplication goes bad is when repeatedly the same change is made across all N, local variations have to be reconciled each time and it is N sets of testing you need to do. Something like that in code is potentially more complex, more obviously a duplication of a module, just more likely to be a problem overall.

AstroBen · 10h ago
DRY is about concepts, not characters. Don't have multiple implementations of a concept

If you choose to not copy paste the code you better be damn sure the two places that use it are relying on the same concept, not just superficially similar code thats yet to diverge

drbojingle · 11h ago
I completely disagree. Sometimes it makes things harder but not 100% of the time.

Sometimes things are only the same temporarily and shouldn't be brought together.

martinpw · 12h ago
Closely related to the Rule of Three - ok to duplicate once, but if it is needed a third time, consider refactoring: https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...

I think it's a pretty good compromise. I have tried in the past not to duplicate code at all, and it often ends up more pain than gain. Allow copy/paste if code is needed in two different places, but refactor if needed in three or more, is a pretty good rule of thumb.

rekrsiv · 11h ago
On the other hand, just because you know you're going to have to refactor, doesn't mean you should start refactoring once you reach three; you might not yet know the ideal shape for this code until many more duplications.
stevage · 11h ago
Agreed, it works pretty well for me.

The hard edge case is when you have a thing that needs to be duplicated along two axes. So now you have two pairs of things, four total. Four simple things or one complex thing.

jaredsohn · 5h ago
also called WET (write everything twice or write everything thrice)
boredtofears · 7h ago
And just like the rule that it replaced, the rule of three is now often interpreted as the "correct" approach always, while I still find reality to be more nuanced.

Sometimes you do have the domain expertise to make the judgment call.

A recent example that comes to mind is a payment calculation. You can go ahead and tie that up in a nice reusable function from the get go - if you've ever dealt with a bug where payment calculations appeared different in some places and it somehow made it in front of a customer you're well aware of how painful this can be. For some things having a single source of truth outweighs any negatives associated with refactoring.

hinkley · 13h ago
Some people use a gardening metaphor for code, and I think that since code is from and for humans, that’s not a terrible analogy. It’s organic by origin if not by nature.

When you’re dealing with perennial plants, there’s only so much control you actually have, and there’s a list of things you know you have to do with them but you cannot do them all at once. There is what you need to do now, what you need to do next year, and a theory of what you’ll do over the next five years. And two years into any five year plan, the five year plan has completely changed. You’re hedging your bets.

Traditional Formal English and French gardens try to “master” the plants. Force them to behave to an exacting standard. It’s only possible with both a high degree of skill and a vast pool of labor. They aren’t really about nature, or food. They’re displays of opulence. They are conspicuous consumption. They are keeping up with the Joneses. Some people love that about them. More practical people see it as pretentious bullshit.

I think we all know a few companies that make a bad idea work by sheer force of will and overwhelming resources.

ahartmetz · 3h ago
What you say seems much more true about traditional French than English gardens tbh. The French style is a very simplistic demonstration of bending nature to the human will. The English style seems to be more about reproducing an overly quaint image of "natural" landscapes (there are very few of these in Europe), which I find much more pleasant in idea and result.
tialaramex · 12h ago
I think more than a few people have recommended waiting until the 3rd or 4th X before you say OK, Don't Repeat Yourself we need to factor this out. That's where my rule of thumb is too.

Deliberately going earlier makes sense if experience teaches you there will be 3+ of this eventually, but the point where I'm going to pick "Decline" and write that you need to fix this first is when I see you've repeated something 4-5 times, that's too many, we have machines to do repetition for us, have the machine do it.

An EnableEditor function? OK, meaningful name. EnablePublisher? Hmm, yes I understand the name scheme but I get a bad feeling. EnableCoAuthor? Approved with a stern note to reconsider, are we really never adding more of these, is there really some reason you can't factor this out? EnableAuditor. No. Stop, this function is named Enable and it takes a Role, do not copy-paste and change the names.

cyberax · 9h ago
DRY means something completely different. It means that there should be just one source of truth.

Example: you have a config defined as Java/Go classes/structures. You want to check that the config file has the correct syntax. Non-DRY strategy is to describe its structure in an XSD schema (ok, ok JSON schema) and then validate the config. So you end up with two sources of truth: the schema and Java/Go classes, they can drift apart and cause problems.

The DRY way is to generate the classes/structures that define the config from that schema.

hinkley · 13h ago
It’s a pain in the ass to source a copy of this book without giving Jeff Bezos all the money. If anyone reading this thread knows John, could you bring this to his attention?

I even tried calling the bookstore on his campus and they said try back at the beginning of a semester, they didn’t have any copies.

My local book store could not source me a copy, and neither IIRC could Powell’s.

ashurov · 3h ago
tialaramex · 11h ago
That sucks. Ordinarily although a weird volume there's no demand for won't be fast a bookshop should be able to get anything in print. Is there some reason it's specific to this book do you think?
zakirullin · 14h ago
That's best book on the topic! The article was inspired by this exact book. And John is a very good person, we discussed a thing or two about the article.
exclipy · 13h ago
Oh! I was surprised you didn't link or mention the book
zakirullin · 13h ago
It is mentioned/quoted in Deep Modules section: https://github.com/zakirullin/cognitive-load?tab=readme-ov-f...

Maybe I should make it more visible.

ferguess_k · 7h ago
I'm struggling with the amount of complexity. As an inexperienced SWE, I found it difficult to put everything into my head when the # of function calls (A) + # of source code files (B) to navigate reach N. In particular, if B >= 3 or A >= 3 -- because, B equals the number of screens I need to view all source code files without Command+Tab/Alt+Tab, and cognitive load increases when A increases, especially when some "patterns" are involved.

But I'm not experienced enough to tell, whether it is my inexperience that causes the difficulty, or it is indeed that the unnecessary complexity tha causes it.

brabel · 2h ago
You should not need to read every line of code in every file and function to understand what’s going on to the level you need to solve a particular problem. You must make a decision to NOT look deeper at some point on any non- trivial code base. A good program with good names and comments in the appropriate places is what allows you to do exactly that more easily . When you see sort(usernames) in the middle of a function do you need to dive into sort to be able to understand the code in that function?? Probably not, unless you are fixing a bug in how usernames are sorted!

With that said , get good at jumping into definitions, finding all implementations, then jumping back where you were. With a good IDE you can do that at the speed of thought (in IntelliJ that’s Cmb+b, Cmd+Alt+b, Cmd+[ on Mac). I only open more than one file at the same time when comparing them. Otherwise it’s much easier to jump around back and forth (you can even open another function inline if you just want to take a Quick Look, it’s Alt+Space). Don’t use the mouse to do that, things you do all the time can be made an order of magnitude faster via shortcuts. Too many developers I see struggle with that and are embarrassingly slow moving around the code base!

lll-o-lll · 7h ago
Humans have a very limited amount of working memory. 3-5 items on average. A savant might be at something like 12. It is trivially easy to blow that with code. OO with code inheritance is a prime example of combinatorial explosion that can lead to more possibilities than atoms in the universe, let alone one persons ability to reason.

Watch ‘Simple made Easy’ by Rich Hickey; a classic from our industry. The battle against complexity is ever ongoing. https://youtu.be/SxdOUGdseq4?feature=shared

seadan83 · 2h ago
Experience helps to recognize intent sooner. That reduces cognitive load. Getting lost 5 levels deep seemingly never stops being a thing, not just you.
swat535 · 11h ago
I've long given up on trying to find the perfect solution for Software. I don't think anyone has really "cracked the code" per se. The best we have is people's wisdom and experiences.

Ultimately, context, industries and teams vary so greatly that it doesn't make sense to quantify it.

What I've settled on instead is aiming for a balance between "mess" and "beauty" in my design. The hardest thing for me personally to grasp was that businesses are indeterministic whereas software is not, thus requirements always shifts and fitting this into the rigidity of computer systems is _difficult_.

These days, I only attempt to refactor when I start to feel the pain when I'm about to change the code.. and even then, I perform the bare minimum to clean up the code. Eventually multiple refactoring shapes a new pattern which can be pulled into an abstraction.

tverbeure · 12h ago
Only half joking: I don’t think I trust a book from an author who has inflicted decades of TCL pain on me (and on the entire community of EDA tool users.)
RossBencina · 8h ago
I know you're only half joking, but I don't think you can pin the blame on John or TCL. Osterhaut's thesis, as I recall, was that there is real benefit to having multiple programming languages working at different levels of the domain (e.g. a scriptable system with the core written in a lower level language). Of course now this is a widespread practice in many domains (e.g. web browsers, numerical computing: matlab, numpy). It's an idea that has stood the test of time. TCL is just one way of achieving that aim, but at the time it was one of few open-source options available. I think scheme/lisp would have been the obvious alternative. AutoDesk went in that direction.

I remember using TCL in the 90s for my own projects as an embeddable command language. The main selling point was that it was a relatively widely understood scripting language with an easily embeddable off-the-shelf open source code base, perhaps one of the first of its kind (ignoring lisps.) Of course the limitations soon became clear. Only a few years later I had high hopes that Python would become a successor, but it went in a different direction and became significantly more difficult to embed in other applications than was TCL -- it just wasn't a primary use case for the core Python project. The modern TCL-equivalent is Lua, definitely a step up from TCL, but I think if EDA tools used Lua there would be plenty of hand-wringing too.

Just guessing, but I imagine that at the time TCL was adopted within EDA tools there were few alternatives. And once TCL was established it was going to be very hard to replace. Even if you ignore inertia at the EDA vendors, I can't imagine hardware engineers (or anyone with a job to do) wanting to switch languages every two to five years like some developers seem happy to do. It's a hard sell all around.

I reckon the best you can do is blame the vendors for (a) not choosing a more fit-for purpose language at the outset, which probably means Scheme, or inventing their own, (b) or not ripping the bandaid off at some point and switching to a more fit-for-purpose language. Blaming (b) is tough though, even today selecting an embedded application language is vexed: you want something that has good affordances as a language, is widely used and documented, easily embedded, and long-term stable. Almost everything I can think of fails the long term stability test (Python, JavaScript, even Lua which does not maintain backward compatibility between releases).

Shorn · 7h ago
Unsurprisingly, minimising the amount of cognitive complexity is how you get the most out of LLM coding agents. So now have a theoretically repeatable way to measure cognitive load as contextualised to software engineering.
onlinehost · 11h ago
I bought his book after seeing this talk of his https://youtu.be/bmSAYlu0NcY
semiinfinitely · 18h ago
The ability to create code that imposes low cognitive load on others not only is a rare and difficult skill to cultivate- it takes active effort and persistence to do even for someone who already has the ability and motivation. I think fundamentally the developer is computing a mental compression of the core ideas - distilling them to their essence - and then making sure that the code exposes only the minimum essential complexity of those ideas. not easy and rare to see in practice
bombela · 17h ago
And if you do it really well, people think it must have been such an easy problem to solve all along. Since everything always appears so obvious in insight.

While the castle of cards of unfathomable complexity is praised for visibly hard work and celebrated with promotions.

an0malous · 16h ago
“When you do things right, people won’t be sure you’ve done anything at all”
goalieca · 11h ago
This is bad for promotions. You have to make grand efforts with impacts of saving things that clearly need saving.
maccard · 10h ago
There’s more than one way to get promoted. Being the common factor among projects that succeed is a really really good one, and IME it’s far more common to find people promoted after one successful project than it is to a “noisy Nancy” be promoted.
folkhack · 6h ago
And, furthermore - being a "noisy Nancy" is often a bad move for your career, socially. As I age, I realize it's more important to get along in most corporate/professional settings than it is to be the person fixing things.

All work represents a social entity (person/persons) and when you're the one calling out issues, pushing for proactive measures, and pushing against bad practices/complexity you're typically taking issue with _someone's_ work along the way. This is often seen as a "squeaky wheel" or "noisy Nancy" - or hell, outright antisocial. Most of the time it is not in your best interest to be this person.

The people who keep their nose down + mouth shut, those who prioritize marketing their work, and the sycophants are the ones who have longevity and upward trajectory - this is corporate America work culture.

dwattttt · 11h ago
But only if you make it look like an electrical thing
DrewADesign · 16h ago
This is also true of interface/UX/interaction design. Most developers are really skilled at maintaining a higher cognitive load than most, and the interfaces that work best for less technical people often frustrate developers, who want everything in front of them, visible, at all times because they intuitively know what’s important. Interfaces created by developers might click with other devs, but often bewilder less technical people. It’s really hard to design a tool that less technical people can use intuitively to solve complex problems without wanting to throw their electronics out the window.
Tarks · 17h ago
Plus rarely survives requirements/context changing because most abstractions are leaky.

My favourite frameworks are written by people smart enough to know they're not smart enough to build the eternal perfect abstraction layers and include 'escape hatches' (like getting direct references to html elements in a web UI framework etc) in their approach so you're not screwed when it turns out they didn't have perfect future-sight.

BenkaiDebussy · 9h ago
I think one issue is that some people just find very different things intuitive. Low cognitive load for one person might be high cognitive load for another.

Because of some quirk of the way my brain works, giant functions with thousands of lines of code doesn't really present a high cognitive load for me, while lots of smaller functions do. My "working memory" is very low (so I have trouble seeing the "big picture" while hopping from function to function), while "looking through a ton of text" comes relatively easily to me.

I have coworkers who tend to use functional programming, and even though it's been years now and I technically understand it, it always presents a ton of friction for me, where I have to stop and spend a while figuring out exactly what the code is saying (and "mentally translating" it into a form that makes more sense to me). I don't think this is necessarily because their code inherently presents a higher cognitive load - I think it's easier for them to mentally process it, while my brain has an easier time with looking at a lot of lines of code, provided the logic within is very simple.

TZubiri · 3h ago
And ironically, writing code that is maintainable on the long run and imparts a low cognitive load on successive developers is itself a cognition consuming effort.

In tradeoff engineering, maintainability over the long term is one of the many variables to optimize, and finite resources need to be alloted to it.

When I read this article I get the feeling that it's more likely that he is obsessing over maintainability over the long term while his app has a user count of zero. This malady usually comes from the perspective of being a user, one finds that the experience of writing some code is a "bad experience" so they strive to improve it or learn how to build a good "coder experience", the right answer is to understand that one is stepping into the shoes of the plumber, and it will be shitty, just gotta roll up your sleeves.

Don't get me wrong, there's a lot of wisdom here, but to the extent that there is, it's super derivative and well established, it's just the kind of stuff that a developer learns on their first years of software by surfing the web and learning about DRY, KISS and other folklore of software. To some extent this stuff is useful, but there's diminishing returns and at some point you have to throw shit and focus on the product instead of obsessing over the code.

jama211 · 14h ago
It is! But it’s also a bonus rather than a requirement for a lot of firms. For proof, look at any major codebase.
Buttons840 · 20h ago
I'm probably one of the "smart developers" with quirks. I try to build abstractions.

I'm both bothered and intrigued by the industry returning to, what I call, "pile-of-if-statements architecture". It's really easy to think it's simple, and it's really easy to think you understand, and it's really easy to close your assigned Jira tickets; so I understand why people like it.

People get assigned a task, they look around and find a few places they think are related, then add some if-statements to the pile. Then they test; if the tests fail they add a few more if-statements. Eventually they send it to QA; if QA finds a problem, another quick if-statement will solve the problem. It's released to production, and it works for a high enough percentage of cases that the failure cases don't come to your attention. There's approximately 0% chance the code is actually correct. You just add if-statements until you asymptotically approach correctness. If you accidentally leak the personal data of millions of people, you wont be held responsible, and the cognitive load is always low.

But the thing is... I'm not sure there's a better alternative.

You can create a fancy abstraction and use a fancy architecture, but I'm not sure this actually increases the odds of the code being correct.

Especially in corporate environments--you cannot build a beautiful abstraction in most corporate environments because the owners of the business logic do not treat the business logic with enough care.

"A single order ships to a single address, keep it simple, build it, oh actually, a salesman promised a big customer, so now we need to make it so a single order can ship to multiple addresses"--you've heard something like this before, haven't you?

You can't build careful bug-free abstractions in corporate environments.

So, is pile-of-if-statements the best we can do for business software?

Swizec · 20h ago
> So, is pile-of-if-statements the best we can do for business software?

You’ll enjoy the Big Ball of Mud paper[1].

Real world systems are prone to decay. You first of all start with a big ball of mud because you’re building a system before you know what you want. Then as parts of the system grow up, you improve the design. Then things change again and the beautiful abstraction breaks down.

Production software is always changing. That’s the beauty of it. Your job is to support this with a mix of domain modeling, good enough abstraction, and constructive destruction. Like a city that grows from a village.

[1] https://laputan.org/mud/mud.html

[2] my recap (but the paper is very approachable, if long) https://swizec.com/blog/big-ball-of-mud-the-worlds-most-popu...

citizenpaul · 15h ago
I'm not sure the author or most people that write these types of academic theory papers ever really see actual ball-of-mud-spaghetti code in real world scenarios.

I think anyone that thinks mudball is OK because business is messy has never seen true mudball code.

I've had to walk out of potential work because after looking at what they had I simply had to tell them I cannot help you, you need a team and probably at minimum a year to make any meaningful progress. That is what mudballs leads to. What this paper describes is competent work that is pushed too quickly for cleaning rough edges but has some sort of structure.

I've seen mudballs that required 6-12 months just to do discovery of all the pieces and parts. Hundreds of different version of things no central source control, different deployment techniques depending on the person that coded it even within the same project.

Swizec · 14h ago
> I think anyone that thinks mudball is OK because business is messy has never seen true mudball code.

I’ve seen and created some pretty bad stuff. Point is not that it’s okay, but that that’s the job: managing, extending, and fixing the mess.

Yes a perfect codebase would be great, but the code is not perfect and there’s a job to do. You’re not gonna rebuild all of San Francisco just to upgrade the plumbing on one street.

Much of engineering is about building systems to keep the mess manageable, the errors contained, etc. And you have to do that while keeping the system running.

citizenpaul · 10h ago
Sure but again you are referring to somewhat structured systems that went off the rails a bit. Thats not what I consider a mudball.

I've seen numerous places trying to hire someone to fix a 5-10 year mudball that has reached a point where progress is no longer possible without breaking something else which breaks something else and so on.

There is an endgame to the mudball and it does end in complete and total development stopping and systems that are constantly going offline and take weeks to get restarted. Most of the time the place will say: "Oh we've already had several consultants tell us the same thing" The same thing being the situation is hopeless and they are facing years of simply untangling the mess they made.

Usually the mudball is held together by a chain of increasingly shorter senior positions that keep jumping the sinking ship faster and faster. Finally they can no longer convince anyone sane to take on the ticking time bomb they have created and they turn to consultants.

Also my advice is often you should bring back person X that was at least familiar with the system at whatever salary they require. I am inevitably told that that person will literally not even take calls or emails from the company any more, every time. Thats how bad a real world mudball is.

conradkay · 18h ago
whstl · 20h ago
I believe you can build great abstractions in this kind of software, but if you want them to survive you gotta keep them any of that away from anything involving the business logic itself. You can only do this on product-like things: authn/authz, audit logs, abstractions over the database (CQRS, event sourcing), content/translation management, messaging infrastructure, real infrastructure. As soon as you allow anything from the business itself to affect or dictate those abstractions, you get shit again.

You're right that the business logic is gonna be messy, and that's because nobody really cares, and they can offload the responsibility to developers, or anyone punching it in.

On the other hand, separating "good code" and "bad code" can have horrible outcomes too.

One "solution" I saw in a fintech I worked at, was putting the logic in the hands of business people itself, in the form of a decision engine.

Basically it forced the business itself to maintain its own ball of mud. It was impossible to test, impossible to understand and even impossible simulate. Eventually software operators were hired, basically junior-level developers using a graphical interface for writing the code.

It was rewritten a couple times, always with the same outcome of everything getting messy after two or three years.

oh_my_goodness · 19h ago
You can't depend on the business people to understand the business logic clearly enough to explain it to the implementers. That will never happen. They may understand it themselves, but they're not coders and they can't write requirements for code.

Instead, at least one implementer needs to get hands dirty on what the application space really is. Very dirty. So dirty that they actually start to really know and care about what the users actually experience every day.

Or, more realistically for most companies, we insist on separate silos, "business logic" comes to mean "stupid stuff we don't really care about", and we screw around with if statements. (Or, whatever, we get hip to monads and screw around with those. That's way cooler.)

djtango · 19h ago
IMO you touch on the real heart of the issue at the end - the real world and business is messy and really _is_ just a pile of if statements.

When the problem itself is technical or can be generalised then abstractions can eliminate the need for 1000s of if-statement developers but if the domain itself is messy and poorly specified then the only ways abstractions (and tooling) can help is to bake in flexibility, because contradiction might be a feature not a bug...

dehrmann · 16h ago
In most assembly languages, the instructions are essentially load and store, arithmetic operations, and branch and jump. Almost everything is abstractions around how to handle branching and memory.
djtango · 1h ago
Right and a compiler is mostly technical.
hmottestad · 20h ago
Been playing with Codex CLI the past week and it really loves to create a fix for a bug by adding a special case for just that bug in the code. It couldn't see the patterns unless I pointed them out and asked it to create new abstractions.

It would just keep adding what it called "heuristics", which were just if statements that tested for a specific condition that arose during the bug. I could write 10 tests for a specific type of bug, and it would happily fix all of them. When I add another one test with the same kind of bug it obviously fails, because the fix that Codex came up with was a bunch of if statements that matched the first 10 tests.

xyzzy123 · 19h ago
Also they hedge a lot, will try doing things one way, have a catch / error handler and then try a completely different way - only one of them can right but it just doesn't care. Have to lean hard to get it to check which paths are actually used and delete the others.

I am convinced this behaviour and the one you described are due to optimising for swe benchmarks that reward 1-shotting fixes without regard to quality. Writing code like this makes complete sense in that context.

mewpmewp2 · 19h ago
That's a really good point. I was wondering why some of the LLMs were trained to try to pass things so sloppily constantly. Writing mock data, methods and pretending as if the task is complete and everything is great, good to go. They do seem to be trained just to pass some sort of conditions sadly and it feels somehow to me that it has got worse as of late. It should be relatively easy to reward them for writing robust code even if it takes longer or won't work, but it does seem they are geared towards getting high swe benchmarks.
Buttons840 · 20h ago
It's clear that these AIs are approaching human level intelligence. (:

Thank you for giving a perfect example of what I was describing.

The thing is, you actually can make the software work this way, you just have to add enough if-statements to handle all cases--or rather, enough cases that the manager is happy.

ttz · 20h ago
I was recently having a conversation with some coworkers about this.

IMO a lot of (software) engineering wisdom and best practices fails in the face of business requirements and logic. In hard engineering you can push back a lot harder because it's more permanent and lives are more often on the line, but with software, it's harder to do so.

I truly believe the constraints of fast moving business and inane, non sensical requests for short term gains (to keep your product going) make it nearly impossible to do proper software engineering, and actually require these if-else nests to work properly. So much so that I think we should distinguish between software engineering and product engineering.

andix · 14h ago
> a lot of (software) engineering wisdom and best practices fails in the face of business requirements

They fail on reality. A lot of those "best" practices assume, that someone understands the problem and knows what needs to be built. But that's never true. Building software is always an evolutionary process, it needs to change until it's right.

Try to build an side project, that doesn't accept any external requirements, just your ideas. You will see that even your own ideas and requirements shift over time, a year (or two) later your original assumptions won't be correct anymore.

ttz · 12h ago
You can still design for evolution and follow best practices. That's actually IMO a hallmark of good software design.

The issue is when the evolution is random and rife with special cases and rules that cannot be generalized... the unknown unknowns of reality, as you say.

Then, you just gotta patch with if elses.

andix · 12h ago
> The issue is when the evolution is random and rife with special cases and rules that cannot be generalized

You’ve just described the universe. It’s full of randomness.

ttz · 9h ago
Thank you for this relevant insight.
sky2224 · 11h ago
This is how I feel as well. What's even worse is that it seems like the academics doing research in the area of software engineering don't really have up to date experience that's practical.

Add to the fact that they're the professor of many software engineering courses and you start to see why so many new grads follow SOLID so dogmatically, which leads to codebases quickly decaying.

markus_zhang · 6h ago
I kinda think only system programming is programming.
vjerancrnjak · 20h ago
There are many ways code can get simpler even with ifs.

If you find yourself sprinkling ifs everywhere, try to lift them up, they’ll congregate at the same place eventually, so all of your variability is implemented and documented at a single place, no need to abstract anything.

It’s very useful to model your inputs and outputs precisely. Postpone figuring out unified data types as long as possible and make your programming language nice to use with that decision.

Hierarchies of classes, patterns etc are a last resort for when you’re actually sure you know what’s going on.

I’d go further and say you don’t need functions or files as long as your programming is easy to manage. The only reason why you’d need separate files is if your vcs is crippled or if you’re very sure that these datetime handlers need to be reused everywhere consistently.

Modern fullstack programming is filled with models, middleware, Controllers , views , … as if anyone needs all of that separation up front.

soulofmischief · 18h ago
These abstractions become a toolset for creating a program that naturally evolves as new goals and constraints are introduced. It also allows other engineers to understand your code at a high level without reading it from top to bottom.

If your code ever has the possibility of changing, your early wins by having no abstraction are quickly paid for, with interest, as you immediately find yourself refactoring to a higher abstraction in order to reason about higher-order concepts.

In this case, the abstraction is the simplicity, for the same reason that when I submit this comment, I don't have to include a dictionary or a definition of every single word I use. There is a reason that experienced programmers reach for abstractions from the beginning, experience has taught them the benefits of doing so.

The mark of an expert is knowing the appropriate level of abstraction for each task, and when to apply specific abstractions. This is also why abstractions can sometimes feel clumsy and indirect to less experienced engineers.

vjerancrnjak · 1h ago
Haven’t seen even something like opinionated frameworks work well with their initial abstractions.

Even file interfaces in most programming languages don’t come with pipelining. Most are leaky abstraction.

Most abstractions also deal with 1 thing instead of N things. There’s no popular http server that supports batch request processing.

Async-await is a plague of an abstraction.

Abstracting something like trivial if statements is not a problem. The best transaction of all, passing a function to a function is underused.

rendaw · 5h ago
There is an alternative: take the parts that _aren't_ in if statements (the actual common code) and make them into shared functions. Then split up the rest into multiple functions that call the shared functions, one for each independent condition, so that they don't have if statements.

These individual functions are easier to reason about since they have specific use cases, you don't have to remember which combinations of conditions happen together while reading the code, they simplify control flow (i.e. you don't have to hack around carrying data from one if block to the next), and it uses no "abstraction" (interfaces) just simple functions.

It's obviously a balance, you'll still have some if statements, but getting rid of mutually exclusive conditions is basically a guaranteed improvement.

ajzbzizkloll · 20h ago
Most business logic is “last mile” software. Built on top of beautiful abstractions that came not from an abstract idea of what’s correct but from painful clashes with reality that eventually provided enough clarity to build a good abstraction.

Sometimes last mile software turns into these abstractions but often not.

I’ve worked with very smart devs that try to build these abstractions too early, and once they encounter reality you just have a more confusing version of if statement soup.

figassis · 19h ago
Abstractions are ok, SomethingFactories are stupid. If your code is more abstractions than actual logic and you need logic to manage the abstractions (eg. FactoryFactories, 2+ inheritance levels), you should rethink your strategy.
mdaniel · 18h ago
I had previously thought SomethingFactory was abstracting away the logic for the "new" keyword, but for people who dislike inversion of control frameworks

I'm firmly in the "DI||GTFO" camp, so I don't meant to advocate for the Factory pattern but saying that only abstractions that you like are ok starts to generate PR email threads

andix · 14h ago
I'm also into building abstractions, but I always try to leave "escape hatches" in place. I try to build my abstractions out of reusable components, that can also be used independently.

If the abstraction doesn't fit a new problem, it should be easy to reassemble the components in a different way, or use an existing abstraction and replace some components with something that fits this one problem.

The developers shouldn't be forced to use the abstractions, they should voluntarily use them because it makes it easier for them.

jsrcout · 11h ago
> it should be easy to reassemble the components in a different way

An underappreciated value. I call this composability and it is one of my primary software development goals.

BenkaiDebussy · 9h ago
I think one issue you can run into with clever abstractions is that it can be harder to fix/change them if something is wrong with their fundamental assumptions (or those assumptions change later). Something like this happened at my work a while back, where if I had written the code it would have probably just involved a few really long/ugly functions (but only required changing a few lines in and after the SQL query to fix), but instead the logic was so deeply intertwined with the code structure that there wasn't any simple way to fix it without straight-up rewriting the code (it was written in a functional way with a bunch of functions taking other functions as arguments and giving functions as output, which also made debugging really tough).

It also depends how big the consequences to failure/bugs are. Sometimes bugs just aren't a huge deal, so it's a worthwhile trade-off to make development easier in change for potentially increasing the chance of them appearing.

marginalia_nu · 20h ago
I honestly think that's pretty close to optimal for a lot of places. With business software it's often not desirable to have large sweeping changes. You may need some small change to a rule or condition, but usually you want things to stay exactly the way they are.

The model of having a circle of ancient greybeards in charge of carefully updating the sacred code to align with the business requirements, while it seems bizarre bordering on something out of WH40K, actually works pretty well and has worked pretty well everywhere I've encountered it.

Attempts to refactor or replace these systems with something more modern has universally been an expensive disaster.

Buttons840 · 20h ago
It does work for awhile, until one day:

Project Manager: "Can we ship an order to multiple addresses?"

Grey Beard: "No. We'd have to change thousands of random if-statements spread throughout the code."

Project Manager: "How long do you think that would take?"

Grey Beard: "2 years or more."

Project Manager: "Okay, we will break you down--err, I mean, we'll need to break the task down. I'll schedule long meetings until you relent and commit to a shorter time estimate."

Grey Beard eventually relents and gives a shorter time estimate for the project, and then leaves the company for another job that pays more half-way through the project.

weiliddat · 17h ago
If Grey Beard doesn't relent

Project Manager: "Can we ship an order to multiple addresses? We need it in 2 weeks and Grey Beard didn't want to do it"

Eager Beaver: "Sure"

  if (order && items.length > 1 && ...) {  
    try {  
      const shipmentInformation = callNewModule(order, items, ...)  
      return shipmentInformation  
    } catch (err) {  
      // don't fail don't know if error is handled elsewhere  
      logger.error(err)  
    }  
  } else {  
    // old code by Grey Beard  
  }
quectophoton · 16h ago
... and then that `callNewModule` has weird bugs like mysteriously replacing `+` with spaces, sometimes labels are empty but only if they are shipped to a specific company, sometimes the invoices are generated multiple times for the same shipment, after 1 year after Sales has already sold this multi-item shipment feature to massive companies it suddenly stops working because the new module wasn't properly hooked for auto-renewing credentials with a specific service and the backlog of unshipped items but marked as shipped grows by the second...

Of course Eager Beaver didn't learn from this experience because they left the company a few months ago thinking their code was AWESOME and bragging about this one nicely scalable service they made for shipping to multiple addesses.

Meanwhile Grey Beard is the one putting out the fires, knowing that any attempt to tell Project Manager "finding and preventing situations like this was the reason why I told my estimate back then" would only be received with skepticism.

weiliddat · 16h ago
Of course, why reuse existing logic when we can (vibe) code new modules and functions from scratch every time we need it!

/s

RaftPeople · 16h ago
> It does work for awhile, until one day:

Your counter example assumes the people managing the code base are incompetent.

Wouldn't the rewrite fail for the exact same reason if the company only employs incompetent tech people?

hackerthemonkey · 16h ago
And eventually it took 3 years.
marginalia_nu · 16h ago
Some say they are still trying to get the original functionality back to this day.
marginalia_nu · 20h ago
Oh but the greybeards love meetings. There's nothing they'd rather do than spend days and weeks discussing how to affect changes, drawing boxes, writing documents, sending emails.
atomicnumber3 · 20h ago
"the owners of the business logic do not treat the business logic with enough care."

Certainly, there are such people who simply don't care.

However I would also say that corporations categorically create an environment where you are unable to care - consider how short software engineer tenures are! Any even somewhat stable business will likely have had 3+ generations of owner by the time you get to them. Owner 1 is the guy who wrote 80% of the code in the early days, fast and loose, and got the company to make payroll. Owner 2 was the lead of a team set up to own that service plus 8 others. Owner 3 was a lead of a sub-team that split off from that team and owns that service plus 1 other related service.

Each of these people will have different styles - owner 1 hated polymorphism and everything is component-based, owner 2 wrapped all existing logic into a state machine, owner 3 realized both were leaky abstractions and difficult to work with, so they tried to bring some semblance of a sustainable path forward to the system, but were busy with feature work. And owner 3 did not get any Handoff from person 2 because person 2 ragequit the second enough of their equity vested. And now there's you. You started about 9 months ago and know some of the jargon and where some bodies are buried. You're accountable for some amount of business impact, and generally can't just go rewrite stuff. You also have 6 other people on call for this service with you who have varying levels of familiarity with the current code. You have 2.25 years left. Good luck.

Meanwhile I've seen codebases owned by the same 2 people for over 10 years. It's night and day.

Buttons840 · 19h ago
What you say is true, but I meant the product owners are the one who don't fully weigh the cost of their decisions.

I once tried to explain to a product owner that we should be careful to document what assumptions are being made in the code, and make sure the company was okay committing to those assumptions. Things like "a single order ships to a single address" are early assumptions that can get baked into the system and can be really hard to change later, so the company should take care and make sure the assumptions the programmers are baking into the system are assumptions the company is willing to commit to.

Anyway, I tried to explain all this to the product owner, and their response was "don't assume anything". Brillant decisions like that are why they earned the big bucks.

eastbound · 19h ago
> consider how short software engineer tenures are!

Employees aren’t fired. They leave for a 10% increase. Employees are the ones who seek always more in a short-termist way.

kibwen · 19h ago
At the same time, the reason that employees leave on such a regular cadence is precisely because the companies they work for refuse to give them the salary increase that they could get by going elsewhere. Companies could solve this by giving commensurate raises.
whstl · 14h ago
They would rather give a 10% increase to the guy replacing you.
martin-t · 19h ago
It would be short-termist if they had actual gains from their work other that salary. They don't.

They are acting rationally given companies don't seem to value long term expertise.

Now, if it was a worker- owned cooperative, that would be a different thing.

api · 20h ago
A pile of if statements is a direct model of business deal making, which is literally a pile of if statements.

Sales contracts with weird conditions and odd packaging and contingencies? Pile of if statements.

The other great model for business logic is a spreadsheet, which is well modeled by SQL which is a superset of spreadsheet functionality.

So piles of if’s and SQL. Yeah.

Elegant functional or OOP models are usually too rigid unless they are scaffolding to make piles of conditions and relational queries easier to run.

marcosdumay · 18h ago
> So piles of if’s and SQL.

One would imagine by now we would have some incredibly readable logical language to use with the SQL on that context...

But instead we have people complaining that SQL is too foreign and insisting we beat it down until it becomes OOP.

To be fair, creating that language is really hard. But then, everybody seems to be focusing on destroying things more, not on constructing a good ecosystem.

dsego · 19h ago
You usually don't want your code logic to stray from the mental or domain model of business stakeholders. Usually when my code makes assumptions to unify things or make elegant hierarchies, I find myself in a very bad place when stakeholders later make decisions that flip everything and make all my assumptions in the code base structure fall apart.
AnimalMuppet · 17h ago
But even with OOP... Virtual functions take over the pile of ifs, and so the ifs move to where you instantiate the class that has the virtual functions. (There is some improvement, though - one class can have many virtual functions, so you can replace all the ifs that ask the same question with one if that creates a class with all the right virtual functions. It gets messier if your class has to be virtual against more than one question.)
dboreham · 6h ago
Long ago this was called a jump table.
ajuc · 11h ago
I like to make truth tables for understanding piles of ifs. Like there's 5 ifs with 5 different conditions - so I make 5 columns and 32 rows, and enumerate all the possible combinations of these 5 ifs and write what happens for each. And then what should happen.

Of course, the disadvantage is the exponential growth. 20 ifs means a million cases (usually less because the conditions aren't independent, but still).

Then I have a flat list of all possible cases, and I can reconstruct a minimal if tree if I really want to (or just keep it as a list of cases - much easier to understand that way, even if less efficient).

energy123 · 18h ago
Better when those if statements are before any loops
DarkNova6 · 19h ago
Perhaps expressing intend and describing the underlying domain carries more value than pure “abstraction”.
helge9210 · 19h ago
You can carefully pick an order of features to build in a way, that every new feature will invalidate an abstraction correctly implementing all the previous features.
kaffekaka · 17h ago
Definitely true.

I don't think it is malice or incompetence, but this happens too often to feel good.

sfn42 · 15h ago
> "A single order ships to a single address, keep it simple, build it, oh actually, a salesman promised a big customer, so now we need to make it so a single order can ship to multiple addresses"--you've heard something like this before, haven't you?

I don't see the problem. Okay, so we need to support multiple addresses for orders. We can add a relationship table between the Orders and ShippingAddresses tables, fix the parts of the API that need it so that it still works for all existing code like before using the updated data model, then publish a v2 of the api with updated endpoints that support creating orders with multiple addresses, adding shipping addresses, whatever you need.

Now whoever is dependent on your system can update their software to use the v2 endpoints when they're ready for it. If you've been foolish enough to let other applications connect to your DB directly then those guys are going to have a bad time, might want to fix that problem first if those apps are critical. Or you could try to coordinate the fix across all of them and deploy them together with the db update.

The problems occur when people don't do things properly, we have solutions for these problems. It's just that people love taking shortcuts and that leads to a terrible system full of workarounds rather than abstractions. Abstractions are malleable, you can change them to suit your needs. Use the abstractions that work for you, change them if they don't work any more. Design the code in such a way that changing them isn't a gargantuan task.

gnramires · 15h ago
I am not a super experienced coder or anything. But I like thinking about it[1].

The way I've been thinking about it is about organization. Organize code like we should organize our house. If you have a collection of pens, I guess you shouldn't leave them scattered everywhere and in your closet, and with your cutlery, and in the bathroom :) You should set up somewhere to keep your pens, and other utensils in a kind of neat way. You don't need to spend months setting up a super-pen-organizer that has a specially sculpted nook for your $0.50 pen that you might lose or break next week. But you make it neat enough, according to a number of factors like how likely it is to last, how stable is your setup, how frequently it is used, and so on. Organizing has several advantages: it makes it easier to find pens, shows you a breath of options quickly, keeps other places in your house tidier and so less cognitively messy as well. And it has downsides, like you need to devote a lot of time and effort, you might lose flexibility if you're too strict like maybe you've been labeling stuff in the kitchen, or doing sketches in your living room, and you need a few pens there.

I don't like the point of view that messiness (and say cognitive load) is always bad. Messiness has real advantages sometimes! It gives you freedom to be more flexible and dynamic. I think children know this when living in a strict "super-tidy" parent house :) (they'd barely get the chance to play if everything needs to be perfectly organized all the time)

I believe in real life almost every solution and problem is strongly multifactorial. It's dangerous to think a single factor, say 'cognitive load', 'don't repeat yourself', 'lesser lines of code', and so on is going to be the single important factor you should consider. Projects have time constraints, cost, need for performance; expressing programs, the study of algorithms and abstractions itself is a very rich field. But those single factors help us improve a little on one significant facet of your craft if you're mindful about it.

Another factor I think is very important as well (and maybe underestimated) is beauty. Beauty for me has two senses: one in an intuitive sense that things are 'just right' (which capture a lot of things implicitly). A second and important one I think is that working and programming, when possible, should be nice, why not. The experience of coding should be fun, feel good in various ways, etc. when possible (obviously this competes with other demands...). When I make procedural art projects, I try to make the code at least a little artistic as well as the result, I think it contributes to the result as well.

[1] a few small projects, procedural art -- and perhaps a game coming soon :)

HumblyTossed · 17h ago
There are two things I hate more than anything else when coding - if statements and abstraction. If statements are bug magnets. Abstraction is a mental drain. My coding stile is to balance those two things in a way that makes the code as easy to read and extend as possible without relying on either too much and only just enough.
vasco · 20h ago
> So, is pile-of-if-statements the best we can do for business software?

I'm not sure if that's anywhere in the rating of quality of business software. Things that matter:

1. How fast can I or someone else change it next time to fulfill the next requirements?

2. How often does it fail?

3. How much money does the code save or generate by existing.

Good architecture can affect 1 and 2 in some circumstances but not every time and most likely not forever at the rate people are starting to produce LLM garbage code. At some point we'll just compile English directly into bytecode and so architecture will matter even less. And obviously #3 matters by far the most.

It's obviously a shame for whoever appreciates the actual art / craft of building software, but that isn't really a thing that matters in business software anyway, at least for the people paying our salaries (or to the users of the software).

whoamii · 20h ago
Plenty of architecture to be found in well written text.
bambax · 4h ago
Cognitive load is an important concept in aviation. It is linked to the number of tasks to run and the number of parameters to monitor, but it can be greatly reduced by training. Things you know inside and out don't seem to consume as much working memory.

So in software development there may be an argument to always structure projects the same way. Standards are good — even when they're bad! because one of their main benefit is familiarity.

Szpadel · 4h ago
I would say that's very important rule. We have lots of projects using framework dependent magic, lots of useless interfaces and factories that give only theoretical value, magic that patch other classes methods etcz but this is standard practice in this framework and all experienced developers know that.

by doing something better here would actually not bring any value, because it would mean that developers would have to remember that this one thing is done differently.

that's trap where I would say many mid Devs fall in, they learned how do things better, but increase congnitive load for the rest of developers just by doing things differently.

TZubiri · 3h ago
An important difference is that the aviator would be the user of the airplane system. OP is talking about the cognitive load of the plane engineer.

It's an important distinction in terms of priorities. I personally think the experience of the user is orders of magnitude more important than engineer cognitive load.

noen · 20h ago
This article reminds me of my early days at Microsoft. I spent 8 years in the Developer Division (DevDiv).

Microsoft had three personas for software engineers that were eventually retired for a much more complex persona framework called people in context (the irony in relation to this article isn’t lost on me).

But those original personas still stick with me and have been incredibly valuable in my career to understand and work effectively with other engineers.

Mort - the pragmatic engineer who cares most about the business outcome. If a “pile of if statements” gets the job done quickly and meets the requirements - Mort became a pejorative term at Microsoft unfortunately. VB developers were often Morts, Access developers were often Morts.

Elvis - the rockstar engineer who cares most about doing something new and exciting. Being the first to use the latest framework or technology. Getting visibility and accolades for innovation. The code might be a little unstable - but move fast and break things right? Elvis also cares a lot about the perceived brilliance of their code - 4 layers of abstraction? That must take a genius to understand and Elvis understands it because they wrote it, now everyone will know they are a genius. For many engineers at Microsoft (especially early in career) the assumption was (and still is largely) that Elvis gets promoted because Elvis gets visibility and is always innovating.

Einstein - the engineer who cares about the algorithm. Einstein wants to write the most performant, the most elegant, the most technically correct code possible. Einstein cares more if they are writing “pythonic” code than if the output actually solves the business problem. Einstein will refactor 200 lines of code to add a single new conditional to keep the codebase consistent. Einsteins love love love functional languages.

None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives - but I can usually pin one of these 3 as the primary within a few days of PRs and a single design review.

darkstarsys · 19h ago
Clearly they were missing Amanda, the engineer who's had to review others' terrible code (and her own) for 20 years, and has learned the hard way to keep it simple. She knows she's writing code mostly for people to read, not computers. Give me a small team of Amandas any day.
jadbox · 18h ago
Mort, Elvis, Einstein, Amanda does seem to fit well with my experience. While people are a mix, generally I think its fair that there is a primary focus/mode that fits on career goals.

- Mort wants to climb the business ladder.

- Elvis wants earned social status.

- Einstein wants legacy with unique contributions.

- Amanda just wants group cohesion and minimizing future unpredictability.

lukeschlather · 14h ago
I don't really like the axes Mort/Elvis/Einstein are on, they all seem like obviously pathological examples.

I think if I were to make three strawmen like this I would instead talk about them as maximizing utility, maintainability, and effectiveness. Utility because the "most business value" option doesn't always make the software more useful to people. (And I will tend to prioritize making the software better over making it better for the business.) Maintainability because the thing that solves the use case today might cause serious issues that makes the code not fit for purpose some time in the future. Effectiveness because the basket of if statements might be perfect in terms of solving the business problem as stated, but it might be dramatically slower or subtly incorrect relative to some other algorithm.

Mort is described as someone who prioritizes present business value with no regard to maintainability or usefulness.

Elvis is described as someone who prioritizes shiny things, he's totally a pejorative.

Einstein is described as someone who just wants fancy algorithms with no regard for maintainability or fitness to the task at hand. Unlike Elvis I think this one has some value, but I think it's a bit more interesting to talk about someone who is looking at the business value and putting in the extra effort to make the perfectly correct/performant/maintainable solution for the use case, rather than going with the easiest thing that works. It's still possible to overdo, but I think it makes the archetype more useful to steelman the perspective. Amanda sounds a bit more like this, but I think she might work better without the other three but with some better archetypes.

germandiago · 15h ago
I vote for Amanda. Really, there is no substitute for seeing something easy to understand.

I have been most of my career working with C++. You all may know C++ can be as complex as you want and even more clever.

Unless I really need it, and this is very few times, I always ask myself: will this code be easy to understand for others? And I avoid the clever way.

RaftPeople · 16h ago
> - Mort wants to climb the business ladder.

I think the personas have some validity but I don't agree with the primary focus/mode.

For example, I tend to be a mort because what gets me up in the morning is solving problems for the enterprise and seeing that system in action and providing benefit. Bigger and more complex problems are more fun to solve than simpler ones.

darkstarsys · 19h ago
And as a manager/CTO, the way to do this is to give the devs time to think about what they're doing, and reward implementation clarity (though it's its own reward for Amandas).
flappyeagle · 17h ago
The way to do this is to chew people out when they let their own sources get in the way of doing a good job
makeitdouble · 17h ago
What difference do you see from a Mort ?

If there is no inherent complexity, a Mort will come up with the simplest solution. If it's a complex problem needing trade-offs the Mort will come up with the fastest and most business centric solution.

Or would you see that Amanda refactoring a whole system to keep it simple above all whatever the deadlines and stakes ?

gherkinnn · 17h ago
Mort is happy with an if soup. Amanda sees what the if soup ought to do and replaces it with a simple state machine and fixes two bugs along the way.
makeitdouble · 17h ago
Wouldn't refactoring the if soup into an algorithmically elegant solution what the Einstein does ?
layer8 · 14h ago
Einstein would write a new state-machine library with SIMD optimization for the purpose, and refactor any logic into it that could possibly be contorted into a state machine.
binoct · 17h ago
As I imagine it, Einstein would no be happy with fixing a couple bugs and making a state machine. Einstein would add a new unit test framework and implement a linear optimizer written with only lambdas to solve the problem and recommend replacing the web server with it as well. This is tongue in cheek but gets the idea across.
makeitdouble · 12h ago
Sounds me like what you see in the Amanda type is "balance", landing as Mort mixed with an Einstein.

To quote OP: "None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives"

Spivak · 6h ago
The problem is that "work smart not hard" for software devs is counterintuitive because using your brain is the hard work. Einstein works too hard and creates code that's hard to reason about, Most doesn't work hard enough and creates code that's hard to reason about.

The originating example for an Amanda is someone who used her brain to recognize that the existing code was clumsily modeling a state machine and clarified the code by reframing it in terms of well-known vocabulary. It's technically an abstraction but because every dev is taught in advance how they work it's see-through and reduces cognitive load even when you must peel back the abstraction to make changes.

earleybird · 15h ago
The Amanda solution is the intuitively obvious to even the casual reader. The Einstein solution is quite succinct but takes years to understand all the nuance in the one liner. :-)

I appreciate both for different reasons.

makeitdouble · 6h ago
I kinda see the original proposition as similar to a RGB framework. The same way we mix RGB to have a whole spectrum of colors, I assume we can mix Mort, Einstein and Elvis to get whole spectrums of engineers profiles.

There will be people looking at pure Green and pure Blue and ask for an Emerald color to get RGBE instead, but that's not how the RGB framework works. And I can't get rid of the feeling that Amanda is that Emerald color people are clamoring for.

I also kinda get why Microsoft got rid of the system for something more abstract.

pessimizer · 17h ago
Elegant is usually the opposite of maintainable. Reading elegant code is like reading a book of riddles (which is one of the reasons we enjoy it.)
n4r9 · 17h ago
True, but we shouldn't understate how beneficial elegant solutions can be in the appropriate setting. Sometimes you read code that gives you a new and memorable way to think about a certain kind of problem.
pessimizer · 17h ago
I agree we like it. I don't want to have to review it. I'd rather review code where the bugs stick out like blinking yellow lights, even if it runs 10% slower (or 1000% slower, if I'm only running it once.)
packetlost · 14h ago
Your definition of elegant is definitely different from mine lol
Izkata · 13h ago
Yeah, I'd define elegant as something like "unexpectedly simple and easy to understand", relative to the simple approach to the problem at hand.
latexr · 15h ago
> Amanda

Am I missing a reference? If not, may I suggest “Ada”?

https://en.wikipedia.org/wiki/Ada_Lovelace

Or even better, “Grace”. Seems to fit your description better.

https://en.wikipedia.org/wiki/Grace_Hopper

https://www.youtube.com/watch?v=gYqF6-h9Cvg

No comments yet

socalgal2 · 15h ago
Mort: Someone who lacks sense of life, looks dumbfounded, and has only a limited ability to learn and understand. (urban slang)

Elvis: A famous rock star

Enstein: A famous physicist

Amanda: ???

Mort, Elvis, Enstein are referencing things I've heard of before. What is Amanda referencing? is there some famous person named Amanda? Is it slang I'm unaware of?

noisy_boy · 3h ago
She is not familiar because she is referencing things that are rare. You haven't seen an "Amanda" because she is rare. Just like common sense.
layer8 · 14h ago
Amanda is clearly the most beloved of those four. ;)
dudeinjapan · 15h ago
Amanda try to be like.
rawgabbit · 16h ago
They were also missing Steve Jobs. Having had the displeasure to work with Microsoft tools and code for most of my career. Microsoft never in my experience just plain works. I had to fight Microsoft every step of the way to get things to "work". And when it does it invariably breaks in the next major software release.
SJC_Hacker · 16h ago
Microsoft is/was far more developer friendly than Apple

MFC may have been a steaming pile of doodoo, but at least the tools for developing on the OS were generally free and had decent documentation

st3fan · 16h ago
Pretty sure this was not true for the longest time actually. Up to at least the mid 90s, both Apple and Microsoft had their own tools like Visual Basic/C/C++ and MPW on the Mac and none of those tools were free. You could get significant educational discounts or other deals but the tools cost real money.

Later, Xcode (or Project Builder) became pretty much free with the first release of MacOS X. You could buy a Mac and install all the tools to develop software. Very much in the spirit of NeXT. I am sure something similar happened for Microsoft around the same time.

And now of course all the tools both native from vendors + a large selection of additional third party tools are basiclly free for all major platforms.

(Disregarding things like 'app store fees' or 'developer accounts' which exists for both Apple and Microsoft but are not 100% required to build stuff.)

AllegedAlec · 17h ago
You clearly missed the entire message of the entire "three kinds of developers" sort of shit if you think that a fourth type that's perfect is what's missing from it.
rapind · 18h ago
Mort is the pragmatist, Einstein is the perfectionist, and Elvis is... let's be honest, Elvis is basically cancer to a project. I guess maybe a small dose of Elvis can help motivate?

I see the ideal as a combination of Mort and Einstein that want to keep it simple enough that it can be delivered (less abstraction, distilled requirements) while ensuring the code is sufficiently correct (not necessarily "elegant" mind you) that maintenance and support won't be a total nightmare.

IMO, seek out Morts and give them long term ownership of the project so they get a little Einstein-y when they realize they need to support that "pile of if statements".

As an aside, I'm finding coding agents to be a bit too much Mort at times (YOLO), when I'd prefer they were more Einstein. I'd rather be the Mort myself to keep it on track.

dahart · 16h ago
Your comment made me think Mort represents efficiency, Einstein represents quality, and Elvis represents risk. The ideal combination is difficult, and it changes over time. If anyone knew what the ideal combination was, companies would never fail. Risk can get something started, and lack of it can eventually kill software. In fact, I would argue the vast majority of software we’ve seen so far dies an eventual death due in part to its inability to take risk and change and adapt - it might be not enough Elvis in the long term. Too much risk can kill something before it takes off and can undermine the ability to ship and to ship quality. Generally speaking my gut instinct was to (perhaps like you) align with and defend Morts; the business objective is the only thing that matters and pays the bills, and there is certainly a class of Morts that doesn’t write spaghetti code, and cares about quality and tries new things, but prioritizes work toward the customer and not code wonkery. Anyway… this is too probably abstract to be very useful and I made it worse and more abstract, but it’s fun to hypothesize!
kyralis · 10h ago
Mort might mean short-term efficiency, but those solutions are where technical debt and unmaintainable organically-grown complexity come from. That has its time and place, but it must be balanced to not doom anything but short-lived projects.
KronisLV · 17h ago
> Elvis is basically cancer to a project. I guess maybe a small dose of Elvis can help motivate?

Sometimes teams are quite stuck in their ways because they don’t have the capacity or desire to explore anything new.

For example, an Elvis would probably introduce containers which would eliminate a class of dependency and runtime environment related issues, alongside allowing CI to become easier and simpler, even though previously using SCP and Jenkins and deploying things into Tomcat mostly worked. Suddenly even the front end components can be containers, as can be testing and development databases, everyone can easily have the correct version locally and so on.

An unchecked Elvis will eventually introduce Kubernetes in the small shop to possibly messy results, though.

rapind · 5h ago
> An unchecked Elvis will eventually introduce Kubernetes in the small shop to possibly messy results, though.

Elvis and Einstein joined powers to create 14 new javascript package managers over a handful of years while Mort tore his hair out.

chickenbuckcar · 17h ago
Unfortunately for a fast growing industry (think AI, LLM), Mort + Elvis will be much more success then any combination with Einstein. The speed to adapt a new technology into a specific domain outweight your ability to scale for long term (think the oracle vs sybase in server)
freshtake · 18h ago
The best engineers are all three, and can turn up or down these tendencies depending on what's required for the project, business, or personal goals. These should not be fixed in proportion over time, as they are each useful in different circumstances.

I spent time at Microsoft as well, and one of the things I noticed was folks who spent time in different disciplines (e.g. dev, test, pgm) seemed to be especially great at tailoring these qualities to their needs. If you're working on optimizing a compiler, you probably need a bit more Einstein and Mort than Elvis. If you're working on a game engine you may need a different combination.

The quantities of each (or whether these are the correct archetypes) is certainly debatable, but understanding that you need all of them in different proportions over time is important, IMHO.

SteveJS · 16h ago
Personas are a great tool. IMO - By the time you arrived these had transformed into bad shorthand. (I say this having been in Devdiv through those years.)

Elvis is not a persona - it is an inside baseball argument to management. It suffered a form of Goodhart’s law … it is a useful tool so people phrase their arguments in that form to win a biz fight and then the tool degrades.

Alan Cooper, who created VB advocated personas. When used well they are great.

The most important insight is your own PoV may be flawed. The way a scientist provides value via software is different than how a firmware developer provides value.

https://www.amazon.com/Inmates-Are-Running-Asylum/dp/0672316...

jama211 · 14h ago
I think this is somewhat dangerous, it can lead you to categorise people unfairly and permanently. Also, in my experience this has a critical flaw - the managers love morts in my experience, not Elvises. They don’t care about the technical details, so “fastest and fits the business outcome the most” is ideal.

Also the actual solution is proper team leadership/management. If you have morts, make sure that code quality requirements are a PART of the requirements their code must pass, and they’ll instead deliver decent work slightly slower. Got an elvis? Give more boundaries. Got Einsteins? Redefine the subtasks so they can’t refactor everything and give deadlines both in terms of time but also pragmatism.

Either way, I don’t love this approach, as it removes the complexity from the human condition, complexity which is most important to keep in mind.

zmmmmm · 11h ago
I agree with you and one of the most important ways is that it bakes in an assumption that people cant grow, learn and change.

Life is all about learning, adapting and changing. Great leaders see the potential growth in people and are up for having hard conversations about how they can improve.

Even if people do have these personality traits as life long attributes, that doesn't define them or prevent them from learning aspects of the others over time.

whiteboardr · 15h ago
If personas in this context resonate with you, I highly recommend - even if only remotely related - to read Rich Gold’s The Plenitude.

https://mitpress.mit.edu/9780262543798/the-plenitude/

Waterluvian · 18h ago
Yeah… it’s like picking three points in an n-dimensional matrix. It is sufficient for creating an illusion of being scientific about it.
tempodox · 18h ago
Indeed. And besides that, all three are really bad parodies. Mort is the only one where the product actually works, because for him that’s an explicit goal. With the other two, a working product is mere coincidence.
jama211 · 14h ago
Precisely. Also people underestimate the power of mort code. The world runs on it, and besides, at the end of the day unless you are an executive or own significant stock in the company, making decisions about speed/outcome vs tech debt actually isn’t your job IMO. Give your opinions and advice but at the end of the day build what they ask you to in the manner they’re happy for you to build it - if they demand speed over quality that’s on them.

And you can improve everything with a system. A team of morts forced into a framework where testers/qa/code review find and make them fix the problems along the way before the product is shipped is an incredibly powerful thing to behold.

s1mplicissimus · 15h ago
Exactly the expectation value for "analysis of types of developers done by people who really don't care about people"

"so, there's 3 boxes. no more, no less. why? i have a gut feeling. axis? on a case by case basis. am i willing to put my money where my mouth is? heallnaw!"

cgarvis · 17h ago
Reminds me of the three tribes of software programmers. https://josephg.com/blog/3-tribes/amp/

Mort == maker Elvis ==? hacker Einstein == poet

HumblyTossed · 17h ago
I lean towards Mort for most things, Einstein for the key things and freaking NEVER Elvis.
jama211 · 14h ago
Mort is the only one who does their job from the perspective of the business owner.
whilenot-dev · 13h ago
I think implementing stuff correctly (Einstein) is sometimes more important than doing what you're told to do (Mort) - requirements don't always represent business interests.
augusto-moura · 7h ago
Can you share what was the new framework? At least some of the details, it sound interesting what was the new understand
roblh · 18h ago
That’s super interesting. What was the ideal ratio, back then. Is it still the same now? Or I guess maybe it depends on the specific role and could be different in each. What exactly do you find valuable about thinking in these terms?
bravetraveler · 19h ago
Animal Farm, but with a twist
james_marks · 18h ago
This is a helpful framing, thank you.

Something was bugging me after an interview with a potential hire, and now I can articulate that they were too much Einstein and not enough Mort for the role.

purplezooey · 18h ago
Perhaps best modeled as a waveform that starts before morning coffee. Each engineer has a vector of spectral magnitudes.
TZubiri · 3h ago
I'd love one of those old facebook quizzes like "take this quizz to figure out which friends character you are", but for figuring out whether you are a Mort, an Elvis or an Einstein
DonHopkins · 19h ago
Is Microsoft so Balkanized that they have a Developer Division, Developer Multiplication, Developer Addition, and Developer Subtraction (where you get transferred to before they fire you)?
Waterluvian · 18h ago
The larger a corporation gets, the more npm packages you have to first install before accomplishing any meaningful work.
azhenley · 19h ago
I had a lot of fun in DevDiv!
_dain_ · 17h ago
mort = paladin

elvis = thief

einstein = mage

jama211 · 14h ago
Who’s the bard? I feel like I’m more of a bard
VagabundoP · 16h ago
Amanda=Cleric
Disposal8433 · 19h ago
> three personas for software engineers

The kind of psycho-bullshit that we should stay away from, and wouldn't happen if we respected each other. Coming from Microsoft is not surprising though.

lgessler · 17h ago
Novels are fictional too. So long as they're not taken too literally, archetypes can be helpful mental prompts.
AllegedAlec · 17h ago
Stereotyping people is good.
mdaniel · 18h ago
For my frame of reference, do you think the Myers-Briggs Type Indicator are psycho-bullshit, too? Because I had characterized personas as a very similar "of course it's a generalization" and OP even said themselves "every engineer is a mix" but if you're coming from stance that bucketing people is disrespectful, then your perspective on MBTI would help me digest your stance
NeutralForest · 18h ago
I've seen those kinds of tests described as astrology for business guys. Sounds about right.
wiseowise · 18h ago
Both are made up bollocks for idiots to label people. Just write fucking code, solve business problems and go home.
fragmede · 15h ago
Everything is made up. How do you organize people into being able to solve problems?
wiseowise · 12h ago
> How do you organize people into being able to solve problems?

By not putting reductionist labels on them.

whatevertrevor · 17h ago
MBTI is absolutely bullshit, it's like one level above horoscopes and astrology, but very similar type of BS. There's also the Gallup crap that many corps were doing to evaluate the strengths and weaknesses of each employee so they could fit them into neat buckets such as "Leader" vs "Follower", as if these aren't skills people develop over time but actual personality traits.
cm2012 · 16h ago
Its kind of a common thing to say Myers-Briggs typing is useless because its pseudo-science. I dont think this is supported by the data in the way people think.

For one, many studies of identical twins raised in separate households show they have the same personality type at a much higher rate than chance.

Two, there are incredibly strong correlations in the data. In different surveys of 100k+ people, the highest earning type has twice the salary of the lowest type. This is basically impossible by chance.

The letters (like ENTJ) correlate highly to the variables of Big 5, the personality system used by scientists. Its just that it's bucketed into 16 categories vs being 5 sliding scales.

Scientific studies are looking for variables that can be tracked over time reliably, so Big 5 is a better measure for that.

But for personal or organizational use, the category approach is a feature, not a bug. It is much more help as a mental toolkit than just getting a personality score on each of the 5 categories.

BlarfMcFlarf · 11h ago
You can pick any set of axis you feel like and get similar results. “Do you like X? Wow you are an X person!”. So yeah, technically better than horoscopes, more like a “warm” reading where you tell a person what they told you earlier. But it’s entirely unclear why these axis are the right ones over a million other possible ones, if these are particularly stable categories in time and context, or if the harm of encouraging people to box themselves or others into specific stereotypes has any possible benefit to outweigh the obvious harms of simplifying stereotypes.
cm2012 · 9h ago
It's good questions. Here's why this axis is the right one in my opinion:

1) As I mentioned, it has a lot of statistically significant correlations, including to all the variables of the Big 5. Example: Surveys show that % of the overall population that is each type (like INFJ) is very consistent across time and populations.

2) Beyond that, youre right, there are a lot of personality systems with pros and cons. But Myers-Briggs has by far the must supporting materials, tools, ease of use, and so on. I think its the quickest to make useful to the average person.

3) I've found it really helpful as a lens for self analysis in my own life.

tptacek · 18h ago
I briefly flagged the preceding comment for "psycho-bullshit" before concluding that it was just a really forceful way to say the developer personas were pseudoscientific (of course they are, nobody is claiming otherwise) but I think it's worth calling out that MBTI is also pseudoscientific; it has no real validity, or even test-test reliability.
mdaniel · 17h ago
I would guess that lack of repeatability happens a lot with any self-reporting scheme, but I am sorry that I accidentally picked such a polarizing "people generalization" scheme to use as contrast. Maybe I should have used "introvert versus extrovert" or something

Anyway, their sibling comment told me what I wanted to know, so in that way I'm wasting more of my time contributing to this

tptacek · 14h ago
I thought the Microsoft developer persona thing was cute. I didn't think anybody was claiming it was science!
Zarathruster · 17h ago
I'll leave it to others to make the argument for why Jungian psychology (and by extension, MBTI) is/isn't bullshit.

But since nobody has mentioned the alternative yet, the framework used by anyone in any scientific capacity is the Big Five: https://en.wikipedia.org/wiki/Big_Five_personality_traits

The link between programming and conscientiousness seems fairly straightforward. To fully translate Mort/Elvis/Einstein into some kind of OCEAN vector would take a little more effort.

cm2012 · 16h ago
Big 5 correlates to MBTI very closely in any case. With the exception of neuroticism.
fragmede · 15h ago
Okay, it's psycho bullshit. How would you organize and sort people instead?
neehao · 6h ago
This article covers a lot of the points: https://www.gojiberries.io/building-together-separately-chal...

"A single page on Doordash can make upward of 1000 gRPC calls (see the interview). For many engineers, upward of a thousand network calls nicely illustrate the chaos and inefficiency unleashed by microservices. Engineers implicitly diff 1000+ gRPC calls with the orders of magnitude fewer calls made by a system designed by an architect looking at the problem afresh today. A 1000+ gRPC calls also seem like a perfect recipe for blowing up latency. There are more items in the debit column. Microservices can also increase the costs of monitoring, debugging, and deployment (and hence cause greater downtime and worse performance)."

xmprt · 17h ago
This is one of the reasons I fear AI will harm the software engineering industry. AI doesn't have any of these limitation so it can write extremely complex and unreadable code that works... until it doesn't. And then no one can fix it.

It's also why I urge junior engineers to not rely on AI so much because even though it makes writing code so much faster, it prevents them from learning the quirks of the codebase and eventually they'll lose the ability to write code on their own.

edem · 27m ago
id argue that it already harmed the industry
inkyoto · 7h ago
> It's also why I urge junior engineers to not rely on AI so much because even though it makes writing code so much faster […]

I am afraid, the cat is out the bag, and there is no turning back with GenAI and coding – juniors have got a taste of GenAI assisted coding and will persevere. The best we can do it educate them on how to use it correctly and responsibly.

The approach I have taken involves small group huddles where we talk to each other as equals, and where I emphasise the importance of understanding the problem space, the importance of the depth and breadth of knowledge, i.e. going across the problem domain – as opposed to focusing on a narrow part of it. I do not discourage the junior engineers from using GenAI, but I stress the liability factor and the cost: «if you use GenAI to write code, and the code falls apart in production, you will have a hard time supporting it if you do not understand the generated code, so choose your options wisely». I also highlight the importance of simplicity over complexity of the design and implementation, and that simplicity is hard, although it is something we should strive for as an aspiration and as a delivery target.

I reflect on and adjust the approach based on new observations, feedback loop (commits) and other indirect signs and metrics – this area is still new, and the GenAI assisted coding framework is still fledging.

fsckboy · 17h ago
>AI can write extremely complex and unreadable code that works... until it doesn't. And then

AI can fix it

I'm not defending or encouraging AI, just saying that argument doesn't work

xmprt · 17h ago
I'm talking about cases where even AI can't fix it. I've heard of a lot of stories where people vibe code their applications to 80% and then get stuck in a loop where AI is unable to solve their problems.

It's been well documented that LLMs collapse after a certain complexity level.

fsckboy · 16h ago
you were also talking about the future (as AIs get better and better). as of now AIs cannot write code too complex for better programmers to understand. your point holds for armies of low skill programmers, but you're just raising a fear and haven't come close to proving the case you're trying to make. We already know as counterweight that being first to the market with very substandard code generally wins over taking your time to get it right, so why should it be different with AI?
AstroBen · 10h ago
> We already know as counterweight that being first to the market with very substandard code generally wins

..we do?

Who created short stories as used in Tiktok/IG?

The first touch screen phone?

First social media app?

Was Google the first?

I mean I almost see the opposite of what you're saying..

CamperBob2 · 15h ago
Most programmers can't make much sense of the output of a C compiler, either. We'll all be in that boat before long.

(To anticipate the usual reaction when I point that out: if you're going to sputter with rage and say that compilers are deterministic while AI isn't, well... save it for a future argument with someone who can be convinced that it matters.)

cowboylowrez · 3h ago
on the other hand, I'd love to read an argument that persuades me that determinism doesn't matter in this case because I can't form any mental model that makes determinism-ed-ness a non factor in my decision making. of course, this comes with a disclaimer that I have no experience as a vibe coder lol
YetAnotherNick · 15h ago
> AI doesn't have any of these limitation

> AI is unable to solve their problems.

You are contradicting yourself. AI works worse than humans in places where cognitive load is required, and so it can't cross the boundary of cognitive load. If say it becomes better at managing cognitive load in the future, then in any case it doesn't matter as you can ask it to reduce the cognitive load in the code and it would.

pessimizer · 17h ago
Thus far, and granted I don't have as much experience as others, I just demand that AI simplify the code until I understand everything that it is doing. If I see it doing something in a convoluted way, I demand that it does it in the obvious way. If it's adding too many dependencies, I tell it to remove the goofy ones and write it the long way with the less capable stdlib function or helped by something that I already have a dependency on.

It's writing something for me, not for itself.

balder1991 · 15h ago
Yeah, as much as I don’t like to use AI to write large portions of code, I’m using it to help me learn web development and it can feel like a following a tutorial, but tailored to the exact project I want.

My current approach is creating something like a Gem on Gemini with custom instructions and the updated source code of the project as context.

I just discuss what I want, and it gives me the code to do it, then I write by hand, ask for clarifications and suggest changes until I feel like the current approach is actually a good one. So not really “vibe-coding”, though I guess a large number of software developers who care about keeping the project sane must be doing this.

mupuff1234 · 17h ago
Or maybe it will actually increase the quality of software engineering because it will free up the cognitive load from thinking of the low level design to higher level architecture.
dsego · 17h ago
That's my fear, it will become a sort of a compiler. Prompts will be the code and code will be assembly, and nobody will even try to understand the details of the generated code unless there is something to debug. This will cause the codebases to be less refined with less abstraction and more duplication and bloat, but we will accept it as progress.
lukeschlather · 14h ago
For me, I think it makes it more likely I will pick simple abstractions that have good software verification. Right now the idea of a webservice that has been proven correct to a spec is ridiculous, no one has time to write that, but it seems more likely that sort of thing will become ordinary. Yes, I won't be able to hold the webservice in my head, but reviewing it and making correct and complete statements about how it functions will be easier.
mupuff1234 · 16h ago
Funny, I'd say that codebases nowadays usually have too many abstractions.
dsego · 15h ago
Some certainly do. I have also noticed that the format of the code and structure used depend more on tools and hardware the developer uses rather than some philosophical ideal. A programmer with a big monitor could prefer big blocks of uninterrupted code with long variable names. Because of the big screen area, they can see the whole outline and understand the flow of this long chunk of code. Someone on a small 13" laptop might tend to split big pieces of code into smaller chunks so they won't have to scroll so much because things would get hidden. The other thing is the IDE or editor that's used. A coder who relies on the builtin goto symbol feature might not care as much about organizing folder and file structure, since they can just click on the method name, or use the command palette that will direct them to that piece of code. Their colleague might need the code to be in well organized file structure because they click through folders to reach the method.
mupuff1234 · 15h ago
Those are all examples for why having a single source for code generation would most likely simply things - basically we will have a universal code style and logic, instead of every developer reinventing the wheel.

And let's face it, 95% of software isn't exactly novel.

alphazard · 13h ago
The bit about "smart developer quirks" looks suspiciously like the author only understands code that they have written, or is in a specific style that they recognize. That's not the biggest driver behind cognitive load.

Reducing cognitive load comes from the code that you don't have to read. Boundaries between components with strong guarantees let you reason about a large amount of code without ever reading it. Making a change (which the article uses as a benchmark) is done in terms of these clear APIs instead of with all the degrees of freedom available in the codebase.

If you are using small crisp API boundaries to break up the system, "smart developer quirks" don't really matter very much. They are visible in the volume, but not in the surface area.

djmips · 13h ago
https://news.ycombinator.com/newsguidelines.html

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

hinkley · 13h ago
I learned pretty early on that people get really tired of optimization of code that is directly in the call stack they have to breakpoint in. Later on I clocked some of that as code smells pulling attention and analysis time during debug work.

But the trick I found is that if you can extract a function for only the part of the code you’re optimizing/improving, and then make your change in a single commit, two things happen. One, it’s off the code path, so out of site, out of mind. Two, people are more forgiving of code changes they don’t like but can roll back by reverting a single commit. That breaks down a bit with PRs, since they tend to think of the code as a single commit. But the crisp boundaries still matter a lot.

srcreigh · 13h ago
What do you mean about “people get really tired of optimization of code that is directly in the call stack they have to breakpoint in”? What’s the context where everybody else is using breakpoints?
nycdotnet · 13h ago
In some software platforms, the tooling makes it really easy to use a debugger to see what’s happening, so it’s common for everyone on the team to use them all the time.

The comment you’re responding to mentioned pulling code into a function. As an example, if there’s a clever algorithm or technique that optimizes a particular calculation, it’s fine to write code more for the machine to be fast than the human to read as long as it’s tidy in a function that a dev using a debugger can just step over or out of.

hinkley · 12h ago
More succinct than I managed.
hinkley · 12h ago
Anything that is more complex than the simplest thing that can work, needs to be out in leaf node functions any time that is remotely possible, if you want to remain popular with your coworkers. It is, I believe, part of the calculus of “premature optimization” nobody wants to have to think about complex code when looking for other bugs. Kernighan’s Law also informs on this topic.

There are code improvements that improve legibility, correctness, and performance. There are ones that improve two of those three qualities. You can use those pretty much anywhere and people will pick a reason to like the code you modified in such a manner.

But if you put “clever” code high in the call graph, get ready for grumbling from all corners. If it happens to be near where legitimate bugs tend to live, get ready for a lot of it.

I would probably also add that this advice goes hand in glove with Functional Core, Imperative Shell. The surface area for unintended consequences in pure code is much tighter, so people won’t have to scan as much code to narrow down the source of a strange interaction because there aren’t interactions to be strange. I don’t need to look at code that has a single responsibility that is orthogonal to the problem I’m researching. Until or unless I become desperate because nothing else has worked so far.

zmmmmm · 12h ago
> If you are using small crisp API boundaries to break up the system, "smart developer quirks" don't really matter very much

To me this is the upside of the microservices concept. Of course, true microservices take it way too far. But once you tell two teams they can only talk to each other with APIs and make them use tooling that properly defines what those are (schemas etc) .... all of a sudden they are forced to draw those boundaries well and then stick to them. And they get really conservative about changing them and think hard about what the definitions should be up front. It's sort of perversely sticking technical friction in at the points where you want there to be natural conservatism around change.

SickOfItAll · 13h ago
It’s simple, but developers are generally terrible at it, especially the ones who are ‘smart’: Aim to think in terms of contracts and interfaces, not implementation. I don’t want to know what’s in your black box in order to understand how it works; if you make me read your code, you’ve done it wrong.
zakirullin · 13h ago
Having a clear and minimalistic API doesn't mean that the underlying code is easy to understand. It's good when things just work and you can use an API, but most of the time you have to dig whatever is under the rug.
alphazard · 9h ago
> Having a clear and minimalistic API doesn't mean that the underlying code is easy to understand.

Doesn't need to be. The API tells me what it does, hopefully there is a test suite to assure me. I can add to the test suite if I have a question, or want to lock in a behavior.

When it's a third party library, everyone assumes it's supposed to work? but when it's a system in the same repository, all of a sudden all bets are off, and you need to understand it fully to get anything done?

> It's good when things just work and you can use an API, but most of the time you have to dig whatever is under the rug.

If most changes require simultaneously changing multiple areas of the code, such that changing one system implies changing every other system with roughly equal probability, then it's not well designed.

I don't know what else to tell you. It's going to be hard to iterate or maintain that system, in part because it requires a high cognitive load. None of the code in such a system provides any cognitive leverage. You can't rule out, or infer behavior of a large amount of code, by reading a small amount.

If such a system is important, then part of the strategy has to be to improve the architecture.

Buttons840 · 20h ago
It's been said: "Document the why, not the what."

I have a hard time separating the why and the what so I document both.

The biggest offender of "documenting the what" is:

    x = 4  // assign 4 to x
Yeah, don't do that. Don't mix a lot of comments into the code. It makes it ugly to read, and the context switching between code and comments is hard.

Instead do something like:

    // I'm going to do
    // a thing. The code
    // does the thing.
    // We need to do the
    // thing, because the
    // business needs a
    // widget and stuff.
    
    setup();
    t = setupThing();
    t.useThing(42);
    t.theWidget(need=true);
    t.alsoOtherStuff();
    etc();
    etc();
Keep the code and comments separate, but stating the what is better than no comments at all, and it does help reduce cognitive load.
marginalia_nu · 19h ago
I generally don't mind documenting both when it's merited. Sometimes you need to clarify the why, occasionally you need to clarify the what.

I think comments in general are underrated. You don't need to annotate every line like a freshman programming assignment, but on the other hand most supposed self-documenting code just isn't.

mastermage · 19h ago
sometimes you do some wack magic in just one line of code, sometimes thats necessary for performance or because what you are trying todo is inherently wack magic. Example the fast inverse square from quake. Insane magic and if you just document does inverse square approximately people would freak out. So sometimes when wack magic is used explain the wack magic (as concise as reasonable)
marginalia_nu · 19h ago
Yup.

I've got a function the gist of which is

  if (!cond())
    return val;

  do {
    // logic
  } while (cond());

  return val;

This looks like it could be simplified as

  while (cond())
    // logic
  }
  return val;
But if you do you lose out on 20% of performance due to branch mispredictions, and this is a very hot function. It looks like a mistake, like the two are equivalent, but they are actually not. So it gets a comment that explains what's happening.
brokencode · 19h ago
That feels like.. something the compiler should be optimizing for you? I would certainly be among those questioning this code.
marginalia_nu · 19h ago
The compiler can't know from the code alone which branch is more likely. This is a property of the input data and not the code. Really advanced JIT compilers can sometimes do those types of optimizations, but this is a fairly rare scenario.
whitehexagon · 16h ago
The other day I spotted Zig has @branchHint, not tried it yet, my code isnt that hot!
jijijijij · 16h ago
Isn't branch prediction mostly a CPU thing? Do you have an example with corresponding assembly?

It's not that I don't believe you about the performance impact, as I have observed the same with e.g. Rust in some cases, but I don't think it has a lot to do with the compiler judging what's more likely, but rather more or less "random" optimization differences/bugs. At least in my case, the ordering had nothing to do with likelihood, or even had a reverse correlation.

I think in your example a compiler may or may not realize the code is semantically equivalent and all bets are off about what's going to happen optimization-wise.

I mean, in the end it doesn't matter for the commenting issue, as you are realistically not going to fix the compiler to have slightly more readable code.

marginalia_nu · 16h ago
I don't have an assembly output for this particular case, but how I understand it is that the re-write basically turns it into two separate conditions, which means the branch predictor is free to model their outcomes separately.

In this case, the data is bimodal, depending on the chosen input, two likely outcomes exists. Either no looping is needed, or much looping is needed. This seemingly confuses the branch predictor when it's the same branch dealing with both scenarios.

Waterluvian · 17h ago
The challenge I have these days is deciding what belongs in code and what goes into design documentation and technical manuals.

I generally find that comments in code should explain why the code is doing non-obvious things. “This gets memoized because it’s actually important to maintain referential identity for reason X.”

jijijijij · 16h ago
The problem is, a year later the obvious things are not obvious anymore :D
dsego · 19h ago
> x = 4 // assign 4 to x

Ah, the chat gpt style of comments.

> Instead do something like:

The only negative is that there is a chance the comment becomes stale and not in sync with the code. Other coders should be careful to update the comment if they touch the code.

Buttons840 · 19h ago
If the what becomes stale, you can tell. If the why becomes stale (and it can become stale), you'll never know, unless the what is also included.
fcantournet · 16h ago
The why becoming stale is a feature, that's when you know there is a VALID reason you thought this looked weird and convoluted, instead of you completely missing the inherent complexity of the problem.
marcosdumay · 18h ago
The reason you want people to document the "why" is because you can easily check if one reason has become stale, but you can never check if every single possible reason is still valid.
jmpeax · 7h ago
> It makes it ugly to read, and the context switching between code and comments is hard.

That's a symptom of bad syntax highlighting, fix it and you're good to go.

ivanjermakov · 10h ago
Such comments are never maintained. Implementation will change 5 times this year and the comment will get stale and confusing.

At my current projects we drop all code comments except for some really tricky logic and very high level docs.

mastermage · 19h ago
I am trying to shift more to this style of commenting. Because I am not a programmer by education. I am a physicist and most physicists are like I want comments in my code so do it like this.. and then you as a student do what you need to for a good grade.
e40 · 16h ago
This is why I make lists. Of everything. Checklists for technical processes (work and personal). Checklists for travel. Little "how to" docs on pretty much everything I do that I'm sure I won't remember past a week.

It completely removes the stress of doing things repeatedly. I recently had to do something I hadn't done in 2 years. Yep, the checklist/doc on it was 95% correct, but it was no problem fixing the 5%.

germandiago · 15h ago
I am kind of a bit lazy person at times. But when I need to absolutely not to forget anything that can become messy, a checklist is difficult to beat and as you say, it removes all the stress: you elaborate it and when the time comes, keep applying it or applying all at once.

It works very well for me.

ivan_ah · 12h ago
+1 for lists. I used to procrastinate and suffer every year to do my taxes. Always submit last minute (stress) or late (stress + penalties), but ever since I created a checklist for all the steps (data extractions, data transform in Excel, data loading into tax software, etc.) the tax season is no longer this bad. It still takes me a day, but no stress... just grind though the checkboxes!
schrectacular · 15h ago
Temporal cognitive load reduction A.K.A. "thanks, past self!"
computerdork · 14h ago
Yeah, my OneNote notebook is huge now, littered with checklists, processes, and just notes on different subjects. It's like a secondary memory for me at this point, like longterm storage for rarely used but important info.
rossant · 13h ago
Same. Like what to take in my luggage. Grocery list. To do list. And everything else. I use Dynalist. Great tool. My secret super power, by those who also made Obsidian.
mettamage · 15h ago
In like Apple Notes or what do you store the checklists in?
mimischi · 14h ago
I don’t think there’s a correct answer here. Whatever floats your boat. Do you want to scribble things by hand into a physical notebook? Great! Want to use Notepad on Windows for .txt? Or create a .docx using Word?

Don’t follow trends and seek the “next best way to hack your productivity”. Most of those things are snake oil and a waste of time. Just use whatever you have available and build a process yourself. That’s what most people have done that are successful in applying this. They just use the tool they are comfortable with, and don’t over engineer for the sake of it

computerdork · 14h ago
I like the digital note-taking tools, Evernote and Onenote - actually, used to use Evernote, but it started slowing down after my notebooks became too large, so switched to Onenote.

And eventhough Onenote is MS product and Evernote was the original that OneNote copied off of, OneNote is a better engineered piece of software (I have tons of notes and a few of them very large documents), and Onenote rarely has problems.

germandiago · 15h ago
Depending on the task the most effective way I found sometimes is a hand-written paper stuck on a wall.

Why so? It is always in front of you, it reminds you what you need to do and does not get out of sight, which helps keep the focus.

When you bury it or set it somewhere else it is very easy to bury it.

e4325f · 15h ago
I use Apple Reminders
ajuc · 14h ago
Txt files are hard to beat
Xmd5a · 14h ago
The right way to do it is to have one person (preferably you) go over the checklist, while someone else (typically a son) do the actual thing.
gblargg · 4h ago
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" -- Brian Kernighan
neonrider · 14h ago
Love it. Make code accessibility a first-class citizen. Turn the rule books and their principles into guidelines. A smart coder knows to follow rules. A master knows code is meant to be read and develops contextual awareness for when and why to break a rule, or augment it, as the case may be. So, reintroduce judgment and critical thinking in your coding practice. Develop an intuitive feel for the cognitive costs and trade-offs of your decisions. Whether you choose to duplicate or abstract, think of the next person (who sometimes is you in six months).

For those asking why author doesn't come up with their own new rules that can then be followed, this would just be trading a problem for the same problem. Absentmindedly following rules. Writing accessible code, past a few basic guidelines, becomes tacit knowledge. If you write and read code, you'll learn to love some and hate some. You'll also develop a feel for heavy handedness. Author said it best:

> It's not imagined, it's there and we can feel it.

We can feel it. Yes, having to make decisions while coding is an uncomfortable freedom. It requires you to be present. But you can get used to it if you try.

rhameetman · 11h ago
> Make code accessibility a first-class citizen.

This is a good article but the main thing that bugs me about it is that the author completely disregards germane overhead.

Germane overhead is about recognition and practice and, at scale, it matters just as much.

Intrinsic and extraneous overhead is about the information itself and how it’s presented.

Germane overhead is about the receiver so in order to make code accessibility a first-class citizen you can’t ignore it.

zakirullin · 14h ago
Thanks a lot! You've nailed it :)
david_draco · 17h ago
Unit and Integration testing is great for decreasing cognitive load too. When you are staring at an error stack trace of a complex code base, and go through mentally what could have played out to cause this, it's great to have confidence in components due to testing. Hypothesis/QuickCheck is allows dropping entire classes of worries.
supernuova · 16h ago
Yes, exactly! If you can trust that each unit is working in all the ways covered in the tests, you can focus on the unit you are developing and not have to keep the other unit in your mind while working on it. And if there is an untested edge case you think might be the cause, you can test that edge case independently from the unit where the issue is occurring, and confirm if it produces the expected result.

Good, trusted unit tests are the difference between encapsulation reducing or increasing/complicating cognitive load. (And similar but between components for integration tests).

That being said, there will be rare times that the issue is due to something that is only an edge case due to an implementation detail several units deep, and so sometimes you do still need the full picture, but at least it lets you save doing that until you're stumped, which IMO is well worth it if the code is overall well-designed and tested.

reactordev · 19h ago
Boy if I had a dollar for every “we’ve been doing it wrong” posts.

The issue with this stance is, it’s not a zero sum game. There’s no arriving to a point where there isn’t a cognitive load on the task you’re doing. There will always be some sort of load. Pushing things off so that you reduce your load is how social security databases end up on S3.

Confusion comes from complexity. Not a high cognitive load. You can have a high load and still know how it all works. I would better word this as Cognitive load increases stress as you have more things to wrestle about in your head. Doesn’t add or remove confusion (unless that’s the kind of person you are), it just adds or removes complexity.

An example of a highly complex thing with little to no cognitive load due to conditioning, driving an automobile. A not-complex thing that imparts a huge cognitive load, golf.

marginalia_nu · 20h ago
I think it's pretty tiresome that "smart authors" are blamed for writing complex code. Smart authors generally write simpler code. It's much harder to write simple code than complex for reasons that boil down to entropy -- there are simply many more ways to write complex code than simple code, and finding one of the simple expressions of program logic requires both smarts and a modicum of experience.

If you try to do it algorithmically, you arguably won't find a simple expression. It's often glossed over how readability in one axis can drive complexities along another axis, especially when composing code into bite-size readable chunks the actual logic easily gets smeared across many (sometimes dozens) of different functions, making it very hard to figure out what it actually does, even though all the functions check all the boxes for readability, having a single responsibility, etc.

E.g. is userAuthorized(request) is true but why is it true? Well because usernamePresent(request) is true and passwordCorrect(user) is true, both of which also decompose into multiple functions and conditions. It's often a smaller cognitive load to just have all that logic in one place, even if it's not the local optimum of readability it may be the global one because needing to constantly skip between methods or modules to figure out what is happening is also incredibly taxing.

aDyslecticCrow · 18h ago
I like to call that a leaky abstraction. The author used "UNIX I/O" as a great example. It perfectly hides the complexity and abstraction is such a way that the programmer never needs to know the internals. It has sealed all the juicy complexity in a watertight container that the user of the abstraction never needs to peak inside of.

The auth example may not be. You may need to do validatePassword(user) for passwordCorrect(user) to be true, which then forces you to open up a hole in the abstraction that is userAuthorized(request) and peak inside. userAuthorized() has leaked out its logic, it has failed as an abstraction. Its a box with 3 walls and no roof that blocks visibility to important logic rather than hides away the complexity.

munchlax · 15h ago
If you're refering to fopen and friends, that's leaky too. Fopen alone has an append mode which was meant for tapes. And binary mode that was probably useful some day, but hasn't been since idk when. Fsync has its own set of trouble.

Read the fine print.

baobabKoodaa · 20h ago
When an article like this uses the term "smart people", I'm always a bit confused if they mean actually smart people, or not-that-smart-people-who-think-highly-of-themselves. Because there's a lot more people in the latter category, and in my view they are the ones building unnecessary complexity into codebases.

To clarify, when I say "not-that-smart-people", I don't mean "stupid people". You need to be beyond some basic level of intelligence in order to have the capability to overcomplicate a codebase. For lack of a better metric, consider IQ. If your IQ is below 80, you are not going to work day-to-day overcomplicating a codebase. You need to be slightly above average intelligence (not stupid, but also "not-that-smart") to find yourself in that position.

marginalia_nu · 20h ago
It takes intelligence to see where to make a change though.

If you make a change at the wrong place, you add more complexity than if you put the change in the right place. You often see the same thing with junior developers, in that case due to a limited mental model of the code. You give them a task that from a senior developer would result in a 2 line diff and they come back changing 45 lines.

baobabKoodaa · 19h ago
??

I suspect that we agree with each other and you misread my earlier comment.

marginalia_nu · 18h ago
Yeah maybe, there were a lot of negations in there...
incognito124 · 20h ago
You reminded me of a rule of thumb that says: Keep the complexities in data structures and simplicity in algorithms
hibikir · 20h ago
I have seen subscription systems built following that rule of thumb. It collapses pretty well, as the data structure then becomes impossible to engage with unless you are an expert, and the callers are never experts.

Things make more sense when the data structure lives in a world where most, if not all illegal atates become unrepresentable. But given that we often end un building APIs in representations with really weak type systems, doing that becomes impossible.

winwang · 16h ago
Ironically, the attempt to prevent illegal states may create "complex" code (quoted since it may be due to perceived or actual complexity).
weiliddat · 16h ago
> It's much harder to write simple code than complex for reasons that boil down to entropy -- there are simply many more ways to write complex code than simple code, and finding one of the simple expressions of program logic requires both smarts and a modicum of experience.

Also effort, there are smart people who couldn't be bothered to reduce extraneous load for other people, because they already took the effort to understand it, but they don't have the theory-of-mind to understand that it's not easy for others, or can't be bothered to do so.

> I have only made this letter longer because I have not had the time to make it shorter. - Blaise Pascal

Good rule of thumb I find is, did the new change make it harder or easier to reason about the change / topic?

If we go back to the concept of cognitive load, it's fine cognitive load goes up if the solution is necessarily complex. It's the extraneous bit that we should work to minimize, reduce if possible.

zahlman · 12h ago
> Smart authors generally write simpler code. It's much harder to write simple code than complex for reasons that boil down to entropy -- there are simply many more ways to write complex code than simple code, and finding one of the simple expressions of program logic requires both smarts and a modicum of experience.

The people writing the complex code generally seem to think they're smart.

That was me, once. And I was smart, but I was also applying my smarts very, very poorly.

st3fan · 16h ago
"Smart authors generally write simpler code"

I don't like to generalize.

You are lucky then. I've definitely worked with super smart engineers who chose incredibly complicated solutions over more simpler and pragmatic solutions. As a result the code was generally hard to maintain and specially difficult to understand.

It is a real thing. And it generally happens with "the smart ones" because people who don't know how to make things complicated generally stick with simpler solutions. In my experience.

peacebeard · 20h ago
It’s really easy to generalize. Some smart people write simple, maintainable code. Some smart people find it fun to over-complicate. Neither is useful as a generalization in my opinion.
shreddit · 2h ago
Don't agree with "Business logic and HTTP status codes" tbh, because now i have to work with apis that do this:

Statuscode: 200 { success: false, error: "..." }

rkagerer · 6h ago
These are good tips. A lot of it boils down to writing well-organized code geared for human consumption.

Junior programmers too often make the mistake of thinking the code they write is intended for consumption by the machine.

Coding is an exercise in communication. Either to your future self, or some other schmuck down the line who inherits your work.

When I practice the craft, I want to make sure years down the line when I inevitably need to crack the code back open again, I'll understand what's going on. When you solve a problem, create a framework, build a system... there's context you construct in your head as you work the project or needle out the shape of the solution. Strive to clearly convey intent (with a minimum of cognitive load), and where things get more complicated, make it as painless as possible for the next person to take the context that was in your head and reconstruct it in their own. Taking the patterns in your brain and recreating them in someone else's brain is in fact the essence of communication. In practice, this could mean including meaningful inline comments or accompanying documentation (eg. approach summary, drawings, flowcharts, state change diagrams, etc). Whatever means you have to efficiently achieve that aim. If it helps, think of yourself as a teacher trying to teach a student how your invention works.

schnatterer · 4h ago
> business logic and http status codes Why hold this custom mapping in our working memory? It's better to abstract away your business details from the HTTP transfer protocol, and return self-descriptive codes directly in the response body: { "code": "jwt_has_expired" }

While the logic behind it sounds reasonable, REST does the exact opposite with the same goal: simplicity, easy to learn, i.e. reduce mental load. I know there are other reasons for REST/SOAP/Graphql, etc. Still makes mental load a somewhat subjective matter to me.

hn_throwaway_99 · 4h ago
In my experience, though, a lot of "REST in the real world" failed at its lofty original goals, precisely because its original goals required too much cognitive load.

The reason REST largely succeeded (or, rather, what I like to refer to as "REST-lite") is because people who wanted to build stuff quickly on the web realized "Hey, I don't need all this protocol complexity (see: SOAP), I can just make simple, human-readable API calls over the same HTTP layer my browser uses anyway".

There is other stuff in "official REST" that I think has some value, like the noun/verb structure of API routes, but shoehorning API-level error codes into HTTP status codes has been a disaster IMO. Every time I've seen this done I've seen the same issues come up again and again and new developers constantly have to rediscover solutions and problem spots. Does "404" mean the API endpoint doesn't exist, or that particular resource doesn't exist? How do I map my very specific API error to rather generic HTTP status codes? Does a status code error mean a problem with the networking or the application?

Too · 4h ago
The article misses that http status code is not a custom mapping, it’s a standard mapping. Using this standard, most http libraries will already be equipped with features to handle them, for example automated retries and backoffs on a 429 with Retry-After.

Replacing this standard with custom strings in the response body is terrible advice. Even if we all could have wished that http status codes should have been human readable strings rather than numbers. Augmenting the standard response with additional custom information is still something you can and should do as cherry on the top, or if you have many conditions falling under the same standard code. Like, don’t shoehorn something custom into 418 I’m a teapot just because it happened to be unused.

noiv · 19h ago
I spend a few decades in the industry and in even more teams. I think, the quality of code strongly correlates with the team's ability to articulate its members cognitive load and skills. In some projects it is just not opportune to point out a need to skill up, so everybody just accepts whatever in PRs and quality never gets any better.

On the other end of the spectrum you hear sentences starting with: "It would help me to understand this more easily, if ...".

Guess, what happens over time in these teams?

PeterWhittaker · 19h ago
The most important user of my temporary variables, à la "isValid" or "isSecure" is older/later me.

I could be adding a new feature six months later, or debugging a customer reported issue a week later. Especially in the latter case, where the pressure is greater and available time more constrained, I love that earlier/younger me was thoughtful enough to take the extra time to make things clear.

That this might help others is lagniappe.

zahlman · 12h ago
> A lagniappe (/ˈlænjæp/ LAN-yap, /lænˈjæp/ lan-YAP) is "a small gift given to a customer by a merchant at the time of a purchase" (such as a 13th doughnut on purchase of a dozen), or more broadly, "something given or obtained gratuitously or by way of good measure."[2] It can be used more generally as meaning any extra or unexpected benefit.[3]

> The word entered English from the Louisiana French adapting a Quechua word brought in to New Orleans by the Spanish Creoles.

... I see.

PeterWhittaker · 7h ago
Quite a lovely word. Learned it from Steinem writing a foreword for a Doonesbury collection in which she excoriated Buckley for his use of the term to describe the fourth panel, the main punchline often having come in the third, in his foreword for a previous collection.
baazaa · 6h ago
I would just add the IsAllowed etc. as a comment next to the relevant line. Often the explanation is bigger than what you'd want in a variable name, I find it less overhead than making more variables, and it makes better use of screen-space.

I'd only lean towards intermediate variables if a) there's lots of smaller conditionals being aggregated up into bigger conditionals which makes line-by-line comments insufficient or b) I'm reusing the same conditional a lot (this is mostly to draw the reader's attention to the fact that the condition is being re-used).

xiphmont · 11h ago
This is actually some wonderful work that succinctly explains a lot of my experience. Much of how I was formally taught to program is counterproductive to the big picture the second someone else has to understand the code. It's part of the reason that I hate dealing with Rust and C++, and breathe a sigh of relief when the codebase I need to suck into my head is good old C. C offers fewer ways to hide all the working code in six layers of templates.
zakirullin · 11h ago
Nice to hear that it resonates with your experience. I liked C for the exact same reason. Switched to Golang recently, simplicity is cherished here too!
makeitdouble · 20h ago
Lowering the cognitive load by assigning temporary variables requires more thought and skill than credited here.

In particular these variables need to be extremely well named, otherwise people reading the code will still need to remember what exactly is abstracted if the wording doesn't exactly fit their vision. E.g.

> isSecure = condition4 && !condition5

More often than not the real proper name would be "shouldBeSecureBecauseWeAlsoCheckedCondition3Before"

To a point, avoiding the abstraction and putting a comment instead can have better readability. The author's "smart" code could as well be

  ```
  if (val > someConstant // is valid
      && (condition2 || condition3) // is allowed
      && (condition4 && !condition5) // is secure 
  ) {
      ...
  }
  ```
SoftTalker · 19h ago
They can all be right. I agree assigning variables doesn't help much more than good comments. But worth doing if they are needed more than once in the same scope. But do we need "is valid", "is allowed", and "is secure" more than once in differnt scopes? They should probably functions then. Do we always need all three considered together? Then they should be a single function. Are there ever places where condition2 or condition 3 are not allowed? More complexity.

Even simple examples like this get complicated in the real world.

cowlby · 19h ago
One quirk of AI agents is I've moved to `isValid = val > someConstant` over comments because Cursor (I guess Claude by extension) frequently removes and re-writes comments. Or `isValid = checkForValidity(val, someConstant)` if the condition check grows significantly.
claytongulick · 19h ago
I try to structure functions and validations like this in a early-return list at the top of a function.

    if(val <= someconstant)
        return; //not valid

    if(!(condition2 || condition3))
        return; //not allowed

    ...
The author mentions this technique as well.

I find it particularly useful in controller API functions because it makes the code a lot more auditable (any time I see the same set of conditions repeating a lot, I consider whether they are a good candidate for middleware).

I try to explain this to newer developers and they just don't get it, or give me eyerolls.

Maybe sending them this article will help.

james-bcn · 20h ago
Some of this reminds me of the recommendations on one of the great programming books, Code Complete: https://en.wikipedia.org/wiki/Code_Complete
nige123 · 5h ago
Too much cognitive load is a flow stopper.

Finding flow while coding is a juggling act to keep things in the Goldilocks zone: not too hard, not too easy.

This is tricky on an individual level and even trickier for a team / project.

Coding is communicating how to solve a problem to yourself, your team, stakeholders and lastly the computer.

The Empathic Programmer?

asimovfan · 4h ago
I don't seem to come across such limits when i am doing something i am not supposed to be doing (procrastinating, for example obsessively reading something other than what i should be reading).
jakubdudek · 4h ago
Unless you have a quad-core brain like me and can vibecode four separate parts of the project in four terminal tabs. Of course, using your own method. Just kidding, of course.
msephton · 10h ago
Curious why github is linked and not the blog post? https://minds.md/zakirullin/cognitive
computerdork · 14h ago
I completely agree with every in this article, but seems like it's just at different way of looking at the well-known software-engineering concept of "complexity." Yeah, the main difference is cognitive load is considering the complication of the system from how it effects the developer, while complexity focuses on the amount of complications in the system itself.

Yeah, if you go through this article and replace most of the places where it mentions "cognitive load" with "complexity," it still makes sense.

Yeah, this isn't a criticism of the article - In fact, there are important difference, like having more of a focus on what the dev is experiencing handling the complications of the system - But for those really interested in its concept, may want to learn about complexity too, as there is a lot of great info on this.

physidev · 14h ago
I think the viewpoint articulated in this post fits quite well with the one expressed in the often-shared "Programming as Theory-building" article (I think it was shared here just a few days ago).

Scientists, mathematicians, and software engineers are all really doing similar things: they want to understand something, be it a physical system, an abstract mathematical object, or a computer program. Then, they use some sort of language to describe that understanding, be it casual speech, formal mathematical rigor, scientific jargon -- or even code.

In fact, thinking about it, the code specifying a program is just a human-readable description (or "theory", perhaps) of the behavior of that program, precise and rigorous enough that a computer can convert the understanding embodied in that code into that actual behavior. But, crucially, it's human readable: the reason we don't program in machine code is to maximize our and other people's understanding of what exactly the program (or system) does.

From this perspective, when we write code, articles, etc., we should be highly focused on whether our intended audience would even understand what we are writing (at least, in the way that we, the writer, seem to). Thinking about cognitive load seems to be good, because it recognizes this ultimate objective. On the other hand, principles like DRY -- at least when divorced from their original context -- don't seem to implicitly recognize this goal, which is why they can seem unsatisfactory (to me at least). Why shouldn't I repeat myself? Sometimes it is better to repeat myself!? When should I repeat myself??

If you want to see an example of a fabulous mathematician expressing the same ideas in his field (with much better understanding and clarity than I could ever hope to achieve), I highly recommend Bill Thurston's article "On proof and progress in mathematics" <https://arxiv.org/abs/math/9404236>.

zkmon · 13h ago
Children were always told to cram as much as possible into their memory. It was even claimed that, more you put into memory, you mind works better.

Not quite. Human mind has evolved to interpret the sensory data collected by senses, and cause necessary action. Some of that interpretation uses memory to correlate the perceived data with the memory data. That's pretty much it.

Overloading the human memory with tons of data which is not related to the context in which the person lives, can cause negative effects. I suspect it can also cause faster aging. New experiences, new information is like scales on a tree trunk. As you accumulate more of it, you age more.

hinkley · 13h ago
I love learning and until recently despised teaching. I felt like a double agent nodding along at all the lies my teachers told me while I mastered the material almost in spite of the teaching instead of because of it. This became a real problem I college when the material was no longer a given. There were going to be people who flunked because the material was just too hard.

I have always been a C+ student of rote memorization at best. Almost enough to be good at trivia, but not enough to do well in coursework. I am always trying to build a Theory of a System from practically word one, which is the fifth stage of learning, where rote is the first.

supportengineer · 19h ago
Large software projects built by humans will always be doomed to fail, because humans like to build the new, and nobody likes to maintain the old.
mdaniel · 18h ago
I'm pretty sure this entire thread is filled with "nobody likes to maintain the pile of ifs", since I doubt very seriously it's the age that jams people up, it's finding the correct place to make a surgical change that only produces the net-new behavior without blowing up the world. I guess the rest of that is that often the older a codebase is, the more revenue stream in impacts if something goes wrong
tenacious_tuna · 17h ago
I've very much enjoyed maintaining or optimizing or hardening existing systems--I can just never convince my leadership to let me do that.

My current org has a terrible case of not-invented-here syndrome, and it's so easy to pitch new projects that solve something that there's already an existing tool for, or building a new feature. We would love to spend time just working within our existing systems and fixing crap abstractions we made under the deadline-gun, but we're not "allowed" to.

> [...] humans like to build the new, and nobody likes to maintain the old

I think this is certainly true at organizational scale, but most of the people I've met are change-resistant overall.

bauble · 16h ago
Humans are the worst programmers, except for all other programmers.
trentnix · 16h ago
Cognitive load and the benefits of simplification aren't just for systems and code. Reducing cognitive load is critically important in delivering good requirements. It enables engineers to focus on the technical and organization aspects of the solution, not interpreting the problem.

The fact is, despite all the process and pipelines and rituals we've invented to guide how software is made, the best thing leadership can do is to communicate incremental, unambiguous requirements and provide time and space for your engineers to solve the problem. If you don't do that, none of the other meetings and systems and processes and tools will matter.

layer8 · 18h ago
> Then QA engineers come into play: "Hey, I got 403 status, is that expired token or not enough access?"

To be fair, the HTTP status line allows for arbitrary informational text, so something like “HTTP/1.1 401 JWT token expired” would be perfectly allowable.

No comments yet

ivanjermakov · 10h ago
> Let's say we have been asked to make some fixes to a completely unfamiliar project.

How real is this use case? Unless you switch projects really often, this is like a week per two years.

Perhaps we should focus on solving problems that are hard by nature, not by experience of a developer or other external factors.

stevage · 11h ago
A tip: if you're ever writing an article like this which is essentially do's and don'ts, adopt a consistent format for each. In many of these it's not immediately clear which is the do and which is the don't, creating, ironically, cognitive load for the reader.
scoofy · 13h ago
I always explain it this way:

Balancing a cup on a tray isn't too hard. The skill comes in when you can balance 10 cups, and a tray on top of them, and then ten more cups, and another tray, and a vase on that... each step isn't difficult, but maintain the structure is difficult. It's like that, but with ideas.

holysoles · 15h ago
Great read. At my last job, everything was quite monolithic when I joined, and I led the crusade to move to more segmented, module-driven development. There was definitely a period where I eventually swung too far in that direction and only realized it after a dependency issue led to an escalation.

Hopefully someone can learn from this before they spin a complex web that becomes a huge effort to untangle.

MiscCompFacts · 19h ago
This essay was shared 8 months ago and had significant discussion.

https://news.ycombinator.com/item?id=42489645 (721 comments)

sltr · 14h ago
Any discussion of cognitive load in programming needs to include awareness of this book:

The Programmer's Brain: What every programmer needs to know about cognition. By Felienne Hermans

https://www.manning.com/books/the-programmers-brain

guerrilla · 16h ago
> isValid = val > someConstant > isAllowed = condition2 || condition3 > isSecure = condition4 && !condition5 > // , we don't need to remember the conditions, there are descriptive variables > if isValid && isAllowed && isSecure { > ... >}

I literally relaxed in my body when I read this. It was like a deep sigh of cool relief in my soul.

lilerjee · 16h ago
Cognitive Load is not what matters, Solving problems is what matters.

"Cognitive Load" is a buzzword which is abstract.

Cognitive Load is just one factor of projects, and not the main one.

Focus on solving problems, not Cognitive Load, or other abstract concepts.

Use the simple, direct, and effective method to solve problems.

Cognitive Load is relative, it is a high Cognitive Load for one person, but low cognitive load for another person for the same thing.

adolph · 15h ago
"Cognitive Load" may be a buzzword, and not well defined, and that doesn't mean it isn't a useful concept for evaluating different approaches toward solving problems that may extend the useful life of that solution instead of reinventing yet another wheel.

Getting to a better understanding of "cognitive load" does seem useful. Some things are "easier" to understand than others. Could things that are less efficient to understand be formulated in a way that is more efficient?

I have a notion that "cognitive load" is related to the human's ability to gain and maintain attention to mentally ingesting a solution (along with the problem the solution putatively solves). Interesting reads for this include McGilchrist's Master and His Emissary, and Carolyn Dicey Jennings' "I attend, therefore I am," [0], who was interviewed on the Rutt podcast [1].

0. https://aeon.co/essays/what-is-the-self-if-not-that-which-pa...

1. https://jimruttshow.blubrry.net/the-jim-rutt-show-transcript...

andix · 15h ago
I had the exact same experience with layered architectures like described in this article. Avoid them as much as possible, naive and simple code is often better. It might look messy on the surface, and the layered code might look much cleaner. Until you drown in indirections, that are impossible to keep track of.
zakirullin · 14h ago
It seems like some engineers have emotional attachment to all these layered architectures. Explaining or giving them examples of failed projects (based on this architecture) doesn't help.
andix · 12h ago
Cargo cults
dope9967 · 15h ago
Feels like the author completely misunderstood at least one of the fundamental and basic concepts of DDD - writing how it is only about the problem and not the solution space, where it is actually very clearly about both - but still decided to write down a very sure judgement of it. Disappointing.
AlexCoventry · 16h ago
He should cite John Ousterhout, IMO. He's clearly influenced by Ousterhout's (excellent) work.
zakirullin · 14h ago
And I did it a few times :) He knows about the article, we talked about it.
roadbuster · 14h ago
Lucky to have an opportunity to chat with him! Did he have any specific feedback on your essay?
zakirullin · 13h ago
jsd1982 · 19h ago
I think cognitive load has a lot more to do with the paradigm that the code is written in than any particular type of author's contribution to the code. For instance, the object-oriented paradigm by design increases cognitive load by encouraging breaking up otherwise straightforward logic into multiple interfaces, classes, and methods.
haffi112 · 16h ago
The section on nested ifs reminded be of being a Nevernester: https://www.youtube.com/watch?v=CFRhGnuXG-4 (it's a short [~8 min], fun watch)
delichon · 20h ago
The AI version of this is the degradation when the context window gets too large. The fix is the same too, summarize and reset.
ashwinsundar · 20h ago
AI does not have a cognitive load problem in remotely the same way people do. People can chunk and re-chunk info based on their skill and experience, but even the best AI just knows one thing - token length
ijk · 19h ago
Given the way attention works, it seems to me that AI has an even more concrete instantiation of cognitive load.
edtechdev · 19h ago
Cognitive load isn't a valid or useful concept: https://edtechdev.wordpress.com/2009/11/16/cognitive-load-th... https://www.tandfonline.com/doi/full/10.1080/00131857.2024.2...

There are separate contexts involved here: the coder, the compiler, the runtime, a person trying to understand the code (context of this article), etc. What's better for one context may not be better for another, and programming languages favor certain contexts over others.

In this case, since programming languages primarily favor making things easier for the compiler and have barely improved their design and usability in 50 years, both coders and readers should employ third party tools to assist them. AI can help the reader understand the code and the coder generate clearer documentation and labels, on top of using linters, test driven development, literate documentation practices, etc.

aDyslecticCrow · 19h ago
The linked articles seem to primarily criticize three things about connotative load theory;

- Difficult to measure and therefore a hard or impossible to empirically study. (a bad scientific theory)

- Its application to education and learning theory which where a lot of other techniques are more proven.

- The idea that it's a primary mechanism of human learning, which has had a-lot of research showing otherwise.

Though those points seem valid, this article does not concern itself deeply with this concept. The word "mental strain" or "limited short term memory" could have been inserted in place of "cognitive load", and the points raised would be valid. In effect the article argues we should minimize the amount of things that need to be taken into consideration at any given point when reading (or writing) code. This claim is quite reasonable irrespective of the scientific bases of CTL which it takes its wording from.

So i don't think your criticism is entirely relevant to this article, but raising it does help inform others about issues with the used wording if they happen to want to learn more.

reikonomusha · 18h ago
I think the criticism is relevant because TFA isn't the first to exercise the term "cognitive load" in the context of computing. It's a term thrown around quite often, so we should cross reference its alleged meaning to literature.

I myself find it to be a term that's effectively used as a thought-terminating cliche, sometimes as a way to defend a critic's preferred coding style and organization.

aDyslecticCrow · 18h ago
hmm. Using a term from formal science literature to loosely argue or back questionable arguments withe the ruse of scientific basis is a common issue. I pointed out that this article does not use the formal definition of the term, which you point out is itself an issue. Put that way i agree.

I think the article could have used a different term, or made a more clear declaration of what they specifically meant with the term to resolve this issue. Though i don't think it was done intentionally to deceive since the article makes no mention of the formal literature or theory of "cognitive load" to back its arguments.

00yogi · 17h ago
A similar design principle is called “APIs should be deep, not wide”: https://unterwaditzer.net/2024/api-wrappers.html
zakirullin · 14h ago
Thanks! I might link it to the article.
1dom · 19h ago
This article makes me feel weird.

I think I'm not smart enough for it. I can't really take anything new away from it, mainly just a message of "we're smart people, and trust us when we say smart things are bad. All the smart sounding stuff you learned about how to program from smart sounding people like us? Lol, that's all wrong now."

Okay, I get the cognitive load is bad, so what's the solution?

"Just do simple dumb stuff, duh." Oh, right... Useful.

The problem is never just the code, or the architecture, or the business, or the cognitive load. It's the mismatch of those things against the people expected to work with them.

Walk into a team full of not-simple engineers, and tell them all what they've been doing is wrong, and they need to just write simple code, some of them will fail, some will walk out, and you'll be no closer to a solution.

I wish I knew of the tech world before 20 years ago, where technical roles were long and stable enough for teams to build their own understanding of a suitable level of complexity. Without that, churn means we all have to aim for the lowest common denominator.

bheadmaster · 17h ago
> Okay, I get the cognitive load is bad, so what's the solution?

Modularity.

Each component in your system should be a (relatively) simple composition of other (smaller) components, in such a way that each component can be understood as a black box, and is interchangable with any other implementation or the same thing.

1dom · 15h ago
From the article:

> All too often, we end up creating lots of shallow modules, following some vague "a module should be responsible for one, and only one, thing" principle.

This is what I'm talking about: this writing is too smart for me, because I can't take any simple answers from it like "modularity" without feeling another part of the article contradicts it with other smart sounding ways of saying don't listen to smart stuff.

arnonejoe · 16h ago
"Domain-driven design has some great points, although it is often misinterpreted”. Agreed. The worst shops I’ve ever worked in are ones where the DDD/evans/fowler orthodoxy has run amok.
gtzi · 14h ago
This is also a proper framework for evaluating AI replies – they do not only need to be appropriate, they also need to consume the minimum cognitive load for parsing.
madman2890 · 18h ago
“Smart developer’s quirks” tend to peak in 3-8 years of experience and fade off thereafter. A hipster will never fade off and instead continue hipster coding alongside their identity in perpetuity.
jakobov · 15h ago
Yes this is the central theme in https://codeisforhumans.com/
insanebrain · 14h ago
I love love love monorepo + fat encapsulated modules + a couple of deployables. Why does the software industry create fake complexity/creativity in the craft? Fashion/hype architectures for dopamine and fulfilment. You're just wasting energy finding new ways to create machine code for hardware. Why not get creative in the hardware and actually make something new? Another thing that really grinds my gears, is how many human hours are poured into js frameworks instead of improving browsers. What an utter waste of time. Maybe mediocre engineers like me who could never design a CPU or extend a browser need to feel seen by embracing a new domain whatever concept to make us feel warm and fuzzy but not really innovate the real hard things that move the needle.
zakirullin · 14h ago
Agreed on every point! From what I observed, some engineers try to express themselves through exciting architectures. They are proud of it, they get their dopamine. Their craft can't be simple, if it is simple then they are dull too. At some point in time some of them stop associating their code with themselves, and take somewhat business approach. Most of them do not pass this point, unfortunately.
zakirullin · 14h ago
I also find JS and all the infrastructure very mentally taxing. Building a legacy project via some esoteric and already out-of-fashion build system is so painful. Thousands of dependencies, hundreds of issues and warnings...
riedel · 18h ago
Speaking only of intrinsic and extraneous cognitive load oversimplifies things. This works for tasks of pilots (see e.g. NASATLX scores).

However, if new information that needs to be learned is in the game there is also germane cognitive load [0]. It is a nice theory, however, practically there is unfortunately no easy way to separate them and look at them totally independently.

[0] https://mcdreeamiemusings.com/blog/2019/10/15/the-good-the-b...

nibblenum · 19h ago
Still processing this article, but so far enjoying that it opens with some humour, and also shows off logistics ideas that are not locked into one domain if you zoom out. Thank you :)
rowanG077 · 19h ago
It's always interesting that many people who push the cognitive load argument also push for simpler languages. To me once I have learned a language well the features it has don't add to the cognitive load. they become basically second nature. It even has a great benefit, many things that are explicit in simple languages because there is no language support fall away in more complex languages. So more complex languages reduce cognitive load, at least for me.
tester756 · 2h ago
>So more complex languages reduce cognitive load, at least for me.

Even C++ and all it's crazy features of last 5-10 years?

weiliddat · 17h ago
Exactly, cognitive load is dynamic not static, and you can actually hold many more things in working memory than the oft-repeated 3-7 items (that's more if you're trying memorize and recall unrelated, novel items).

Once you commit a particular concept to long-term memory and it's not "leaky" (you have to think through the internal behavior/implementation details), then now you have more tools and ways to describe a collective bunch of lower-level concepts.

That's the same feeling programmers used to more powerful languages have to write less powerful languages — instead of using 1 concept to describe what you want, now you have to use multiple things. It's only easier if you've not grokked the concept.

aDyslecticCrow · 18h ago
Complex languages can give the programmer powerful tools to abstract things badly. But those powerful tools can also help make the code clearer if used right. (I'm a real sucker for a map().filter().map().sort().unwrap(), and feel the same logic becomes so unruly to understand if converted to a large loop)

I think the sentiment that we should use simpler languages comes abuse of powerful features. Once we've meta-programmed the entire program logic with a 12 layer deep type tree or inheritance chain... we may realize we abused the tool in a way a simple language would have stopped.

But at the same time...checking a errno() after a function call just because the language lack result type or exception handling, is clearly too simple. A minor increase in language complexity would have made the code much clearer.

chrisweekly · 20h ago
Brilliant essay. Bookmarked for future reference.
dsego · 19h ago
I would also recommend A Philosophy of Software Design if you haven't read it, a very short and brilliant read with a similar approach.

There is also a discussion between the author of Clean Code and APOSD:

https://github.com/johnousterhout/aposd-vs-clean-code

marcelr · 18h ago
while i think this is generally good advice, i also think reality isn't easy to define

i like what others would call complexity, i always have, and have from very very early on been mindful of that, i think to a fault since i no longer trust my intuition

is it good to try to turn wizards into brick layers? is there no other option?

ChrisMarshallNY · 15h ago
There's some good stuff in the posting.

Certainly giving me some pause for thought, in my own work.

oh_my_goodness · 19h ago
I would love to have four chunks in my head. I feel like I have to start writing when I get to #3.
perlgeek · 18h ago
I think the origin of the "four chunks in short-term memory" comes from giving people tasks to calculate some numbers, and seeing after how many digits they became noticeably slower.

A fact that you need to remember about code might use up more or less short-term memory in a human brain compared to a digit or a number, so don't be ashamed if your number is 3 instead of 4.

I also think that my working memory was better when I was 20ish, now at 41 I already feel less fits in and I forget it faster.

kadutskyi · 14h ago
Tidy first? book gives lots of such advice. Very helpful.
hinkley · 13h ago
Cognitive load may be the dominant form of stress but it is not the only one. I feel like this is very close to correct but subtlety and critically broken.

In particular, when the shit hits the fan, your max cognitive load tanks. Something people who grumble at the amount of foolproofing I prefer often only discover in a crisis. Because they’re used to looking at something the way they look at it while sipping their second coffee of the day. Not when the servers are down and customers are calling angry.

You’ll note that we only see how the control room at NASA functions in movies and TV when there’s a massive crisis going on, or intrigue. Because the rest of the time it’s so fucking boring nobody would watch it.

weiliddat · 18h ago
While I support the goal of article, reducing extraneous cognitive load, I think some of the comments, and the article are missing a key point about cognitive load — it depends on the existing mental model the reader/author/developer has about the whole thing. There is no universal truth to reducing cognitive load like reducing abstractions / not relying on frameworks.

Reducing cognitive load doesn't happen in a vacuum where simple language constructs trump abstraction/smart language constructs. Writing code, documents, comments, choosing the right design all depend upon who you think is going to interact with those artifacts, and being able to understand what their likely state of mind is when they interact with those artifacts i.e. theory of mind.

What is high cognitive load is very different, for e.g. a mixed junior-senior-principal high-churn engineering team versus a homogenous team who have worked in the same codebase and team for 10+ years.

I'd argue the examples from the article are not high cognitive load abstractions, but the wrong abstractions that resulted in high cognitive load because they didn't make things simpler to reason about. There's a reason why all modern standard libraries ship with standard list/array/set/hashmap/string/date constructs, so we don't have manually reimplement them. They also give a team who is using the language (a framework in its own way) common vocabulary to talk about nouns and verbs related to those constructs. In essense, it is reducing the cognitive load once the initial learning phase of the language is done.

Reading through the examples in the article, what is likely wrong is that the decision to abstract/layer/framework is not chosen because of observation/emergent behavior, but rather because "it sounds cool" aka cargo cult programming or resume-driven programming.

If you notice a group of people fumble over the same things over and over again, and then try to introduce a new concept (abstraction/framework/function), and notice that it doesn't improve or makes it harder to understand after the initial learning period, then stop doing it! I know, sunk cost fallacy makes it difficult after you've spent 3 months convincing your PM/EM/CTO that a new framework might help, but then you have bigger problems than high cognitive load / wrong abstractions ;)

zakirullin · 14h ago
lxe · 20h ago
Introducing intermediate variables is what I call "indirection". You're adding another step to someone reading the code.

Let's take a recipe:

   Ingredients:
   large bowl
   2 eggs
   200 grams sugar
   500 grams flour
   1/2 tsp soda

   Steps:

   Crack the eggs into a bowl. Add sugar and whisk. Sift the flower. Add the soda.
When following the instruction, you have to always refer back to the ingredients list and search for the quantity, which massively burdens you with "cognitive load". However, if you inline things:

   Crack 2 eggs into a large bowl. Add 200g sugar and whisk. Sift 500g of flower. Add 1/2 tsp soda.
Much easier to follow!
bathtub365 · 20h ago
When making a recipe you usually need to buy some or all of the ingredients. You also want to collect them all together beforehand since it makes things go a lot more smoothly. If they didn’t list them separately it would be easier to miss one.
mdaniel · 18h ago
This probably isn't entirely what you meant, but people who do this code smell drive me starkraving mad

  shipToAddress(getShippingAddress(getStreet()),calculateShipping(getShippingAddress(getStreet())))
because as soon as one needs to update the mechanism for getting the shipping address, congratulations, you're updating it everywhere -- or not, nothing matters in this timeline
aDyslecticCrow · 18h ago
your example makes me question how much you bake.

The instructions are simple to remember, the ingredient quantities are not. Once i have read the recipe, i have a clear idea what i need to do. But i may need to look up the quantities. Looking for those in a block of text is a waste of time and error prone. Having a recipe formatted like the 2nd example is only useful for a someone inexperienced with baking.

Its also difficult to buy or bring out the ingredients if they're hidden in text. Or know if i can bake something based on what i have or not.

If you've baked an really old recipe from a book, you may find instructions like "Mix the ingredients, put in a pan, bake until brown on high heat", with a list of quantities and not even a picture to know what it is you're baking. An experienced baker will know the right order and tools, and how their oven behaves, so it's not written out at all.

If i look for recipes online, seeing the instructions written like you have makes me reluctant to trust the author, and also makes me annoyed when trying to follow or plan out the process.

pessimizer · 17h ago
Redundancy like that makes people like me feel that we missed something, and ruins forward progress. If you started "Crack 1 egg into the bowl." I'd move through it at light speed. When you say "Crack 2 eggs..." I think, "did I miss some ingredients, aren't there only two eggs?" and I rescan the entire ingredient list.

Then, still unsure, I go back and read the end of the sentence. "...into a large bowl."

Another large bowl? Wasn't there only one bowl? Was there another small bowl? Were the other eggs supposed to be in the small bowl? Is there another recipe listed separately where I'm supposed to crack the other egg (that might exist) into a small bowl? A single egg would only need a small bowl. Yes, there's probably another part of the recipe that is a sauce or filling or something that needs you to crack one egg into a small bowl. Let me reread everything on this webpage until I find something referred to that might be a necessary part of finishing this recipe, find the instructions to make that, then come back to this. Also, I have to reevaluate whether I even have enough eggs and bowls to make this recipe. I've read the entire page again, and I can't find what I'm missing. Maybe I'll google another, similar recipe, and they'll be a sauce or side that is mandatory for this dish that everybody already knows about but me, and it will be obvious. Ok, this is something similar, and it's served with some sort of a lemon sauce? I don't think I like lemon sauces. I'm not going to make this, I'm not even sure if I can. I hate trying to find recipes on the internet.

piskov · 18h ago
This is your annual reminder to watch “Simple made easy” by Rich Hickey

https://youtu.be/SxdOUGdseq4

dennisy · 16h ago
Whilst I agree with lots of ideas in this piece, I fell out of love with it when clicking into the discussion on what should be done instead of using a layered architecture.

The author makes valid points but they are vacuous and do not provide concrete alternatives.

Many engineering articles disappoint me in this way, I get hyped by all the “don’t dos”, but the “do dos” never come.

zakirullin · 13h ago
Software engineering is a relatively immature field. Nobody knows how to cook it in a proper way. What we know for sure is how to fail (point of the article). There was an analogy about building bridges and writing software. Building a bridge is boring, it's a very mature engineering field, and it's clearly known how to do it the best way possible. Software development is far from it. Unknown unknowns, ever changing requirements, different mental models in people's brains...
bob1029 · 20h ago
At some point, you just need to go with the flow. Worrying about the metacognitive consequences of your work and trying to actively manage this with technological policy will turn into a death spiral. You should always try to take advantage of the momentum in the things around you. Our profession has a reputation for going out of its way to do things like push for a rewrite of a company's codebase after having seen the legacy for 20 minutes. This isn't even a chesterson's fence conversation. This is crude, impulsive behavior that makes any kind of productive business infeasible.

Also, many developers are suffering from severe cognitive load that is incurred by technology and tooling tribalism. Every day on HN I see complaints about things like 5 RPS scrapers crippling my web app, error handling, et. al., and all I can think about is how smooth my experience is from my particular ivory tower. We've solved (i.e., completely and permanently) 95% of the problems HN complains about decades ago and you can find a nearly perfect vertical of these solutions with 2-3 vendors right now. Your ten man startup not using Microsoft or Oracle or IBM isn't going to make a single fucking difference to these companies. The only thing you win is a whole universe of new problems that you have to solve from scratch again.

0xbadcafebee · 4h ago
> We need something more fundamental, something that can't be wrong.

What is this bug in software people's brains that keeps thinking "I can come up with a perfect idea that is never wrong" ? Can a psychologist explain this to me please?

Like, scientists know this is dumb. The only way something can be perceived as right, scientifically, is if lots of people independently test an idea, over and over and over and over again, and get the same result. And even then, they just say it's true so far.

But software people over here like "If I spend 15 minutes thinking about an idea, I can come up with a fundamental principle of everything that is always true forever." And sadly the whole "fundamental principle" is based in ignorance. Somebody heard an interesting-sounding term, never actually learned what it meant, but decided to make up their own meaning for it, and find anything else in their sphere (software) that backs up their theory.

If they'd at least quoted any of the academic study and research about cognitive load over the past 35 years, maybe I might be blowing this out of proportion? But nope. This is literally just a clickbait rant, based on vibes, backed up by quotes from blogs. The author doesn't seem to understand cognitive load at all, and their descriptions of what it is, and what you should do in relation to it, are all wrong. The article doesn't even mention all three types of cognitive load. And one of the latest papers on the subject (Orru G., Longo L. (2019)) basically came to the conclusion that 1) the whole thing is very complex, and 2) all the previous research might be bunk or at least need brand new measurement methods, so... why is anyone taking this all as if it's fact?

But I'm not really bothered by the ignorance. It's the ego that kills me. The idea that these random people who know nothing about a subject are rushing to debate this, as if this idea, or these people's contributions, have merit, just because they think they're really smart.

shaimagz · 14h ago
I’m not reading code anymore, cursor does
vdupras · 16h ago
I don't know, I'm seduced by the elitist approach: code with a high cognitive load keeps mediocre developers away.

Case in point: Forth. It generally has a heavy cognitive load. However, Forth also enables a radical kind of simplicity. You need to be able to handle the load to access it.

The mind can train to a high cognitive load. It's a nice "muscle" to train.

Should we care about cognitive load? Absolutely. It's a finite budget. But I also think that there are legitimate reasons to accept a high cognitive load for a piece of code.

One might ask "what if you need to onboard mediocre developers into your project?". Hum, yeah, sure. In that case, this article is correct. But being forced to onboard mediocre developers highlights an organizational problem.

winwang · 16h ago
This is an interesting take. I have a somewhat orthogonal viewpoint -- rather than "heavy cognitive load", I think that going somewhat off-mainstream is good for attracting, on average, better devs. For example, it's likely that the average Haskell dev spends more time honing their craft than the average Java dev. The article kind of touches on this (e.g. FP "vs" the more popular OOP) with familiarity vs simplicity though.
ruszki · 6h ago
As a coder, these are the exact problems which shouldn’t cause problems for you. You should be a coder because you’re better in these problems than others.

More coders are needed, than to who these are “simple”, I understand. But, if you have problems with these, I would definitely try to pivot to something else, like managerial positions. Especially with AI on us. Of course, if you are fine to be an “organic robot”, then it’s fine, but you’ll never really get why this profession is awesome. You’ll never have the leverage.