Push Ifs Up and Fors Down

253 points | goranmoomin | 109 comments | 5/17/2025, 9:31:55 AM | matklad.github.io

Comments (109)

Waterluvian · 3h ago
My weird mental model: You have a tree of possible states/program flow. Conditions prune the tree. Prune the tree as early as possible so that you have to do work on fewer branches.

Don’t meticulously evaluate and potentially prune every single branch, only to find you have to prune the whole limb anyways.

Or even weirder: conditionals are about figuring out what work doesn’t need to be done. Loops are the “work.”

Ultimately I want my functions to be about one thing: walking the program tree or doing work.

0xWTF · 1h ago
Can I float an adjacent model? Classes are nouns, functions are verbs.
slipnslider · 10m ago
I remember being taught that in CS101 and still use it today, 15 years later. It's a good, simple, easy-to-follow pattern.
Brian_K_White · 2h ago
perfectly good models
andyg_blog · 4h ago
A more general rule is to push ifs close to the source of input: https://gieseanw.wordpress.com/2024/06/24/dont-push-ifs-up-p...

It's really about finding the entry points into your program from the outside (including data you fetch from another service), and then massaging it in such a way that you make as many guarantees as possible (preferably encoded into your types) by the time it reaches any core logic, especially the resource-heavy parts.

dataflow · 3h ago
Doesn't this obfuscate what assumptions you can make when trying to understand the core logic? You prefer to examine all the call chains everywhere?
fmbb · 3h ago
The “core logic” of a program is what output it yields for a given input.

If you find a bug, you find it because you discover that a given input does not lead to the expected output.

You have to find all those ifs in your code because one of them is wrong (probably in combination with a couple of others).

If you push all your conditionals up as close to the input as possible, your hunt will be shorter, and fixing will be easier.

avianlyric · 3h ago
This is why we invented type systems. No need to examine call chains, just examine input types. The types will not only tell you what assumptions you can make, but the compiler will even tell you if you make an invalid assumption!
dataflow · 3h ago
You can't shove every single assumption into the type system...
knome · 3h ago
You can and should put as many as you can there

https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

Instead of validating that someone has sent you a phone number in one spot and then passing along a string, you can just as easily have a function construct an UncheckedPhoneNumber. You can choose to only construct VerifiedPhoneNumbers if the user has gone through a code check. Both would allow you to pluck a PhoneNumber out of them for where you need to have generic calling code.

You can use this sort of pattern to encode anything into the type system. Takes a little more upfront typing than all of those being strings, but your program will be sure of what it actually has at every point. It's pretty nice.
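A rough sketch of how that might look in Rust (the names and the validation rules here are invented for illustration):

    struct UncheckedPhoneNumber(String);
    struct VerifiedPhoneNumber(String);

    impl UncheckedPhoneNumber {
        // Parse at the boundary: keep only digits, reject obvious garbage.
        fn parse(raw: &str) -> Result<Self, String> {
            let digits: String = raw.chars().filter(|c| c.is_ascii_digit()).collect();
            if digits.len() < 7 {
                return Err(format!("not a phone number: {raw}"));
            }
            Ok(UncheckedPhoneNumber(digits))
        }

        // Only a successful code check yields the verified type.
        fn verify(self, code_check_passed: bool) -> Option<VerifiedPhoneNumber> {
            code_check_passed.then(|| VerifiedPhoneNumber(self.0))
        }
    }

    // Generic calling code can pluck the raw digits out of either type.
    trait PhoneNumber { fn digits(&self) -> &str; }
    impl PhoneNumber for UncheckedPhoneNumber { fn digits(&self) -> &str { &self.0 } }
    impl PhoneNumber for VerifiedPhoneNumber { fn digits(&self) -> &str { &self.0 } }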

layer8 · 3h ago
True, but there are still documented interface contracts you can program against. The compiler won’t catch violations of the non-type parts, but the requirements are still well-defined with a proper interface contract. It is a trade-off, but so is repeating the same case distinction in multiple parts of the program, or having to pass around the context needed to make the case distinction.
sn9 · 55m ago
You can at least shove them into the constructors.
treyd · 3h ago
You can express a lot of concepts just through types in languages with richer type systems.
Nevermark · 3h ago
Assumptions are like di**os. You can shove them anywhere if you are brave enough. [0]

[0] Source: Abraham Lincoln. (At least I have heard he said that.)

EDIT: I am, of course, speaking specifically to the bravery and audacity of programming language designers, with admirable tunnel vision, bullheadedness and mission for maximally general algebraic and arbitrary constraint type systems.

I am on such a quest, and any capitulation to the inevitability of imperfect solutions, however probable that outcome might seem, is still a long way off.

badc0ffee · 3h ago
Leave the dingos alone
kevindamm · 1h ago

  > with admirable tunnel vision, bullheadedness and mission for maximally general algebraic and arbitrary constraint type systems.
I believe they're called keyhole optimizations, greedy search, and "the customer is always right..."
setr · 3h ago
If you’ve massaged and normalized the data at entry, then the assumptions at core logic should be well defined — it’s whatever the rules of the normalized output are.

You don’t need to know all of the call chains because you’ve established a “narrow waist” where ideally all things have been made clear, and errors have been handled or scoped. So you only need to know the call chain from entry point to narrow waist, and separately narrow waist till end.

furyofantares · 3h ago
The idea and examples are that the type system takes care of it. The rule of thumb is worded overly generally, it's more just about stuff like null checks if you have non-nullable types available.
kazinator · 11m ago
> If there’s an if condition inside a function, consider if it could be moved to the caller instead

This idle conjecture is too rife with counterexamples.

- If the function is called from 37 places, should they all repeat the if statement?

- What if the function is getaddrinfo, or EnterCriticalSection; do we push an if out to the users of the API?

I think that we can only think about this transformation for internal functions which are called from at most two places, and only if the decision is out of their scope of concern.

Another idea is to make the function perform only the if statement, which calls two other helper functions.

If the caller needs to write a loop where the decision is to be hoisted out of the loop, the caller can use the lower-level "decoded-condition helpers". Callers which would only have a single if, not in or around a loop, can use the convenience function which hides the if. But we have to keep in mind that we are doing this for optimization. Optimization often conflicts with good program organization! Maybe it is not good design for the caller to know about the condition; we only opened it up so that we could hoist the condition outside of the caller's loop.
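Sketched out, the helper split looks roughly like this (hypothetical names):

    struct Walrus;

    // Decoded-condition helpers: callers that hoist the decision use these directly.
    fn frobnicate_fast(_w: &mut Walrus) { /* ... */ }
    fn frobnicate_careful(_w: &mut Walrus) { /* ... */ }

    // Convenience function that hides the if, for callers without a loop.
    fn frobnicate(w: &mut Walrus, fast: bool) {
        if fast { frobnicate_fast(w) } else { frobnicate_careful(w) }
    }

    // A caller with a loop hoists the condition itself and uses the helpers.
    fn frobnicate_all(ws: &mut [Walrus], fast: bool) {
        if fast {
            for w in ws.iter_mut() { frobnicate_fast(w) }
        } else {
            for w in ws.iter_mut() { frobnicate_careful(w) }
        }
    }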

These dilemmas show up in OOP, where the "if" decision that is in the callee is the method dispatch: selecting which method is called.

Techniques to get method dispatch out of loops can also go against the grain of the design. There are some patterns for it.

E.g. we wouldn't want to fill a canvas object with a raster image by looping over the image and calling canvas.putpixel(x, y, color). We'd have some method for blitting an image into a canvas (or a rectangular region thereof).

sparkie · 38m ago
In some cases you want to do the opposite - to utilize SIMD.

With AVX-512 for example, trivial branching can be replaced with branchless code using the vector mask registers k0-k7, so an if inside a for is better than the for inside the if, which may have to iterate over a sequence of values twice.

To give a basic example, consider a loop like:

    for (int i = 0; i < length ; i++) {
        if (values[i] % 2 == 1)
            values[i] += 1;
        else
            values[i] -= 2;
    }
We can convert this to one which operates on 16 ints per loop iteration, with the loop body containing no branches, where each int is only read and written to memory once (assuming length % 16 == 0).

    __mmask16 consequents;
    __mmask16 alternatives;
    __m512i results;
    __m512i ones = _mm512_set1_epi32(1);
    __m512i twos = _mm512_set1_epi32(2);
    for (int i = 0; i < length; i += 16) {
        results = _mm512_load_epi32(&values[i]);
        /* odd check: (x & 1) == 1, matching values[i] % 2 == 1 for non-negative values */
        consequents = _mm512_cmpeq_epi32_mask(_mm512_and_epi32(results, ones), ones);
        results = _mm512_mask_add_epi32(results, consequents, results, ones);
        alternatives = _knot_mask16(consequents);
        results = _mm512_mask_sub_epi32(results, alternatives, results, twos);
        _mm512_store_epi32(&values[i], results);
    }
Ideally, the compiler will auto-vectorize the first example and produce something equivalent to the second in the compiled object.
layer8 · 4h ago
The example listed as “dissolving enum refactor” is essentially polymorphism, i.e. you could replace the match by a polymorphic method invocation on the enum. Its purpose is to decouple the point where a case distinction is established (the initial if) from the point where it is acted upon (the invocation of foo/bar). The case distinction is carried by the object (enum value in this case) or closure and need not be reiterated at the point of invocation (if the match were replaced by polymorphic dispatch). That means that if the case distinction changes, only the point where it is established needs to be changed, not the points where the distinct actions based on it are triggered.

This is a trade-off: It can be beneficial to see the individual cases to be considered at the points where the actions are triggered, at the cost of having an additional code-level dependency on the list of individual cases.

password4321 · 6h ago
Code complexity scanners⁰ eventually force pushing ifs down. The article recommends the opposite:

By pushing ifs up, you often end up centralizing control flow in a single function, which has a complex branching logic, but all the actual work is delegated to straight line subroutines.

https://docs.sonarsource.com/sonarqube-server/latest/user-gu...

hinkley · 5h ago
The way to solve this is to split decisions from execution and that’s a notion I got from our old pal Bertrand Meyer.

    if (weShouldDoThis()) {
        doThis();
    }
It complements or is part of functional core imperative shell. All those checks being separate makes them easy to test, and if you care about complexity you can break out a function per clause in the check.
0cf8612b2e1e · 4h ago
Functions should decide or act, not both.
swat535 · 56m ago
But if that’s all you have, then how does your system do anything? You ultimately need to be able to decide and then act based on that decision somewhere.
turtleyacht · 29m ago
One possibility is a file.py that is called by your framework. The interface could be something like

  def doth_match(*args):
    return True  # the predicate

  def doeth_thou(*args):
    # processing things
    return {}  # status object for example
The framework loops and checks the first function; if true, then execute the second function. And then break or continue for other rule files (or objects).

There could be multiple files rule1.py, rule2.py, etc that check and do different things.

d_burfoot · 4h ago
This is an excellent rule.
btown · 2h ago
To add to this, a pattern that's really helpful here is: findThingWeShouldDoThisTo can both satisfy a condition and greatly simplify doThis if you can pass it the thing in question. It's read-only, testable, and readable. Highly recommend.
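A minimal sketch of that shape (names invented):

    struct Item { needs_doing: bool }

    // Decide: read-only, trivially testable.
    fn find_thing_we_should_do_this_to(items: &[Item]) -> Option<&Item> {
        items.iter().find(|i| i.needs_doing)
    }

    // Act: straight-line work, no condition of its own.
    fn do_this(_thing: &Item) { /* the actual work */ }

    fn run(items: &[Item]) {
        if let Some(thing) = find_thing_we_should_do_this_to(items) {
            do_this(thing);
        }
    }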
jt2190 · 4h ago
Code scanner reports should be treated with suspicion, not accepted as gospel. Sonar in particular will report “code smells” which aren’t actually bugs. Addressing these “not a bug” issues actually increases the risk of introducing a new error from zero to greater than zero, and wastes developer time that could be spent addressing actual production issues.
xp84 · 4h ago
I agree with you. Cyclomatic complexity check may be my least favorite of these rules. I think any senior developer almost always “knows better” than the tool does what is a function of perfectly fine complexity vs too much. But I have to grudgingly grant that they have some use since if the devs in question routinely churn out 100-line functions that do 1,000 things, the CCC will basically coincidentally trigger and force a refactor which may help to fix that problem.
jerf · 4h ago
Cyclomatic complexity may be a helpful warning to detect really big functions, but the people who worry about cyclomatic complexity also seem to be the sort of people who want to set the limit really low and get feisty if a function has much more than a for loop with a single if clause in it. These settings produce those code bases where no function anywhere actually does anything; it just dispatches to three other functions that also hardly do anything, making it very hard to figure out what is going on, and that is not a good design.
BurningFrog · 3h ago
I call this "poltergeist code". Dozens of tiny functions that together clearly do something complex correctly, but it's very hard to find where and how it's actually done.
Brian_K_White · 2h ago
One incomplete but easy-to-state counter "rule" to fling back in such cases is just: if the function isn't generic and re-used for other unrelated things, then it probably shouldn't be a separate function.

Yeah, only probably; there can certainly be large, distinct sub-tasks that aren't used by any other function, yet where encapsulating them behind a single function call would improve understanding. You decide which by asking which way makes the overall ultimate intent clearer.

Which way is a closer match to the email where the boss or customer described the business logic they wanted? Did they say "make it look at the 3rd word to see if it has trailing spaces..."?

Or to find out which side of the fuzzy line a given situation is falling, just make them answer a question like, what is the purpose of this function? Is it to do the thing it's named after? Or is it to do some meaningless microscopic string manipulation or single math op? Why in the world do you want to give a name and identity to a single if() or memcpy() etc?

zellyn · 2h ago
I love that name, and will definitely steal it!
mnahkies · 3h ago
I wonder if there's any value in these kind of rules for detecting AI slop / "vibe coding" and preempting the need for reviewers to call it out.
password4321 · 1h ago
The tools are usually required for compliance of some sort.

Fiddling with the default rules is a baby & bathwater opportunity similar to code formatters, best to advocate for a change to the shipping defaults but "ain't nobody got time for that"™.

SoftTalker · 1h ago
a/k/a if it works don't fuck with it.
daxfohl · 5h ago
IME this is frequently a local optimum though. "Local" meaning, until some requirement changes or edge case is discovered, where some branching needs to happen outside of the loop. Then, if you've got branching both inside and outside the loop, it gets harder to reason about.

It can be case-dependent. Are you reasonably sure that the condition will only ever affect stuff inside the loop? Then sure, go ahead and put it there. If it's not hard to imagine requirements that would also branch outside of the loop, then it may be better to preemptively design it that way. The code may be more verbose, but frequently easier to follow, and hopefully less likely to turn into spaghetti later on.

(This is why I quit writing Haskell; it tends to make you feel like you want to write the most concise, "locally optimum" logic. But that doesn't express the intent behind the logic so much as the logic itself, and can lead to horrible unrolling of stuff when minor requirements change. At least, that was my experience.)

ummonk · 4h ago
I've always hated code complexity scanners ever since I noticed them complaining about perfectly readable large functions. It's a lot more readable when you have the logic in one place, and you should only be trying to break it up when the details cause you to lose track of the big picture.
marcosdumay · 4h ago
There was a thread yesterday about LLMs where somebody asked "what other unreliable tool people accept for coding?"

Well, now I have an answer...

shawnz · 3h ago
Sometimes I like to put the conditional logic in the callee because it prevents the caller from doing things in the wrong order by accident.

Like for example, if you want to make an idempotent operation, you might first check if the thing has been done already and if not, then do it.

If you push that conditional out to the caller, now every caller of your function has to individually make sure they call it in the right way to get a guarantee of idempotency and you can't abstract that guarantee for them. How do you deal with that kind of thing when applying this philosophy?

Another example might be if you want to execute a sequence of checks before doing an operation within a database transaction. How do you apply this philosophy while keeping the checks within the transaction boundary?

avianlyric · 3h ago
You’ve kind of answered your own question here.

> If you push that conditional out to the caller, now every caller of your function has to individually make sure they call it in the right way to get a guarantee of idempotency

In this situation your function is no longer idempotent, so you obviously can’t provide the guarantee. But quite frankly, if you’re having to resort to making individual functions implement state management to provide idempotency, then I suspect you’re doing something very odd, and have way too much logic happening inside a single function.

Idempotent code tends to fall into two distinct camps:

1. Code that’s inherently idempotent because the data model and operations being performed are inherently idempotent. I.e. you’re either performing stateless operations, or you’re performing “PUT” style operations wherein the input data contains all the state that needs to be written.

2. Code that’s performing more complex business operations where you’re creating an idempotent abstraction by either performing rollbacks, or providing some kind of atomic apply abstraction that ensures partial failures don’t result in corrupted state.

For point 1, you shouldn’t be checking for order of operations, because it doesn’t matter. Everything is inherently idempotent, just perform the operations again.

For point 2, there is no simple abstraction you can apply. You need to have something record the desired operation, then ensure it either completes or fails. And once that happens, ensures that completion or failure is persistent permanently. But that kind of logic is not the kind of thing you put into a function and compose with other operations.

shawnz · 2h ago
Consider a simple example where you're checking if a file exists, or a database object exists, and creating it if not. Imagine your filesystem or database library either doesn't have an upsert function to do this for you, or else you can't use it because you want some special behaviour for new records (like writing the current timestamp or a running total, or adding an entry to a log file, or something). I think this is a simple, common example where you would want to combine a conditional with an action. I don't think it's very "odd" or indicative of "way too much logic".
jknoepfler · 56m ago
Probably implicit in your #2, but there are two types of people in the world: people who know why you shouldn't try to write a production-grade database from scratch, and people who don't know why you shouldn't try to write a production-grade database from scratch. Neither group should try to write a production-grade database from scratch.
bee_rider · 3h ago
Maybe write the functions without the checks, then have wrapper functions that just do the checks and then call the internal function?
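Something like this, roughly (hypothetical names; a plain map stands in for whatever store is being checked):

    use std::collections::HashMap;

    type Db = HashMap<String, u64>;

    fn now_timestamp() -> u64 { 0 /* stand-in */ }

    // Internal function: assumes the record does not exist yet.
    fn create_record_unchecked(db: &mut Db, key: &str) {
        db.insert(key.to_string(), now_timestamp());
    }

    // Wrapper: does the check, then calls the internal function.
    fn ensure_record(db: &mut Db, key: &str) {
        if !db.contains_key(key) {
            create_record_unchecked(db, key);
        }
    }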
shawnz · 2h ago
Is that really achieving OP's goal though, if you're only raising it by creating a new intermediary level to contain the conditional? The conditional is still the same distance from the root of the code, so that seems like it's not in the spirit of what they are saying. Plus you're just introducing the possibility for confusion if people call the unwrapped function when they intended to call the wrapped function
Brian_K_White · 2h ago
But the checking and the writing really are 2 different things. The "rule" that you always want to do this check before a write is really never absolute. Wrapper is exactly correct. You could have the single function and add a param that says skip the check this time, but that is messier and even more dangerous than the separate wrapper.

Depends just how many things are checked by the check I guess. A single aspect, checking whether the resource is already claimed or is available, could be combined since it could be part of the very access mechanism itself where anything else is a race condition.

astrobe_ · 2h ago
It sounds like self-inflicted boilerplate to me.
rco8786 · 2h ago
I'm not sure I buy the idea that this is a "good" rule to follow. Sometimes maybe? But it's so contextually dependent that I have a hard time drawing any conclusions about it.

Feels a lot like "i before e except after c" where there's so many exceptions to the rule that it may as well not exist.

Kuyawa · 5h ago
Push everything down for better code readability

  printInvoice(invoice, options) // is much better than

  if(printerReady){
    if(printerHasInk){
      if(printerHasPaper){
        if(invoiceFormatIsPortrait){
  :
The same can be said of loops

  printInvoices(invoices) // much better than

  for(invoice of invoices){
    printInvoice(invoice)
  }
In the end, while code readability is extremely important, encapsulation is much more important, so mix both accordingly.
lblume · 2h ago
> printInvoice(invoice, options)

The function printInvoice should print an invoice. What happens if an invoice cannot be printed due to one of the named conditionals being false? You might throw an exception, or return a sentinel or error type. What to do in that case is not immediately clear.

Especially in languages where exceptions are somewhat frowned upon for general purpose code flow, and monadic errors are not common (say Java or C++), it might be a better option to structure the code similar to the second style. (Except for the portrait format of course, which should be handled by the invoice printer unless it represents some error.)

> while code readability is extremely important, encapsulation is much more important

Encapsulation seems to primarily be a tool for long-term code readability, the ability to refactor and change code locally, and to reason about global behavior by only concerning oneself with local objects. To compare the two metrics and consider one more important appears to me as a form of category error.

inetknght · 3m ago
> Push everything down for better code readability

> demonstrates arrow anti-pattern

Ewwww gross. No. Do this instead:

  if(!printerReady){ return; }
  if(!printerHasInk){ return; }
  if(!printerHasPaper){ return; }
  if(!invoiceFormatIsPortrait){ return; }

Way more readable than an expanding arrow.

> printInvoices(invoices) // much better than

But yes, put the loop into its own function with all of the other assumptions already taken care of? This is good.

wiradikusuma · 34m ago
I just wrote some code with this "dilemma" a minute ago. But I was worried the callers would forget to include the "if", so I put it inside the method. Instead, I renamed the method from "doSomething" to "maybeDoSomething".
gnabgib · 3h ago
(2023) Discussion at the time (662 points, 295 comments) https://news.ycombinator.com/item?id=38282950
janosch_123 · 3h ago
Ifs to the top as guard statements.

Add asserts to the end of the function too.

Loops can live in the middle; take as much I/O and compute out of the loop as you can :)
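In sketch form (a made-up example):

    fn process(items: &[i64]) -> Vec<i64> {
        // Guards up top.
        if items.is_empty() {
            return Vec::new();
        }

        // Hoist what you can out of the loop.
        let limit = items.len() as i64;
        let mut out = Vec::with_capacity(items.len());
        for &x in items {
            out.push(x.min(limit));
        }

        // Assert the postcondition at the end.
        debug_assert_eq!(out.len(), items.len());
        out
    }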

slt2021 · 1h ago
Ifs = control flow

Fors = data flow / compute kernel

it makes sense to keep control flow and data flow separated for greater efficiency, so that you can independently evolve either flow while still maintaining consistent logic

neRok · 4h ago
I agree that the first example in the article is "bad"...

  fn frobnicate(walrus: Option<Walrus>)
but the rest makes no sense to me!

  // GOOD
  frobnicate_batch(walruses)
  // BAD
  for walrus in walruses {
    frobnicate(walrus)
  }
It doesn't follow through with the "GOOD" example though...

  fn frobnicate_batch(walruses) {
    for walrus in walruses { frobnicate(walrus) }
  }
What did that achieve?

And the next example...

  // GOOD
  if condition {
    for walrus in walruses { walrus.frobnicate() }
  } else {
    for walrus in walruses { walrus.transmogrify() }
  }
  // BAD
  for walrus in walruses {
    if condition { walrus.frobnicate() }
    else { walrus.transmogrify() }
  }
What good is that when...

  walruses = get_5_closest_walruses()
  // "GOOD"
    if walruses.has_hungry() { feed_them_all() }
    else { dont_feed_any() }
  // "BAD"
    for walrus in walruses {
       if walrus.is_hungry() { feed() }
       else { dont_feed() }
    }
magicalhippo · 3h ago
> What did that achieve?

An interface where the implementation can later be changed to do something more clever.

At work we have a lot of legacy code written the BAD way, ie the caller loops, which means we have to change dozens of call sites if we want to improve performance, rather than just one implementation.

This makes it significantly more difficult than it could have been.

lblume · 1h ago
Two counterpoints.

Firstly, in many cases the function needs to serve both purposes — called on a single item or called on a sequence of such. A function that always loops would have to be called on some unitary sequence or iterator which is both unergonomic and might have performance implications.

Second, the caller might have more information than the callee on how to optimize the loop. Consider a function that might be computationally expensive for some inputs while negligible for others — the caller, knowing this information, could choose to parallelize the former inputs while vectorizing etc. the latter (via use of inlining, etc.). This would be very hard or at least complicate things when the callee's responsibility.

jerf · 4h ago
I think the "push for loops down" is missing a bit of detail about the why. The author alludes to "superior performance" but I don't think makes it clear how that can happen.

Vectorization is a bit obscure and a lot of coders aren't worried about whether their code vectorizes, but there's a much more common example that I have seen shred the performance of a lot of real-world code bases and HTTP APIs, which is functions (including APIs) that take only a single thing when they should take the full list.

Suppose we have posts in a database, like for a forum or something. Consider the difference between:

    posts = {}
    for id in postIDs:
        posts[id] = fetchPost(id)
versus

    posts = fetchPosts(postIDs)
fetchPost and fetchPosts both involve hitting the database. The singular version means that the resulting SQL will, by necessity, only have the one ID in it, and as a result, a full query will be made per post. This is a problem because it's pretty likely here that fetching a post is a very fast (indexed) operation, so the per-query overhead is going to hit you hard.

The plural "fetchPosts", on the other hand, has all the information necessary to query the DB in one shot for all the posts, which is going to be much faster. An architecture based on fetching one post at a time is intrinsically less performant in this case.

This opens up even more in the HTTP API world, where a single query is generally of even higher overhead than a DB query. I think the most frequent mistake I see in HTTP API design (at least, ignoring quibbling about which method and error code scheme to use) is providing APIs that operate on one thing at a time when the problem domain naturally lends itself to operating on arrays (or map/objects/dicts) at a time. It's probably a non-trivial part of the reason why so many web sites and apps are so much slower than they need to be.

I find it is often easy to surprise other devs with how fast your system works. This is one of my "secrets" (please steal it!): you avoid as many "per-thing" penalties as possible by keeping sets of things together as long as possible. The "per-thing" penalties can really sneak up on you. Like nested for loops, they can easily start stacking up if you're not careful: the inability to fetch all the posts at once cascades into fetching, say, user avatars one-by-one in some other loop, and then a series of other individual queries. Best part is, profiling may make it look like the problem is the DB ("the DB is taking a long time to serve this") because profilers are not always good at revealing that your problem is per-item overhead rather than the amount of real work being done.
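To make the difference concrete, the plural version has everything it needs to build one query. A sketch only; the table and column names are invented, and a real driver would bind the IDs as parameters rather than formatting them into the string:

    // One round trip instead of N.
    fn fetch_posts_query(post_ids: &[u64]) -> String {
        let ids: Vec<String> = post_ids.iter().map(|id| id.to_string()).collect();
        format!("SELECT id, body FROM posts WHERE id IN ({})", ids.join(", "))
    }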

mnahkies · 3h ago
The worst / most amusing example of this I've seen in the wild was a third party line of business application that was sequentially triaging "pending tasks" to assign priority/to workers.

Our cloud provider had an aircon/overheating incident in the region we were using, and after it was resolved network latency between the database and application increased by a few milliseconds. Turns out if you multiply that by a few million/fast arrival rate you get a significant amount of time, and the pending tasks queue backs up causing the high priority tasks to be delayed.

Based on the traces we had it looked like a classic case of "ORM made it easy to do it this way, and it works fine until it doesn't" but was unfortunately out of our control being a third party product.

If they'd fetched/processed batches of tasks from the database instead I'm confident it wouldn't have been an issue.

salamanderman · 1h ago
Moving preconditions up depends on what the definition of a precondition is. There's some open source code I've done a deep dive in (Open cascade) and at some point they had an algorithm that assumed the precondition that the input was sorted, and that precondition was pushed up. Later they swapped out the algorithm for one that performs significantly better on randomized input and can perform very poorly on certain sorted input. Since the precondition was pushed up, though, it seems they didn't know how the input was transformed between the initial entrance function and the final inner function. Edit - if the precondition is something that can be translated into a Type then absolutely move the precondition up and let the compiler enforce it.
deredede · 1h ago
"Moving preconditions up" means moving the code that checks the precondition up. The precondition still needs to be documented (in the type system is ideal, with an assertion otherwise, in a comment if necessary) close to where it's assumed.
manmal · 4h ago
It’s a bit niche for HN, but SwiftUI rendering works way better when following this. In a ForEach, you really shouldn’t have any branching, or you‘ll pay quite catastrophic performance penalties. I found out the hard way when rendering a massive chart with Swift Charts. All branching must be pushed upwards.
ComputerGuru · 3h ago
Why? Does it interpret the code?
manmal · 2h ago
Kind of, it’s a declarative framework like React & co. Under the hood it maps to either UIKit components or GPU (Metal) rendering. And view identity is very important for change detection. AFAICT, putting a branch in a ForEach invalidates all elements in that ForEach whenever one branch changes, because its whole identity changes.
imcritic · 5h ago
This article doesn't explain the benefits of the suggested approach well enough.

And the last example looks like poor advice and contradicts the previous advice: there's rarely a global condition that is enough to check once at the top; the condition is usually inside the walrus. And why do for walrus in pack {walrus.throbnicate()} instead of making throbnicate a function accepting the whole pack?

neilv · 1h ago
A compiler that can prove the condition inside the loop is constant for the duration of the loop can lift that branch out and emit two loops.

But I like to help the compiler with this kind of optimization, by just doing it in the code. Let the compiler focus on optimizations that I can't.

dcre · 5h ago
I took a version of this away from Sandi Metz’s 99 Bottles of OOP. It’s not really my style overall, but the point about moving logic forks up the call stack was very well taken when I was working on a codebase where we had added a ton of flags that got passed down through many layers.

https://sandimetz.com/99bottles

daxfohl · 5h ago
Yeah, I immediately thought of "The Wrong Abstraction" by the same author. Putting the branch inside the for loop is an abstraction, saying "the for loop is the rule, and the branch is the behavior". But very often, some new requirement will break that abstraction, so you have to work around it, and the resulting code has an abstraction that only applies in some cases and doesn't in others, or you force a bunch of extra parameters into the abstraction so that it applies everywhere but is hard to follow. Whereas if you hadn't made the abstraction in the first place, the resulting code can be easier to modify and understand.

https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction

CodesInChaos · 4h ago
I can recommend this article about the "midlayer mistake" https://lwn.net/Articles/336262/
drob518 · 3h ago
Nice reference.
jmull · 5h ago
I really don't think there is any general rule of thumb here.

You've really got to have certain contexts before thinking you ought to be pushing ifs up.

I mean generally, you should consider pushing an if up. But you should also consider pushing it down, and leaving it where it is. That is, you're thinking about whether you have a good structure for your code as you write it... aka programming.

I suppose you might say, push common/general/high-level things up, and push implementation details and low-level details down. It seems almost too obvious to say, but I guess it doesn't hurt to back up a little once in a while and think more broadly about your general approach. I guess the author is feeling that ifs are usually about a higher-level concern and loops about a lower-level concern? Maybe that's true? I just don't think it matters, though, because why wouldn't you think about any given if in terms of whether it specifically ought to move up or down?

Tade0 · 3h ago
I also don't think there's a general rule.

I use `if`s as markers for special/edge cases and typically return in the last statement of the `if` block.

If I have an `else` block and it's large, then it's a clear indicator that it's actually two methods dressed as one.

hetman · 2h ago
I agree with this sentiment. I find attempts to create these kinds of universal rules are often a result of the programmer doing a specific and consistently repeating type of data transformation/processing. In their context it often makes a lot of sense... but try and apply the rules to a different context and you might end up with a mess. It can also often result in a reactionary type of coding where we eliminate a bad coding pattern by taking such an extremely opposite position that the code becomes just as unreadable for totally different reasons.

This is not to say we shouldn't be having conversations about good practices, but it's really important to also understand and talk about the context that makes them good. Those who have read The Innovator's Solution would be familiar with a parallel concept. The author introduces the topic by suggesting that humanity achieved powered flight not by blindly replicating the wing of the bird (and we know how many such attempts failed because it tried to apply a good idea to the wrong context) but by understanding the underlying principle and how it manifests within a given context.

The recommendations in the article smell a bit of premature optimisation if applied universally, though I can think of context in which they can be excellent advice. In other contexts it can add a lot of redundancy and be error prone when refactoring, all for little gain.

Fundamentally, clear programming is often about abstracting code into "human brain sized" pieces. What I mean by that is that it's worth understanding how the brain is optimised, how it sees the world. For example, human short term memory can hold about 7±2 objects at once, so write code that takes advantage of that, maintaining a balance without going to extremes. Holy wars, for example, about whether OO or functional style is always better often miss the point that everything can have its place depending on the constraints.

xg15 · 3h ago
Doesn't the second rule already imply some counterexamples for the first?

When I work with batches of data, I often end up with functions like this:

  function process_batch(batch) {
    stuff = setUpNeededHelpers(batch);
    results = [];
    for (item in batch) {
      result = process_item(item, stuff);
      results.add(result);
    }
    return results;
  }
Where "stuff" might be various objects, such as counters, lists or dictionaries to track aggregated state, opened IO connections, etc etc.

So the setUpNeededHelpers() section, while not extremely expensive, can have nontrivial cost.

I usually add a clause like

  if (batch.length == 0) {
    return [];
  }
at the beginning of the function to avoid this initialization cost if the batch is empty anyway.

Also, sometimes the initialization requires to access one element from the batch, e.g. to read metadata. Therefore the check also ensures there is at least one element available.

Wouldn't this violate the rule?

Jtsummers · 2h ago
> Wouldn't this violate the rule?

The article is offering a heuristic, not a hard rule (rule of thumb = heuristic, not dogma). It can't be applied universally without considering your circumstances.

Following his advice to the letter (and ignoring his hedging where he says "consider if"), you'd move the `if (batch.length == 0)` into the callers of `setUpNeededHelpers`. But now you have to make every caller aware that calling the function could be expensive even if there are no contents in `batch` so they have to include the guard, which means you have this scattered throughout your code:

  if (batch.length == 0) { return default }
  setup(batch)
Now it's a pair of things that always go together, which makes more sense to put into one function so you'd probably push it back down.

The advice really is contingent on the surrounding context (non-exhaustive):

1. Is the function with the condition called in only one place? Consider moving it up.

2. Is the function with the condition called in many places and the condition can't be removed (it's not known to be called safely)? Leave the condition in the function.

3. Is the function with the condition called only in places where the guard is redundant? In your example, `batch.length == 0` can be checked in `process_batch`. If all calls to `setup` are in similar functions, you can remove the condition from `setup` and move it up.

4. If it's causing performance concerns (measured), and in many but not all cases the check is unneeded then remove the guard from `setup` and add it back to only those call-sites where it cannot be safely removed. If this doesn't get you any performance improvements, you probably want to move it back down for legibility.

Basically, apply your judgment. But if you can, it's probably (but not always) a good idea to move the ifs up.

bluejekyll · 1h ago
I really like this advice, but aren’t these two examples essentially the same, yet given different advice?

  // Good?
  for walrus in walruses { walrus.frobnicate() }

Is essentially equivalent to

  // BAD
  for walrus in walruses { frobnicate(walrus) }

And this is good,

  // GOOD
  frobnicate_batch(walruses)

So should the first one really be something more like

  // impl FrobnicateAll for &[Walrus]
  walruses.frobnicate_all()

daxfohl · 5h ago
I like this a lot. At first, putting ifs inside the fors makes things more concise. But it seems like there's always an edge case or requirement change that eventually requires an if outside the for too. Now you've got ifs on both sides of the for, and you've got to look in multiple places to see what's happening. Or worse, subsequent changes will require updating both places.

So yeah, I agree, pulling conditions up can often be better for long-term maintenance, even if initially it seems like it creates redundancy.

hinkley · 5h ago
There’s a lot of variable hoisting involved in moving conditional logic out of for loops, and it generally tends to improve legibility. If a variable is loop invariant it makes debugging easier if you can prove it is and hoist it.
variadix · 4h ago
This is a great way of putting it.

The more things you can prove are invariant, the easier it is to reason about a piece of code, and doing the hoisting in the code itself rather than expecting the compiler to do it will make future human analysis easier when it needs to be updated or debugged.

uptownfunk · 1h ago
Wow, this is great. Where can I find more of this type of advice about how to structure your code?
carom · 2h ago
I strongly disagree with this ifs take. I want to validate data where it is used. I do not trust the caller (myself) to go read some comment about the assumptions on input data a function expects. I also don't want to duplicate that check in every caller.
azaslavsky · 2h ago
Couldn't you just take the advice in [0] of parsing into types rather than validating? Then you get the best of both worlds: your inputs are necessarily checked every time the function is called (they would have to be to create the type in the first place), but you don't need to validate them at every nested layer. You also get the benefit of more descriptive function signatures to describe your interfaces.

[0] https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

Zambyte · 1h ago
One option is to use asserts that are only included in debug builds. That way any incorrect call of the function will crash the program in debug builds, but will have the performance benefits of the lifted conditional checks in release builds.

You'll end up duplicating the condition, but that seems like a reasonable price to pay for correct and performant software.
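In Rust, for instance, debug_assert! does exactly this: it runs in debug builds and is compiled out of release builds. A small sketch with a hypothetical function:

    // Checked in debug builds, compiled out of release builds.
    fn frobnicate_sorted(values: &[i32]) {
        debug_assert!(
            values.windows(2).all(|w| w[0] <= w[1]),
            "caller must pass sorted input"
        );
        // ... work that relies on the input being sorted ...
    }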

zellyn · 2h ago
At least in the first example, the optionality is directly encoded in the types, so no assumptions have been lost.
nfw2 · 3h ago
The performance gap of running a for loop inside or outside a function call is negligible in most real usage.

The premise that you can define best patterns like this, removed from context with toy words like frobnicate, is flawed. You should abstract your code in such a way that the operations contained are clearly intuited by the names and parameters of the abstraction boundaries. Managing cognitive load >>> nickel-and-diming performance in most cases.

jasonjmcghee · 2h ago
My take on the if-statements example wasn't actually so much about if statements.

And this was obfuscated by author's use of global variables everywhere.

The key change was reducing functions' dependencies on outer parameters. Which is great.

boltzmann_ · 2h ago
You notice this quickly after working on codebases where efficiency is important. Filter pushdown is one of the first database optimizations.
esafak · 6h ago
I agree, except for this example, where the author effectively (after a substitution) prefers the former:

    fn f() -> E {
      if condition {
        E::Foo(x)
      } else {
        E::Bar(y)
      }
    }
    
    fn g(e: E) {
      match e {
        E::Foo(x) => foo(x),
        E::Bar(y) => bar(y)
      }
    }
The latter is not only more readable, but it is safer, because a match statement can ensure all possibilities are covered.
CraigJPerry · 5h ago
> Prefers the former after a substitution...

That's not quite right, it's a substitution AND ablation of 2 functions and an enum from the code base.

There's quite a reduction in complexity he's advocating for.

Further, the enum and the additional boilerplate is not adding type safety in this example. Presumably the parameters to foo and bar are enforced in all cases so the only difference between the two examples is the additional boilerplate of a 2-armed enum.

I strongly suspect in this case (but I haven't godbolted it to be sure) that both examples compile to the same machine code. If my hunch is correct, then the remaining question is: does introducing double-entry bookkeeping on the if condition add safety for future changes?

Maybe. But at what cost? This is one of those scenarios where you bank the easy win of reduced complexity.

josephg · 4h ago
> The latter is not only more readable, but it is safer, because a match statement can ensure all possibilities are covered.

Whether or not this matters depends on what, exactly, is in those match arms. Sometimes there's some symmetry to the arms of an if statement. And in that case, being exhaustive is important. But there's plenty of times where I really just have a bit of bookkeeping to do, or an early return or something. And I only want to do it in certain cases. Eg if condition { break; } else { stuff(); }

Also, if-else is exhaustive already. Its still exhaustive even if you add more "else if" clauses, like if {} else if {} else {}.

Match makes sense when the arms of the conditional are more symmetrical. Or when you're dealing with an enum. Or when you want to avoid repeating conditions. (Eg match a.cmp(b) { Greater / Equal / Less } ).

The best way to structure your code in general really comes down to what you're trying to do. Sometimes if statements are cleaner. Sometimes match expressions. It just depends on the situation.

esafak · 4h ago
It's only exhaustive in this toy case. Add another one and the burden of checking for exhaustiveness with ifs falls on your shoulders.
stevage · 5h ago
The author's main concern seems to be optimising performance critical code.
drob518 · 3h ago
Hmmm. Seems like he’s optimizing clarity of thought, first. The performance gain just comes along for the ride. If I were to summarize the article, I’d say that it’s advocating a pattern where you write code where higher layers decide what needs to be done and then lower layers do it, using a combination of straight line code and simple loops, with little to no further conditionality. Obviously, that represents an ideal.
greesil · 5h ago
That's not clear to me. It first reads like "don't branch in a for loop (because parallelization?)" but I think it's more for keeping the code from becoming a mess over time with multiple developers.
hinkley · 5h ago
If your compiler is having trouble juggling a number of variables just think how much difficulty the human brain will have on the same code.
throwawaymaths · 3h ago
I would agree with pushing ifs up, except if you're doing options parsing. Having a clean line of flow that effectively updates a struct with a bunch of "maybe" functions is much better if you're consistent with it.

Anywhere else, push ifs up.

hamandcheese · 4h ago
Isn't this just inversion of control? AKA the I in SOLID?
Mystery-Machine · 5h ago
Terrible advice. It's the exact opposite of "Tell, don't ask".

The performance cost of an if-statement or a for-loop is negligible. That's not the bottleneck of your app. If you're building something that needs to be highly performant, sure. But that's not the majority.

https://martinfowler.com/bliki/TellDontAsk.html

eapriv · 4h ago
Performance of any given CPU instruction is negligible, yet somehow they accumulate to noticeable values.
drob518 · 3h ago
Amen.
achernik · 4h ago
Consider another of his posts in a somewhat similar spirit: https://tigerbeetle.com/blog/2024-12-19-enum-of-arrays/ (the author is indeed working in a performance-oriented niche).
hello_computer · 3h ago
He’s giving object oriented design the medal for cache locality’s victory.
quantadev · 3h ago
There's always a trade-off between performance and clarity in the code.

If a certain function has many preconditions it needs to check before running, but potentially needs to run from various places in the code, then moving the precondition checks outside the method results in faster code but destroys readability and breaks the DRY principle.

In cases where this kind of tension (DRY vs. non-DRY) exists, I've sometimes named methods like 'maybeDoThing' (emphasis on the 'maybe' prefix), indicating I'm calling the method but that all the precondition checks are inside the function itself, rather than duplicating the logic all across the code everywhere the method 'maybe' needs to run.

zahlman · 2h ago
> If you have complex control flow, better to fit it on a screen in a single function, rather than spread throughout the file.

This part in particular seems like an aesthetic judgment, and I disagree. I find it more natural to follow a flowchart than to stare at one.

> A related pattern here is what I call “dissolving enum” refactor.... There are two branching instructions here and, by pulling them up, it becomes apparent that it is the exact same condition, triplicated (the third time reified as a data structure):

The problem here isn't the code organization, but the premature abstraction. When you write the enum it should be because "reifying the condition as a data structure" is an intentional, purposeful act. Something that empowers you to, for example, evaluate the condition now and defer the response to the next event tick in a GUI.

> The primary benefit here is performance. Plenty of performance, in extreme cases.

Only if so many other things go right. Last I checked, simply wanting walruses to behave polymorphically already ruins your day, even if you've chosen a sufficiently low-level programming language.

A lot of the time, the "bad" code is the implementation of the function called in the "good" code. That makes said function easier to understand, by properly separating responsibilities (defining frobnication and iterating over walruses). Abstracting the inner loop to a function also makes it sane to express the iteration as a list comprehension without people complaining about how you have these nested list comprehensions spread over multiple lines, and why can't you just code imperatively like the normal programmers, etc.

> The two pieces of advice about fors and ifs even compose!

1. The abstraction needed to make the example comprehensible already ruins the illusion of `frobnicate_batch`.

2. If you're working in an environment where this can get you a meaningful performance benefit and `condition` is indeed a loop invariant (such that the transformation is correct), you are surely working in an environment where the compiler can just hoist that loop invariant.

3. The "good" version becomes longer and noisier because we must repeat the loop syntax.

> jQuery was quite successful back in the day, and it operates on collections of elements.

That's because of how it allowed you to create those collections (and provided iterators for them). It abstracted away the complex logic of iterating over the entire DOM tree to select nodes, so that you could focus on iterating linearly over the selected nodes. And that design implicitly, conceptually separated those steps. Even if it didn't actually build a separate container of the selected nodes, you could reason about what you were doing as if it did.

jonstewart · 2h ago
One reason to move conditionals out of loops is that it makes it easier for the compiler to vectorize and otherwise optimize the loop.

With conditionals, it's also useful to express them as ternary assignment when possible. This makes it more likely the optimizer will generate a conditional move instead of a branch. When the condition is not sufficiently predictable, a conditional move is far faster due to branch misprediction. Sometimes it's not always faster in the moment, but it can still alleviate pressure on the branch prediction cache.
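A small illustration of the pattern (whether a conditional move is actually emitted still depends on the target and the optimizer, so it's worth checking the generated assembly for hot loops):

    // Statement form: an assignment hidden behind a branch.
    fn clamp_branchy(x: i32, limit: i32) -> i32 {
        let mut y = x;
        if x > limit {
            y = limit;
        }
        y
    }

    // Expression form: a value-producing conditional, which optimizers are
    // more likely to lower to a conditional move than to a branch.
    fn clamp_ternary(x: i32, limit: i32) -> i32 {
        if x > limit { limit } else { x }
    }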

renewiltord · 3h ago
Yep, as a general heuristic pretty good. It avoids problems like n+1 queries and not using SIMD. And the if thing often makes it easier to reason about code. There are exceptions but I have had this same rule and it’s served me well.
hello_computer · 3h ago
This thread is a microcosm of Babel.