Comment by Waterluvian a day ago
My weird mental model: You have a tree of possible states/program flow. Conditions prune the tree. Prune the tree as early as possible so that you have to do work on fewer branches.
Don’t meticulously evaluate and potentially prune every single branch, only to find you have to prune the whole limb anyways.
Or even weirder: conditionals are about figuring out what work doesn’t need to be done. Loops are the “work.”
Ultimately I want my functions to be about one thing: walking the program tree or doing work.
This aligns nicely with how things look in the "small-step" flavour of PL theory / lambda calculus.
In the lingo, expressions are evaluated by repeatedly getting "rewritten", according to rules called reduction rules. For example, (1 + 2) + 4 might get rewritten to 3 + 4, which would then get rewritten to 7.
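That rewriting loop is easy to make concrete. Here's a minimal sketch in Python (all names are my own, not from any particular textbook): expressions are nested tuples, an int is a value, and a step function performs exactly one rewrite at a time.

```python
# A tiny small-step evaluator for addition expressions.
# An int is a value; ("+", a, b) is an addition node.

def step(e):
    """Perform exactly one rewrite step on expression e."""
    op, a, b = e
    if isinstance(a, tuple):      # congruence: descend into the left subexpression
        return (op, step(a), b)
    if isinstance(b, tuple):      # congruence: then the right one
        return (op, a, step(b))
    return a + b                  # computation: both sides are values, so rewrite

def evaluate(e, trace=None):
    """Apply step until we reach a value, optionally recording each state."""
    while isinstance(e, tuple):
        e = step(e)
        if trace is not None:
            trace.append(e)
    return e

trace = []
result = evaluate(("+", ("+", 1, 2), 4), trace)
# (1 + 2) + 4  →  3 + 4  →  7
print(trace)   # [('+', 3, 4), 7]
```

Each element of the trace is one intermediate program state, exactly the (1 + 2) + 4 → 3 + 4 → 7 sequence above.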
There are two sorts of these rules. There are "congruence" rules, which direct where work is to be done ("which subexpression to evaluate next?"); and then there are "computation" rules (as Pierce [1] calls them), which actually rewrite the expression, and thus change the program state.
"Strict"/"non-lazy" languages (virtually every popular general-purpose language? except Haskell) are full of congruence rules – all subexpressions must be fully evaluated before a parent expression can be evaluated. The important exceptions are special constructs like conditionals and indefinite loops.
For conditionals in particular, a computation rule will kick in before congruence rules direct all subexpressions to be evaluated. This prunes the expression tree, now in a very literal sense.
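Extending the same toy evaluator sketch shows the pruning literally (again, my own illustrative code, not from Pierce): the only congruence rule for "if" targets the condition, and once the condition is a value, a computation rule selects one branch and throws the other away without ever stepping into it.

```python
# Small-step evaluator with a conditional. Note the asymmetry:
# "+" and "==" have congruence rules for both operands (strict),
# but "if" only has a congruence rule for its condition.

def step(e):
    if e[0] == "if":
        _, cond, then_, else_ = e
        if isinstance(cond, tuple):              # congruence: evaluate the condition only
            return ("if", step(cond), then_, else_)
        return then_ if cond else else_          # computation: prune one whole branch
    op, a, b = e
    if isinstance(a, tuple):                     # congruence: left operand first
        return (op, step(a), b)
    if isinstance(b, tuple):                     # congruence: then the right
        return (op, a, step(b))
    return a + b if op == "+" else a == b        # computation for "+" / "=="

def evaluate(e):
    while isinstance(e, tuple):
        e = step(e)
    return e

# if (1 == 1) then 10 else ((1 + 2) + 4)
expr = ("if", ("==", 1, 1), 10, ("+", ("+", 1, 2), 4))
print(evaluate(expr))   # 10 — the else branch is discarded, never reduced
```

The else branch here is cheap, but it could be an arbitrarily large subtree: since no congruence rule ever points into an unchosen branch, the work it represents simply never happens.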
[1]: Benjamin C. Pierce, Types and Programming Languages (recommended!)