Posted by xk3 12/19/2025

Kernighan's Lever (linusakesson.net)
112 points | 52 comments
yodon 12/22/2025|
This feels like a lot of rationalization to excuse writing exactly the sort of code that Kernighan advised against.

Advising against writing complex code is not advising against learning.

The person who solves a hard problem correctly using simple code has generally spent more time learning than the person who solves it using complex code.

userbinator 12/22/2025||
Looking at all he has done, I don't think he means "complex" when he says "clever". He's not advocating for (and is most likely against) the architecture-astronautism of overengineering that some people here seem to be associating with "clever".

He means code that appears indecipherable at first glance, but then once you see how it works, you're enlightened. Simple and efficient code can be "clever".

johnmwilkinson 12/23/2025|||
I think clever is being used in two different ways, in that case.

In the original quote, “clever” refers to the syntax, where the way the code was constructed makes it difficult to decipher.

I believe your interpretation (and perhaps the post’s, as well) is about the design. Often to make a very simple, elegant design (what pieces exist and how they interact) you need to think really hard and creatively, aka be clever.

Programming as a discipline has a problem with using vague terms. “Clean” code, “clever” code, “complex” code; what are we trying to convey when we talk about these things?

I came up with a term I like: Mean Time to Comprehension, or MTC. MTC is the average amount of time it takes for a programmer familiar with the given language, syntax, libraries, tooling, and structure to understand a particular block of code. I find that thinking about code in those terms is much more useful than thinking about it in terms of something like “clever”.

(For anyone interested, I wrote a book that explores the rules for writing code that is meant to reduce MTC: The Elements of Code https://elementsofcode.io)

Mikhail_Edoshin 12/22/2025|||
Good code should not be immediately understandable. Machines that make pasta do not look like humans making pasta. The same goes for code: good code does things in a machine way, and it won't look natural.

Example: convert RGB to HSV. If you look around for a formula, you'll likely find one that starts like this:

    cmin = min(r, g, b);
    cmax = max(r, g, b);
Looks very natural to a human. The thing is, as we compute 'cmin' we also compute, or almost compute, 'cmax', so if we rewrite this for a machine we should merge the two into something that will be far less clear at first glance. Yet it will be better and perform fewer operations (the rest of the conversion is even more interesting, but won't fit into a comment).
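
A minimal sketch of what that merge might look like (the function shape and the branching are my own illustration, not the actual conversion code):

    // Hypothetical sketch: find cmin and cmax of r, g, b together in one
    // pass, with at most three comparisons instead of four.
    void min_max_rgb(float r, float g, float b, float *cmin, float *cmax) {
        if (r > g) { *cmax = r; *cmin = g; } else { *cmax = g; *cmin = r; }
        if      (b > *cmax) *cmax = b;
        else if (b < *cmin) *cmin = b;
        // the rest of the HSV conversion then works from *cmax and (*cmax - *cmin)
    }
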
zahlman 12/22/2025|||
Recognizing that sort of opportunity is why we have optimizing compilers and intrinsics.

Funny thing: in Python code I've had a few occasions where I needed both quotient and remainder of an integer division, so naturally I used `divmod` which under the hood can exploit the exact sort of overlap you describe. I get the impression that relatively few Python programmers are familiar with `divmod` despite it being a builtin. But also it really doesn't end up mattering anyway once you have to slog through all the object-indirection and bytecode-interpretation overhead. (It seems that it's actually slower than separate `//` and `%` operations, given the overhead of looking up and calling a function. But I actually feel like invoking `divmod` is more intention-revealing.)

lucketone 12/22/2025|||
In short, your stance is to sacrifice readability for performance.

Legit in some cases. But for typical business software, code is for humans (the compiler will produce the machine code intended for the machine).

Mikhail_Edoshin 12/23/2025||
Readability belongs to documentation. Code should have a certain technical aesthetic and should be easy to navigate, but why should its operation be any more obvious than that of any other complex mechanism? Nobody demands that a mechanical watch be readable or have meaningful names for its parts.

It is not just performance. A minimal component gives you flexibility: you may make the whole system performant or you may trade extra performance to reach a different goal, such as robustness or composability. It is a more fundamental principle, common to construction in general: a thing should do all it has to do and should not do anything more.

lucketone 12/23/2025|||
Optimisation usually sacrifices some flexibility and/or robustness.

Valid if needed, but the trade-off exists.

JKCalhoun 12/22/2025|||
Personally, I see a kind of arc in programming style over time. It does begin naive, and the more-experienced you will look back at your early code and realize you were essentially re-inventing the wheel in one place, or see now that a look-up table would have been more efficient (as examples).

As you learn more techniques and more data structures the "cleverness" creeps into your code. To the degree that the cleverness might have a complexity cost, sometimes the cost may be worth it—perhaps not always though.

Naive-you would have struggled to understand some of the shortcuts and optimizations you are leveraging.

But then a still-more-experienced you revisits the more clever code, having now spent years both writing and attempting to debug such code. You may now begin to eschew the "clever" to the degree its cleverness makes the code harder to understand or debug. You might swear off recursive code, for example, breaking it into two functions where the outer one runs a loop of some sort that is easier to set a break-point in and unwind a problem you were seeing. Or you might now lean more on services provided by the platform you are programming for, so you don't have to have your own image cache, your own thread manager, etc.
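
On the recursion example, one minimal sketch of that two-function refactor (the Node type and visit function are illustrative names, not from any real codebase):

    #include <stack>
    #include <vector>

    struct Node { int value; std::vector<Node*> children; };

    void visit(Node* n) { /* per-node work; an easy place for a breakpoint */ }

    // The outer function runs a plain loop over an explicit work stack instead
    // of recursing, so a single breakpoint here shows the whole remaining frontier.
    void walk(Node* root) {
        if (!root) return;
        std::stack<Node*> work;
        work.push(root);
        while (!work.empty()) {
            Node* n = work.top(); work.pop();
            visit(n);
            for (Node* c : n->children) work.push(c);
        }
    }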

I feel like in that last stage, most-experienced you may well be writing code that naive-you could have understood and learned from.

GMoromisato 12/22/2025||
Yes, I agree this is true in some (many?) cases. But it is also true that sometimes the more complex solution is better, either for performance reasons or because it makes things simpler for users/API callers.
bruce511 12/22/2025||
Yes, there's a valid argument that simple code doesn't always give the best performance. Optimizing simple code usually makes it more complex.

But I think the main point stands. There's an old saying that doing a 60-minute presentation is easy, but doing one in 15 minutes is hard. In other words, writing "clever" (complicated) code is easy. Distilling it down to something simple is hard.

So the final result of any coding might be "complex", "simplified from complex", or "optimized from simple".

The first and third iterations are superficially similar, although likely different in quality.

GMoromisato 12/22/2025||
I like this insight, even though I think they are pushing Kernighan's quip a little too far.

I take away two ideas:

1. Always be learning. I think everyone believes this, but we often come up with plausible reasons to stick to what we know. This is a good reminder that we should fight that impulse and put in the effort to learn.

2. Always be fearless. This, I think, is the key insight. Fear is easy. We fear the unknown, whether they be APIs or someone else's code. We fear errors, particularly when they have real-world consequences. And we fear complexity, because we think we might not be able to deal with it. But the opposite of fear isn't recklessness, it's confidence. We should be confident that we will figure it out. And even if we don't figure it out, we should be confident that we can revert the code. Face your fears and grow.

poemxo 12/22/2025|
Fear is the mind killer
chimprich 12/22/2025||
> You effortlessly wield clever programming techniques today that would've baffled your younger self. (If not, then I'm afraid you stopped evolving as a programmer long ago.)

I think a better assessment of how well you've evolved as a programmer is how simple you can make the code. It takes real intelligence and flair to simplify the problem as much as possible, and then write the code to be boringly simple and easy to follow by a junior developer or AI agent.

If you're wielding increasingly clever programming techniques, then you're evolving in the wrong direction.

cassonmars 12/22/2025|
or you're working in embedded systems, machine learning, cryptography, or any other specialized field where being clever is very important
quietbritishjim 12/22/2025|||
Any good rule of thumb like the one in GP's comment is wrong sometimes, and that's ok. Adding more caveats just dilutes it without ever really making it watertight (if you'll forgive the very mixed metaphor).

But even in complex applications, there's still truth to the idea that your code will get simpler over time. Mostly because you might come up with better abstractions so that at least the complex bit is more isolated from the rest of the logic. That way, each chunk of code is individually easier to understand, as is the relationship between them, even if the overall complexity is actually higher.

DamonHD 12/22/2025||||
The best code, eg for embedded systems, is as simple as it can possibly be, to be maintainable and eg to let the compiler optimise it well, possibly across multiple targets. Sometimes very clever is needed, but the scope of that cleverness should always be minimised and weighed against the downsides.

Let me tell you about a key method in the root pricing class for the derivs/credit desk of a major international bank that was all very clever ... and wrong ... as was its sole comment ... and not entirely coincidentally that desk has gone and its host brand also...

immibis 12/22/2025||
Simple code means just doing the thing. It's often misinterpreted to mean code made of lots of small pieces (spaghetti with meatballs code) but this is simply not the case. Often, avoiding abstractions leads to simpler code.

At my job we're disqualifying candidates who don't use enough unnecessary classes. I didn't use them, but they proceeded with my interview because I happened to use some other tricks that showed good knowledge of C++. I think the candidate who just wrote the code to solve the task had the best solution, but I'm not in charge of hiring.

Without revealing the actual interview task, let's pretend it was to write a program that lowpass filters a .wav file. The answer we're apparently looking for is to read the input into a vector, FFT it, zero out the second half, unFFT it, and write the output file. And you must have a class called FFT, one called File, FrequencyDomainFile, and InverseFFT. Because that's simple logical organization of code, right? Meanwhile, the actual simple way to do it is to open the input and output files, copy the header, and proceed through the file one sample at a time doing a convolution on a ring buffer. This latter way involves less code, less computation, less memory, and is all-around better. If you think the ring buffer is too risky, you can still do a convolution over the whole file loaded into memory, and still come out ahead of the FFT solution.
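
For what it's worth, a minimal sketch of that streaming version (assuming 16-bit mono PCM, a verbatim 44-byte header copy, and placeholder filter taps; the file names and kernel are illustrative, not the actual interview code):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        std::FILE* in  = std::fopen("in.wav",  "rb");
        std::FILE* out = std::fopen("out.wav", "wb");
        if (!in || !out) return 1;

        char header[44];                                         // assume a plain 44-byte PCM header
        if (std::fread(header, 1, sizeof header, in) != sizeof header) return 1;
        std::fwrite(header, 1, sizeof header, out);

        const std::vector<float> taps = {0.25f, 0.5f, 0.25f};    // placeholder low-pass kernel
        std::vector<int16_t> ring(taps.size(), 0);               // last N samples
        std::size_t pos = 0;

        int16_t sample;
        while (std::fread(&sample, sizeof sample, 1, in) == 1) {
            ring[pos] = sample;                                  // overwrite the oldest slot
            float acc = 0.0f;
            for (std::size_t k = 0; k < taps.size(); ++k)        // convolve with the kernel
                acc += taps[k] * ring[(pos + ring.size() - k) % ring.size()];
            pos = (pos + 1) % ring.size();
            int16_t y = static_cast<int16_t>(acc);
            std::fwrite(&y, sizeof y, 1, out);
        }
        std::fclose(in);
        std::fclose(out);
    }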

But if you do it this way, we think you didn't use enough abstraction so we reject you. Which is insane. Some time after I got this job, I found out I would have also been rejected if not for a few thoughtful comments, which were apparently some of the very few signals that "this guy knows what he's doing and has chosen not to write classes" rather than "this guy doesn't know how classes work."

zahlman 12/22/2025||
> Often, avoiding abstractions leads to simpler code.... But if you do it this way, we think you didn't use enough abstraction so we reject you.

I think you've unwittingly bought into your hiring team's fallacy that classes are somehow essential to "abstraction". They are not. Wikipedia:

> Abstraction is the process of generalizing rules and concepts from specific examples, literal (real or concrete) signifiers, first principles, or other methods. The result of the process, an abstraction, is a concept that acts as a common noun for all subordinate concepts and connects any related concepts as a group, field, or category.[1]

The fundamental abstraction in computer programs is the function. A class is principally a means of combination that sometimes incidentally creates a useful (but relatively complex) abstraction, by modeling some domain object. But the most natural expression of a "generalized rule" is of course the thing that takes some inputs and directly computes an output from them.

Of course, we also abstract when we assign semantics to some part of the program state, for example by using an enumeration rather than an integer. But in that case we are doing it in reverse; we have already noticed that the cases can be generalized as integers, and then explicitly... enumerate what it is that we're generalizing.
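
A tiny illustration of that reverse direction (the names are mine, purely for the example):

    // The integer representation already exists and already "works";
    // the enum just re-attaches the semantics we had in mind.
    enum class Channel : int { Red = 0, Green = 1, Blue = 2 };

    Channel from_raw(int i) { return static_cast<Channel>(i); }  // meaning restored at the boundary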

(The reason that "FFT" etc. classes are so grating is that the process of that computation hardly makes sense to model; the input and output do, but both of these are just semantic interpretations of a sequence of values. You could staple a runtime "time-domain" or "frequency-domain" type to those sequences; but the pipeline is so simple that there is never a real opportunity for confusion, nor reason for runtime introspection. I almost wonder if the hiring team comes from a Java background, where classes are required to hold the code?)

If I were writing the convolution, it would still probably involve quite a few functions, because I like to make my functions as short as feasible, hewing closely to SRP. Perhaps the ring buffer would be a class — because that would allow a good way to separate the logic of accessing the underlying array slots that make the ring buffer work, from the logic of actually using the ring buffer to do the convolution.
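
Something like this minimal sketch, say (names illustrative):

    #include <cstddef>
    #include <vector>

    // The class knows only how the slots wrap around; the convolution code
    // elsewhere just asks for "the sample written k pushes ago".
    class RingBuffer {
    public:
        explicit RingBuffer(std::size_t n) : data_(n, 0.0f), pos_(0) {}
        void push(float x) { data_[pos_] = x; pos_ = (pos_ + 1) % data_.size(); }
        float back(std::size_t k) const {   // k = 0 is the most recent sample
            return data_[(pos_ + data_.size() - 1 - k) % data_.size()];
        }
    private:
        std::vector<float> data_;
        std::size_t pos_;
    };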

(I'm not sure offhand what you'd need to convolve with to get the same result as "zeroing out the second half" of the FFT. I guess a sinc pulse? But the simple convolutions I'd think of doing to implement "low-pass filter" would certainly have a different frequency characteristic.)

immibis 12/22/2025||
Well, I substituted the task for a different but related one, so the substitute task isn't fully specified in detail or perfectly mathematically correct - just good enough to show the principle.

We have given extra points to a candidate for having an FFT class even though it should obviously be a function. And the comments clearly indicated that the candidate simply thought everything should be a class and was skeptical of things not being classes.

amelius 12/22/2025|||
no
mrob 12/22/2025||
While I agree with the point about improving skills, I think there's a distinction to be made between artistic code and engineering code. Linus Åkesson writes some exceptionally clever code, but it's artistic code. The cleverness is both essential to the artistic effect and unlikely to break anything important.

But I wouldn't want my OS written like that. In engineering code, the only benefit of cleverness is better performance, and the risk is unreliability. My previous computer was a lot slower and it already did everything I need, so I'm willing to sacrifice a lot of performance for reliability. Most software is written so wastefully that it's usually possible to make up for the lost performance without cleverness anyway.

zahlman 12/22/2025|
> Linus Åkesson writes some exceptionally clever code, but it's artistic code.

Thanks. I somehow ignored the URL and the sidebar, and only now made the connection that OP is by the guy who does all that ridiculous C64 tech demo stuff (especially the music).

gaigalas 12/22/2025||
This whole "clever code" has become a social thing.

It's one of the things people say when they don't like some piece of code but can't justify it with a more in-depth explanation of why the cleverness is unnecessary/counter-productive/etc.

Truth is, we need "clever code". Lots of it. Your OS and browser are full of it, and they would suck even more without that. We also need people willing to work on things that are only possible with "clever code".

From this point of view, the idea of the Lever makes sense. The quote also works for criticizing clever code, as long as we follow up with concrete justification (rather than appealing abstractly to some general god-given rule). In a world where _some clever code is always required_, it makes sense that this quote should work for both scenarios.

rswail 12/22/2025||
If debugging is the art of removing faults, then programming is the art of putting them in.
irishcoffee 12/22/2025|
IIRC, the term "debug" came from people literally picking insects out of massive walls of vacuum tubes. Someone can weigh in if I'm mistaken.

Also, a "computer" was a human back then, not a machine.

I'm not clear on if the term "programming" had been invented at that time or not.

zahlman 12/22/2025||
Etymonline attests:

> program(v.)

> 1889, "write program notes" (a sense now obsolete); 1896 as "arrange according to program," from program (n.).

> Of computers, "cause to be automatically regulated in a prescribed way" from 1945; this was extended to animals by 1963 in the figurative sense of "to train to behave in a predetermined way;" of humans by 1966. Related: Programmed; programming.

and

> computer(n.)

> 1640s, "one who calculates, a reckoner, one whose occupation is to make arithmetical calculations," agent noun from compute (v.).

> Meaning "calculating machine" (of any type) is from 1897; in modern use, "programmable digital electronic device for performing mathematical or logical operations," 1945 under this name (the thing itself was described by 1937 in a theoretical sense as Turing machine). ENIAC (1946) usually is considered the first.

The term "debug" also dates to 1945 per Etymonline, but Wikipedia also claims

> The term bug, in the sense of defect, dates back at least to 1878, when Thomas Edison described "little faults and difficulties" in his inventions as "Bugs".

> A popular story from the 1940s is from Admiral Grace Hopper.[1] While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay that impeded operation and wrote in a log book "First actual case of a bug being found". Although probably a joke, conflating the two meanings of bug (biological and defect), the story indicates that the term was used in the computer field at that time.

So the metaphorical sense previously existed, but was relatively new as applied to computers (since doing anything with computers at all was relatively new). And "computer" did refer to a human, but the modern sense was in the process of being established during the literal-bugs-in-vacuum-tubes era.

ninkendo 12/22/2025||
It doesn’t seem to me that code that was hard to write must also be hard to debug. What if I spend my cleverness “budget” specifically on making the code easier to debug? Splitting out just the right pieces into generic bits so they can be replaced with debuggable mocks, for instance.
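
For example, a minimal sketch of that kind of seam (the Clock names are hypothetical, just to show the shape):

    #include <chrono>
    #include <cstdio>

    // The only abstraction introduced is the seam that lets a debuggable
    // fake stand in for the real dependency.
    struct Clock {
        virtual ~Clock() = default;
        virtual long long now_ms() = 0;
    };

    struct SystemClock : Clock {
        long long now_ms() override {
            using namespace std::chrono;
            return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
        }
    };

    struct FakeClock : Clock {
        long long t = 0;
        long long now_ms() override { std::printf("now_ms -> %lld\n", t); return t; }  // logs every call
    };

    long long elapsed_ms(Clock& c, long long start_ms) { return c.now_ms() - start_ms; }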

You could counter that the word “clever” only applies to hard-to-debug code, but that makes the whole statement rather vacuous, no?

misja111 12/22/2025||
> Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

It's worse than that. It might not be you who has to debug it, but someone else. Maybe after you've already left the company. Maybe at 3 AM after a pager alert in production...

anilakar 12/22/2025|
The company made a choice, conscious or not, to not keep that talent in-house.
misja111 12/22/2025|||
Talent? Not if it was someone who was adding unnecessary complexity to the codebase.
DamonHD 12/22/2025|||
The company often does/did not get to make the choice, at least in my case.
nurettin 12/22/2025||
I am very happy and sad for people who will never spend days debugging their own code to figure out subtle bugs. Happy because they won't endure the torture, sad because an LLM took away their opportunity to learn and better themselves.
foster_nyman 12/22/2025|
This feels like a learning-theory restatement of the Kernighan quote: the point isn’t “never be clever”, it’s that cleverness is trainable. If you write right at your current ceiling, you reliably create a debugging task that’s a bit above it, and that mismatch becomes the stimulus (and motivation) for skill growth.

I think the same lever shows up in writing: drafting is “coding”, editing is “debugging”. If I only write safe/obvious prose, revision stays in the flow zone but I plateau. If I try a structure/argument I can’t quite see the full shape of yet, the rewrite phase hurts, but that’s literally me moving up to the next rung.

All of which maps pretty cleanly to Vygotsky’s ZPD (the bug report / reader confusion is the scaffold), and it’s also an antidote to Dunning–Kruger: the work keeps falsifying your self-assessment. The “wow, I was wrong” moment is often just evidence your skill bar moved.

Caveat: in collaborative/prod contexts you sometimes trade cleverness for maintainability, but if you always do that, you skip the lever.
