Posted by vismit2000 16 hours ago
I learnt about rule-5 through experience before I had heard it was a rule.
I used to do tech due diligence for acquisitions of companies. I had a very short time, about a day. I hit upon a great time-saving idea: asking them to show their DB schema and explain it. It turned out to be surprisingly effective. Once I understood the schema, most of the architecture explained itself.
Now I apply the same principle while designing a system.
Getting competent at it, however, is no joke and takes time.
There are a lot of systems where useless work and other inefficiencies are spread all over the place. Even though I think garbage collection is underrated (e.g. Rustifarians will agree with me in 15 years), it's a good example because of the nonlocality that profilers miss or misunderstand.
You can make great prop bets around "I'll rewrite your Array-of-Structures code to Structure-of-Arrays code and it will get much faster"
https://en.wikipedia.org/wiki/AoS_and_SoA
because SoA usually is much more cache-friendly, and AoS makes the memory hierarchy perform poorly in a way profilers can't see. The more time somebody spends looking at profilers, and the more they quote Rule 1, the more they get blindsided by it.
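The bet above can be sketched even in Python, though the real payoff shows up in low-level languages where cache lines dominate. This is a toy illustration (the record fields and sizes are invented): the AoS version drags whole records through memory just to read one field, while the SoA version walks a single contiguous buffer of doubles via the stdlib `array` module.

```python
import array
import random

N = 100_000

# Array-of-Structures: one record per point, fields interleaved in memory
aos = [{"x": random.random(), "y": random.random(), "z": random.random()}
       for _ in range(N)]

# Structure-of-Arrays: one contiguous buffer per field
soa_x = array.array("d", (p["x"] for p in aos))
soa_y = array.array("d", (p["y"] for p in aos))
soa_z = array.array("d", (p["z"] for p in aos))

def sum_x_aos():
    # touches every whole record just to read a single field
    return sum(p["x"] for p in aos)

def sum_x_soa():
    # streams through one contiguous array of doubles
    return sum(soa_x)
```

In C or Rust the SoA loop reads only the bytes it needs and the prefetcher sees a pure sequential stride, which is where the "much faster" part of the prop bet comes from; a profiler attributes the AoS cost to innocent-looking loads scattered across the program.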
On #5, I think most people tend to just lean on relational databases for a lot of data access patterns. I think it helps to have a fundamental understanding of where/how/why you can optimize databases, as well as where it makes sense to consider non-relational (NoSQL) databases too. A poorly structured database can crawl under a relatively small number of users.
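To make the "can crawl under a relatively small number of users" point concrete, here's a minimal sqlite3 sketch (the table and column names are invented for illustration): the same equality lookup is a full table scan until an index exists, and the query planner will tell you so long before load makes it hurt.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10_000)],
)

query = "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?"

# Without an index, the detail column of the plan mentions a full SCAN
plan_before = conn.execute(query, ("user42@example.com",)).fetchone()
print(plan_before)

conn.execute("CREATE INDEX idx_users_email ON users (email)")

# With the index, the plan switches to a SEARCH using the index
plan_after = conn.execute(query, ("user42@example.com",)).fetchone()
print(plan_after)
```

The same lookup goes from O(n) rows touched to an O(log n) index probe; no amount of profiling application code will surface that if you only ever look at CPU samples.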
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=38097031 - Nov 2023 (259 comments)
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=24135189 - Aug 2020 (323 comments)
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=15776124 - Nov 2017 (18 comments)
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=15265356 - Sept 2017 (112 comments)
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=7994102 - July 2014 (96 comments)
There are several orders of magnitude less discussion available of selecting data structures for problem domains than there is code.
If the underlying information is implicit in the high volume of code available, then maybe the models are good at it, especially when driven by devs who can/will prompt in that direction. And that assumption seems closely related to how much of that code was written by devs who focus on data.
I believe that’s what most algorithms books are about. And most OS books talk more about data than algorithms. And if you watch livestreams or read books on practical projects, you’ll see that a lot of refactoring is first selecting a data structure, then adapting the code around it. DDD is about data structure.
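A tiny Python sketch of "first select a data structure, then adapt the code around it" (the names are invented): the behavior is identical before and after, but swapping the list for a set turns membership checks from a linear scan into an average O(1) hash lookup, and the surrounding code barely has to move.

```python
# Before the refactor: membership checks scan a list, O(n) per lookup
banned_list = ["alice", "mallory", "trent"]

def is_banned_before(user: str) -> bool:
    return user in banned_list  # linear scan of the list

# After the refactor: pick the right structure first, then adapt the code
banned_set = set(banned_list)

def is_banned_after(user: str) -> bool:
    return user in banned_set  # average O(1) hash lookup
```

The callers don't change at all; only the structure behind the predicate does, which is exactly the kind of refactor the comment describes.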
Based on everything public, Pike is deeply hostile to generative AI in general:
- The Christmas 2025 incident (https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/)
- he's labeled GenAI as nuclear waste (https://www.webpronews.com/rob-pike-labels-generative-ai-nuc...)
- ideologically, he's spent his career chasing complexity reduction, advocating for code sobriety, resource efficiency, and clarity of thought. Large, opaque, energy-intensive LLMs represent the antithesis.
The whole article is an AI hallucination. It refers to the same "Christmas 2025 incident". The internet is dead for real.
This thing never resonated with me.
I often hear it as an excuse to ignore “optimization” at all.
It’s like “broken windows” theory. This allows slop, rot, and technical debt in. And it spreads fast.
Also, if everything is unoptimized, it's not something that can be easily fixed.
Death by a thousand cuts, if you will.
If the next generation doesn't even want to learn a programming language, they're definitely not going to learn how to write _clean_ code.
Maybe I'm just overly pessimistic about junior engineers at the moment because of that conversation lol.
Random side note: my teen son has grown up with iPhone-level tech, yet likes and finds my old Casio F91 watch very interesting. I still have faith :)
Anyway, I've found that if you want to get a coworker into reading technical books, the best way is with a novel or three. I've had good success with The Martian. The Phoenix Project might work too. Slip them fun books until they've built a habit and then drop The Mythical Man Month on them. :)
This was hard before, too.
PS: Not that we don't have people working at all levels of the stack today, just that each level of the stack (like the discussion going on today about Python's JIT compiler) will have a few (dozen or hundred) specialists. Everyone else can work with prompts.
I’m hoping the situation with LLMs will be the same. Teach the basics and allow people to fall back on them for at least the simpler tasks for their lifetimes. I know people, by the way, who can still use an abacus and a slide rule. I can too, but with a refresher beforehand because I seldom use those.
Even knowing with 100% certainty that performance will be subpar, requirements change often enough that it's often not worth the cost of adding architectural complexity too early.
I think there is value in attempting to do something the "wrong way" on purpose to some extent. I have walked into many situations where I was beyond convinced that the performance of something would suck only to be corrected harshly by the realities of modern computer systems.
Framing things as "yes, I know the performance is definitely not ideal in this iteration" puts that monkey in a proper cage until the next time around. If you don't frame it this way up front, you might be constantly baited into chasing the performance monkey around. Its taunts can be really difficult to ignore.