
Posted by vismit2000 16 hours ago

Rob Pike’s Rules of Programming (1989)(www.cs.unc.edu)
820 points | 404 comments
vishnugupta 10 hours ago|
I can’t emphasize the importance of rule-5 enough.

I learnt about rule-5 through experience before I had heard it was a rule.

I used to do tech due diligence for acquisitions of companies. I had a very short time, about a day. I hit upon a great time-saving idea: asking them to show their DB schema and explain it. It turned out to be surprisingly effective. Once I understood the schema, most of the architecture explained itself.

Now I apply the same principle while designing a system.

SoftTalker 9 hours ago|
Yes, fully agree. Rule 5 has been the center of my approach to designing and writing software for over 30 years now. Fad methodologies and platforms come and go but Rule 5 works as well for me today as it did in 1995.
artyom 8 hours ago||
I can't agree more. I live and breathe by rule #5.

Getting competent at it, however, is no joke and takes time.

PaulHoule 9 hours ago||
The "bottleneck" model of performance has limitations.

There are a lot of systems where useless work and other inefficiencies are spread all over the place. Even though I think garbage collection is underrated (e.g. Rustifarians will agree with me in 15 years), it's a good example because of the nonlocality that profilers miss or misunderstand.

You can make great prop bets around "I'll rewrite your Array-of-Structures code to Structure-of-Arrays code and it will get much faster"

https://en.wikipedia.org/wiki/AoS_and_SoA

because SoA usually is much more cache friendly, and AoS makes the memory hierarchy perform poorly in a way profilers can't see. The more time somebody spends looking at profilers, and the more they quote Rule 1, the more they get blindsided by it.
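A minimal Go sketch of the two layouts (type and field names are hypothetical): summing one field of an AoS slice drags every other field through the cache, while the SoA version touches only the array it actually needs.

```go
package main

import "fmt"

// AoS: each particle's fields are interleaved in memory, so a pass
// over X also pulls Y, Z, and Mass through every cache line it touches.
type ParticleAoS struct {
	X, Y, Z, Mass float64
}

// SoA: each field lives in its own contiguous slice, so a pass over X
// reads only the bytes it needs.
type ParticlesSoA struct {
	X, Y, Z, Mass []float64
}

func sumXAoS(ps []ParticleAoS) float64 {
	var s float64
	for i := range ps {
		s += ps[i].X
	}
	return s
}

func sumXSoA(ps *ParticlesSoA) float64 {
	var s float64
	for _, x := range ps.X {
		s += x
	}
	return s
}

func main() {
	const n = 4
	aos := make([]ParticleAoS, n)
	soa := &ParticlesSoA{
		X: make([]float64, n), Y: make([]float64, n),
		Z: make([]float64, n), Mass: make([]float64, n),
	}
	for i := 0; i < n; i++ {
		aos[i].X = float64(i)
		soa.X[i] = float64(i)
	}
	// Both layouts compute the same answer; only memory traffic differs.
	fmt.Println(sumXAoS(aos), sumXSoA(soa)) // → 6 6
}
```

The wall-clock gap only shows up at sizes that blow out the cache, which is exactly why a line-level profiler attributes the cost to innocent-looking loads rather than to the layout.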

tracker1 9 hours ago||
Pretty much live by these in practice... I've had a lot of arguments over #3 though... yes, nested loops can cause problems, but when you're dealing with < 100 or so items in each nested and outer loop, it's not a big deal in practice. It's simpler and easier to reason about... don't optimize unless you really need to for practical reasons.

On #5, I think most people tend to just lean on relational databases for a lot of data access patterns. It helps to have some fundamental understanding of where/how/why you can optimize databases, as well as where it makes sense to consider non-relational (NoSQL) databases too. A poorly structured database can crawl under a relatively small number of users.

nateb2022 14 hours ago||
Previous discussion: https://news.ycombinator.com/item?id=15776124 (8 years ago, 18 comments)
tomhow 6 hours ago|
Here's the full set:

Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=38097031 - Nov 2023 (259 comments)

Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=24135189 - Aug 2020 (323 comments)

Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=15776124 - Nov 2017 (18 comments)

Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=15265356 - Sept 2017 (112 comments)

Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=7994102 - July 2014 (96 comments)

justacatbot 11 hours ago||
Rule 2 is the one that keeps biting me. You can spend days micro-optimizing functions only to realize the real bottleneck was storing data in a map when you needed a sorted list. The structure of the data almost always determines the structure of the code.
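A hedged Go illustration of that trap (names hypothetical): if the hot path needs ordered traversal, a map forces a key sort on every read, while a slice kept sorted on insert makes reads a plain scan.

```go
package main

import (
	"fmt"
	"sort"
)

// With a map, every ordered read has to collect and sort the keys.
func orderedFromMap(m map[int]string) []string {
	keys := make([]int, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Ints(keys) // O(n log n) paid on every traversal
	out := make([]string, 0, len(m))
	for _, k := range keys {
		out = append(out, m[k])
	}
	return out
}

// With a slice kept sorted at insert time, ordered reads are free.
type entry struct {
	key int
	val string
}

func insertSorted(s []entry, e entry) []entry {
	i := sort.Search(len(s), func(j int) bool { return s[j].key >= e.key })
	s = append(s, entry{})
	copy(s[i+1:], s[i:])
	s[i] = e
	return s
}

func main() {
	m := map[int]string{3: "c", 1: "a", 2: "b"}
	var s []entry
	for k, v := range m {
		s = insertSorted(s, entry{k, v})
	}
	vals := make([]string, len(s))
	for i, e := range s {
		vals[i] = e.val
	}
	fmt.Println(orderedFromMap(m), vals) // → [a b c] [a b c]
}
```

Same results either way; the point is that picking the structure to match the access pattern removes the work, which no amount of tuning inside orderedFromMap could.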
ummonk 10 hours ago|
That's Rule 5 no?
ryguz 7 hours ago||
Rule 5, about data dominating, resonates most in modern systems. The trend is to just throw more code at a problem, when most performance and correctness issues come down to how data flows through the system. Most junior engineers optimize the wrong layer because they start with the code instead of the data model.
tobwen 15 hours ago||
Added to AGENTS.md :)
wwweston 15 hours ago||
How good is your model at picking good data structures?

There are several orders of magnitude less discussion available about selecting data structures for problem domains than there is code.

If the underlying information is implicit in the high volume of available code, then maybe the models are good at it, especially when driven by devs who can/will prompt in that direction. And that seems likely to depend on how much of that code was written by devs who focus on data.

skydhash 13 hours ago||
> There’s several orders of magnitude less available discussion of selecting data structures for problem domains than there is code.

I believe that’s what most algorithms books are about. And most OS books talk more about data than algorithms. And if you watch livestreams or read books on practical projects, you’ll see that a lot of refactoring is first selecting a data structure, then adapting the code around it. DDD is about data structures.

ozgrakkurt 14 hours ago|||
Would be cool to see the live reaction of Rob Pike to this comment
andsoitis 14 hours ago||
> Would be cool to see the live reaction of Rob Pike to this comment

Based on everything public, Pike is deeply hostile to generative AI in general:

- The Christmas 2025 incident (https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/)

- he's labeled GenAI as nuclear waste (https://www.webpronews.com/rob-pike-labels-generative-ai-nuc...)

- ideologically, he's spent his career chasing complexity reduction, advocating for code sobriety, resource efficiency, and clarity of thought. Large, opaque, energy-intensive LLMs represent the antithesis.

clktmr 12 hours ago||
> - he's labeled GenAI as nuclear waste (https://www.webpronews.com/rob-pike-labels-generative-ai-nuc...)

The whole article is an AI hallucination. It refers to the same "Christmas 2025 incident". The internet is dead for real.

phh 12 hours ago||
Unironically. Every time I've asked an LLM to make something faster, it tried blind code optimizations rather than measuring.
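The fix is the same for humans and LLMs: measure first. A toy Go sketch of that habit (the real tools would be `go test -bench` and pprof), timing two candidate implementations instead of guessing which is faster:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Naive concatenation: allocates a fresh string on every iteration.
func concatPlus(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// strings.Builder amortizes allocations into one growing buffer.
func concatBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

// timeIt is a crude stopwatch; prefer `go test -bench` for real work.
func timeIt(name string, f func()) {
	start := time.Now()
	f()
	fmt.Printf("%s took %v\n", name, time.Since(start))
}

func main() {
	parts := make([]string, 20000)
	for i := range parts {
		parts[i] = "x"
	}
	timeIt("concatPlus", func() { concatPlus(parts) })
	timeIt("concatBuilder", func() { concatBuilder(parts) })
}
```

The numbers, not intuition, decide whether the quadratic version is actually a problem at the sizes you care about, which is Rules 1 and 2 in one sitting.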
piskov 3 hours ago||
> Pike's rules 1 and 2 restate Tony Hoare's famous maxim "Premature optimization is the root of all evil."

This thing never resonated with me.

I often hear it as an excuse to ignore “optimization” at all.

It’s like the “broken windows” theory: it lets slop, rot, and technical debt in. And it spreads fast.

Also, if everything is unoptimized, it's not something that can be easily fixed.

Death by a thousand cuts, if you will.

cestith 9 hours ago||
Any software developer who hasn’t read _The Practice of Programming_ by Kernighan and Pike should. It’s not that long and much of it is timeless.
Insanity 9 hours ago|
Yeah, but I doubt many of the newer generation are going to read this. I manage a team of engineers, and one of the recent-ish graduates asked me in our 1-on-1 if it's still worth learning Python given that he can just write prompts. (Python is the language all our tools use).

If the next generation doesn't even want to learn a programming language, they're definitely not going to learn how to write _clean_ code.

Maybe I'm just overly pessimistic about junior engineers at the moment because of that conversation lol.

LVB 9 hours ago|||
Here's my optimistic take: the fundamental things that spark joy about learning a novel algorithm, pattern, technique, etc. haven't gone anywhere, and there's no reason to think those things won't continue to be interesting. Furthermore, it seems like reading code isn't going anywhere too soon, and that definitely benefits from clean code. It follows that someone who can actually recognize clean from spaghetti, and tell the LLM to refactor it into XYZ style, is going to be relatively more valuable.

Random side note: my teen son has grown up with iPhone-level tech, yet likes and finds my old Casio F91 watch very interesting. I still have faith :)

calepayson 9 hours ago||||
Junior here. There are still a few of us who value books and documentation. It's a weird time though. Hard to feel confident that you're learning in the correct way.

Anyway, I've found that if you want to get a coworker into reading technical books, the best way is with a novel or three. I've had good success with The Martian. The Phoenix Project might work too. Slip them fun books until they've built a habit and then drop The Mythical Man Month on them. :)

zahlman 2 hours ago||
> Hard to feel confident that you're learning in the correct way.

This was hard before, too.

wduquette 9 hours ago||||
In almost forty years of experience, the fraction of developers I've known who read in the field beyond what's strictly needed for their task is very small. I'm always delighted when I find one.
kshacker 8 hours ago||||
IMO it is a valid question. Our AI has not yet reached that level, and our prompts have not yet reached that level of sophistication. But I do not code in assembly any more, I do not do pointer arithmetic any more, so maybe some day we get to a state where we do not write Python either. It is not going to be soon, despite the AI bandwagon saying so; there are too many legacy pieces that are not documented well and not easily decipherable due to context window limits. But in 10 years... maybe prompts are all we need.

PS: Not that we do not have people working at all levels of the stack today, just that each level, like Python's JIT compiler being discussed today, will be handled by a few (dozen or hundred) specialists. Everyone else can work with prompts.

fwip 9 hours ago|||
I obviously wasn't there, but it sounds like maybe they were asking for reassurance. There's a lot of people out there saying that LLMs are going to totally replace regular programming, and for a new grad who doesn't know much about the world, they value your expertise.
Insanity 9 hours ago||
That's a positive interpretation. You might be right, either way that's what I pointed them to. I don't think the LLMs will really replace engineers in the foreseeable future, and so learning the languages and the fundamentals is still needed.
cestith 8 hours ago||
I have a laptop and a phone right here, right now. I have actual calculators around here somewhere. I’ve been out of school for decades. I can still do arithmetic and basic algebra in my head or on paper, and often do.

I’m hoping the situation with LLMs will be the same. Teach the basics and allow people to fall back on them for at least the simpler tasks for their lifetimes. I know people, by the way, who can still use an abacus and a slide rule. I can too, but with a refresher beforehand because I seldom use those.

patwolf 11 hours ago|
These rules apply equally well to system architecture. I've been trying to talk our team out of premature optimization (redis cluster) and fancy algorithms (bloom filters) to compensate for poor data structures (database schema) before we know if performance is going to be a problem.

Even knowing with 100% certainty that performance will be subpar, requirements change often enough that it's often not worth the cost of adding architectural complexity too early.

bob1029 11 hours ago|
> Even knowing with 100% certainty that performance will be subpar

I think there is value in attempting to do something the "wrong way" on purpose to some extent. I have walked into many situations where I was beyond convinced that the performance of something would suck only to be corrected harshly by the realities of modern computer systems.

Framing things as "yes, I know the performance is definitely not ideal in this iteration" puts that monkey in a proper cage until the next time around. If you don't frame it this way up front, you might be constantly baited into chasing the performance monkey around. Its taunts can be really difficult to ignore.
