
Posted by vismit2000 14 hours ago

Rob Pike’s Rules of Programming (1989)(www.cs.unc.edu)
780 points | 389 comments
embedding-shape 13 hours ago|
"Epigrams in Programming" by Alan J. Perlis has a lot more, if you like short snippets of wisdom :) https://www.cs.yale.edu/homes/perlis-alan/quotes.html

> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

I've always preferred Perlis' version. It might be slightly over-used in functional programming to justify all kinds of hijinks, but with some nuance it works out really well in practice:

> 9. It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.
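A minimal Python sketch of the spirit of that epigram (my own illustrative example, all names made up): when records are plain dicts, the standard library's generic machinery already gives you many of the "100 functions" for free:

```python
# Records as plain dicts: one data structure, many generic operations.
users = [
    {"name": "ada", "age": 36},
    {"name": "grace", "age": 45},
    {"name": "alan", "age": 41},
]

# filter, sort, and project all work without any per-type methods
adults_over_40 = [u for u in users if u["age"] > 40]
by_age = sorted(users, key=lambda u: u["age"])
names = [u["name"] for u in users]
```

Nothing here needed a class hierarchy; every comprehension, `sorted`, `min`, `max`, `json.dumps`, etc. operates on the same list-of-dicts shape.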

rsav 12 hours ago||
There's also:

>I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.

-- Linus Torvalds

aleph_minus_one 7 hours ago|||
> >I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.

> -- Linus Torvalds

What about programmers

- for whom the code is a data structure?

- who formulate their data structures in a way (e.g. in a very powerful type system) such that all the data structures are code?

- who invent a completely novel way of thinking about computer programs such that in this paradigm both code and data structures are just trivial special cases of some mind-blowing concept ζ of which there exist other special cases that are useful to write powerful programs, but these special cases are completely alien from anything that could be called "code" or "data (structures)", i.e. these programmers don't think/worry about code or data structures, but about ζ?

sph 9 hours ago||||
From what I understand from the vibe coders, they tell a machine what the code should do, but not how it should do it. They leave the important decisions (the shape of data) to an LLM and thus run afoul of this, which is gonna bite them in the ass eventually.
mikepurvis 11 hours ago||||
I think this is sometimes a barrier to getting started for me. I know that I need to explore the data structure design in the context of the code that will interact with it, and that some of that code will be thrown out as the data structure becomes clearer, but it can still be hard to get off the ground when my gut instinct is that the data design isn't right.

This kind of exploration can be a really positive use case for AI I think, like show me a sketch of this design vs that design and let's compare them together.

sph 9 hours ago|||
AI is terrible for this.

My recommendation is to truly learn a functional language and apply it to a real world product. Then you’ll learn how to think about data, in its pure state, and how it is transformed to get from point A to point B. These lessons will make for much cleaner design that will be applicable to imperative languages as well.

Or learn C where you do not have the luxury of using high-level crutches.

ignoramous 11 hours ago|||
> This kind of exploration can be a really positive use case for AI I think

Not sure if SoTA codegen models are capable of navigating the design space and coming up with optimal solutions. Maybe specialized models (like DeepMind's Sec-Gemini for cybersecurity), if there are any, might?

I reckon a programmer who has already learnt about / explored the design space will be able to prompt more pointedly and evaluate the output qualitatively.

> sometimes a barrier to getting started for me

Plenty of great books on the topic (:

Algorithms + Data Structures = Programs (1976), https://en.wikipedia.org/wiki/Algorithms_%2B_Data_Structures...

mikepurvis 11 hours ago||
Yeah, the key word is exploration. It's not "hey Claude, write the design doc for me" but rather: here are two possible directions for how to structure my solution, help me sketch each out a bit further so that I can get a better sense of what roadblocks I may hit 50-100 hours into implementation, when the cost of changing course is far greater.
Zamicol 8 hours ago|||
That is excellent. I'm putting that in my notes.
Intermernet 13 hours ago|||
I believe the actual quote is:

"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)

bfivyvysj 12 hours ago|||
This is the biggest issue I see with AI driven development. The data structures are incredibly naive. Yes it's easy to steer them in a different direction but that comes at a long term cost. The further you move from naive the more often you will need to resteer downstream and no amount of context management will help you, it is fighting against the literal mean.
nostrademons 9 hours ago|||
The rule may not hold with AI-driven development. The rule exists because it's expensive to rewrite code that depends on a given data structure arrangement, so programmers usually resort to hacks (e.g. writing translation layers, or views and traversals of the data) so that functionality written later can work with a more convenient data structure. If writing code becomes free, the AI will just rewrite the whole program to fit the new requirements.

This is what I've observed with using AI on relatively small (~1000 line) programs. When I add a requirement that requires a different data structure, Claude will happily move to the new optimal data structure, and rewrite literally everything accordingly.

I've heard that it gets dicier when you have source files that are 30K-40K lines and programs that are in the million+ line range. My direct reports have found that Gemini falls down badly in this case, because the source file blows the context window. But even then, they've also found that you can make progress by asking Gemini to come up with the new design, then to list the modules that depend on the old structure, then to write a shim layer module-by-module so the old code uses the new data structure; then have it replace the old data structure with the new one, remove the shim layer, and rewrite each module to natively use the new data structure. Basically, babysit it through the same refactoring that an experienced programmer would use for a large-scale refactoring in a million+ line codebase, but have the AI rewrite modules in 5 minutes that would take a programmer 5 weeks.

Intermernet 12 hours ago||||
Naive doesn't mean bad. 99% of software can be written with well-understood, well-documented data structures. One of the problems with AI is that it allows people to create software without understanding the trade-offs of certain data structures, algorithms, and more fundamental hardware management strategies.

You don't need to be able to pass a leet code interview, but you should know about big O complexity, you should be able to work out if a linked list is better than an array, you should be able to program a trie, and you should be at least aware of concepts like cache coherence / locality. You don't need to be an expert, but these are realities of the way software and hardware work. They're also not super complex to gain a working knowledge of, and various LLMs are probably a really good way to gain that knowledge.
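For instance, a trie really is small enough to write from scratch; here's a hypothetical minimal sketch (my own example, nodes as plain dicts):

```python
# Minimal trie: each node is a dict of child characters, with a
# special "$" key marking the end of a stored word.
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})  # descend, creating nodes as needed
        node["$"] = True  # end-of-word marker

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

    def has_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True
```

Lookup cost is proportional to the key length, independent of how many words are stored, which is exactly the kind of trade-off worth having a working feel for.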

dotancohen 11 hours ago||||
Then don't let the AI write the data structures. I don't. I usually don't even let the AI write the class or method names. I give it a skeleton application and let it fill in the code. Works great, and I retain knowledge of how the application works.
andsoitis 12 hours ago|||
> This is the biggest issue I see with AI driven development. The data structures are incredibly naive.

Bill Gates, for example, always advocated for thinking through the entire program design and data structures before writing any code, emphasizing that structure is crucial to success.

neocron 12 hours ago|||
Ah Bill Gates, the epitome of good software
andsoitis 11 hours ago|||
> Ah Bill Gates, the epitome of good software

While developing Altair BASIC, his choice of data structures and algorithms enabled him to fit the code into just 4 kilobytes.

dotancohen 11 hours ago|||
Yes, actually. Gates wrote great software.

Microsoft is another story.

jll29 11 hours ago||
And Paul Allen wrote a whole Altair emulator so that they could use an (academic) Harvard computer for their little (commercial) project and test/run Bill Gates' BASIC interpreter on it.
PaulDavisThe1st 9 hours ago|||
I'd like to see Gates or anyone else do that for a project that lasts (at least) a quarter century and sees a many-fold increase in CPU speed, RAM availability, disk capacity etc.
mock-possum 9 hours ago|||
I’m really going to need to see both. There’s a lot of business logic that simply is not encoded in a data storage model.
jerf 10 hours ago|||
As I'm sure more and more people are using AI to document old systems, even just to get a personal foothold in them if they don't intend to share it, here's a hint related to that: by default, if you fire an AI at a codebase, at least in my experience you get the usual documentation you expect from a system: this is the list of "key modules", this module does this, this module does that, this module does the other thing.

This is the worst sort of documentation; technically true but quite unenlightening. It is, in the parlance of the Fred Brooks quote mentioned in a sibling comment, neither the "flowchart" nor the "tables"; it is simply a brute enumeration of code.

To which the fix is, ask for the right thing. Ask for it to analyze the key data structures (tables) and provide you the flow through the program (the flowchart). It'll do it no problem. Might be inaccurate, as is a hazard with all documentation, but it makes as good a try at this style of documentation as "conventional" documentation.

Honestly one of the biggest problems I have with AI coding and documentation is just that the training set is filled to the brim with mediocrity and the defaults are inferior like this on numerous fronts. Also relevant to this conversation is that AI tends to code the same way it documents and it won't have either clear flow charts or tables unless you carefully prompt for them. It's pretty good at doing it when you ask, but if you don't ask you're gonna get a mess.

(And I find, at least in my contexts, using opus, you can't seem to prompt it to "use good data structures" in advance, it just writes scripting code like it always does and like that part of the prompt wasn't there. You pretty much have to come back in after its first cut and tell it what data structures to create. Then it's really good at the rest. YMMV, as is the way of AI.)

0xpgm 12 hours ago|||
Reminded me of this thread between Alan Kay and Rich Hickey where Alan Kay thinks "data" is a bad idea.

My interpretation of his point of view is that what you need is a process/interpreter/live object that 'explains' the data.

https://news.ycombinator.com/item?id=11945722

EDIT: He writes more about it on Quora. In brief, he says it is 'meaning', not 'data', that is central to programming.

https://qr.ae/pCVB9m

gregw2 10 hours ago|||
Thanks for the pointer to this 2016 dialog!

One part of it has interesting new resonance in the era of agentic LLMs:

alankay on June 21, 2016 | root | parent | next [–]

This is why "the objects of the future" have to be ambassadors that can negotiate with other objects they've never seen. Think about this as one of the consequences of massive scaling ...

Nowadays, rather than the methods associated with data objects, we are dealing with "context" and "prompts".

0xpgm 10 hours ago||
Quite a nice insight there!

I should probably be thinking more in this direction.

johnmaguire 10 hours ago||||
Hm, not sure. Data on its own (say, a string of numbers) might be meaningless - but structured data? Sure, there may be ambiguity but well-structured data generally ought to have a clear/obvious interpretation. This is the whole idea of nailing your data structures.
0xpgm 10 hours ago||
Yeah, structured data implies some processing on raw data to improve its meaning. Alan Kay seems to want to push this idea to encapsulate data with rich behaviour.
christophilus 11 hours ago|||
I’m with Rich Hickey on this one, though I generally prefer my data be statically typed.
0xpgm 10 hours ago||
Sure, static typing adds some sort of process that provides a coarse interpretation of the data.
mchaver 12 hours ago|||
I find languages like Haskell and ReScript/OCaml work really well for CRUD applications because they push you to think about your data and types first. Then you think about the transformations you want to make on the data via functions. When looking at new code I usually look for the types first, specifically what is getting stored and read.
embedding-shape 11 hours ago||
Similarly, that approach works really well in Clojure too, albeit with a lot less concern for types, but the "data and data structures first" principle is widespread in the ecosystem.
mchaver 9 hours ago||
I've heard good things about Clojure, and it's different from what I am used to (bonus points because I like an intellectual challenge), so trying it out is definitely on my todo list.
tangus 11 hours ago|||
Aren't they basically saying opposite things? Perlis is saying "don't choose the right data structure, shoehorn your data into the most popular one". This advice might have made sense before generic programming was widespread; I think it's obsolete.
Rygian 11 hours ago|||
Pike: strongly typed logic is great!

Perlis: stringly typed logic is great!

embedding-shape 10 hours ago|||
> Perlis is saying "don't choose the right data structure, shoehorn your data into the most popular one"

I don't take it like that. A map could be the right data structure for something people typically reach for classes to do, and then you get a whole bunch of functions that can already operate on a map-like thing for free.

If you take a look at the standard library and the data structures of Clojure you'd see this approach taken to a somewhat extreme amount.

alberto-m 11 hours ago|||
This quote from "Dive into Python", which I read as a fresh graduate, was one of the most impactful lines I ever encountered in a programming book.

> Busywork code is not important. Data is important. And data is not difficult. It's only data. If you have too much, filter it. If it's not what you want, map it. Focus on the data; leave the busywork behind.

TYPE_FASTER 11 hours ago|||
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

If I have learned one thing in my 30-40 years spent writing code, it is this.

seanalltogether 10 hours ago||
I agree. The biggest lesson I try to drive home to newer programmers who join my projects is that it's always best to transform the data into the structure you need at the very end of the chain, not at the beginning or middle. Keep the data in its purest form and then transform it right before displaying it to the user, or right before providing it in the final API for others to consume.

You never know how requirements are going to change over the next 5 years, and pure structures are always the most flexible to work with.
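A small Python sketch of that idea (my own illustrative example, all names made up): the core works on the pure structure, and formatting happens only at the display boundary:

```python
from datetime import datetime, timezone

# Core data stays in its purest form: names plus raw epoch timestamps.
events = [
    {"name": "signup", "ts": 1700000000.0},
    {"name": "login", "ts": 1700003600.0},
]

def events_since(events, ts):
    """Business logic operates on the pure structure only."""
    return [e for e in events if e["ts"] >= ts]

def render(event):
    """Transformation to a human-readable form happens at the very end."""
    when = datetime.fromtimestamp(event["ts"], tz=timezone.utc)
    return f"{event['name']} at {when.isoformat()}"
```

If requirements change (different time zone, different date format, a new consumer), only the boundary function changes; everything upstream keeps working on the raw floats.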

bluGill 9 hours ago||
Related: your business logic should work on metric units. It is a UI concern if the user wants to see some other measurement system. Convert to feet, chains, cubits... or whatever obscure measurement system the user wants at display time. (if you do get an embedded device that reports non-metric units convert when it comes in - you will get a different device in the future that reports different units anyway)

You still have to worry about someone using kg when you use g, but you avoid a large class of problems and make your logic easier.
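A minimal sketch of that rule (factors and names are my own illustration): logic in SI units, conversion only at display time:

```python
METERS_PER_FOOT = 0.3048  # exact by definition

def braking_distance_m(speed_mps, decel_mps2=7.0):
    """Business logic: everything in meters and seconds."""
    return speed_mps ** 2 / (2 * decel_mps2)

def display_distance(meters, unit="m"):
    """UI concern: convert only when showing the value to the user."""
    if unit == "ft":
        return f"{meters / METERS_PER_FOOT:.1f} ft"
    return f"{meters:.1f} m"
```

The core function never knows feet exist; adding chains or cubits later touches only the display layer.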

dcuthbertson 11 hours ago|||
But doesn't No. 2 directly conflict with Pike's 5th rule? It seems to me these are all aphorisms that have to be taken with a grain of salt.

> 2. Functions delay binding; data structures induce binding. Moral: Structure data late in the programming process.

linhns 12 hours ago|||
Nice to see Perlis mentioned once in a while. Reading SICP again, still learning new things.
WillAdams 11 hours ago||
There is a matching video series for SICP:

https://ocw.mit.edu/courses/6-001-structure-and-interpretati...

which I found very helpful in (finally) managing to get through that entire text (and do all the exercises).

Hendrikto 12 hours ago|||
I feel like these are far more vague and less actionable than the 5 Pike rules.
JanisErdmanis 12 hours ago|||
With 100 functions and one data structure it is almost like programming with global variables, where a new instance is equivalent to a new process. Doesn't seem like a good rule to follow.
embedding-shape 11 hours ago||
The scope where that data structure or those functions are available is a different concern though; "100 functions + 1 data structure" doesn't require them to be global or private, it's a separate thing.
JanisErdmanis 7 hours ago||
One can always view global variables as equivalent to a context object that is passed into every function. It's just a syntactic difference whether one constructs such a data structure explicitly or uses it implicitly via globals.

What I am getting at is that when one has such a gigantic data structure, there is no separation of concerns.

CyberDildonics 6 hours ago||
Does one need one's separation of concerns if one's concerns shouldn't be separated in the first place?

Anytime one has access to a database, one has access to one large global data structure that one can access from anywhere in a program.

This same concept goes for one's global state in one's game if one is making a game.

JanisErdmanis 6 hours ago||
Separation of concerns is still a valid paradigm with a single global data structure like a GUI, a microservice, a database, etc. In such a situation one can still separate concerns by composing the global data structure from smaller units and defining methods with respect to those smaller units. That way one does not need to wonder whether there are unattended side effects when calling a function that mutates the state.
CyberDildonics 5 hours ago||
Seems like one is backpedaling because one was just talking about one's separation of one's concerns and now one is defending one's separation of concerns with respect to one's global data structure.
JanisErdmanis 4 hours ago||
I still firmly believe that one ctx object and a hundred functions/methods is as bad as programming with plain variables defined in the global scope. If the ctx is composed from smaller data structures against which the functions are defined, then all is good. This is the opposite of the rule.
CyberDildonics 2 hours ago||
But why?

You keep saying you believe it, but that is literally how a database works, along with game state manipulation, string manipulation, iterator algorithms, list comprehensions, range algorithms, image manipulations, etc. These are all instances where you use the same data structures over and over with as many algorithms and functions as you need.

Pxtl 11 hours ago|||
As much as relational DBs have held back enterprise software for a very long time by being so conservative in their development, the fact that they force you to put this relationship absolutely front-of-mind is excellent.
embedding-shape 11 hours ago||
I'd personally consider "persistence" AKA "how to store shit" to be a very different concern from the data structures that you use in the program. Ideally, your design shouldn't care about how things are stored, unless there is a particular concern for how fast things read/write.
mosura 10 hours ago|||
Often significant improvements to every aspect of a system that interacts with a database can be made by proper design of the primary keys, instead of the generic id way too many people jump to.

The key difficulty is that identifying what these are is far from obvious upfront, and so often an index appears adjacent to a table that represents what the table should have been in the first place.

embedding-shape 10 hours ago|||
I guess that might be true too, to some extent. Most of the times I've seen something "messy" in software design, it's almost always been about domain code being made overly complicated compared to what it has to do, and almost never about how domain data gets written to or read from a database, common as that is. Of course storage/persistence isn't non-essential, it's just a less common problem than the typical design/architecture spaghetti I encounter.
Pxtl 9 hours ago|||
I'm a firm believer in always using an auto-generated surrogate key for the PK because domain PKs always eventually become a pain point. The problem is that doing so does real damage to the ergonomics of the DB.

This is why I fundamentally find SQL too conservative and outdated. There are obvious patterns for cross-cutting concerns that would mitigate things like this but enterprise SQL products like Oracle and MS are awful at providing ways to do these reusable cross-cutting concerns consistently.

Pxtl 9 hours ago|||
I meant to reply to a different comment originally, specifically the one including this quote from Torvalds:

> Good programmers worry about data structures and their relationships.

> -- Linus Torvalds

I was specifically thinking about the "relationship" issues. The worst messes to fix are the ones where the programmer didn't consider how to relate the objects together: which relationships need to be direct PK bindings, which can be indirect, which things have to be cached vs calculated live, which things are the cache (vs the master copy), what the cardinality of each relationship is, which relationships are semantically ownerships vs peers, which data is part of the system itself vs configuration data vs live, how you handle changes to the data (event sourcing vs changelogging vs append-only vs yolo update), etc.

Not quite "data structures" I admit but absolutely thinking hard about the relationship between all the data you have.

SQL doesn't frame all of these questions out for you but it's good getting you to start thinking about them in a way you might not otherwise.

DaleBiagio 12 hours ago|||
" 9. It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."

That's great

mpalmer 12 hours ago|||
Was the "J" short for "Cassandra"?

    When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.
bandrami 11 hours ago|||
Also basically everything DHH ever said (I stopped using Rails 15 years ago but just defining data relationships in YAML and typing a single command to get a functioning website and database was in fact pretty cool in the oughts).
mosura 13 hours ago||
Perlis is just wrong in that way academics so often are.

Pike is right.

Intermernet 12 hours ago|||
Hang on, they mostly agree with each other. I've spoken to Rob Pike a few times and I never heard him call out Perlis as being wrong. On this particular point, Perlis and Pike are both extending an existing idea put forward by Fred Brooks.
mosura 12 hours ago||
Perlis absolutely is not saying the same thing, and as the commenter notes the functional community interpret it in a particularly extreme way.

I would guess Pike is simply wise enough not to get involved in such arguments.

jacquesm 12 hours ago||||
Perlis is right in the way that academics so often are and Pike is right in the way that practitioners often are. They also happen to be in rough agreement on this, unsurprisingly so.
hrmtst93837 12 hours ago||||
Treating either as gospel is lazy: Perlis was pushing back on dogma, and Pike on theory, while legacy code makes both look cleaner on paper.
AnimalMuppet 11 hours ago|||
Could you be more specific?
mosura 11 hours ago||
Promoting the idea of one data structure with many functions contradicts:

“If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident.”

And:

“Use simple algorithms as well as simple data structures.”

A data structure general enough to solve enough problems to be meaningful will either be poorly suited to some problems or have complex algorithms for those problems, or both.

There are reasons we don’t all use graph databases or triple stores, and rely on abstractions over our byte arrays.

AnimalMuppet 9 hours ago||
I think you are badly misinterpreting the statement.

Let's say you're working for the DMV on a program for driver's licenses. The idea is to use one structure for driver's license data, as opposed to using one structure for new driver's licenses, a different one for renewals, and yet a third for expired ones, and a fourth one for name changes.

It is not saying that you should use byte arrays for driver's license records, so that you can use the same data structure for driver's license data and missile tracks. Generalize within your program, not across all possible programs running on all computers.
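A sketch of that reading (all fields hypothetical, my own example): one License structure, with renewals, expirations, and name changes as plain functions over it rather than four separate record types:

```python
from dataclasses import dataclass, replace
from datetime import date

@dataclass(frozen=True)
class License:
    """One structure for all driver's license data."""
    holder: str
    number: str
    expires: date

# The per-use-case variation lives in functions, not in new record types.
def is_expired(lic, today):
    return lic.expires < today

def renew(lic, new_expiry):
    return replace(lic, expires=new_expiry)

def change_name(lic, new_name):
    return replace(lic, holder=new_name)
```

Every function understands every license; nothing has to be translated between a "renewal record" and an "expiration record".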

mosura 9 hours ago||
Your admittedly exaggerated example is arguing against the entire concept of relational databases, which is not a winning proposition.

You do not write programs with one map of id to thing as you are suggesting here.

oxag3n 5 hours ago||
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

I'm a big fan of Data Oriented Design. Once you conceptualize how data is stored and transformed in your program, it just has to be reflected in data structures that make it possible.

Modern design approaches tend to focus on choosing the right abstraction, like columnar/row layout, caching, etc. They mostly fail to work with the data optimally. Optimal here means getting the most out of the underlying hardware's capabilities, for example reading large and preferably contiguous blocks of data from magnetic storage, parallel data processing, keeping intermediate results in CPU caches, and utilizing all physical SSD queues.

aaronblohowiak 5 hours ago|
The fundamental tension between nouns and verbs and the attempts to unify them like events have made programming a long art form to study.

It's all use-case and priority-specific, and I think the more varied your experience and the more tools you have in the tool belt, the better positioned you are to bring the right solution to bear. Of course, you may think you have the right solution in mind (let's say using partitions in Postgres for something) but find the ORM your service is using doesn't support it; then what is "best" becomes not only problem-specific but also tool-specific. Finally, even if you have the best solution and your existing ecosystem supports it, but the rest of the engineering staff is unfamiliar with it, it may again no longer be "best".

This ladder of problem-fit, ecosystem-fit, and staffing-fit is something I have grappled with throughout my career.

LLMs are only so-so at any of the above (even when including the agent as "staff".)

R0m41nJosh 7 hours ago||
When I was young I took time to "optimize" my code where it obviously had no impact, like in simple Python scripts. It was just ego, to look smart. I guess the "early optimization" warnings are aimed at young developers who want to show off their skills in completely irrelevant places.

Of course, with experience you start to feel when the straightforward suboptimal code will cause massive performance issues. In that case it's critical to take action up front to avoid the mess. It's called software architecture, I guess.

GuB-42 8 hours ago||
The opposite conclusion can be drawn from the premise of rule #1: "You can't tell where a program is going to spend its time."

If you can't tell in advance what is performance critical, then consider everything to be performance critical.

I would then go against rule #3 "Fancy algorithms are slow when n is small, and n is usually small". n is usually small, except when it isn't, and as per rule #1, you may not know that ahead of time. Assuming n is going to be small is how you get accidentally quadratic behavior, such as the infamous GTA bug. So, assume n is going to be big unless you are sure it won't be. Understand that your users may use your software in ways you don't expect.
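A classic instance of that accidental quadratic pattern, sketched in Python (my own illustrative example, not the actual GTA bug): membership tests against a list are O(n) each, so deduplicating n items becomes O(n^2), while a set keeps it roughly linear:

```python
def dedupe_quadratic(items):
    seen = []                 # list: `in` scans every element
    out = []
    for x in items:
        if x not in seen:     # O(n) per check -> O(n^2) total
            seen.append(x)
            out.append(x)
    return out

def dedupe_linear(items):
    seen = set()              # set: `in` is a hash lookup
    out = []
    for x in items:
        if x not in seen:     # O(1) average per check -> O(n) total
            seen.add(x)
            out.append(x)
    return out
```

Both produce identical results; only the scaling differs, and it only shows once n stops being small.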

Note that if you really want high performance, you should properly characterize your "n" so that you can use the appropriate technique, it is hard because you need to know all your use cases and their implications in advance. Assuming n will be big is the easy way!

About rule #4: fancy algorithms are often not harder to implement; most of the time, it means using the right library.

About rule #2 (measure), yes, you absolutely should, but it doesn't mean you shouldn't consider performance before you measure. It would be like saying that you shouldn't worry about introducing bugs before testing. You should do your best to make your code fast and correct before you start measuring and testing.

What I agree with is that you shouldn't introduce speed hacks unless you know what you are doing. Most performance comes from giving it consideration at every step. Avoiding a copy here, using a hash map instead of a linear search there, etc... If you have to resort to a hack, it may be because you didn't consider performance early enough. For example, if you took care of making a function fast enough, you may not have to cache its results later on.

As for #5, I agree completely. Data is the most important thing. It applies to performance too, especially on modern hardware. To give you a very simplified idea, RAM access is about 100x slower than running a CPU instruction, which means you can get massive speed improvements by making your memory footprint smaller and using cache-friendly data structures.

epolanski 8 hours ago||
> If you can't tell in advance what is performance critical, then consider everything to be performance critical.

As for rule 2: first you measure.

keyle 12 hours ago||
Rule 5 is definitely king. Code acts on data; if the data is crap, you're already lost.

edit: s/data/data structure/

andsoitis 12 hours ago|
… if the data structures are crap.

Good software can handle crap data.

keyle 12 hours ago||
That is not what I meant. I meant crap data structures. Sorry it's late here.
DaleBiagio 12 hours ago||
The attribution to Hoare is a common error — "Premature optimization is the root of all evil" first appeared in Knuth's 1974 paper "Structured Programming with go to Statements."

Knuth later attributed it to Hoare, but Hoare said he had no recollection of it and suggested it might have been Dijkstra.

Rule 5 aged the best. "Data dominates" is the lesson every senior engineer eventually learns the hard way.

YesThatTom2 10 hours ago||
If Dijkstra blamed Knuth it would have been the best recursive joke ever.
zabzonk 11 hours ago||
I've always thought it was Dijkstra - it even sounds Dijkstra-ish.
vishnugupta 7 hours ago||
I can’t emphasize the importance of rule-5 enough.

I learnt about rule-5 through experience before I had heard it was a rule.

I used to do tech due diligence for company acquisitions. I had a very short time, about a day. I hit upon a great time-saving idea: asking them to show their DB schema and explain it. It turned out to be surprisingly effective. Once I understood the schema, most of the architecture explained itself.

Now I apply the same principle while designing a system.

SoftTalker 6 hours ago|
Yes, fully agree. Rule 5 has been the center of my approach to designing and writing software for over 30 years now. Fad methodologies and platforms come and go but Rule 5 works as well for me today as it did in 1995.
artyom 6 hours ago||
I can't agree more. I live and breathe by rule #5.

Getting competent at it, however, is no joke and takes time.

dasil003 10 hours ago||
These rules aged well overall. The only change I would make these days is to invert the order.

Number 5 is timeless and relevant at all scales, especially as code iterations have gotten faster and faster, data is all the more relevant. Numbers 4 and 3 have shifted a bit since data sizes and performance have ballooned, algorithm overhead isn't quite as big a concern, but the simplicity argument is relevant as ever. Numbers 2 and 1 while still true (Amdahl's law is a mathematical truth after all), are also clearly a product of their time and the hard constraints programmers had to deal with at the time as well as the shallowness of the stack. Still good wisdom, though I think on the whole the majority of programmers are less concerned about performance than they should be, especially compared to 50 years ago.

PaulHoule 7 hours ago||
The "bottleneck" model of performance has limitations.

There are a lot of systems where useless work and other inefficiencies are spread all over the place. Even though I think garbage collection is underrated (e.g. Rustifarians will agree with me in 15 years) it's a good example because of the nonlocality that profilers miss or misunderstand.

You can make great prop bets around "I'll rewrite your Array-of-Structures code to Structure-of-Arrays code and it will get much faster"

https://en.wikipedia.org/wiki/AoS_and_SoA

because SoA usually is much more cache-friendly, and AoS makes the memory hierarchy perform poorly in a way profilers can't see. The more time somebody spends looking at profilers, and the more they quote Rule 1, the more they get blindsided by it.
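For illustration, the layout difference can be sketched even in plain Python (my own example; this only shows the shape, not the cache behavior itself):

```python
# Array of Structures: one record per particle; summing one field
# means walking through every whole record.
aos = [{"x": float(i), "y": 0.0, "z": 0.0} for i in range(5)]

# Structure of Arrays: one array per field; the field you want is
# already a single contiguous sequence.
soa = {
    "x": [float(i) for i in range(5)],
    "y": [0.0] * 5,
    "z": [0.0] * 5,
}

sum_x_aos = sum(p["x"] for p in aos)  # touches every record
sum_x_soa = sum(soa["x"])             # touches one field's array
```

In a language with flat memory layout (C, C++, Rust, NumPy arrays), the SoA version streams one contiguous block through cache while the AoS version drags the unused y and z fields along, which is the effect the prop bet above relies on.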

tracker1 7 hours ago|
I pretty much live by these in practice... I've had a lot of arguments over #3 though... yes, nested loops can cause problems, but when you're dealing with < 100 or so items in each nested and outer loop, it's not a big deal in practice. It's simpler and easier to reason about... don't optimize unless you really need to for practical reasons.

On #5, I think most people tend to just lean on RDBMS databases for a lot of data access patterns. It helps to have some fundamental understanding of where/how/why you can optimize databases, as well as where it makes sense to consider non-relational (NoSQL) databases too. A poorly structured database can crawl under a relatively small number of users.
