
Posted by vismit2000 10 hours ago

Rob Pike's Rules of Programming (1989)(www.cs.unc.edu)
708 points | 370 comments
IvyMike 32 minutes ago|
Once upon a time in the 90's I was at work at 2am and I needed to implement a search over a data set. This function was going to be eventually called for every item, thus if I implemented it as a linear search, it would be n^2 behavior. Since it was so late and I was so tired, I marked it as something to fix later, and just did linear search.

Later that week, now that things were working, I profiled the n^2 search. The software controlled a piece of industrial test equipment, and the actual test process would take around 4 hours to complete. Using the very worst-case, far-beyond-reasonable data set, leaving the n^2 behavior in would have added something like 6 seconds to that 4-hour runtime.

(Ultimately I fixed it anyways, but because it was easy, not because it mattered.)
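The shape of that fix, as a generic sketch (the data set and names here are invented, not from the original system): the "fix it later" version replaces the per-item linear scan with a set built once up front.

```python
# Hypothetical illustration of the tradeoff in the story above;
# the data set and lookup are invented for the example.
items = list(range(10_000))

def find_linear(target, data):
    # O(n) scan; called once per item, this is the n^2 version.
    for x in data:
        if x == target:
            return True
    return False

# The "fix it later" version: build a set once, then each lookup is O(1).
lookup = set(items)

assert find_linear(9_999, items)
assert 9_999 in lookup
assert not find_linear(-1, items)
```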

nlawalker 3 hours ago||
This reminds me of a portion of a talk Jonathan Blow gave[1], where he justifies this from a productivity angle. He explains how his initial implementation for virtually everything in Braid used arrays of records, and only after finding bottlenecks did he make changes, because if he had approached every technical challenge by trying to find the optimal data structure and algorithm he would never have shipped.

"There's a third thing [beyond speed and memory] that you might want to optimize for which is much more important than either of these, which is years of your life required per program implementation." This is of course from the perspective of a solo indie game developer, but it's a good and interesting perspective to consider.

[1] https://www.youtube.com/watch?v=JjDsP5n2kSM

dgb23 3 hours ago||
It's also notable that video games are programs that run for hours, iterating over large sets of very similar entities at 60 or more frames per second, often performing very similar operations on each entity.

That also means that "just do an array of flat records" is a very sane default even if it seems brutish at first.

quietbritishjim 21 minutes ago||
I think when he said "just do an array of flat records" he meant it in contrast to a record of arrays (i.e. row oriented vs column oriented), not in contrast to the fancy data structures I think you're assuming he was implying. Separate arrays for each data member are common in game engines exactly because they're good for iterating over, which as you said is common.
O3marchnative 8 minutes ago||
That's also great for letting the compiler unlock auto-vectorization opportunities without delving into the world of manual SIMD.

Even storing something as simple as an array of complex numbers as a structure of arrays (SoA) rather than an array of structures (AoS) can unlock a lot of optimizations. For example, fewer permutes/shuffles and more arithmetic instructions.

Depending on how many fields you actually need when you iterate over the data, you prevent cache pollution as well.
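A minimal sketch of the two layouts for complex numbers (plain Python purely for illustration; real SoA code would use contiguous buffers via NumPy or C structs):

```python
from array import array

# AoS: one (re, im) record per complex number; fields interleave in memory.
aos = [(1.0, 0.5), (2.0, 0.5), (3.0, 0.5), (4.0, 0.5)]

# SoA: one contiguous, homogeneous buffer per field.
re = array("d", [1.0, 2.0, 3.0, 4.0])
im = array("d", [0.5, 0.5, 0.5, 0.5])

# Squared magnitude: the SoA version streams two dense buffers, the shape
# that lets a compiler auto-vectorize without permutes/shuffles, and only
# the fields actually needed are pulled through the cache.
mag_aos = [r * r + i * i for r, i in aos]
mag_soa = [r * r + i * i for r, i in zip(re, im)]

assert mag_aos == mag_soa
```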

Insanity 3 hours ago|||
It's easy to see this outside of the perspective of a solo game developer. We all have deadlines to manage even in the regular 'corporate world'. And engineering time spent on a problem also translates to an actual cost.

It's a good consideration tbh.

skeeter2020 3 hours ago|||
I'd be careful extending learnings from games (solo or team efforts) to general programming, as the needs and intent seem to be so different. We rarely see much code re-use in games outside of core, special-purpose but largely isolated components and assets. Outside of games there's a much bigger emphasis on the data IME, and performance is often a nice-to-have.
awesome_dude 49 minutes ago||
Yeah - a game is (generally) a one-and-done enterprise. Like a startup, it's all about getting it out the door; there's little to no expectation of having to maintain it for any real length of time.
suzzer99 1 hour ago|||
> how his initial implementation for virtually everything in Braid used arrays of records

This is me with hash maps.

Fraterkes 1 hour ago|||
This is slightly surprising to me, since Braid is kinda infamous for being in development for a pretty long time for a puzzle platformer
SatvikBeri 1 hour ago||
Was 3 years a long time for an indie platformer in the early 2000s? Looking at some similar examples:

* Braid was 3 years

* Cave Story was 5 years

* World of Goo was 2 years

* Limbo was about 3 years (but with 8-16 people)

So Braid seems pretty average.

IshKebab 13 minutes ago||
I feel like a game is a bit different though because you control the input. You can't profile input you've never seen.
ryguz 3 hours ago||
The interesting thing about Rule 1 is that it makes Rules 3-5 follow almost mechanically. If you genuinely accept that you cannot predict where the bottleneck is, then writing straightforward code and measuring becomes the only rational strategy. The problem is most people treat these rules as independent guidelines rather than as consequences of a single premise.

In practice what I see fail most often is not premature optimization but premature abstraction. People build elaborate indirection layers for flexibility they never need, and those layers impose real costs on every future reader of the code. The irony is that abstraction is supposed to manage complexity, but prematurely applied it just creates a different kind.

silisili 3 hours ago||
> In practice what I see fail most often is not premature optimization but premature abstraction

This matches my experience as well.

Someone here commented once that abstractions should be emergent, not speculative, and I loved that line so much I use it with my team all the time now when I see the craziness starting.

hugey010 26 minutes ago||
I completely agree with you, and that is an amazing quote.
hugey010 27 minutes ago|||
Premature abstraction is one of the worst pitfalls when writing software. It's paid for up front, it costs developer time to understand and work with, it adds complexity (tech debt), increases the likelihood of bugs, and increases refactor time. All for someone to say "if we ever want to do X". If we wanted to do it, we'd do it now.

I truly believe this comes from devs who want to feel smart by "architecting" solutions to future problems before those problems have become well defined.

pc86 50 minutes ago|||
I was just dealing with this in some home-grown configuration nightmare where every line of the file was pulled from some other namespace, to the point where you end up with two dozen folders, each with half a dozen partial configuration files in them, and it takes half an hour to figure out where a single value is coming from.

I'm sure it's super flexible, but the exact same thing could have been achieved with 8 YAML files, and 60% of the content between them would be identical.

eru 3 hours ago|||
> In practice what I see fail most often is not premature optimization but premature abstraction.

Compare and contrast https://people.mpi-sws.org/~dreyer/tor/papers/wadler.pdf

fl0ki 3 hours ago|||
I only agree if you have a bounded dataset size that you know will never grow. If it can grow in the future (and if you're not sure, you should assume it can), not only will many data structures and algorithms scale poorly along the way, but they will grow to become the bottleneck as well. By the time it no longer meets requirements and you get a trouble ticket, you're now under time pressure to develop, qualify, and deploy a new solution. You're much more likely to encounter regressions when doing this under time pressure.

If you've been monitoring properly, you buy yourself time before it becomes a problem as such, but in my experience most developers who don't anticipate load scaling also don't monitor properly.

I've seen a "senior software engineer with 20 years of industry experience" put code into production that ended up needing 30-minute timeouts for an HTTP response only 2 years after initial deployment. That is not a typo: 30 minutes. I had to take over and rewrite their "simple" code to stop the VP-level escalations our org received because of this engineering philosophy.

9rx 2 hours ago||
> You're much more likely to encounter regressions when doing this under time pressure.

There is nothing to suggest you should wait to optimize under pressure, only that you should optimize only after you have measured. Benchmark tests are still best written during the development cycle, not while running hot in production.

Starting with the naive solution helps quickly ensure that your API is sensible and that your testing/benchmarking is in good shape before you start poking at the hard bits where you are much more likely to screw things up, all while offering a baseline score to prove that your optimizations are actually necessary and an improvement.
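That workflow can be sketched like so (the function names are invented for the example): keep the naive version as the correctness reference and the benchmark baseline, then measure the candidate against it during development.

```python
import timeit

# Naive baseline: quadratic string building. Kept as the reference
# implementation and the benchmark baseline.
def build_naive(parts):
    out = ""
    for p in parts:
        out += p
    return out

# Candidate optimization, measured against the baseline.
def build_joined(parts):
    return "".join(parts)

parts = ["x"] * 10_000

# Correctness first: both must agree before any timing is meaningful.
assert build_naive(parts) == build_joined(parts)

# Baseline score vs. candidate score, gathered during the development
# cycle, not while running hot in production.
naive_t = timeit.timeit(lambda: build_naive(parts), number=10)
joined_t = timeit.timeit(lambda: build_joined(parts), number=10)
```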

munk-a 2 hours ago|||
As someone who believes strongly in type-based programming and the importance of good data structure choice, I'm not seeing how Rule 5 follows from Rule 1. I think it's important to reinforce how impactful good data structure choice is compared to trying to solve everything through procedural logic, since a well-structured coordination of data interactions can end up greatly simplifying the amount of standalone logic.
astrobe_ 2 hours ago|||
Data cache issues are one case of something being surprisingly slow because of how data is organized. That said, structure of arrays vs array of structures is an example where rules 4 and 5 somewhat contradict each other, if one confuses "simple" and "easy" - structure-of-arrays style is "harder" because we don't see it often; but then if it's harder, it is likely more bug-prone.
akkad33 2 hours ago|||
But the good data structure is not always evident from the get-go. And if your types are too specific, it makes future development hard if the specs change. This is what I struggle with.
munk-a 25 minutes ago||
Professionally I'm a data architect. Modeling data in a way that is functional, performant and forward facing is not an easy problem so it's perfectly fine to struggle with it. We do our best job with what we've got within reasonable constraints - we can't do anything more than that.

I found that over time my senses have been honed to more quickly identify things that are important to deeply study and plan right now, and areas where I can skimp more and fix later if problems develop. I don't know if there was a shortcut to honing those senses that didn't involve a lot of pain as I needed to pick apart and rework oversights.

mkehrt 2 hours ago|||
This comment is fascinating to me, as it indicates an entirely different mindset than mine. I'm much more interested in code readability and maintainability (and simplicity and elegance) than performance, unless it's necessary. So I would start by saying everything flows from rule 4 or maybe 5. Rule 1 is a consequence of rule 4 for me.
Eiriksmal 1 hour ago||
Maybe it's because the comment you are replying to is from a new account posting paragraphs of LLMese in multiple comments in the same minute. It's unsurprising that soulless LLM output doesn't match your mindset!
rob 3 hours ago|||
Really need that [flag bot] button added to HN.
rd 2 hours ago|||
It would be easier if we could just block comments from green users. I get that it loses ~.1% of authors who might have made an account to comment on a blogpost of theirs that was posted here. I'd rather have that loss than have to deal with the 99.9% of spam.
suzzer99 1 hour ago||
TIL green means new. I thought it was special for some reason.
tech_hutch 3 hours ago|||
Are you saying the parent comment seems like a bot?
macintux 2 hours ago||
Comment history is suspect.
tracker1 3 hours ago||
You tend to see it a lot in "Enterprise" software (.Net and Java shops in particular). A lot of Enterprise Software Architects will reach for their favored abstractions out of habit rather than because they fit. Custom solution providers will build a bunch of the same out of habit.

This is something I tend to consider far, far worse than "AI Slop" in practice. I always hated Microsoft Enterprise Library's Data Access Application Block (DAAB) in practice. I've literally only ever seen one product that supported multiple database backends that necessitated that level of abstraction... but I've seen that library well over a dozen times in practice. Just as a specific example.

IMO, abstractions should generally serve to make the rest of the codebase reasonable more often than not... abstractions that hide complexity are useful... abstractions that add complexity much less so.

01100011 5 hours ago||
Rule 3 gets me into trouble with CS majors a lot. I'm an EE by education and entered into SW via the bottom floor (embedded C/ASM), so it was late in my career before I knew the formal definition of big-O and complexity.

For most of my career, sticking to rule 3 made the most sense. When the CS major would be annoying and talk about big-O they usually forgot n was tiny. But then my job changed. I started working on different things. Suddenly my job started sounding more like a leetcode interview people complain about. Now n really is big and now it really does matter.

Keep in mind that Rob Pike comes from a different era, when programming for 'big iron' looked a lot more like programming for an embedded microcontroller does now.

kevincox 3 hours ago||
I actually disagree with Rule 3! While numbers are usually small, being fast on small cases generally isn't as important as performing acceptably on large cases. So I prefer to take the better big-O so that it doesn't slow down unacceptably on real-world edge-case stresses. (The type of workloads that the devs often don't experience but your big customers will.)

Of course there is a balance to this, the engineering time to implement both options is an important consideration. But given both algorithms are relatively easy to implement I will default to the one that is faster at large sizes even if it is slower at common sizes. I do suspect that there is an implicit assumption that "fancy" algorithms take longer and are harder to implement. But in many cases both algorithms are in the standard library and just need to be selected. If this post focused on "fancy" in terms of actual time to implement rather than speed for common sizes I would be more inclined to agree with it.

I wrote an article about this a while back: https://kevincox.ca/2023/05/09/less-than-quadratic/
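An example of the "both are in the standard library and just need to be selected" point (names and data invented for the sketch): once the data is sorted, a linear scan and a binary search take about the same effort to write.

```python
import bisect

data = sorted(range(0, 1_000_000, 3))  # already sorted, as bisect requires

# Linear scan: fine at common sizes, O(n) on the large edge cases.
def contains_scan(x):
    return x in data

# Roughly the same effort using the standard library: O(log n).
def contains_bisect(x):
    i = bisect.bisect_left(data, x)
    return i < len(data) and data[i] == x

assert contains_scan(300) == contains_bisect(300) == True
assert contains_scan(301) == contains_bisect(301) == False
```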

Jensson 2 hours ago||
Rule 3 was true in 1989; back then computers were so slow and had barely any RAM that most things you did were only reasonable for small numbers of inputs. Today we almost always have large amounts of input, so it's different.
danielmarkbruce 1 hour ago||
This very much depends on where you work... and basically isn't true for most people. It's extremely true for some people.
grogers 1 hour ago|||
Well it is hedged with the word "fancy". I think a charitable reading is to understand the problem domain. If N is always small then trying to minimize the big-O is just showing off and likely counterproductive in many ways. If N is large, it might be a requirement.

Most people don't need an FFT-based algorithm for multiplying large numbers; Karatsuba's algorithm is fine. But in some domains the difference does matter.

Personally I usually see the opposite effect - people first reach for a too-naive approach and implement some O(n^2) algorithm where it wouldn't have even been more complex to implement something O(n) or O(n log n). And n is almost always small so it works fine, until it blows up spectacularly.
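A common concrete instance of that pattern (a generic sketch, not from the comment): order-preserving deduplication, where the O(n) version is no harder to write than the O(n^2) one.

```python
# Accidentally quadratic: `x not in out` scans a list, so
# deduplicating n items costs O(n^2).
def dedup_quadratic(items):
    out = []
    for x in items:
        if x not in out:      # O(n) membership test on a list
            out.append(x)
    return out

# Barely more code, O(n): the membership test moves to a set.
def dedup_linear(items):
    out, seen = [], set()
    for x in items:
        if x not in seen:     # O(1) average membership test
            seen.add(x)
            out.append(x)
    return out

assert dedup_quadratic([3, 1, 3, 2, 1]) == dedup_linear([3, 1, 3, 2, 1]) == [3, 1, 2]
```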

SoftTalker 3 hours ago||
My father did some programming in Fortran and Assembly of various flavors. He was always partial to lookup tables where they could replace complicated conditionals or computations. Memory was precious in his day but it could still be worth it if your program did something repeatedly (which most do).
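The classic shape of that trade, sketched here with bit-counting (the example is illustrative, not from the parent's Fortran/assembly work): a small precomputed table replaces a per-call loop.

```python
# Loop-per-call version of counting set bits.
def popcount_loop(x):
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

# The lookup-table version: spend 256 table entries of memory once,
# then process a whole byte per step instead of a bit.
TABLE = [bin(i).count("1") for i in range(256)]

def popcount_table(x):
    count = 0
    while x:
        count += TABLE[x & 0xFF]
        x >>= 8
    return count

assert all(popcount_loop(n) == popcount_table(n) for n in range(1024))
```
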
ta20211004_1 7 hours ago||
Can't agree more on 5. I've repeatedly found that any really tricky programming problem is (eventually) solved by iterative refinement of the data structures (and the APIs they expose / are associated with). When you get it right the control flow of a program becomes straightforward to reason about.

To address our favorite topic: while I use LLMs to assist on coding tasks a lot, I think they're very weak at this. Claude is much more likely to suggest or expand complex control flow logic on small data types than it is to recognize and implement an opportunity to encapsulate ideas in composable chunks. And I don't buy the idea that this doesn't matter since most code will be produced and consumed by LLMs. The LLMs of today are much more effective on code bases that have already been thoughtfully designed. So are humans. Why would that change?

alain94040 4 hours ago||
Agreed, in my experience, rule 5 should be rule 1. I think I also heard it said (paraphrased) as "show me your code and I'll be forever confused, show me your database schema and everything will become obvious".

Having implemented my share of highly complex high-performance algorithms in the past, the key was always to figure out how to massage the raw data into structures that allow the algorithm to fly. It requires both a decent knowledge of the various algorithm options you have, as well as the flexibility to see that the data could be presented a different way to get to the same result orders of magnitude faster.

beachy 52 minutes ago|||
I think you are referring to:

"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)

skeeter2020 3 hours ago|||
I have seen a huge decline in data-first design over the past decade-plus; maybe it's related to a lot more pragmatic training where code-first and abstraction helped you go faster, earlier. But I definitely came of age starting with the schema, and there are an awful lot of problems & systems that essentially are UI and functions on top of the schema.
danielmarkbruce 1 hour ago||
UI + functions on top of schema if you've designed the schema well. Otherwise, it's a whole other thing.
zer00eyz 4 hours ago||
> refinement of the data structures (and the APIs they expose / are associated with)

I think rule 5 is often ignored by a lot of distributed services. Where you have to make several calls, each with their own http, db and "security" overhead, when one would do. Then these each end up with caching layers because they are "slow" (in aggregate).

nostrademons 3 hours ago||
If you're doing it right, you start with a centralized service; get the product, software architecture, and data flows right while it's all in one process; and then distribute along architectural boundaries when you need to scale.

Very few software services built today are doing it right. Most assume they need to scale from day one, pick a technology stack to enable that, and then alter the product to reflect the limitations of the tech stack they picked. Then they wonder why they need to spend millions on sales and marketing to convince people to use the product they've built, and millions on AWS bills to scale it. But then, the core problem was really that their company did not need to exist in the first place and only does because investors insist on cargo-culting the latest hot thing.

This is why software sucks so much today.

skeeter2020 3 hours ago||
>> If you're doing it right, you start with a centralized service; get the product, software architecture, and data flows right while it's all in one process; and then distribute along architectural boundaries when you need to scale.

I'll add one more modification if you're like me (and apparently many others): go too far with your distribution and pull it back to a sane (i.e. small handful) number of distributed services, hopefully before you get too far down the implementation...

thecodemonkey 5 hours ago||
Running the same codebase for 10+ years with a small team is what finally made me fully internalize these rules.

I've always been a KISS/DRY person but over a decade there are plenty of moments where you're tempted to reach for a fancier database or rewrite something in a trendier stack. What's actually kept things running well at scale is boring, known technologies and only optimizing in the places where it actually matters.

We wrote our principles down recently and it basically just reads like Pike's rules in different words: https://www.geocod.io/code-and-coordinates/2025-09-30-develo...

dkarl 6 hours ago||
I think it's fine and generous that he credited these rules to the better-known aphorisms that inspired them, but I think his versions are better; they deserve to be presented by themselves, instead of alongside the mental clickbait of the classic aphorisms. They preserve important context that was lost when the better-known versions were ripped out of their original texts.

For example, I've often heard "premature optimization is the root of all evil" invoked to support opposite sides of the same argument. Pike's rules are much clearer and harder to interpret creatively.

Also, it's amusing that you don't hear this anymore:

> Rule 5 is often shortened to "write stupid code that uses smart objects".

In context, this clearly means that if you invest enough mental work in designing your data structures, it's easy to write simple code to solve your problem. But interpreted through an OO mindset, this could be seen as encouraging one of the classic noob mistakes of the heyday of OO: believing that your code could be as complex as you wanted, without cost, as long as you hid the complicated bits inside member methods on your objects. I'm guessing that "write stupid code that uses smart objects" was a snappy bit of wisdom in the pre-OO days and was discarded as dangerous when the context of OO created a new and harmful way of interpreting it.
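In pre-OO terms the slogan might look like a dispatch table (a toy sketch; the operation names are invented): the decisions live in the data, so the code that walks it stays trivial.

```python
# "Stupid code, smart data": the branching lives in a table rather
# than an if/elif ladder, so the control flow stays one line.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def apply_op(name, a, b):
    # Adding an operation means adding a table entry, not new control flow.
    return OPS[name](a, b)

assert apply_op("add", 2, 3) == 5
assert apply_op("mul", 4, 5) == 20
```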

zemo 4 hours ago||
> but I think his versions are better, they deserve to be presented by themselves, instead of alongside the mental clickbait of the classic aphorisms

keeping the historical chain of thinking alive is good, actually

Matthyze 35 minutes ago||
"mental clickbait of the classic aphorisms" is one way to phrase "attribution"
jkaptur 3 hours ago||
It's interesting to contrast "Measure. Don't tune for speed until you've measured" with Jeff Dean's "Latency Numbers Every Programmer Should Know" [0].

Dean is saying (implicitly) that you can estimate performance, and therefore you can design for speed a priori - without measuring, and, indeed, before there is anything to measure.

I suspect that both authors would agree that there's a happy medium: you absolutely can and should use your knowledge to design for speed, but given an implementation of a reasonable design, you need measurement to "tune" or improve incrementally.

0: https://gist.github.com/jboner/2841832

SatvikBeri 46 minutes ago||
I've had the pleasure of working with some truly fast pieces of code written by experts. It's always both. You have to have a good sense of what's generally fast and what's not in order to design a system that doesn't contain intractable bottlenecks. And once you have a good design you can profile and optimize the remaining constraints.

But e.g. if you want to do fast math, you really need to design your pipeline around cache efficiency from the beginning – it's very hard to retrofit. Whereas reducing memory allocations in order to make parallel algorithms faster is something you can usually do after profiling.

sifar 59 minutes ago|||
Yeah, the latency numbers provide a ceiling for your algorithm. The actual performance depends on the implementation, code generation, runtime hazards, small dependencies one may have overlooked etc.
eschneider 2 hours ago||
I mean...you should always design with speed in mind (In that Jeff Dean sense :) but what 'premature optimization' is referring to, is more like localized speed optimizations/hacks. Don't do those until a) you know you'll need it and b) you know where it will help.
munro 2 hours ago||
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

It's so true. When speccing things I always try to focus on the DDL, because even the UI will fall into place from there. It's also a place I see Claude Opus fail when building things.

cleaver 2 hours ago|
I recall a similar statement from Ed Yourdon in one of his books (90's?).
EvanAnderson 2 hours ago||
The article makes reference to Fred Brooks and "The Mythical Man Month", but doesn't make a direct quote. The quote I'd have referenced is:

"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)

CharlieDigital 8 hours ago|
I feel like 1 and 2 are only applicable in cases of novelty.

The thing is, if you build enough of the same kinds of systems in the same kinds of domains, you can kinda tell where you should optimize ahead of time.

Most of us tend to build the same kinds of systems and usually spend a career or a good chunk of our careers in a given domain. I feel like you can't really be considered a staff/principal if you can't already tell ahead of time where the perf bottleneck will be just on experience and intuition.

PaulKeeble 8 hours ago||
I feel like every time I have expected an area to be the major bottleneck, it has been. Sometimes some areas perform worse than I expected, usually something that hasn't been coded well, but generally it's pretty easy to spot the computationally heavy or many-remote-call areas well before you program them.

I have several times done performance tests before starting a project to confirm it can be made fast enough to be viable, the entire approach can often shift depending on how quickly something can be done.

projektfu 8 hours ago|||
It really depends on your requirements. C10k requires different design than a web server that sees a few requests per second at most, but the web might never have been invented if the focus was always on that level of optimization.
pydry 8 hours ago|||
The number 1 issue I've experienced with poor programmers is a belief that they're special snowflakes who can anticipate the future.

It's the same thing with programmers who believe in BDUF or disbelieve YAGNI - they design architectures for anticipated futures which do not materialize instead of evolving the architecture retrospectively in line with the future which did materialize.

I think it's a natural human foible. Gambling, for instance, probably wouldn't exist if humans' gut instincts about their ability to predict the future defaulted to realistic.

This is why no matter how many brilliant programmers scream YAGNI, don't do BDUF, and don't prematurely optimize, there will always be some comment saying the equivalent of "akshually sometimes you should...", remembering that one time when they metaphorically rolled a double six and anticipated the necessary architecture correctly when it wasn't even necessary to do so.

These programmers are all hopped up on a different kind of roulette these days...

tbrownaw 5 hours ago|||
Sure, don't build your system to keep audit trails until after you have questions to answer so that you know what needs to go in those audit trails.

Don't insist on file-based data ingestion being a wrapper around a json-rpc api just because most similar things are moving that direction; what matters is whether someone has specifically asked for that for this particular system yet.

.

Not all decisions can be usefully revisited later. Sometimes you really do need to go "what if..." and make sure none of the possibilities will bite too hard. Leaving the pizza cave occasionally and making sure you (have contacts who) have some idea about the direction of the industry you're writing stuff for can help.

CharlieDigital 3 hours ago|||

    > Sure, don't build your system to keep audit trails until after you have questions to answer so that you know what needs to go in those audit trails...what matters is whether someone has specifically asked for that for this particular system yet.
I spent ~15 years in life sciences.

You're going to build an audit trail, no matter what. There's no validated system in LS that does not have an audit trail.

It's just like e-commerce; you're going to have a cart and a checkout page. There's no point in calling that a premature optimization. Every e-commerce website has more or less the same set of flows with simply different configuration/parameters/providers.

pydry 2 hours ago|||
Going "what if?" and then validating a customer requirement that exists NOW is NOT the same thing as trying to pre-empt a customer requirement which might exist in the future.

Audit trails are commonly neglected coz somebody didn't ask the right questions, not coz somebody didn't try to anticipate the future.

rcxdude 7 hours ago|||
Aye. The number one way to make software amenable to future requirements is to keep it simple so that it's easy to change in future. Adding complexity for anticipated changes works against being able to support the unanticipated ones.
Bengalilol 8 hours ago|||
> you can kinda tell where you should optimize ahead of time

Rules are "kinda" made to be broken. Be free.

I've been sticking to these rules (and will keep sticking to them) for as long as I can program (I've been doing it for the last 30 years).

IMHO, you can feel that a bottleneck is likely to occur, but you definitely can't tell where, when, or how it will actually happen.

HunterWare 8 hours ago|||
ROFL, I wish Pike had known what he was talking about. /s ;)
CharlieDigital 3 hours ago||
Rob Pike and I (and probably most of us) work(ed) on different kind of things.

Notice my use of the word "Novelty".

I get hired because I'm very good at building specific kinds of systems so I tend to build many variants of the same kinds of systems. They are generally not that different and the ways in which the applications perform are similar.

I do not generally write new algorithms, operating systems, nor programming languages.

I don't think it's so hard to understand the nuance between Pike's advice and what we "mortals" do in our day-to-day to earn a living.

relaxing 8 hours ago||
Rob Pike wrote Unix and Golang, but sure, you’re built different.
Intermernet 8 hours ago|||
Rob Pike is responsible for many cool things, but Unix isn't one of them. Go is a wonderful hybrid (with its own faults) of the schools of Thompson and Wirth, with a huge amount of Pike.

If you'd said Plan 9 and UTF-8 I'd agree with you.

jacquesm 8 hours ago||
Rob Pike definitely wrote large chunks of Unix while at Bell Labs. It's wrong to say he wrote all of it like the GP did but it is also wrong to diminish his contributions.

Unless you meant to imply that UNIX isn't cool.

relaxing 6 hours ago||
I did not say he wrote all of it. “Write” can include co-authorship.

A lot of people are learning some history today, beautiful to see.

jacquesm 6 hours ago||
I think that if you meant co-authorship you could have made that clearer. A 'contributed to' would have saved some unique ids.
andsoitis 8 hours ago||||
> Rob Pike wrote Unix

Unix was created by Ken Thompson and Dennis Ritchie at Bell Labs (AT&T) in 1969. Thompson wrote the initial version, and Ritchie later contributed significantly, including developing the C programming language, which Unix was subsequently rewritten in.

9rx 8 hours ago||
Pike didn’t create Unix initially, but was a contributor to it. He, with a team, unquestionably wrote it.
andsoitis 7 hours ago||
> but was a contributor to it. He, with a team, unquestionably wrote it.

contribute < wrote.

His credits are huge, but I think saying he wrote Unix is misattribution.

Credits include: Plan 9 (successor to Unix), the Unix window system, UTF-8 (maybe his most universally impactful contribution), articulation of the Unix philosophy, strings/grep/other tools, regular expressions, and C-successor work that ultimately led him to Go.

9rx 7 hours ago||
Are you under the impression he was, like, a hands-off project manager or something? His involvement was in writing it. Not singlehandedly, but certainly as part of a team. He unquestionably wrote it. He did not envision it like he did the other projects you mention, but the original credit was only in the writing of.
amw-zero 3 hours ago||
To say "Rob Pike wrote Unix" is completely inaccurate. He joined after v7, in 1980.
9rx 3 hours ago||
Nobody seems to be questioning that he was involved in Unix. Given that he didn't write it, what did he do for the project? Quality assurance? Support? Marketing? Court jester?
my-next-account 8 hours ago|||
Do you think Rob Pike ever decided that maybe what was done before isn't good enough? Stop putting artificial limits on your own competency.