Posted by milanm081 5 days ago
YAGNI and "you will ship the org chart" are the two most commonly useful things to remember, but they aren't laws.
Also they used the graph for the Gartner Hype Cycle as the icon, not the Dunning-Kruger graph.
> Premature Optimization (Knuth's Optimization Principle)
> Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.
This example is the exact example I'd choose where people wrongly and almost obstinately apply the "premature optimization" principles.
I'm not saying that you should write a custom hash table whenever you need to search. However, I am saying that there's a 99% chance your language has a built-in, standard hash table in its standard library.
The code to use that data structure vs. using an array is nearly identical and not the least bit hard to read or understand.
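To make that concrete, here's a hypothetical Python sketch (names are invented for illustration): the linear-search version and the hash-table version read almost the same, but one is O(n) per lookup and the other is O(1).

```python
# Hypothetical example: looking up a user id by name.
users = [("alice", 1), ("bob", 2), ("carol", 3)]

# Linear search through a list: O(n) per lookup.
def find_linear(name):
    for user, uid in users:
        if user == name:
            return uid
    return None

# Same data in the language's built-in hash table: O(1) average per lookup.
users_by_name = dict(users)

def find_hashed(name):
    return users_by_name.get(name)
```

The call sites are identical, and the dict version is arguably *easier* to read.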
And the reason you should just use the hash table is that when I've had to fix performance problems, it's almost always been because people wrote nested linear searches, turning what could have been O(n) into O(n^2) or worse.
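The nested-scan trap looks something like this (a hypothetical sketch; the data and names are made up). Both versions produce the same result, but the second builds one dict up front and drops the complexity from O(n*m) to O(n + m):

```python
# Hypothetical sketch: matching orders to their customers.
customers = [{"id": i, "name": f"c{i}"} for i in range(500)]
orders = [{"customer_id": i % 500} for i in range(500)]

# Nested linear search: for each order, scan all customers -- O(n * m).
def attach_slow():
    out = []
    for o in orders:
        for c in customers:
            if c["id"] == o["customer_id"]:
                out.append((o, c))
                break
    return out

# Build a hash index once, then do O(1) lookups -- O(n + m) total.
def attach_fast():
    by_id = {c["id"]: c for c in customers}
    return [(o, by_id[o["customer_id"]]) for o in orders]
```

The fast version is also shorter, which is the point: this isn't an optimization that costs readability.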
But further, when Knuth was talking about actual premature optimization, he was not talking about algorithmic complexity. In fact, that would have been exactly the sort of thing he wrapped into "good design".
When Knuth wrote about not doing premature optimizations, he was living in an era when compilers were incredibly dumb. A premature optimization would be, for example, hand-unrolling a loop to avoid branch instructions, or hand-inlining functions to avoid call overhead. That does make code nastier and harder to deal with. That is to say, the specific optimizations Knuth was talking about are the optimizations compilers today do by default.
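For a flavor of what that kind of hand-optimization looks like (sketched in Python for illustration; in Knuth's era it would have been assembly or Fortran), compare a plain loop with a manually unrolled one:

```python
# Straightforward version: clear, and what you should write.
def sum_plain(xs):
    total = 0
    for x in xs:
        total += x
    return total

# Hand-unrolled by 4 to reduce per-iteration loop overhead -- the sort of
# micro-optimization a modern compiler does automatically, and which only
# makes the source harder to read and maintain.
def sum_unrolled(xs):
    total = 0
    n = len(xs)
    i = 0
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:  # handle the leftover tail
        total += xs[i]
        i += 1
    return total
```

Both compute the same thing; the second is strictly worse to maintain, which is exactly the tradeoff Knuth was warning about.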
I really hate that people have taken this to mean "Never consider algorithmic complexity". It's a big reason so much software is so slow and kludgy.
To be fair, a linear search through an array is, most of the time, faster than a hash table for sufficiently small data sizes.
It doesn't take long for hash or tree lookups to start outperforming linear search, and for small datasets the search itself is rarely the performance bottleneck anyway.
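If you want to see where the crossover lands on your own machine, a rough micro-benchmark (using the standard `timeit` module; exact numbers will vary by interpreter and hardware) is easy to write:

```python
import timeit

def compare_lookups(n, repeats=2000):
    """Time worst-case linear membership vs. hash membership for n items."""
    data = list(range(n))
    as_set = set(data)
    target = n - 1  # last element: worst case for the linear scan
    linear = timeit.timeit(lambda: target in data, number=repeats)
    hashed = timeit.timeit(lambda: target in as_set, number=repeats)
    return linear, hashed
```

For tiny n the two are close enough not to matter; by the time n is in the thousands, the set lookup wins by orders of magnitude.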
> The less you know about something, the more confident you tend to be.
From the first line on the wiki article:
> systematic tendency of people with low ability in a specific area to give overly positive assessments of this ability.
Or, said another way: the more you know about something, the more complexities you're aware of, and the better you can assess topics involving it. At least, that's my understanding in a nutshell, without going into the experiments and observations that led to the findings.
Just because some things were observed frequently during a certain period doesn't mean it's a "Law" or even a "Principle"; it's merely a trend.