> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. *Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.* (emphasis mine)
This makes it clear, in context, that Knuth defines "premature optimization" as "optimizing before you profile your code".

@OP, I think you should lead with this. I think it gets lost by the time you actually reference it. If I can suggest, place the second paragraph after
> People always use this quote wrong, and to get a feeling for that we just have to look at the original paper, and the context in which it was written.
The optimization part gets lost in the middle and this, I think, could help provide a better hook for those who aren't going to read the whole thing. The way you wrote it works well for that, but the point (IMO) will be missed by more inattentive readers. The post is good, so this is just a minor critique because I want to see it do better.

https://dl.acm.org/doi/10.1145/356635.356640 (alt: https://sci-hub.se/10.1145/356635.356640)
No amount of parallelization will make your program faster than the slowest non-parallelizable path. You can be as clever as you want and it won’t matter squat unless you fix the bottleneck.
This extends to all types of optimization and even teamwork. Just make the slowest part faster. Really.
Simple problems, e.g. solving a system of equations, will usually include some non-negligible sequential part, which, according to Amdahl’s Law will limit the amount of speed-up provided by hardware parallelism.
On the other hand, complex problems, e.g. designing an integrated circuit, can usually be decomposed into a very great number of simpler subproblems whose dependencies on each other are weaker than the dependencies between the parts of a single subproblem. By distributing the execution of the simple subproblems over parallel hardware that executes each subproblem sequentially, you can obtain much greater acceleration factors than by attempting to parallelize the execution of each simple subproblem.
With clever decomposition of a complex problem and with good execution planning for its subtasks, it is much easier to approach the performance of an embarrassingly parallel problem, than when trying to find parallel versions of simple algorithms, whose performance is frequently limited to low values by Amdahl’s Law.
Amdahl’s Law frequently prevents you from reducing the execution time of some task from 1 minute to 1 second, but it normally does not prevent you from reducing the execution time of some task from 1 year to 1 week, because a task complex enough to have required weeks, months or years before parallelization normally contains a great enough number of weakly coupled subproblems.
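For concreteness, a minimal sketch of Amdahl's Law (the fractions and worker counts below are made-up illustrations, not measurements):

    def amdahl_speedup(parallel_fraction, workers):
        # Upper bound on speedup when only `parallel_fraction` of the work parallelizes.
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

    # A "simple" problem with a 5% sequential part caps out near 20x, whatever the hardware:
    print(amdahl_speedup(0.95, 1_000))   # ~19.6
    # A complex problem decomposed into weakly coupled subproblems fares much better:
    print(amdahl_speedup(0.999, 1_000))  # ~500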
Or at least, I find it scales trivially to the parallel case. It has often helped me model how the interconnect could be the limiting element, even for embarrassingly parallel tasks.
Rather, the slowest non-parallelized path. You may ultimately reach maximum speed on that path, but the assumption that we are close to it often turns out to be poorly considered, or considered before 8 other engineers added bug fixes and features to that code.
From a performance standpoint you need to challenge all of those assumptions. Re-ask all of those questions. Why is this part single threaded? Does the whole thing need to be single threaded? What about in the middle here? Can we rearrange this work? Maybe by adding an intermediate state?
> It sounds obvious when spelled out but it blew my mind.
I think there's a weird thing that happens with stuff like this. Cliches are a good example, and I'll propose an alternative definition for them. A cliche is a phrase that's so obvious everyone innately knows or understands it; yet, because it is so obvious, no one internalizes it, forcing the phrase to be used ad nauseam.
At least, it works for a subset of cliches. Like "road to hell," "read between the lines," Goodhart's Law, and I think even Amdahl's Law fits (though certainly not others; some are bastardized, like Premature Optimization or "blood is thicker than water"). Essentially they are "easier said than done," so they require system 2 thinking to resolve, but we act like system 1 will catch them.

Like Amdahl's Law, I think many of these take a surprising amount of work to prove despite the result sounding so obvious. The big question is whether it was obvious a priori or only post hoc. We often confuse the two, getting us into trouble. I don't think the genius of the statement hits unless you really dig down into proving it and trying to make your measurements in a nontrivially complex parallel program. I think that's true about a lot of things we take for granted.
In its original context it means the opposite of how people use it today.
When optimizing, always consider the cost of doing the optimization vs. its impact.
In a project you may be looking at a 45/30/25 type split. The 45 may actually be well optimized, so the real gains may be in the 30 or the 25.
The key is to understand the impact you CAN have, and what the business value of that impact is. :)
The other rule I've learned is: There is always a slowest path.
Like I tell everyone in system design interviews: AWS will rent you a machine with 32TB of RAM. Are you still sure about all this extra complexity?
I’ve heard that some companies spend hundreds of millions per year on cloud hosting and it’s still worth it. I can’t even imagine that level of scale.
PS: The context in which I bring this up is a design exercise that would deal with maybe 5gb of data per month. People really over-complicate it :)
How long does it take your team once to make that resource consumption go away permanently? An hour? Even a month? You'd still be ahead.
Sometimes leaving the spaghetti alone IS correct. Sometimes it isn't.
But most awful spaghetti happens because what it was asked to do was awful, IMHO.
I'd expect even rather junior programmers to recognize that the more time you spend working on a project the more new and unexpected things you find. New features, better ways to implement things, whatever. And ultimately, we all look back at code we wrote a year ago and are going to say "what idiot wrote this garbage?" I mean, if you don't, it is probably a sign that you aren't improving. It's impossible to write perfect code, and even if it was, what would be "perfect" is a moving target. So either way, you should be writing better code today when compared to yesterday.
> without the expensive big rewrite.
While it isn't possible to completely put this off, I find there's a helpful tactic to reduce spaghetti-creep. "Maintenance is cheaper than repair." People say "don't fix what isn't broken" but I think that's wrong. You should "fix" things before they are broken. Because frankly, it is cheaper to replace your visibly degrading water pipe than it is to wait for it to burst. You not only have to pay for the new pipe, but all the damage it caused when it broke. Maintenance increases longevity and helps you monitor so you can replace things before they break. Doesn't matter if you're plumbing water pipes or plumbing spaghetti, the tactic is the same.

If I need three weeks to optimize code that will run for 2 hours per month, it's not worth it.
But if the code happens to be used 10 times a week and takes about a day or two to run, it's a no-brainer: spending a month optimizing for a 10% speed increase is worth it!
Exercise for the reader: given an unlimited budget to hire 1800s clerks, how many FPS could you achieve running Doom? (Obviously the number is too low to make the game playable.)
you're still releasing resources - so you might not become faster overall, but you can compute more in the same time later if the necessity arises (although that might be somewhat premature, it can be good for library code - it becomes more applicable in different environments)
and there are some rare but tricky scenarios like:
the hardware is a mobile phone: the app seems to be bottlenecked on arithmetic according to the profiler, so it feels obvious to start optimizing there
in reality what happens is that the hardware has a limit on total power, so it can't feed the CPU, GPU and memory at full power all at the same time
since the app uses too much memory, power has to be redirected there; memory also emits heat, so the CPU gets throttled
by optimizing memory, everything runs colder -> the CPU gets more power and can sustain higher frequencies -> the app becomes faster
Perhaps restated: If the optimization cannot be felt (ie, impact on the product experience), it is not an optimization worth pursuing.
Oh, that might still be good for battery life (or power consumption in general).
It's a single-line phrase, I wouldn't interpret it too literally. Usually you've got to be pretty liberal when interpreting those easy-to-remember lines. It's a lot harder to remember the multiple books we could literally fill when bringing up the exceptions.
1. Decide if optimization is even necessary.
2. Then optimize the slowest path
"Curiosity killed the cat, but satisfaction brought it back." Is practically on the same level.
If you're careful to exclude creeping featurism and architectural astronautics from the definition of 'optimization', then very few people I've seen be warned off of digging into that sort of work actually needed to be reined in. YAGNI covers a lot of those situations, and generally with fewer false positives. Still false positives, though. In large part because people disagree on what "the last responsible moment" is, partly because our estimates are always off by 2x, so by the time we agree to work on things we've waited about twice as long as we should have and now it's all half-assed. Irresponsible.
A big thing I rail against is the meaning of an engineer and that our job is about making the best product, not making the most profitable product (most times I bring this up someone will act like there's no difference between these. That itself is concerning). The contention between us engineers and the business people is what creates balance, but I'm afraid we've turned into yesmen instead. Woz needs Jobs, but Jobs also needs Woz (probably more than the other way around). The "magic" happens at the intersection of different expertise.
There's just a lot of weird but subtle ways these things express themselves. Like how a question like "but what about x problem" is interpreted as "no" instead of "yes, but". Or like how people quote Knuth and use it as a thought terminating cliche. In ML we see it with "scale is all you need."
In effect, by choosing to do things the easy way we are choosing to do things the hard way. This really confuses me, because for so long in CS the internalized value was to "be lazy." Not in the way that you put off doing the dishes, but in the way that you recognize that doing the dishes now is easier than doing them tomorrow, when you 1) have more dishes and 2) the dishes you left out are harder to clean because the food has hardened on the plate. What happened to that "efficient lazy" mindset, and how did we turn into "typical lazy"? [0]
[0] (I'm pretty sure I need to add this) https://en.wikipedia.org/wiki/Rhetorical_question
Here we are sitting at four to seven orders of magnitude separated from Knuth, depending on whether you mean number of devs or number of machines or size of problems tackled.
But that's really understating the case, because those were large shared mainframes. Today's equivalent would be not a 2-socket rackmount server, or even a whole rack, but quite plausibly a whole data center, three more orders of magnitude. 9 orders of magnitude more RAM, 10 orders of magnitude more arithmetic.
Probably also worth mentioning that the absolute number of computers has also increased a lot; every USB-C power brick contains a multi-MIPS computer.
I agree that the number of devs has increased by only about four or five orders of magnitude. In 01974 I'd guess there might have been 50,000 programmers; my grandparents took programming classes around that time involving batch submission of punched cards, and that was a reasonably common thing at US universities. Today, Roblox has 380 million monthly active users; what percentage of them write their own games in it? And the popular programming environments Microsoft Excel and Google Sheets have over a billion users each.
Your comment reads like a strawman argument. No one is arguing against "improving code". What are you talking about? It reads like you are misrepresenting any comment that goes against your ideas, no matter how misguided they are, by framing your ideas as obvious improvements that can only be conceivably criticized by anyone who is against good code and in favor of bad code.
It's a rehash of the old tired software developer cliche of "I cannot do wrong vs everyone around me cannot do right".
Ironically, you are the type of people Knuth's quote defends software from: those who fail to understand that using claims of "optimization" as a cheat code to push through unjustifiable changes does not improve software, and actually makes it worse.
> "Curiosity killed the cat, but satisfaction brought it back." Is practically on the same level.
This is the same strawman. It's perfectly fine to be curious. No one wants to take the magic out of you. But your sense of wonder is not a cheat code that grants you the right to push nonsense into production code. Engineers propose changes based on sound reasoning. If the engineers in your team reject a change it's unlikely you're surrounded by incompetent fools who are muffling your brilliant sense of wonder.
> If you're careful to exclude creeping featurism and architectural astronautics from the definition of 'optimization', (...)
Your comment has a strong theme of treating anything not aligned with your personal taste as extremely bad and everything aligned with your personal taste as unquestionably good, something that can only possibly be opposed as part of an almost conspiratorial persecution. The changes you like are "improving code" whereas the ones proposed by third parties you don't like suddenly receive blanket accusations such as "creeping featurism and architectural astronautics".
Perhaps the problems you experience, and create, have nothing to do with optimization? Food for thought.
> No one is arguing against "improving code". What are you talking about?
This is actually a frequent occurrence. But honestly it usually doesn't come with that exact phrasing. It usually comes as "let's spend our time on this other thing that isn't important but is new, therefore important". It's true, sometimes you need to triage, but there's a clear bias: once something "works" (no matter how poorly) there is much less incentive to make it work not poorly.

Some food for thought: your comment relies entirely on the wisdom of the crowds. Which I agree is usually a good bet to go with. But there are conditions where it fails, even at scale. It's actually fairly easy to upend: all you need to do is remove the independence of the actors. Which, you can probably guess, is fairly common in our field. The more homogeneous we become, the less reliable the wisdom of the crowds is.
Plus, people have different experiences and different environments. What has been true is your experience need not be true in mine or any others.
I'll give you an example of irrationality here. I was working at a big tech company and as part of breaking down my project I was playing around with another part of the code. I more than doubled the prediction accuracy on customer data and made the code run faster and use less memory. Objectively better. It even happened "for free" because the tasks involved were part of my original job, though that wasn't explicitly the goal. What happened? Last I knew, the PR was still open. My boss never pushed for it because they were more invested in their other work that was yet to be completed but had more buzzwords. That work was only a bit more accurate than mine but 100x slower and required 10x more memory. I even told them what I did would work for the other one too, yet it never happened.
I've seen many instances like this. Sometimes people just don't want things to be better unless it's better in a very specific kind of way (and not necessarily via performance). Sometimes it's just politics.
On the other hand, I've seen very good teams with the exact opposite experience. It's a very common thread that those teams openly discuss and explain why a seemingly good idea is actually bad. Frequently, they'll let you have a go at it too, because it's a no-lose situation. If you're right, we all win. If you're wrong, there's an important lesson about the code to be learned, because there's hidden complexity that makes the seemingly good idea bad and it's often hard to explain. Either way you end up with a lot more knowledge about the code, which helps you in the long run.
Exactly. “It technically functions and therefore doesn’t need attention” has become an industry norm. There’s a massive bias towards piling on more features over making older ones good.
> “It technically functions and therefore doesn’t need attention” has become an industry norm.
Most commonly seen with the aphorism "don't fix what isn't broken." But I really hate that aphorism. There's some truth to it, but it is commonly used as a thought-terminating cliche. If you see a rusty pipe you should sure as hell fix it. Sure, it isn't "broken" in the sense that it has burst and is leaking water everywhere, but it sure is a ticking time bomb. And boy, is it a hell of a lot cheaper to fix a rusty pipe that isn't leaking than to fix a burst pipe and also pay for all the water damage.

The aphorism misses that "maintenance" is much cheaper than "repair". The aphorism misses that you can get major cost savings by doing things differently. Sure, it costs more in the moment, but are we trying to run a business paycheck to paycheck or are we trying to create something sustainable? I sure hope your business is thinking more than just 3 months out. "More expensive now" is far too common of an excuse, enough that I see it used to justify not implementing things that would have an ROI within 6 months! Not doing anything with an under-a-year ROI is absolutely batshit insane (unless you're a startup living on the edge or something, but I see this in big tech companies and I hope we understand that's the context here).
But there were parts we could sort out at build or deploy time, and I was able to fix a number of those.
One guy was tasked with a project I got turned into an epic: build fallback error pages at deploy time and push them into CDN. I told him not to build it the way he was, copy the one I had been working on. I got busy and he left, and we discovered that they hadn’t been updating when a customer changed their contact info and noticed months later that we still had the old info.
The build trigger had no downstream dependencies and there was no alert being sent for failure. The job was timing out for reasons similar to the ones I’d fixed. I tried to bandaid it, but it still errored out an hour into what looked like at least a 100-120 minute workload.
I discovered the main problem was that our customers could have multiple domain names and he was frobbing through three or four service calls trying to find the canonical one, and making one call per customer to see if they were still customers (we had an endpoint that would paginate all active customers, so ~400x more efficient and rising due to turnover).
The main table I had been using had one column I didn’t use, which had a url in it. I confirmed with our SME that this was in fact the canonical domain name, I just had to add the field to the QL and do a `new URL(url).hostname`. At the end of the day I ended up extracting about half the code from my tool, deleting over half of his code and replacing it with calls into mine (do it like I did it, but literally).
Four and a half minutes, and no more intermittent failures, because my version was built on back pressure to throttle requests under high server load, and total outbound requests around 1/8 of the original.
Tuning the JSON parser is not nearly as effective as replacing it with something less stupid, which is, in turn, not nearly as effective as finding ways to not do the RPC at all.
Most really performant systems of a certain age are also full of layer violations for this reason. If you care about performance (as in you are paid for it), you learn these realities pretty quickly. You also focus on retaining optionality for future optimizations by not designing the semantics of your system in a way that locks you permanently into low performance, which is unfortunately very common.
Another example of this is using the right algorithm for the job. I see a lot of code like let's say you have two lists and you need to go through the first one and find the corresponding element by id in the second. The naive implementation uses linear search of the second list, making the algorithm O(n^2). This will stall and take hours, days or longer to run when the lists get large, say into the tens to hundreds of thousands of elements.
If both lists come from your own database then you should have used a foreign key constraint and had the database join them for you. If you can't do that, let's say one or both lists come from a third-party API, you can create a dictionary (hashmap) of the second list to allow O(1) lookup rather than O(n), which brings your full algorithm's complexity to O(n). Now you can have millions of elements and it'll still run in milliseconds or seconds. And if this is something you need to do very often then consider storing the second list in your database so you can use a join instead. Avoid doing the work, as you say.
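A minimal sketch of that dictionary move, with made-up `orders`/`customers` shapes purely for illustration:

    customers = [{"id": i, "name": f"customer {i}"} for i in range(1_000)]
    orders = [{"id": i, "customer_id": i % 1_000} for i in range(10_000)]

    # O(n*m): linear scan of `customers` for every order.
    slow = [next(c for c in customers if c["id"] == o["customer_id"]) for o in orders]

    # O(n+m): index `customers` by id once, then every lookup is O(1).
    by_id = {c["id"]: c for c in customers}
    fast = [by_id[o["customer_id"]] for o in orders]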
These are the most common mistakes I see people make. That and situations where you make one request, wait for a response, then use that to make another request, wait for response and so on.
You don't need a profiler to avoid this type of problem, you just need to understand what you're doing at a fundamental level. Just use a dictionary by default, it doesn't matter if it might be slightly slower for very small inputs - it'll be fast enough and it'll scale.
It's kind of astounding how few people seem to understand these things. We all (me and my colleagues who are generally university educated) learned about complexity analysis, algorithms and data structures, but it seems like quite few actually understood the material.
The reason for this is they have a frontend where they do stuff like
    let a = await fetchA();
    let b = await fetchB(a.id);
    let c = await fetchC(b.id);
    // ...
This on its own is obviously slow. But then you look at the backend code and it turns out the backend code for A, B and C also do this same kind of thing. Even worse, it's fetching way more data than is actually needed so even a single request is quite slow simply because there's a lot of data. And worse yet, it's fetching the data in a shape that is completely wrong for the application, so the frontend has thousands of lines of code that reshapes the data before it can be used - making things even slower, and making our jobs difficult because it's really hard to wrap your head around everything that's happening.
And there's no technical reason for it, it doesn't have to be like this. We're working on refactoring it at the moment, we have a deadline approaching quite fast so we don't really want to spend time refactoring but the system they've created is complete ass and we eventually decided we have to spend the time to refactor now. It'll be much more difficult to fix the database schema and such when we're in prod and need to worry about losing data.
Most people I see who get offended are reacting to the claim that optimization is never useful. But it's pretty easy to knock that claim over.
I don't deny that plenty of people use adjectives very sloppily, and that much writing is improved by just ignoring them, but Knuth is not one of them.
I also see ignoring qualifiers as a frequent cause for online arguments. They're critical words when communicating and dropping them can dramatically change what's being communicated. And of course people are fighting because they end up talking about very different things and don't even realize it.
While I agree with you, that it's common for people to use these words sloppily, I think it is best to default to not ignore them. IME, in the worst case, it aids in communication as you get clarification.
Isn't that clearer and at least as concise as the original, also using an idiomatic metaphor while avoiding the reference to mythical dark forces?
They are all worth reading.
As with that Google testing talk ("We said all your tests are bad, but that's not entirely true because most of you don't have any tests"), the reality is that most people don't have any profiling.
If you don't have any profiling then in fact every optimisation is premature.
BTW, my junior developers don't know what a debugger is.
- argue against thinking about any kind of design choice for performance reasons, eg the data structure decisions suggested in this article
- argue for a ‘fix it later’ approach to systems design. I think for lots of systems you have some ideas for how you would like them to perform, and you could, if you thought about it, often tell that some designs would never meet them, but instead you go ahead with some simple idea that handles the semantics without the performance only to discover that it is very hard to ‘optimise’ later.
> a ‘fix it later’ approach
Oh man, I hate how often this is used. Everyone knows there's nothing more permanent than a temporary fix lol.

But what I think people don't realize is that this is exactly what tech debt is. You're moving fast, but doing so makes you slow once you're no longer working on a very short timeline. That's because these issues compound. Not only do we repeat the same mistake, but we're building on top of shaky ground. So going back and fixing things ends up requiring far more effort than it would have taken to fix them early. Whereas by fixing early your efforts similarly compound, but this time benefiting you.
I think a good example of this is when you see people rewrite a codebase. You'll see headlines like "by switching to rust we got a 500% improvement!" Most of that isn't rust, most of that is better algorithms and design.
Of course, you can't always write your best code. There are practical constraints and no code can be perfect. But I think Knuth's advice still fits today, despite a very different audience. He was talking to people who were too obsessed with optimization, while today we're overly obsessed with quickly getting to some checkpoint. But the advice is the same: "use a fucking profiler". That's how you find the balance and know what actually can be put off till later. It's the only way you can do this in an informed way. Yet, when was the last time you saw someone pull out a profiler? I'm betting the vast majority of HN users can't remember, and I'd wager a good number never have.
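In Python the standard library already covers this; a minimal sketch (handle_request is just a stand-in for whatever code path you suspect is slow):

    import cProfile
    import pstats

    def handle_request(payload):
        # stand-in for the suspect code path
        return sorted(str(x) for x in payload)

    cProfile.run("handle_request(range(200_000))", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time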
I realize this is a very personal preference and it obviously can't be applied to everyone. Someone with less understanding might find a profiler very useful and I think those people will learn the same things I'm talking about - as you find the slow code and learn how to make it fast you'll stop making the same mistakes.
A profiler might be useful if I was specifically working to optimize some code, especially code I hadn't written myself. But for my daily work it's almost always good enough to keep performance in mind and design the system to be fast enough from the bottom up.
Most code doesn't have to be anywhere near optimal, it just has to be reasonably fast so that users don't have to sit and stare at loading spinners for seconds at a time. Some times that's unavoidable, some times you're crunching huge amounts of data or something like that. But most of the time, slow systems are slow because the people who designed and implemented them didn't understand what they were doing.
> I consider the time complexity of the code I'm writing
First, I applaud you for doing this. This is very good practice and I want to encourage everyone to do it.

Second, it's important to remember that big O is very naïve, especially when you drop the constant. The size of your n can make a big difference. O(n) can be worse than O(n^2) for small n. When constants are considered, it's even possible for O(n^3) algos to beat O(n)! This throws a wrench in the analysis, making it more time consuming.
But where the profiler really shines is *you don't write all your code from scratch*. So it tells you when to use a library and when to rewrite. The only other option is to painstakingly go through every library you use.
So the profiler is a big time saver. Big O for quick and dirty first pass and profiler for moving from alpha to beta and beyond.
Don't get me wrong, I'm not full of best habits either! I definitely don't do this for most personal projects nor most research code (which is most of what I write, though I profile a different thing...) but a profiler is the most cost effective way to write *production* code. It becomes exponentially more important as code size grows and team size grows. After all, not everyone is using your best practices. So you need practices that scale and work as you relinquish control.
Unfortunately, you need to profile routinely. Luckily you can automate this and attach it to the CI pipeline, so the cost isn't high.
It can be, but it isn't necessarily. And I don't care about small values of n. If I'm spinning through two lists of size 5 it doesn't matter if option A is slightly faster than option B; both options will run on the order of nanoseconds. The lower time complexity solution will continue to be reasonably fast as the input grows; pretty soon the difference will be measured in seconds, minutes, hours, days... Using a worse complexity solution for small inputs is a micro optimization you can use sometimes, but it is a micro optimization. Using the best time complexity is what I like to call a macro optimization. It's the default solution. You might deviate from it at times, but you're much better off using complexity to guide your decisions than not doing it. Once you know what you're doing you can deviate from it when appropriate; sometimes worse complexity is better in specific cases. But 99.9% of the time, the best time complexity is good enough either way.
I usually don't need the code I write to be optimal. It doesn't matter if it's a bit slower than it technically could be - as long as it's fast enough and will continue to be fast when the input grows.
Sometimes you may want to squeeze out the absolute maximum performance, and in that case you may be able to micro-optimize by choosing a worse complexity solution for cases where you know the inputs will always be small. If that assumption ever breaks, your code is now a bottleneck. It can be a useful thing to do in certain niche situations, but for the vast majority of code you're better off just using the best complexity solution you can. Otherwise you may come back to find that the code that ran in a millisecond when you profiled it now takes 20 minutes because there's more data.
How do you run a profiler in CI? Do you hook it up to your tests? You need some code that runs with actual input so I guess you either profile the test suite or write tests specifically for profiling? Or maybe you write benchmarks and hook a profiler up to that?
This sounds like it could be a useful technique but very time consuming if you have to write the profiler tests/benchmarks specifically, and kind of useless if you just hook it up to your general test suite. I want my tests to run fast so I don't use big data for testing, and you won't really see where the bottlenecks are unless you're testing with a lot of data.
Essentially, I'm saying use the profiler to double check your estimates. Because that's what (typical) complexity analysis is, an estimate. But a profiler gives you so much more. You can't always trust the libraries, and even when you can, you need to remember libraries have different goals than you do. So just grab a profiler and check lol. It isn't that hard.
As for connecting to a CI, you're over thinking it.
How do you normally profile your code? Great! Can you do that programmatically? I bet you can. Because you're profiling routines.
You're already writing the test cases, right? RIGHT?
Btw, you can get the CI to work in nontrivial ways. It doesn't have to profile every push. You could, idk, profile every time you merge into a staging branch? You can also profile differently on different branches. There's a lot of options between "every commit" and "fuck it, do it in prod". I'm sure you're more than smart enough to figure out a solution that works for your case. Frankly, there's no one-size-fits-all solution here.
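One possible shape for that CI hook, sketched with stdlib-only pieces (the workload, the budget, and the names are placeholders, not a prescription):

    # ci_profile.py: profile a representative workload and fail the job if it blows its budget;
    # the top-10 breakdown lands in the CI logs either way.
    import cProfile
    import io
    import pstats
    import sys
    import time

    BUDGET_SECONDS = 2.0  # made-up budget

    def representative_workload():
        # placeholder for the real code path you care about
        return sum(sorted(range(500_000), reverse=True))

    profiler = cProfile.Profile()
    start = time.perf_counter()
    profiler.runcall(representative_workload)
    elapsed = time.perf_counter() - start

    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
    print(out.getvalue())

    sys.exit(0 if elapsed <= BUDGET_SECONDS else 1)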
I mostly work on web apps. If I was working on an application with a large user base and it was struggling to keep up with a large number of requests during peak hours, I might try to optimize the most frequently used or the most performance sensitive endpoints to ease the load. Then I might profile a call to these endpoints to see where I should focus my efforts. But if the app is responding quickly during peak traffic and generally just working perfectly, I see no reason to spend extra time profiling things just for the sake of it. I'm not paying the cloud bills and the people who are aren't asking me to reduce them. Realistically it would probably take years or decades to recoup the cost of having me investigate these things anyway, it's not worth it.
Yes, I already write tests. But like I said in one of my earlier responses, those tests exist to test functionality not performance. For example the test dataset might run super fast with your O(n^2) algorithm but the prod dataset might be much larger and take hours to run. Profiler won't tell you that unless you have a test with a huge dataset to provoke the issue. So now you're writing specialized tests just for profiling, which in my opinion falls under premature optimization. It also makes your test suite take longer to run which makes you less likely to run the tests as often as you otherwise would.
I'd rather just go with the low complexity option and revisit it later if necessary. I very rarely have any issues with this approach, in fact I'm not sure I have ever had a performance problem caused by my code. If there's code that stalls the application it's always someone else's work. My code isn't perfect, but it's generally fast enough. In my mind that's pragmatic programming - make it work, make it clean, make it fast enough. Most code doesn't need to be anywhere near optimal.
Most web devs I’ve met do not meet that criteria, but sadly also have rarely used a profiler. They’re not opposed to doing so, they’ve just never been asked to do so.
The strategy works well, as you point out, for a single person but doesn't scale and can't be applied to library code without doing a deep dive into the library. Which at that point, a profiler is far more time effective.
> They’re not opposed to doing so, they’ve just never been asked to do so.
I think the best option is to integrate profiling into the CI framework. Like a git pre-hook or, even better, offloading it so that your VMs profile while you're testing different environments.

I think mostly it is a matter of habit. I mean let's be honest, we probably don't care about profiling our for-fun project code. Where profiling really matters is in production. But I think the best way to normalize it is just to automate it. Which, honestly, I've never seen done and I'm really not sure why.
It really seems to me like you don't understand the advice about premature optimization. This is it, this is what Knuth is talking about. Profiling everything, optimizing everything. That's premature optimization.
Mature optimization is when you have an application and you see that part of it is slow. You investigate, for example with a profiler. Maybe you write benchmarks. Then you try a different approach and benchmark that as well (either with an actual benchmark library or just by running the code locally) and see if the new solution is faster.
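A sketch of that "try both and measure" step (old_way/new_way are stand-ins for the two candidate implementations):

    import timeit

    data = list(range(2_000))
    hot = data[:100]

    def old_way(xs):
        return [x for x in xs if x in hot]        # list membership: O(len(hot)) per element

    def new_way(xs):
        hot_set = set(hot)
        return [x for x in xs if x in hot_set]    # set membership: ~O(1) per element

    print("old:", timeit.timeit(lambda: old_way(data), number=50))
    print("new:", timeit.timeit(lambda: new_way(data), number=50))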
I like to refer to what I do as macro optimization. I don't spend significant time worrying about performance, I just write code and kind of just keep complexity, IO and such in mind, avoiding badly scaling algorithms if I can. I don't care if the badly scaling algorithm might be slightly better than the one that scales, because that difference is nearly always negligible.
Like you said in your other comment, sometimes the constant can make the O(n^2) algorithm faster than the O(n) one for small inputs. But that difference is nothing; it's gonna be like one runs in 5 nanoseconds and the other runs in 10. So you saved 5 nanoseconds and nobody noticed nor cared. Then the input grows and now your badly scaling algorithm runs in 2 hours whereas the scaling one runs in a few milliseconds. That's why you use the best scaling algorithm by default. Even if you think the input will never grow you might be wrong, and the consequence of being wrong is bad. So you have high risk and low reward.
> It really seems to me like you don't understand the advice about premature optimization. This is it, this is what Knuth is talking about. Profiling everything, optimizing everything. That's premature optimization
You greatly misunderstand what I'm saying:

1) profile your code
2) optimize what needs to be optimized
3) there is no step 3
Why did you suddenly assume I said you should optimize everything? We don't have infinite time on our hands.

If you have a good workflow that includes profiling your code, that's cool; it's probably a good way to improve the performance of your code. But personally I think it's enough to test the code and make sure it works and is reasonably fast. I'm not going to spend time eliminating nanoseconds and low-single-digit milliseconds.
If something actually keeps users waiting I'll look into it. As long as it's near instant it's good enough for me.
No wonder the classical complexity analysis of algorithms generally took memory access to be instantaneous: because it, essentially, was instantaneous.
[0] https://en.wikipedia.org/wiki/Honeywell_316#Hardware_descrip...
Notably, pretty much the entire body of discourse around structured programming is totally lost on modern programmers failing to even imagine the contrasts.
The biggest way I see this is picking an architecture or programming language (cough Python) that is inherently slow. "We'll worry about performance later" they say, or frequently "it's not performance critical".
Cut to two years later, you have 200k lines of Python that spends 20 minutes loading a 10GB JSON file.
If you develop in an environment where you have high velocity (eg python) you can much sooner learn that you are building the wrong thing and iterate.
Most systems do not have very high performance requirements. A typical python web application is not going to need to service many requests and the slow stuff it does is likely going to be waiting on responses from a database or other system. The thing to make such a program faster is to send better db queries rather than optimising the O(1 web page) work that is done in python.
In my experience all great programmers think about performance from the moment they start thinking about a problem.
Like, you're writing an O(n^2) nested loop over an array that's not going to be tiny? Get out of here! That just shouldn't happen and talking about "premature optimization" points in exactly the wrong direction.
The funny thing is, we forgot how to write programs that spend most of their time in 3% of the code.
Profile a modern application and you see 50 level deep stacks and tiny slices of 0.3% of CPU time spent here and there. And yet these slices amount to 100%.
> In every .NET release, there are a multitude of welcome PRs that make small improvements. These changes on their own typically don’t “move the needle,” don’t on their own make very measurable end-to-end changes. However, an allocation removed here, an unnecessary bounds check removed there, it all adds up. Constantly working to remove this “peanut butter,” as we often refer to it (a thin smearing of overhead across everything), helps improve the performance of the platform in the aggregate.
[0]: https://devblogs.microsoft.com/dotnet/performance-improvemen...
Practically every part of C# is a hot path because it runs on probably billions of devices every minute of every day. This code is worth micro-optimizing. A 1% improvement in a popular C# library probably saves megawatts of electricity (not to mention the time saved) on a global scale.
Most code is not like that at all. Most code is not worth putting this kind of effort into. Most code is fine being 10% or even 500% slower than it technically could be because computers are lightning fast and can crunch numbers at an incredible speed. Even if the computer is doing 5 times as much work as it needs to it'll still be instant for most use cases. The problem for these kinds of applications arises when it's not doing 5 times as much work as it needs to but 5000 times or 5 million times. Or when the data is just really large and even the optimal solution would take a noticeable amount of time.
Most programs - after dealing with bugs & low hanging fruit - don't have hotspots.
But, in any case, this is why we build large programs using modular architectures. You don't profile the Linux kernel hoping to find a hotspot in some obscure driver somewhere, you profile the driver directly. Same for filesystems, networking etc. Similarly we build libraries for things like arithmetic and SAT solvers etc. that will be used everywhere and optimise them directly.
Your comment reads like you're disagreeing with your parent, but in fact you are reinforcing the point. We've forgotten how to build programs like this and end up with big balls of mud that can't be profiled properly.
Then they add caching to compensate for their crappy system design and stuff like that, bloating the system with lots of unnecessary code. They store the data naively then spend significant processing time preparing the data for delivery rather than just storing the data in an ideal format in the first place. Lots of stuff like that.
Beware too of premature pessimization. Don't write bubble sort just because you haven't benchmarked your code to show it is a bottleneck - which is what some will do, and then incorrectly cite premature optimization when you tell them they should do better. Note that any competent language has a sort in the standard library that is better than bubble sort.
Also there's Knuth admitting he avoids GO TO because he is afraid of being scolded by Edsger Dijkstra.
From the paper:
"It is clearly better to write programs in a language that reveals the control structure, even if we are intimately conscious of the hardware at each step; and therefore I will be discussing a structured assembly language called PL/MIX in the fifth volume of The art of computer programming"
Looking forward to that!
I personally find that "break/continue" with (optionally labelled) loops cover about 95% of GOTO's use cases today (including "one-and-a-half" loops), although e.g. Rust and Zig, with their expression-oriented design (and support for sum types) offer an option to experiment with Zahn's event mechanism... but usually factoring an inner loop body into a separate function is a more clear-to-read approach (as for efficiency? Hopefully you have a decent inliner idk).
The only thing I truly miss, occasionally, is a way to concisely write a one-and-a-half range/FOR loop. Something like this:
    for key, value in enumerate(obj):
        emit(f'{key}={value}')
    between:
        emit(',')
When you have a generic "while True:" loop, conditional "break" works fine enough, but here? Ugh.

    i = START;
    if (i > END) goto _after_loop;
    goto _loop_body;
    while (1) {
        BETWEEN;
    _loop_body:
        BODY;
    _loop_test:  // "continue" would desugar into "goto _loop_test".
        i++;
        if (i > END) break;
    }
    _after_loop:
    ...
which, if you throw away the "BETWEEN" part, is a standard way to compile for-loops, with pre-test and loop inversion. But writing this by hand is just... no. I'd rather add either a boolean flag, or throw in an "if i > 0: ..." or "if i != END: ..." guard. Sometimes those even get optimized away, see e.g. [0] — both of the loops there desugar into pretty much this; but it's brittle: changing "if (i > START)" into "if (i != START)" in the second function duplicates the loop body, and if you make it sufficiently larger than a single function call, the elided conditional branch rematerializes again.

But yes, the goto doesn't help much with
    it = iter(...)
    try:
        loop_var = next(it)
    except StopIteration:
        return
    body(loop_var)
    while True:
        try:
            loop_var = next(it)
        except StopIteration:
            break
        between(loop_var)
        body(loop_var)
loop, where you must crank the iterator before every iteration; there you have to either duplicate some code, or count your iterations and check the current number.

https://www.gnu.org/software/sather/docs-1.2/tutorial/iterat...
With this machinery, you could declare an iterator like so:
    first!(): BOOL is
        yield true;
        loop
            yield false;
        end;
    end;
And then the case that you describe would be something like:

    a: ARRAY{INT} := |1,2,3|;
    loop
        if not first! then
            emit(',');
        end;
        x := a.aelt!;
        emit(x);
    end;
Which in practice amounts to the same thing as an explicit boolean flag - it's just hidden inside the internal state of first! that the loop has to maintain - but it sure is a lot clearer.

In Python and the like, I suppose you could do the same with zip() if you aren't already using enumerate():
    def first_or_not():
        yield True
        while True:
            yield False

    for value, is_first in zip(obj, first_or_not()):
        if not is_first:
            emit(',')
        emit(value)
But this gets unnecessarily verbose compared to Sather's implicit zip.

Regarding the ability to optimize away the version with an index check or equivalent, I was curious whether C++ compilers these days are up to snuff if you choose to do something similar to Python. Turns out that Clang can indeed turn this:
    void emit_array(span<char*> xs) {
        for (auto [x, i] : views::zip(xs, views::iota(0))) {
            if (i) {
                emit(",");
            }
            emit(x);
        }
    }
into optimized assembly that doesn't perform the check inside the loop. But it does it by duplicating emit(x) so that it always runs once before the first iteration, and then checking for >1 item and only looping then. Ditto for this Rust code:

    fn emit_array(xs: &[String]) {
        for (i, x) in xs.into_iter().enumerate() {
            if i != 0 {
                emit(",");
            }
            emit(&x);
        }
    }
Unfortunately both g++ and MSVC struggle to optimize this equally well; both end up using a flag that is repeatedly checked in the loop body.

This statement in the introduction applies to so many things in CS:
> I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline!
Like Brooks said: No Silver Bullet.

In my experience the latter is actually often expressed. What else would "premature" mean, other than that you don't know yet whether the optimization is worth it?
The disagreement is usually more about small inefficiencies that may compound in the large but whose combined effects are difficult to assess, compiler/platform/environment-dependent optimizations that may be pessimizations elsewhere, reasoning about asymptotic runtime (which shouldn’t require benchmarking — but with cache locality effects sometimes it does), the validity of microbenchmarks, and so on.
If you burn into your soul the table of NUMA latencies and really accept the inevitable conclusions regarding practical computation, a lot of the optimization discussion simply boils down to keeping the closest cache as hot as possible as often as possible. Put differently, if you can get the working set to ~fit in L1, then you have nothing to worry about. The CPU is faster than most people can reason about now. You can talk to L1 1000x before you can talk to the GPU 1x. This stuff makes a ridiculous difference when applied correctly.
Context switching and communication between threads is where most developers begin to lose the rabbit. Very often a problem can be solved on one physical core much faster than if you force it to be solved across many physical cores.
For parallel code, you basically have to know in advance that it is needed. You can't normally just take a big stateful / mutable codebase and throw some cores at it.
Software engineers and computer scientists are often reluctant to parallelize their code, because they have learned that it's difficult and full of hidden pitfalls. But then they lose the performance gains from multithreaded loops and other simple approaches, which are often essentially free. Multithreaded code is not particularly difficult or dangerous, as long as you are not trying to be clever.
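A sketch of that "multithreaded loop" shape in Python (process pool here, since CPython threads won't speed up CPU-bound work; `score` and `items` are placeholders):

    from concurrent.futures import ProcessPoolExecutor

    def score(n):
        # stand-in for independent, CPU-bound per-item work
        return sum(i * i for i in range(n))

    items = [200_000 + i for i in range(32)]

    if __name__ == "__main__":
        # sequential version for comparison: results = [score(n) for n in items]
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(score, items))
        print(len(results))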
In terms of deciding on what programming language to use for a project, that is a complicated one. Usually it is some combination of the team and the problem I suppose.
On a long enough timeline, the probability that someone will call your code in a for-loop approaches 1.