Lukas Bergdoll and Orson Peters did some really important heavy lifting to get this stuff from "If you're an expert you could probably tune a general purpose sort to have these nice properties" to "Rust's general purpose sort has these properties out of the box" and that's going to make a real difference to a lot of software that would never get hand tuned.
A couple of years later I was doing my PhD, and I spent a lot of time optimizing a stable sort called glidesort. Around the same time Lukas Bergdoll had started work on his own sort and was providing candidate PRs to improve the standard library sort. I reached out to him and we agreed to collaborate instead of compete, and I'd say it ended up working out nicely.
Ultimately I like tinkering with things and making them fast. I actually really like reinventing the wheel, finding out why it has the shape that it does, and seeing if there's anything left to improve.
But it feels a bit sad to do all that work only for it to disappear into the void. It makes me happiest when people actually use the things I build, and there's no broader path to getting something into people's hands than having it power the standard library.
I expect that in this case, like in all cases, as the datasets become galactically large, the O(n) algorithm will start winning again.
Also, memory access is constant time only up to some limit set by the hardware, which requires significant changes to the implementation when the data does not fit in system memory. So the hash algorithm will not stay O(n) once you go past the available memory.
The sorting algorithms do not suffer from these complexities quite as much, and similar approaches can be used with data sets that do not fit a single system's memory. The sorting-based algorithms will likely win in the galactically large cases.
Edit: Also, once the hash table would need to grow beyond what the hash function can describe (e.g. beyond 64-bit integers), you need to grow the function's data type. This is essentially a hidden log(n) factor, as the required width of that data type grows with log(n) of the maximum data size.
Also if items take up k bytes then the hash must typically be O(k), and both the hashing and radix sort are O(n k).
Really radix sort should be considered O(N) where N is the total amount of data in bytes. It can beat the theoretical limit because it sorts lexicographically, which is not always an option.
Basically, the hashed sorting approach is effectively O(n), and it is so for the same reason that the "length of hash table" approach is. The degenerate case is counting sort, which is a single pass with a radix base of k. (Which is analogous to pointing out that you don't get hash collisions when the hash table is as big as the hashed key space.)
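As a concrete (hypothetical) illustration of the O(n * k) shape being described, here is a minimal sketch of an LSD radix sort over u64 keys using 8-bit digits, i.e. k = 8 linear counting passes for 8-byte keys; shrink the key range until a single pass covers it and you get counting sort. The digit width and the scratch-buffer strategy are arbitrary choices for illustration, not how any particular library implements it.

    // LSD radix sort for u64 keys: 8 passes over 8-bit digits, each pass linear in n.
    fn radix_sort_u64(keys: &mut Vec<u64>) {
        let mut scratch = vec![0u64; keys.len()];
        for pass in 0..8u32 {
            let shift = pass * 8;
            // Counting pass: histogram of the current 8-bit digit.
            let mut counts = [0usize; 256];
            for &k in keys.iter() {
                counts[((k >> shift) & 0xFF) as usize] += 1;
            }
            // Prefix sums turn counts into starting offsets for each bucket.
            let mut offsets = [0usize; 256];
            for d in 1..256usize {
                offsets[d] = offsets[d - 1] + counts[d - 1];
            }
            // Stable scatter into the scratch buffer in digit order.
            for &k in keys.iter() {
                let d = ((k >> shift) & 0xFF) as usize;
                scratch[offsets[d]] = k;
                offsets[d] += 1;
            }
            std::mem::swap(keys, &mut scratch);
        }
    }

    fn main() {
        let mut v = vec![42u64, 7, u64::MAX, 0, 7, 1 << 40];
        radix_sort_u64(&mut v);
        assert!(v.windows(2).all(|w| w[0] <= w[1]));
        println!("{:?}", v);
    }

Every pass touches all n keys, so the total work is n times the number of digit passes, which is where the k in O(n * k) comes from.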
One more fun fact: this is also the reason why Turing machines are a popular complexity model. The tape on a Turing machine does not allow random access, so it simulates the act of "going somewhere to get your data". And as you might expect, hash table operations are not O(1) on a Turing machine.
Sorting (really, scanning through an array) is very efficient in its use of cache lines, while hash tables are very inefficient.
In the example, every memory access in hashing gets 8 bytes of useful data, while every memory access in sorting gets 64 bytes of useful data.
So sorting starts out with an 8x advantage in useful bytes per access, and it wins as long as it needs fewer than 8 passes over the data.
For this problem, the radix sort is log_1024(n) passes, which for a key size of 64 bits can’t ever get above 7 passes.
If you increase the key size then the efficiency win of sorting drops (128-bit keys mean you have to run fewer than 4 passes to win).
Essentially the size of the key compared to the cache line size is a (hidden) part of the big O.
I could imagine the hash table wins again beyond an even greater threshold. Like what about 120GB and beyond?
So it comes down to the cost of hashing, hash misses, and other factors as discussed in the article.
I remember learning that radix sort is (almost?) always fastest if you're sorting integers.
A related fun question is to analyze hash table performance for strings where you have no bound on string length.
Here's my attempt at it, based on that analysis:
Suppose each item key requires s bytes.
For the hash table, assuming s <= 64 (our cache line size), we need to read one cache line and write one cache line per item.
The bandwidth to sort one key is p(N) * 2 * s, where p(N) is the number of passes of 1024-bucket radix sort required to sort N elements, and the 2 comes from 1 read + 1 write per radix sort pass.
p(N) = ceil(log2(N) / log2(buckets))
Suppose N is the max number of items we can distinguish with an s=8 byte item key, so N = 2^64.
Then p(N) = ceil(64 / 10) = 7 passes.
7 radix passes * (1 read + 1 write) * 8 bytes per item key = 112 bytes of bandwidth consumed, which is still less than the 2 * 64 = 128 bytes for the hash table.
We haven't found the threshold yet.
We need to either increase the item count N or increase the item key size s beyond 8 bytes to find a threshold where this analysis estimates that the hash map uses less bandwidth. But we can't increase the item count N without first increasing the key size s. Suppose we just increase the key size and leave N as-is.
Assuming N=2^64 items and a key size of s=10 bytes gives an estimate of 140 bytes of bandwidth for sorting vs 128 bytes for the hash table. We expect sorting to be slower for these values, and increasing s or N further should make it even worse.
(The bandwidth required for the hash map hasn't increased, as our 10-byte key is still smaller than a cache line.)
Edit: the above is wrong, as I'm mixing up elements and items. Elements are not required to be unique items; if every element were a unique item, the problem would be trivial. So N is not bounded by the key size, and we can increase N beyond 2^64 without increasing the key size s beyond 8 bytes.
Keeping the key size s fixed at 8 bytes per item suggests the threshold is at N = 2^80 items,
given by solving for N at the point where the bandwidth estimate for the sort equals the bandwidth estimate for the hash table:
2 * 8 * (log2(N) / log2(1024)) = 2 * 64
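Here is a small sketch that just reproduces the arithmetic above. The constants (64-byte cache lines, 1024-bucket radix passes, one read plus one write per pass) are the assumptions from this thread, not measurements, and N is passed around as log2(N) because 2^64 and 2^80 don't fit comfortably in ordinary integer types.

    /// Passes of 1024-bucket radix sort needed for N items: ceil(log2(N) / 10).
    fn radix_passes(log2_n: f64) -> f64 {
        (log2_n / 1024f64.log2()).ceil()
    }

    /// Estimated bytes of memory traffic per item when radix sorting s-byte keys:
    /// one read plus one write of the key per pass.
    fn sort_bytes_per_item(log2_n: f64, s: f64) -> f64 {
        radix_passes(log2_n) * 2.0 * s
    }

    /// Estimated bytes of traffic per item for the hash table: one cache line read
    /// plus one written, as long as the key fits in a 64-byte line.
    fn hash_bytes_per_item() -> f64 {
        2.0 * 64.0
    }

    fn main() {
        // N = 2^64 items, 8-byte keys: 7 passes * 16 bytes = 112, vs 128 for the hash table.
        println!("sort: {} bytes, hash: {} bytes",
                 sort_bytes_per_item(64.0, 8.0), hash_bytes_per_item());
        // Break-even: solve 2 * 8 * (log2(N) / 10) = 2 * 64 for log2(N), giving 80.
        let threshold_log2_n = hash_bytes_per_item() * 1024f64.log2() / (2.0 * 8.0);
        println!("estimated break-even at N = 2^{}", threshold_log2_n);
    }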
It's OK to waste bandwidth; it doesn't directly impact performance. However, the limitation you are actually hitting (which does directly impact performance) is the number of memory accesses you can issue (per cycle, for instance) and the latency of each access. With linear access, after some initial read, data is ready instantly for the CPU to consume. For scattered data (hash tables), you pay a latency penalty on every read.
So the bandwidth ratio is not the right factor to look at to estimate performance.
[edit] I should have said that "basic big O complexity" makes assumptions that break down. You can of course decide to model memory access as part of "big O", which is, after all, just a mathematical model.
However, in this particular case, the higher-complexity sorting algorithm gets better than the hash algorithm as N gets larger, even up to pretty large values of N. This is the counter-intuitive part. No one is surprised if an O(n log n) algorithm beats an O(n) algorithm for n = 10. But if the O(n log n) algorithm wins for n = 10 000 000, that's quite surprising.
It's O(k * n), where k is a constant determined by the key size (it doesn't depend on n).
For 64-bit keys with 1024 buckets, k==8, so it's O(8 * n) = O(n)
EDIT: ah no, I'm being dense. You'd aggregate the union of all the set-column indices across rows and the union of the set-row indices across the columns, keeping track of the source locations, and do the hashed sorting on those union vectors to find all the collision points. You could still get a small win, I think, by sorting the row-aggregation and column-aggregation separately though?
If the original table has n entries, the hash must have at least log(n) bits to get a different hash most of the time. I don't know the current state of the art, but I think the recommendation was to have at most a 95% filled array, so there are not too many collisions. So both methods are O(n log n), one under the hood and the other explicitly.
Maybe relevant comments [1, 2, 3]
[1] https://news.ycombinator.com/item?id=44854520 [2] https://news.ycombinator.com/item?id=43005468 [3] https://news.ycombinator.com/item?id=45214106
Interestingly, I've recently been thinking that Big-O notation is essentially a scam, in particular the log(N) part.
For small values of N, log(N) is essentially a constant, <= 32, so we can just disregard it, making sorting simply O(N).
For large values, even so-called linear algorithms (e.g. linear search) are actually O(N log(N)), as the storage requirements for a single element grow with log(N) (i.e. to store N=2^32 distinct elements, you need N log(N) = 2^32 * 32 bits, but to store N=2^64 elements, you need 2^64 * 64 bits).
Cache locality considerations make this effect even more pronounced.
How can I unread this?
However, when someone says an operation is O(1) vs O(log N), it still tells you something important. Very broadly speaking (tons of caveats depending on problem domain, of course) O(log N) usually implies some kind of tree traversal, while O(1) implies a very simple operation or lookup. And with tree traversal, you're chasing pointers all over memory, making your cache hate you.
So, like, if you have a binary tree with 65000 elements in it, we're talking a height of 15 or 16, something like that. That's not that much, but it is 15 or 16 pointers you're chasing, possibly cache-missing on a significant amount of them. Versus a hash-table lookup, where you do a single hash + one or two pointer dereferences. If this is in a hot path, you're going to notice a difference.
Again, lots of caveats, this article provides a good exception. In this case, the sorting has much more beneficial cache behavior than the hash table, which makes sense. But in general, log(N) hints at some kind of tree, and that's not always what you want.
But yes, don't be afraid of log(N). log(N) is tiny, and log(N) operations are very fast. log(N) is your friend.
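As a rough illustration of the lookup-cost point above, here is a sketch comparing lookups in Rust's BTreeMap and HashMap (hypothetical sizes and probe order; compile with --release). Note that std's BTreeMap packs many keys into each node, so it is already much friendlier to the cache than a plain binary tree; the timings are only directional and depend heavily on key size, hasher, and cache state.

    use std::collections::{BTreeMap, HashMap};
    use std::time::Instant;

    fn main() {
        const N: u64 = 65_000; // roughly 16 levels in a binary tree; fewer in a B-tree
        let tree: BTreeMap<u64, u64> = (0..N).map(|i| (i, i)).collect();
        let hash: HashMap<u64, u64> = (0..N).map(|i| (i, i)).collect();

        // Pseudo-random probe order so we don't just stream through the keys.
        let keys: Vec<u64> = (0..N).map(|i| (i * 2_654_435_761) % N).collect();

        let t = Instant::now();
        let mut sum = 0u64;
        for k in &keys {
            sum = sum.wrapping_add(*tree.get(k).unwrap());
        }
        println!("BTreeMap: {:?} (sum {})", t.elapsed(), sum);

        let t = Instant::now();
        let mut sum = 0u64;
        for k in &keys {
            sum = sum.wrapping_add(*hash.get(k).unwrap());
        }
        println!("HashMap:  {:?} (sum {})", t.elapsed(), sum);
    }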
What's the complexity of computing the nth fibonacci number? Make a graph of computation time with n=1..300 that visualizes your answer.
There are those that very quickly reply linear but admit they can't get a graph to corroborate, and there are those that very quickly say linear and even produce the graph! (though not correct fibonacci numbers...)
However you do it, it probably can't be linear, since multiplication is probably at best O(n log(n)), though that lower bound hasn't been proven. A naive recursive calculation will be even worse, since that has exponential complexity.
Another reason to do this is that O(1) is typically a lie. Basic operations like addition are assumed to be constant time, but in practice, even writing down a number, n, is O(log(n)). More commonly thought of as O(b) where b is the bit-length of n.
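To make the "writing down a number isn't free" point concrete, here is a sketch of iterative Fibonacci on a hand-rolled bignum (base-2^32 limbs, purely for illustration). Each addition walks every limb, and F(n) has roughly 0.69 * n bits, so the loop does on the order of n^2 bit operations rather than n, which is why a plot over n = 1..300 bends upward.

    /// Little-endian limbs, base 2^32.
    type BigNum = Vec<u32>;

    /// Add two bignums; the cost is proportional to the number of limbs, i.e. bits.
    fn add(a: &BigNum, b: &BigNum) -> BigNum {
        let mut out = Vec::with_capacity(a.len().max(b.len()) + 1);
        let mut carry = 0u64;
        for i in 0..a.len().max(b.len()) {
            let x = *a.get(i).unwrap_or(&0) as u64;
            let y = *b.get(i).unwrap_or(&0) as u64;
            let sum = x + y + carry;
            out.push(sum as u32);
            carry = sum >> 32;
        }
        if carry != 0 {
            out.push(carry as u32);
        }
        out
    }

    /// F(0) = 0, F(1) = 1, computed iteratively. Later additions are strictly more
    /// expensive than earlier ones because the operands keep growing.
    fn fib(n: usize) -> BigNum {
        let (mut a, mut b) = (vec![0u32], vec![1u32]);
        for _ in 0..n {
            let next = add(&a, &b);
            a = b;
            b = next;
        }
        a
    }

    fn main() {
        // F(300) needs about 208 bits, i.e. 7 limbs of 32 bits (already past u128).
        println!("F(300) uses {} limbs of 32 bits", fib(300).len());
    }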
N is the number of bits, not the number of elements, so no.
It is helpful to use N=# of elements, since the elements are often fixed/limited size. If elements aren't a fixed size, it's necessary to drop down to # of bits.
You can use a simple hash table without probing to classify each number as either (1) first instance of a number in the table, (2) known duplicate of a number in the table, or (3) didn't fit in the table.
If you use a hash table with n buckets and one item per bucket and the input is random, then at least 63% of the distinct numbers in the input will fall into cases 1 or 2 in expectation. Increasing table size or bucket size improves this part of the math.
On really big inputs, it's probably still healthy to do a pass or two of radix sort so that the hash table accesses are cheap.
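Here is a minimal sketch of the no-probing classifier described above, assuming u64 keys, a single value slot per bucket, and a throwaway multiplicative hash (all illustrative choices, not a tuned implementation):

    enum Class {
        FirstInstance,  // case (1): slot was empty, claim it
        KnownDuplicate, // case (2): slot already holds this exact value
        DidNotFit,      // case (3): slot holds a different value, defer to a fallback pass
    }

    fn classify(input: &[u64], n_buckets: usize) -> Vec<Class> {
        // One value slot per bucket, no probing, no resizing.
        let mut table: Vec<Option<u64>> = vec![None; n_buckets];
        let mut classes = Vec::with_capacity(input.len());
        for &x in input {
            // Any reasonable hash works; a multiplicative hash keeps the sketch short.
            let bucket = (x.wrapping_mul(0x9E37_79B9_7F4A_7C15) as usize) % n_buckets;
            let slot = table[bucket]; // Option<u64> is Copy, so this is a plain read
            let class = match slot {
                None => {
                    table[bucket] = Some(x);
                    Class::FirstInstance
                }
                Some(y) if y == x => Class::KnownDuplicate,
                Some(_) => Class::DidNotFit,
            };
            classes.push(class);
        }
        classes
    }

    fn main() {
        let data = [3u64, 7, 3, 7, 11, 3];
        let mut dups = 0;
        for class in classify(&data, 4) {
            if let Class::KnownDuplicate = class {
                dups += 1;
            }
        }
        println!("{} of {} elements classified as known duplicates", dups, data.len());
    }

Anything that lands in the DidNotFit case still has to be resolved some other way (another pass with a different hash, a sort, etc.), which is why the table and bucket sizes matter for how much work is left over.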
Do I understand correctly, that the data being tested is fully random? If so I'd be curious to see how the results change if a Zipfian distribution is used. I don't have hard data for sorting specifically, but I suspect the majority of real-world data being sorted - that isn't low cardinality or already nearly or fully sorted - to follow a Zipfian distribution rather than true randomness. The Rust std lib sort_unstable efficiently filters out common values using the pdqsort repeat pivot flip partition algorithm.
One important trick is ruling out cases where what you said is nonsense. That can't be what you meant, so by ruling it out we're helping you fall into the pit of success, and Rust does an admirable job of that part.
Unfortunately, because of Rice's theorem we must also rule out some cases where what you said wasn't nonsense when we do this. Hopefully not too many. It's pretty annoying when this happens; however, it's also logically impossible to always be sure (Rice again) - maybe what you wrote was nonsense after all. So I find that acceptable.
For me Rust rarely hits the sweet spot, since there are easier languages for high-level programs (Python, C#, Swift...) and for low-level programs you usually want to do unsafe stuff anyway.
I can see useful applications only where you cannot have a garbage collector AND need a safe high-level language. Even there, Swift's ARC could be used.
By definition, Undefined Behaviour can't have been what you meant. So all the languages where you can just inadvertently "say" Undefined Behaviour aren't meaningfully achieving this goal.
I must say I'm also extremely dubious about some defined cases. Java isn't UB when we try to call abs() on the most negative integer... but the answer is just the most negative integer again, and I wonder whether that's what anybody meant.
Or the "why we can't have nice things" theorem. Colloquially, anything interesting about a Turing-complete program is unprovable. You may have proved specific instance of this if you went through a formal computer science program and reduced problems like "this program never uses more than X cells on the tape" to the halting problem.