If you have priors about the data distribution, then it's possible to design algorithms which use that extra information to perform MUCH better. E.g. a human searching a physical paper dictionary can zoom into the right bunch of pages faster than pure idealized binary search; it's a separate matter that it's hard for humans to continue a binary search to the very end, and we might default to scanning linearly for the last few iterations (cognitive convenience, affordances of human wetware, etc.).
In mathematical language, searching a sorted list is basically inverting a monotonic function, by using a closed-loop control algorithm. Often, we could very well construct a suitable cost function and use gradient descent or its accelerated cousins.
More generally, the best bet for solving a problem more efficiently is always to use more information about the specific problem you want to solve, instead of pulling up the solution for an overly abstract representation. That can offer scalable, orders-of-magnitude speedups compared to the constant-factor speedups from just using hardware better.
https://github.com/protocolbuffers/protobuf/blob/44025909eb7...
1. Check for dense list: O(1)
2. Check upper bound
3. Constant trip count binary search
The constant trip count is great for the branch predictor, and the core loop is pretty tightly optimized for the target hardware, avoiding multiplies. Every attempt to get more clever made the loop worse and did not pay for itself. It's hard because it's an array-of-structs format with a size of 12, and mostly pretty small N.
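For flavor, here is a minimal sketch of the constant-trip-count idea on a plain sorted uint32_t array (my own illustration, not the upb code): the number of loop iterations depends only on the array length, never on the probed values, so the loop branch is perfectly predictable.

#include <stddef.h>
#include <stdint.h>

// Sketch: branchless lower_bound whose trip count depends only on n.
static size_t lower_bound_fixed_trip(const uint32_t *a, size_t n, uint32_t key) {
  if (n == 0) return 0;
  const uint32_t *base = a;
  while (n > 1) {
    size_t half = n / 2;
    // Conditional move instead of a data-dependent branch.
    base += (base[half - 1] < key) ? half : 0;
    n -= half;
  }
  return (size_t)(base - a) + (*base < key);
}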
I know people train themselves into grokking this and reading and emitting this way, but it sounds like writing "bork bork bork bork" runes to me.
I'm glad Rust feels more like Ruby and Python and that method and field names are legible.
My eyes just glaze over:
UPB_API_INLINE
const struct upb_MiniTableField* upb_MiniTable_FindFieldByNumber(
    const struct upb_MiniTable* m, uint32_t number) {
  const uint32_t i = number - 1;  // 0 wraps to UINT32_MAX
  // Ideal case: index into dense fields
  if (i < m->UPB_PRIVATE(dense_below)) {
    UPB_ASSERT(m->UPB_ONLYBITS(fields)[i].UPB_ONLYBITS(number) == number);
    return &m->UPB_ONLYBITS(fields)[i];
  }
  // Early exit if the field number is out of range.
  uint32_t hi = m->UPB_ONLYBITS(field_count);
  uint32_t lo = m->UPB_PRIVATE(dense_below);
  UPB_ASSERT(hi >= lo);
  uint32_t search_len = hi - lo;
  if (search_len == 0 ||
      number > m->UPB_ONLYBITS(fields)[hi - 1].UPB_ONLYBITS(number)) {
    return NULL;
  }
  // Slow case: binary search
  const struct upb_MiniTableField* candidate;
#ifndef NDEBUG
  candidate = UPB_PRIVATE(upb_MiniTable_ArmOptimizedLowerBound)(
      m, lo, search_len, number);
  UPB_ASSERT(candidate ==
             UPB_PRIVATE(upb_MiniTable_LowerBound)(m, lo, search_len, number));
#elif UPB_ARM64_ASM
  candidate = UPB_PRIVATE(upb_MiniTable_ArmOptimizedLowerBound)(
      m, lo, search_len, number);
#else
  candidate = UPB_PRIVATE(upb_MiniTable_LowerBound)(m, lo, search_len, number);
#endif
  return candidate->UPB_ONLYBITS(number) == number ? candidate : NULL;
}

Honestly I don't see much difference between
upb_MiniTable_FindFieldByNumber
and upb::MiniTable::FindFieldByNumber

That's why b-trees are the standard in databases. The data could be anything, and its characteristics could massively change at any time, as you suddenly import a whole bunch of new rows at once.
And while you can certainly design algorithms around e.g. gradient descent to try to accelerate lookup, b-trees are already incredibly fast, and have lots of other benefits like predictable worst-case performance and I/O requirements, supporting range scans, ordered traversal, prefix conditions, etc.
So yes, you can certainly design lookup algorithms that are more efficient for particular data distributions, but they will also often lack other important properties. And b-trees are already so fast that improvements are often negligible -- even if another algorithm produces a closer initial guess, it may be slower to locate the final item, or it may be faster on average but have horrible worst-case performance that makes it unusable.
Even with a paper dictionary, I've always used pretty much a binary search beyond the first initial guess, which only saves you a couple of hops. And actually, once I get to the right handful of pages I'm probably more linear than I should be, and I'd probably be faster if I tried to do a rigorous binary search, but I have to balance that with how long it takes to flip pages.
I did not bookmark it and about twice a year I go searching for it again. Some say he’s still searching to this day.
[1]: https://dl.acm.org/doi/epdf/10.1145/1411203.1411220
IOW your prior on the data distribution lets you skip the first 4-5 binary chops.
Also, don't you learn something about the data while performing the binary search? Like, if you are constantly below the estimate, you could guess that the distribution is biased toward large values and adjust your guess based on this prediction.
Yes, absolutely!
I forgot to share this general perspective above, and it's too late to edit, so I'll add it here...
Binary search assumes only monotonicity; splitting your interval into two equal parts extracts one bit of information per step, and any other split would extract less information on average. One bit of information per step is how you end up needing log(n) steps to find the answer.
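In symbols, this is just the standard counting argument:

$$\text{steps} \;\ge\; \frac{\text{bits to identify the answer}}{\text{bits per comparison}} \;=\; \frac{\log_2 n}{1} \;=\; \log_2 n.$$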
To accelerate your search, you basically need to extract those log(n) bits as fast as you can. You can think of that as leveraging both the prior, and everything you learn along the way -- to adaptively design each step to be the optimal experiment to extract maximum amount of information. And adaptive local models of your search space (gradient / hessian / etc) allow you to extract many more bits of information from each query / experiment, provided the function you are inverting has some local structure.
PS: That is why we leverage these ideas to "search" for the optimum, among a space of solutions.
If we guess that there is a bias in the distribution based on recently seen elements, the guess is at least as likely to be wrong as it is to be right. And if we guess incorrectly, in the worst case the algorithm degrades to a linear scan.
Unless we have prior knowledge. For example: if there is a particular distribution, or if we know we're dealing with integers without any repetition (i.e. each element is strictly greater than the previous one), etc.
This is true for abstract and random data. I don't think it's true for real world data.
For example, Python's sort function "knows nothing" about the data you're passing in. But it does look for some shortcuts, and these end up saving time on average.
You have another piece of information: you don't only know whether the element was before or after the compared element, you also know the delta between what you looked at and what you're looking for. And you also have the delta from the previous item you looked at.
Actually deciding what to do with that information without incurring a bunch more cache misses in the process may be tricky.
Depending on the problem space such assumptions can prove right enough to be worth using despite sometimes being wrong. Of course if you've got the compute to throw at it (and the problem is large) take the Contact approach: why do one when you can do two in parallel for twice the price (cycles)?
Thinking about it--yeah, if you can anticipate anything like a random distribution it's a few extra instructions to reduce the number of values looked up. In the old days that would have been very unlikely to be a good deal, but with so many algorithms dominated by the cache (I've seen more than one case where a clearly less efficient algorithm that reduced memory reads turned out better) I suspect there's a lot of such things that don't go the way we learned them in the stone age.
You can do better if the list is stable by reusing information.
But gathering that information during searches is going to require great complexity to leverage, as searches are an irregular information gathering scheme.
So build RAM speedup structures up front.
1) Create a table that maps the first 8 bits to upper and lower indexes into the list, then binary search over the last 8 bits. That cuts the search roughly in half.
2) Or go all the way and create an array of 32,768 indexes, with all 1's for misses. Then a lookup is O(1).
Stable lists allow a sliding parametric trade-off between RAM lookup and binary search, from full lookup to full binary search.
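A rough sketch of scheme 1, assuming 16-bit keys and a made-up table type (names are mine, purely illustrative): a 256-entry table, built once for the stable sorted list, maps the top 8 bits of a key to the slice of the list worth binary searching.

#include <stdint.h>

// Illustration of scheme 1: table[b] holds the [lo, hi) slice of the sorted
// list whose keys have b as their top 8 bits; empty buckets have lo == hi.
struct range { uint32_t lo, hi; };

static void build_top8_table(const uint16_t *a, uint32_t n, struct range table[256]) {
  uint32_t pos = 0;
  for (uint32_t bucket = 0; bucket < 256; bucket++) {
    table[bucket].lo = pos;
    while (pos < n && (uint32_t)(a[pos] >> 8) == bucket) pos++;
    table[bucket].hi = pos;
  }
}

static int32_t find_top8(const uint16_t *a, const struct range table[256], uint16_t key) {
  uint32_t lo = table[key >> 8].lo, hi = table[key >> 8].hi;
  while (lo < hi) {                       // binary search only within the bucket
    uint32_t mid = lo + (hi - lo) / 2;
    if (a[mid] < key)      lo = mid + 1;
    else if (a[mid] > key) hi = mid;
    else                   return (int32_t)mid;
  }
  return -1;                              // miss
}

Scheme 2 is the degenerate case where the table is indexed by the whole key and the binary search disappears.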
Never thought about it this way. Brilliant!
Do you mean using a better estimator for the median value? Or something else?
You don't even need priors. See interpolation search, where knowing the position and value of two elements in a sorted list already lets the search make an educated guess about where the element it's looking for sits, by interpolating between those two elements.
That's a prior about the distribution, if a relatively weak one (in some sense, at least).
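For concreteness, a minimal interpolation search sketch over a sorted uint32_t array (my own sketch, not from the article): the probe position is a linear interpolation between the endpoints instead of the midpoint, which gives O(log log n) expected steps on roughly uniform data but can degrade badly on skewed data.

#include <stddef.h>
#include <stdint.h>

// Sketch of interpolation search: guess the position by assuming values grow
// roughly linearly between a[lo] and a[hi].
static int64_t interpolation_search(const uint32_t *a, size_t n, uint32_t key) {
  if (n == 0) return -1;
  size_t lo = 0, hi = n - 1;
  while (lo <= hi && key >= a[lo] && key <= a[hi]) {
    size_t pos = lo;
    if (a[hi] > a[lo]) {
      uint64_t num = (uint64_t)(key - a[lo]) * (hi - lo);
      pos = lo + (size_t)(num / (a[hi] - a[lo]));   // the interpolated guess
    }
    if (a[pos] == key) return (int64_t)pos;
    if (a[pos] < key)  lo = pos + 1;
    else if (pos == 0) break;                        // avoid size_t underflow
    else               hi = pos - 1;
  }
  return -1;  // not found
}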
Essentially, this means that all loops are already unrolled from the processors point of view, minus a tiny bit of overhead for the loop itself that can often be ignored. Since in binary search the main cost is grabbing data from memory (or from cache in the "warm cache" examples) this means that the real game is how to get the processor to issue the requests for the data you will eventually need as far in advance as possible so you don't have to wait as long for it to arrive.
The difference for quad search (or anything higher than binary) is that instead of taking one side of each branch (and thus prefetching deeply in one direction), you prefetch all the possible cases but with less depth. This way you are guaranteed to have successfully issued the prefetch you will eventually need, and are spending slightly less of your bandwidth budget on data that will never be used in the actual execution path.
As others are pointing out, "number of comparisons" is an almost useless metric when comparing search algorithms if your goal is predicting real world performance. The limiting factor is almost never the number of comparisons you can do. Instead, the potential for speedup depends on making maximal use of memory and cache bandwidth. So yes, you can view this as loop unrolling, but only if you consider how branching on modern processors works under the hood.
For instance, if you have 8 elements, 01234567, and you're looking for 1, with binary, you'd fetch 4, 2, and then 1. With quaternary, you'd fetch 2, 4, 6, then 1. Obviously, if you only have 8 elements, you'd just delegate to the SIMD instruction, but if this was a much larger array, you'd be doing more work.
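As a sketch of what "issuing the requests as far in advance as possible" can look like in plain binary search (assuming GCC/Clang's __builtin_prefetch): each iteration prefetches both candidate midpoints of the next iteration, so whichever way the comparison goes, the load is already in flight. The multiway versions generalize this by probing several split points per level instead.

#include <stddef.h>
#include <stdint.h>

// Sketch: binary search that prefetches both possible next midpoints before
// the current comparison resolves.
static int64_t binary_search_prefetch(const uint32_t *a, size_t n, uint32_t key) {
  size_t lo = 0, hi = n;                  // search in [lo, hi)
  while (lo < hi) {
    size_t mid = lo + (hi - lo) / 2;
    __builtin_prefetch(&a[lo + (mid - lo) / 2]);            // if we go left
    __builtin_prefetch(&a[mid + 1 + (hi - mid - 1) / 2]);   // if we go right
    if (a[mid] < key)      lo = mid + 1;
    else if (a[mid] > key) hi = mid;
    else                   return (int64_t)mid;
  }
  return -1;
}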
I guess on a modern processor, eliminating the data dependency is worth it because the processor's branch prediction and speculation only follows effectively a single path.
Would be interesting to see this at a machine cycle level on a real processor to understand exactly what is happening.
Regardless, both kinds of search are O(log N) with different constants. The constants don’t matter so much in algorithms class but in the real world they can matter a lot.
[1] https://lalitm.com/post/exponential-search/ [2] https://en.wikipedia.org/wiki/Exponential_search
HEAD /users/1 -> 200 OK
HEAD /users/2 -> 200 OK
HEAD /users/4 -> 200 OK
...
HEAD /users/2048 -> 200 OK
HEAD /users/4096 -> 404 Not Found
And then a binary search between 2048 and 4096 to find the most recent user (and incidentally, the number of users). Great info to have if you're researching competing SaaS companies.

For 4 byte keys and 4 byte child pointers (or indexes into an array), your inner nodes would have 7 keys, 8 child pointers and 1 next pointer, completely filling a 64 byte cache line, and your tree depth for 1 million entries would go down from ~20 to ~7, the top few levels of which are likely to remain cache resident.
With some thought, it's possible to use SIMD on B-tree nodes to speed up the search within the node, but it's all very data dependent.
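As a toy illustration (a sketch assuming AVX2, int32 keys, and a node that stores its 7 keys in an 8-wide array padded with INT32_MAX -- a slight simplification of the exact 64-byte layout above): one vector compare plus a popcount replaces the within-node search.

#include <immintrin.h>
#include <stdint.h>

// Sketch: pick which of a node's 8 children to descend into by counting, in
// one AVX2 compare, how many of its keys are smaller than the search key.
static int child_index(const int32_t keys[8], int32_t key) {  // keys[7] = INT32_MAX pad
  __m256i k  = _mm256_loadu_si256((const __m256i *)keys);
  __m256i q  = _mm256_set1_epi32(key);
  __m256i lt = _mm256_cmpgt_epi32(q, k);   // lane i set if keys[i] < key
  unsigned mask = (unsigned)_mm256_movemask_ps(_mm256_castsi256_ps(lt));
  return __builtin_popcount(mask);         // number of keys below `key`
}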
It seems to me like there should be a sort order that stores the items as a fully-dense, left-shifted binary tree from top to bottom (e.g. like the implicit heap in an in-place heap sort, but a binary search tree instead of a heap). Is there a name for this? Does it show any performance wins in practice?
The outline would be:
a) use a gather to grab multiple elements from 16 evenly spaced locations
b) compare these in parallel using a SIMD instruction
c) focus in on the correct block
d) if the block is small revert to linear search, else repeat the gather/compare cycle
Even though the gather instruction is reading from non-contiguous memory and reading more than you normally would need to with a binary search, enabling a multiway compare and collapsing 4 levels of binary search should be a win on large tables.
You also may not be able to do a full 16-way comparison for all data types. Searching for float64 will limit you to 8 way comparisons (with AVX-512) but int32, float32 will allow 16 way comparisons.
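A rough sketch of the gather/compare step (a)-(c), assuming AVX-512F, int32 keys, and indices that fit in 32 bits (all names are mine, purely illustrative):

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

// Sketch of one narrowing step: gather 16 evenly spaced probes from [lo, hi),
// compare them all against `key`, and use the mask's popcount to pick which
// of the 17 sub-blocks can contain the key. Assumes hi - lo >= 17.
static void narrow_16way(const int32_t *a, size_t *lo, size_t *hi, int32_t key) {
  size_t step = (*hi - *lo) / 17;
  int32_t idx[16];
  for (int i = 0; i < 16; i++) idx[i] = (int32_t)(*lo + (size_t)(i + 1) * step);

  __m512i vidx   = _mm512_loadu_si512(idx);
  __m512i probes = _mm512_i32gather_epi32(vidx, a, sizeof(int32_t));
  __mmask16 lt   = _mm512_cmplt_epi32_mask(probes, _mm512_set1_epi32(key));
  int k = __builtin_popcount((unsigned)lt);   // probes[0..k) are < key

  // The key can only lie strictly after probe k-1 and no later than probe k.
  if (k > 0)  *lo = (size_t)idx[k - 1] + 1;
  if (k < 16) *hi = (size_t)idx[k] + 1;
}

Whether this actually beats a cache-friendly layout plus plain binary search is exactly the empirical question raised below; gathers are not cheap.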
I bet you a binary search is in fact faster than any gather based methodology.
In general I love this article, it took what I’ve often wondered about and did a perfect job exploring with useful ablation studies.
The SIMD part is just in the last step, where it uses SIMD to search the last 16 elements.
The Quad part is that it checks 3 points to create 4 paths, but also it's searching for the right block, not just the right key.
The details are a bit interesting. The author chooses to use the last element in each block for the quad search. I'm curious how the algorithm would change if you used the first element in each block instead, or even an arbitrary element.
a: [1,3,5,7,8,9,10,15]
x: 8 (query value)
For this array, we would compare a[0], a[3], a[7] (left/mid/right) by subtracting x (8), and we would get d = [-7, -1, 7].
Now, normally, with binary search, because 8 > mid, we would go to (mid+right)/2, BUT we already have some extra information: we see that x is closer to a[3] (diff of 1) than to a[7] (diff of 7), so instead of going to the middle between mid and right, we could choose a new "mid" point that's closer to the desired value (maybe as a ratio of d[right] - d[mid]).
So left = mid + 1 and right stays the same, but the new mid is NOT halfway between left and right; it is (left+right)/2 + ratioOffset.
Where ratioOffset makes the mid go closer to left or right, depending on d.
The idea is quite obvious, so I am pretty sure it already exists.
But what if we use SIMD with it? So we know not only which block the number is in, but also which part of the block the number is likely in. Or is this what the article actually says?
Weird that I hadn't heard about it before; is it not used much in practice?
One reason I could see is that binary search is fast enough and easy to implement. Even on the largest datasets it's still just a few tens of loop iterations.
As soon as you move away from either (or both) of these assumptions then there are likely to be many tweaks you can make to get better performance.
What the classical algorithms do offer is a very good starting point for developing a more optimal/efficient solution once you know more about the specific shape of data or quirks/features of a specific CPU.
When you start to get at the pointy end of optimising things then you generally end up looking at how the data is stored and accessed in memory, and whether any changes you can make to improve this don't hurt things further down the line. In a job many many years ago I remember someone who spent way too long optimising a specific part of some code only to find that the overall application ran slower as the optimisations meant that a lot more information needed later on had been evicted from the cache.
(This is probably just another way of stating Rob Pike's 5th rule of programming which was itself a restatement of something by Fred Brooks in _The Mythical Man Month_. Ref: https://www.cs.unc.edu/~stotts/COMP590-059-f24/robsrules.htm...)