Posted by ksec 5 days ago
We based the implementation on two very new papers from Microsoft Research, SPANN and SPFresh. SPANN is a hybrid graph/tree algorithm that does a fantastic job of scaling larger-than-RAM indexes (https://arxiv.org/abs/2111.08566), and SPFresh expands upon it by defining a set of background operations that maintain the index's performance and recall while it's continuously updated in-place (https://arxiv.org/html/2410.14452v1). The novel thing here is designing all the SPANN _and_ SPFresh operations to be transactional, and integrating them into InnoDB, MySQL's default storage engine.
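As a rough mental model, a SPANN-style query probes the few partitions whose centroids are closest to the query vector and scans only those partitions' posting lists. This is a minimal sketch under assumed names and structures, not PlanetScale's actual code:

```python
import heapq
import math

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def spann_search(query, centroids, posting_lists, vectors, nprobe=2, k=3):
    """SPANN-style search: rank partitions by centroid distance (the
    in-memory part), then scan only the nprobe closest partitions'
    posting lists (the on-disk part) and re-rank the candidates."""
    nearest = heapq.nsmallest(nprobe, range(len(centroids)),
                              key=lambda i: l2(query, centroids[i]))
    candidates = [vid for pid in nearest for vid in posting_lists[pid]]
    return heapq.nsmallest(k, candidates, key=lambda vid: l2(query, vectors[vid]))
```

The key property is that only `nprobe` posting lists are read per query, which is what lets the bulk of the index live on disk.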
This tight integration fundamentally means that inserts, updates and deletes of vector data in MySQL are always reflected immediately in the index as part of committing your transaction. But it also means that the indexes are fully covered by the MySQL binlog; they recover from hard crashes just fine. They're also managed by MySQL's buffer pool, so they scale to terabytes of data, just like any other table. Crucially, they're fully integrated with the query planner, so they can be used in any query, including JOINs and WHERE clauses (something that many other vector indexes really struggle with).
We plan to release a whitepaper on our transactionality extensions to SPFresh, which I think will be super interesting, but meanwhile please feel free to test the beta and ask me any questions (here, or by emailing PlanetScale support). Thanks!
To be honest, the fact that MySQL is owned by a corporation is quite a concern for me ATM.
I'd be curious how y'all represent the posting lists for each partition in InnoDB:
- what IDs are you storing in the posting lists?
- how are the posting lists represented on disk? are they using compression and/or some form of skip indexing? the paper seemed to use a pretty simple block-based representation, but I'm curious what works well in practice.
- how do the posting list data structures themselves handle incremental updates and MVCC?
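For context on what the compression question is getting at: a common way to compress a posting list is to delta-encode the sorted IDs and varint-pack the gaps. This is purely illustrative of the general technique, not how PlanetScale actually stores them:

```python
def encode_posting_list(ids):
    """Delta-encode a sorted list of row IDs, then pack each gap as a
    little-endian varint (7 data bits per byte, high bit = continuation)."""
    out = bytearray()
    prev = 0
    for i in ids:
        gap = i - prev
        prev = i
        while gap >= 0x80:
            out.append((gap & 0x7F) | 0x80)
            gap >>= 7
        out.append(gap)
    return bytes(out)

def decode_posting_list(data):
    """Inverse of encode_posting_list: unpack varints, then prefix-sum."""
    ids, cur, shift, gap = [], 0, 0, 0
    for b in data:
        gap |= (b & 0x7F) << shift
        if b & 0x80:
            shift += 7
        else:
            cur += gap
            ids.append(cur)
            gap, shift = 0, 0
    return ids
```

Skip indexing would layer on top of this: store the first ID of every fixed-size block so a reader can jump past blocks that can't contain a sought ID.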
We're storing IDs from a ghost column that is created in the table where you insert vector data. This works very well in practice and allows the vector values in the table to be updated, because each update is translated into a delete + insert in the vector index via the ghost ID.
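The ghost-ID indirection described above can be sketched like this (names and structure are assumptions for illustration, not the actual InnoDB layout):

```python
class VectorIndex:
    """Toy model of ghost-ID indirection: each row gets a hidden ID, and
    an UPDATE of a row's vector becomes delete(old_id) + insert(new_id)
    on the index side."""

    def __init__(self):
        self._next_id = 0
        self.by_ghost_id = {}   # ghost ID -> vector (the index side)
        self.row_ghost = {}     # primary key -> current ghost ID (the table side)

    def insert(self, pk, vector):
        gid = self._next_id
        self._next_id += 1
        self.by_ghost_id[gid] = vector
        self.row_ghost[pk] = gid
        return gid

    def update(self, pk, new_vector):
        # The UPDATE is translated into a delete + insert: the old ghost ID
        # is retired and the row is pointed at a fresh one.
        old = self.row_ghost.pop(pk)
        del self.by_ghost_id[old]
        return self.insert(pk, new_vector)
```

The payoff is that the index never has to mutate an entry in place; an update is just a tombstone plus a fresh insert keyed by a new ID.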
We have abstracted away the quantization system from the index; for the initial release, vector data is stored in raw blocks, like in the paper. Query performance is good, but disk usage is high. We're actively testing different quantization algorithms to see which ones we end up offering on GA. We're hoping our beta users will help us guide this choice!
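To make the trade-off concrete, here is one of the simplest candidate schemes, symmetric int8 scalar quantization, which cuts storage of float32 vectors by 4x at some recall cost. This is just an example of the kind of algorithm being evaluated, not necessarily what ships in GA:

```python
def quantize_int8(vec):
    """Symmetric scalar quantization: map floats to int8 plus one
    per-vector scale factor. ~4x smaller than float32 storage."""
    scale = max(abs(x) for x in vec) / 127 or 1.0
    return [round(x / scale) for x in vec], scale

def dequantize_int8(q, scale):
    """Approximate reconstruction of the original vector."""
    return [x * scale for x in q]
```

Product quantization and binary quantization are the other usual suspects, trading progressively more recall for smaller footprints.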
Incremental updates and MVCC are _extremely tricky_, for both correctness and performance. As you've surely noticed, the hard thing here is that the original paper is very focused on LSM trees, because it exploits the fact that LSM trees get compacted lazily to perform incremental updates to the posting lists ('merges'). MySQL (and Postgres, and really all relational databases) is B-tree based, and in-place updates for B-trees are expensive! I think we came up with very interesting workarounds for the problem, but it's quite a bit to drill into in an HN comment. Please stay tuned for our whitepaper. :)
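For readers unfamiliar with the LSM trick being referenced: updates land as small delta records and are only folded into the base posting list at compaction time, so nothing is rewritten in place. A minimal sketch of that LSM-style pattern (the B-tree workarounds themselves are left to the whitepaper):

```python
class LazyPostingList:
    """Sketch of LSM-style lazy merging: adds/deletes append to a delta
    log; the base list is rewritten only when compaction runs."""

    def __init__(self, base):
        self.base = sorted(base)
        self.deltas = []          # log of ('add' | 'del', id) records

    def add(self, vid):
        self.deltas.append(("add", vid))

    def delete(self, vid):
        self.deltas.append(("del", vid))

    def compact(self):
        # Fold the whole delta log into the base list in one pass,
        # amortizing the cost of many updates over a single rewrite.
        ids = set(self.base)
        for op, vid in self.deltas:
            (ids.add if op == "add" else ids.discard)(vid)
        self.base, self.deltas = sorted(ids), []
        return self.base
```

In an LSM engine this compaction falls out of the storage layer for free; on a B-tree it has to be scheduled and made transactional explicitly, which is the crux of the difficulty described above.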
I'd be curious if y'all end up supporting adding filter attributes to the inverted index that can then be pushed down into the posting list traversal.
for example, a restaurant search app may have (1) an embedding for each restaurant but also (2) a cuisine. then, if a restaurant has `cuisine = Italian`, we'd also store its ghost ID in a `cuisine:Italian` posting list.
at query time, the query planner could take a query like `SELECT * FROM t1 WHERE cuisine = 'Italian' ORDER BY DISTANCE(..)` and emit a plan that efficiently intersects the `cuisine:Italian` posting list with the union of the partitions' posting lists.
this feels to me like a potential strength of the inverted indexing approach compared to graph-based approaches, which struggle with general filtering (e.g. the Filtered-DiskANN paper).
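The pushdown the parent comment describes boils down to a merge-intersection of sorted ID lists before any distance computation happens. A hypothetical sketch (function names and shapes are assumptions):

```python
def intersect_sorted(a, b):
    """Merge-intersect two sorted posting lists of ghost IDs."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

def filtered_candidates(partition_lists, attribute_list):
    """Union the probed partitions' posting lists, then intersect with an
    attribute posting list (e.g. cuisine:Italian) before re-ranking by
    distance. Only the survivors ever have their vectors loaded."""
    union = sorted(set().union(*partition_lists))
    return intersect_sorted(union, attribute_list)
```

Because both sides are plain sorted ID lists, the filter costs a linear merge rather than a post-hoc re-check of every nearest neighbor, which is exactly what graph-based indexes can't easily do.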
Andy Pavlo, Jan 4, 2024
P.S.: R.I.P. Ottertune.
---
Will these converge on a common syntax for vector fields, indexes, and comparison functions in the near future? Or will vector implementations just add momentum to the increasing incompatibility in the MySQL-ish ecosystem?
I'm asking this because most vector databases have tight limits on max dimensions, and OpenAI's biggest embeddings model has something like 3,000 dimensions. I was wondering if there is something to gain if, for example, OpenAI releases a new embeddings model with 8,000 dimensions or more.
As for separating storage and compute: we don't do that. One of our key selling points here is that this is vector data fully integrated into your relational database with ACID semantics. It's very hard to separate storage and compute and keep this behavior!
“This customer has called our support line 20 times in the past year, pop up the most relevant prior tickets to their current request.”
In most cases the justification is performance, and that the data is ephemeral, but people want to store vectors in lock-step with their data.