Posted by jorangreef 2 days ago

Spiral (spiraldb.com)
258 points | 87 comments
mellosouls 2 days ago|
This is a pretty website, but it doesn't actually give us anything to look at; it's just blurb.

For anybody confused, the "Vortex" stuff is the underlying data format used but isn't the database/whatever this website (by the creators of Vortex) is pushing.

kmoser 2 days ago|
> Spiral is our database built on Vortex [...]

No surprise there's nothing to look at, since it's basically a press release posted on their blog.

pauldix 2 days ago||
I've been following this team's work for a while and what they're doing is super interesting. The file format they created and put into the LF, Vortex, is a very welcome innovation in the space: https://github.com/vortex-data/vortex

I'm excited to start doing some experimentation with Vortex to see how it can improve our products.

Great stuff, congrats to Will and team!

dist-epoch 2 days ago|
https://vortex.dev doesn't work in my Firefox:

Application error: a client-side exception has occurred while loading vortex.dev (see the browser console for more information).

Console: unable to create webgl context

miloignis 2 days ago|||
Presumably you don't have WebGL enabled or supported - the main page is just a cute 3D landing page.

You may be interested in https://github.com/vortex-data/vortex which of course has an overview and links to their docs and benchmark pages.

arusahni 2 days ago||||
Works for me. Mozilla/5.0 (X11; Linux x86_64; rv:142.0) Gecko/20100101 Firefox/142.0
brunohaid 2 days ago|||
If anyone ever writes a post on why that error keeps happening in browsers that should support it, I'd be incredibly grateful. We keep seeing it in our (unrelated-to-OP company) Sentry logs with zero chance of reproducing it.
shakna 2 days ago|||
A handful of causes:

+ No hardware acceleration enabled.

+ Multiple graphics cards, and the browser can't decide which to use.

+ Race conditions that rarely cause a 3D context to be mounted onto a 2D one (often happens with Unity).

bflesch 2 days ago|||
I assume it's just people who do not have a graphics card
spankalee 2 days ago||
I'm curious... I'm not a database or AI engineer. The last time I did GPU work was over a decade ago. What is the point of the "saturate an H100" metric?

I would think that a GPU isn't just sitting there waiting on a process that's in turn waiting for one query to finish to start the next query, but that a bunch of parallel queries and scans would be running, fed from many DB and object store servers, keeping the GPUs as utilized as possible. Given how expensive GPUs are, it would seem like a good trade to buy more servers to keep them fed, even if you do want to make the servers and DB/object store reads faster.

otterley 2 days ago||
The idea is that in a pipeline of work, throughput is limited by the slowest component. H100 GPUs have a lot of memory bandwidth. The question then becomes how to eliminate any bottlenecks between the data store and the GPU's memory.

First is the storage bottleneck. Network-attached storage is usually a bottleneck for uncached data. Then there is CPU work decoding data. Spiral claims that their table format is ready to load by the GPU so they can bypass various CPU-bound decoding stages. Once you eliminate storage and CPU bottlenecks, the remaining bottleneck is usually the PCI bus that sits between the host memory and the GPU, and they can't solve that themselves. (And no amount of parallelization can help when the bus is saturated.) What they can do is use the network, the host bus, and the GPU more efficiently by compressing and packing data with greater mechanical sympathy.
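
As a rough back-of-envelope (a sketch only; all numbers are illustrative, not Spiral's figures):

```python
# Pipeline throughput is set by the slowest stage. All numbers below are
# illustrative assumptions; real figures vary by instance type and setup.
stages_gb_per_s = {
    "object store over network": 10,    # e.g. ~80 Gbps NIC
    "CPU decode (parquet etc.)": 2,     # host-side decompression + decoding
    "PCIe Gen5 x16 to GPU": 64,         # host-to-device copy
    "H100 HBM": 3350,                   # on-device memory bandwidth
}

bottleneck = min(stages_gb_per_s, key=stages_gb_per_s.get)
print(f"~{stages_gb_per_s[bottleneck]} GB/s end to end, limited by: {bottleneck}")
# Remove the CPU decode stage and the limit moves to the network or the bus.
```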

They've left unanswered how they're going to commercialize it, but my guess is that they're going to use a proprietary fork of Vortex that provides extra performance or features, or perhaps they'll offer commercial services or integrations that make it easier to use. The open-source release gives its customers a Reason to Believe, in marketing parlance.

vouwfietsman 2 days ago||
My guess is that just the raw data size, combined with the physical limitations of your RU, makes it hard for the GPU to be fully utilized. Instead you will always be stuck with the CPU (decompressing/interpreting/uploading parquet) or bandwidth (transfer from S3) being the bottleneck.

Seems they are targeting a low-to-no-overhead path from S3 bucket to GPU: comparable compression with faster random access, streamed encoding from S3 while in flight, zero copy to the GPU.

Not 100% clear on the details, but I doubt they can actually saturate the CPU/GPU bus; rather they just saturate GPU utilization, which is itself dependent on multiple possible bottlenecks but generally not on bus bandwidth.

That's not criticism: it literally means you can't do better unless you improve the GPU utilization of your AI model.

paxys 2 days ago||
Wasn't "3.0" supposed to be crypto? Is it AI now? It's hard to keep track.
bee_rider 2 days ago||
No, Web 3.0 was the Semantic Web. Thankfully, the silly idea of having major-number versions for the entire internet died when that didn't happen. Now we can safely ignore anybody who tries to do it.
stronglikedan 2 days ago|||
I think we're in a new era, so I consider this version of the web to be "AAI 1", and next year it will be "AAI 2", and so on. This era will be hereafter referred to as "in the year of the AI overlord", or "Anno Domini Artificialis Intellegentiae Artificialis" (according to google translate).
jppope 2 days ago|||
I think some of the crypto companies tried to get cute and leapfrog 3.0, going straight to 4.0, so that would put us at either 5.0, 4.0, 3.1, 2.2, or 2.1, depending on how you feel about the crypto space and which groups you were validating.
ionwake 2 days ago|||
I think AI is 4.0

EDIT> Maybe it's like how some people call the 4th dimension time when there is in fact a 4th spatial dimension. So I guess if this is the 3rd data dimension, what is the 4th one?

adfm 2 days ago||
You're conflating concepts. FWIW, Web3 is snake oil, or wishful thinking at best. As much as people like to bang on the old Web 2.0, it still holds up conceptually. And if you only know it as a buzzword, I suggest you go back and familiarize yourself with it if you're looking for incremental change.

Who knows, maybe a Web 3.1 will deliver us from enshittification.

vouwfietsman 2 days ago||
Although I welcome a parquet successor, I am not particularly interested in a more complicated format. Random access time improvements are nice, but really what I would like is just storing multiple tables in a single parquet file.

When I read "possible extension through embedded wasm encoders" I can already imagine the C++ linker hell required to get this thing included in my project.

I also don't think a lot of people need "ai scale".

drdaeman 2 days ago||
Storing multiple tables in a single file would be trivially solvable by storing multiple Parquet files in the most basic plain uncompressed tarball (to retain the ability to access any part of any file without downloading the whole thing). Or maybe ar or cpio - tar has too many features (such as support for links) that are unnecessary here. Basically, anything well-standardized that implements a very basic directory structure, with a simple index located at a predictable offset.

If only any tools supported that.
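
The mechanics would be simple enough, e.g. with pyarrow plus the stdlib tarfile (a minimal sketch; the file and column names are just placeholders):

```python
import tarfile
import pyarrow as pa
import pyarrow.parquet as pq

# Two independent tables, written as separate parquet files.
pq.write_table(pa.table({"id": [1, 2], "name": ["a", "b"]}), "users.parquet")
pq.write_table(pa.table({"id": [1, 2], "total": [9.5, 3.0]}), "orders.parquet")

# Bundle them in an *uncompressed* tar so each member stays seekable.
with tarfile.open("bundle.tar", "w") as tar:
    tar.add("users.parquet")
    tar.add("orders.parquet")

# Read one table back without touching the other member.
with tarfile.open("bundle.tar", "r") as tar:
    with tar.extractfile("users.parquet") as f:
        users = pq.read_table(f)
print(users)
```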

vouwfietsman 2 days ago||
Couldn't agree more. If tooling would just settle on an arbitrary archive format our lives would be better.
nylonstrung 2 days ago|||
Lance already exists to solve Parquet's problems, with drastically faster random access times.
vouwfietsman 2 days ago||
Lance is pretty far from a lingua franca. For instance the SDKs are only Rust/Python/Java, none of which I use.
nylonstrung 2 days ago||
Sounds like we need more SDKs, not a new format
gcr 2 days ago|||
If you want "several tables and database-like semantics in one file," then what you want is DuckDB.
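
Something like this, for instance (a tiny sketch; the file and table names are just placeholders):

```python
import duckdb

# Several tables, plus transactional/CRUD semantics, all in one file.
con = duckdb.connect("analytics.duckdb")
con.execute("CREATE TABLE users AS SELECT * FROM 'users.parquet'")
con.execute("CREATE TABLE orders AS SELECT * FROM 'orders.parquet'")
print(con.execute("SELECT count(*) FROM users").fetchall())
con.close()
```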

If you want modern parquet, then you want the Lance format (or LanceDB for DB-like CRUD semantics).

alfalfasprout 2 days ago||
also what does "ai scale" even mean?
vouwfietsman 2 days ago|||
I think it's a bit markety, but they explain it rather well: because of AI, your data needs to be consumed by machines at an unprecedented scale, which requires new solutions. Historically we mostly did large input -> small output; now we're doing large input -> large output. The existing tools are (supposedly) not ready.
alfalfasprout 2 days ago||
no, I read that. It doesn't really add any more practical detail.
aakkaakk 2 days ago|||
It's obviously a jab at Mongo's "web scale". https://youtube.com/watch?v=b2F-DItXtZs
cryptonector 2 days ago||
I can't tell what this is about.
dkdbejwi383 2 days ago||
Do you remember the days of “mongodb is web-scale”? It’s that but “spiral is ai-scale”
nwhnwh 2 days ago||
So it will be irrelevant after a few years?
steve_adams_86 2 days ago|||
Mongo is still very relevant

For better or worse

zzzeek 2 days ago|||
maybe just a few months, AI scale is much faster than web scale of course
didibus 2 days ago|||
I think I understood it as: the database will basically store data in a binary format that can be fed into the GPU directly, and will also be optimized for streaming/batching large chunks of data at once.

So it's "optimized for machines to consume" meaning the GPU.

Their use case was training ML models where you need to feed the GPU massive datasets as part of training.

They seem to claim that training is now bottlenecked by how quickly you can feed the GPU; that otherwise the GPU is basically "waiting on IO" most of the time and not actually computing, because the time goes into just grabbing the next piece of data, transforming it for GPU consumption, and then feeding it into the GPU.

But I'm not an expert, this is just my take from the article.
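
As a toy illustration of the "keep the GPU fed" idea (generic prefetching, nothing Spiral-specific; the timings are made up):

```python
import queue
import threading
import time

def loader(num_batches, q):
    # Stand-in for fetching and decoding batches on the CPU/storage side.
    for b in range(num_batches):
        time.sleep(0.05)           # pretend this is the S3 read + decode
        q.put(b)
    q.put(None)                    # sentinel: no more data

def train():
    q = queue.Queue(maxsize=4)     # bounded prefetch buffer
    threading.Thread(target=loader, args=(20, q), daemon=True).start()
    while (batch := q.get()) is not None:
        time.sleep(0.01)           # pretend this is the GPU training step
    # If the loader is slower than the training step, the consumer stalls on
    # q.get() -- that stall is the "GPU waiting on IO" described above.

train()
```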

znort_ 2 days ago|||
"I've been building data systems for long enough to be skeptical of “revolutionary” claims, and I’m uncomfortable with grandiose statements like “Built for the AI Era”. Nevertheless, ...

... i'm gonna make revolutionary claims and grandiose statements like "built for the ai era".

riku_iki 2 days ago|||
My reading is that it will be some hyper-performant DB thanks to some very low-level optimization utilizing recent hardware advancements and unification and simplification of formats/pipelines.
bee_rider 2 days ago||
Probably either overcoming giant robots with the power of friendship and a giant drill, or a cursed village with an obsession-inducing whirlpool.
djfobbz 2 days ago||
So this Vortex engine is a combination of OLTP and OLAP on steroids?
didibus 2 days ago||
It sounded only OLAP from the article.
maxmcd 2 days ago||
Do they mention transactions anywhere? Maybe it will be OLAP?
donperignon 2 days ago||
“ We work in person at our offices in London and New York. Face to face is better: if uncertain, the answer is “yes, get on the plane”. On Wednesdays, we wear pink.”

No comments.

zzzeek 2 days ago||
This links to a super long-winded blog post that sounds more like a toast at a wedding, so I went to the main page to try to see what their product is, and you just get a blitz of fancy animations of table diagrams and things, and lots of very cheap-sounding slogans pushed out like "Works with any data! Fully XYZ 2.0 compliant! Ties your shoes!"

Basically I'm not sure where the product is hiding under all of this bluster, but this doesn't feel very "hacker"-y.

reactordev 2 days ago|
Anyone who can improve upon the parquet hell that is my life is gladly welcomed...
riku_iki 2 days ago|
Why don't you like parquet?
indoordin0saur 2 days ago||
Parquet seems easy and straightforward. The only issue I see people having with it is if they aren't used to non-human-readable formats and have to use special tools to look at it (as opposed to something like CSV). In that case this new file format will absolutely be worse.
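
For what it's worth, peeking inside a file is only a couple of lines with pyarrow (a sketch; "example.parquet" is just a placeholder path):

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("example.parquet")   # hypothetical local file
print(pf.schema_arrow)                   # column names and types
print(pf.metadata.num_rows, "rows in", pf.metadata.num_row_groups, "row groups")
print(pf.read_row_group(0).slice(0, 5))  # peek at the first few rows
```
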
reactordev 1 day ago||
Not my issue at all. My issue is someone dumping 4GB of data into a parquet file thinking it's fine…
riku_iki 1 day ago||
I operate on xxxGB files. What do you think is wrong with that?