Spiral

170 points by jorangreef | 53 comments | 9/11/2025, 3:45:38 PM | spiraldb.com ↗

Comments (53)

mellosouls · 32m ago
This is a pretty website but it doesn't actually give us anything to look at; it's just blurb.

For anybody confused: "Vortex" is the underlying data format, not the database/whatever this website (by the creators of Vortex) is pushing.

kmoser · 17m ago
> Spiral is our database built on Vortex [...]

No surprise there's nothing to look at, since it's basically a press release posted on their blog.

spankalee · 1h ago
I'm curious... I'm not a database or AI engineer. The last time I did GPU work was over a decade ago. What is the point of the "saturate an H100" metric?

I would think that a GPU isn't just sitting there waiting on a process that's in turn waiting for one query to finish to start the next query, but that a bunch of parallel queries and scans would be running, fed from many DB and object store servers, keeping the GPUs as utilized as possible. Given how expensive GPUs are, it would seem like a good trade to buy more servers to keep them fed, even if you do want to make the servers and DB/object store reads faster.

otterley · 42m ago
The idea is that in a pipeline of work, throughput is limited by the slowest component. H100 GPUs have a lot of memory bandwidth. The question then becomes how to eliminate any bottlenecks between the data store and the GPU's memory.

First is storage: network-attached storage is usually the bottleneck for uncached data. Then there is the CPU work of decoding that data. Spiral claims their table format is ready to load by the GPU, so they can bypass various CPU-bound decoding stages. Once you eliminate the storage and CPU bottlenecks, the remaining bottleneck is usually the PCIe bus between host memory and the GPU, and they can't solve that themselves. (And no amount of parallelization can help once the bus is saturated.) What they can do is use the network, the host bus, and the GPU more efficiently by compressing and packing data with greater mechanical sympathy.
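
A back-of-envelope way to see "throughput is limited by the slowest component" (the bandwidth figures below are my own illustrative assumptions, not Spiral's numbers):

```python
# Pipeline model: end-to-end throughput is capped by the slowest stage.
# All bandwidth figures are rough, illustrative assumptions (GB/s).
stages_gbps = {
    "s3_network": 12.5,     # ~100 Gbps NIC
    "cpu_decode": 3.0,      # host-side decompression/decoding, e.g. Parquet
    "pcie_gen5_x16": 63.0,  # practical PCIe 5.0 x16 ceiling
    "h100_hbm3": 3350.0,    # H100 HBM3 memory bandwidth
}

bottleneck = min(stages_gbps, key=stages_gbps.get)
print(f"~{stages_gbps[bottleneck]} GB/s end-to-end, limited by {bottleneck}")
# Drop the cpu_decode stage (a GPU-ready format) and the bottleneck moves
# to the network, which is the next thing to attack.
```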

They've left unanswered how they're going to commercialize it, but my guess is that they're going to use a proprietary fork of Vortex that provides extra performance or features, or perhaps they'll offer commercial services or integrations that make it easier to use. The open-source release gives its customers a Reason to Believe, in marketing parlance.

vouwfietsman · 43m ago
My guess is that just the raw data size, combined with the physical limitations of your rack unit (RU), makes it hard for the GPU to be fully utilized. Instead you will always be stuck with the CPU (decompressing/interpreting/uploading Parquet) or bandwidth (transfer from S3) as the bottleneck.

Seems that they are targeting a low-to-no-overhead path from S3 bucket to GPU by targeting: the same compression with faster random access, streamed decoding of S3 data while in flight, and zero-copy transfer to the GPU.

Not 100% clear on the details, but I doubt that they can actually saturate the CPU/GPU bus; rather they just saturate GPU utilization, which is itself dependent on multiple possible bottlenecks but generally not on bus bandwidth.

That's not criticism: it literally means you can't do better unless you improve the GPU utilization of your AI model.
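
To make "streamed while in flight" concrete, here is a minimal double-buffering sketch (my own illustration, not their code; `fetch_chunk` and `process_on_gpu` are hypothetical stand-ins) of how any saturation-oriented loader overlaps fetching with consumption so neither side idles:

```python
import queue
import threading

def fetch_chunk(i: int) -> bytes:
    """Stand-in for a ranged S3 GET of chunk i."""
    return b"x" * 1024

def process_on_gpu(chunk: bytes) -> None:
    """Stand-in for host-to-device copy plus kernel work."""
    pass

def producer(q: queue.Queue, n_chunks: int) -> None:
    for i in range(n_chunks):
        q.put(fetch_chunk(i))  # blocks when the buffer is full
    q.put(None)                # sentinel: no more data

# Bounded queue: caps memory use while keeping the pipeline full.
buf: queue.Queue = queue.Queue(maxsize=4)
threading.Thread(target=producer, args=(buf, 100), daemon=True).start()

while (chunk := buf.get()) is not None:
    process_on_gpu(chunk)  # overlaps with the next fetch in the background thread
```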

pauldix · 1h ago
I've been following this team's work for a while and what they're doing is super interesting. The file format they created and contributed to the Linux Foundation, Vortex, is a very welcome innovation in the space: https://github.com/vortex-data/vortex

I'm excited to start doing some experimentation with Vortex to see how it can improve our products.

Great stuff, congrats to Will and team!

dist-epoch · 1h ago
https://vortex.dev doesn't work in my Firefox:

Application error: a client-side exception has occurred while loading vortex.dev (see the browser console for more information).

Console: unable to create webgl context

brunohaid · 7m ago
If anyone ever writes a post on why that error keeps happening in browsers that should support it, I'd be incredibly grateful. I keep seeing it in our (company unrelated to OP) Sentry logs with zero chance to reproduce it.
miloignis · 1h ago
Presumably you don't have WebGL enabled or supported - the main page is just a cute 3D landing page.

You may be interested in https://github.com/vortex-data/vortex which of course has an overview and links to their docs and benchmark pages.

arusahni · 1h ago
Works for me. Mozilla/5.0 (X11; Linux x86_64; rv:142.0) Gecko/20100101 Firefox/142.0
paxys · 1h ago
Wasn't "3.0" supposed to be crypto? Is it AI now? It's had to keep track.
bee_rider · 1h ago
No, Web 3.0 was the Semantic Web. Thankfully, the silly idea of having major-number versions for the entire internet died when that one failed to happen. Now we can safely ignore anybody who tries to do it.
jppope · 1h ago
I think some of the crypto companies tried to get cute and leapfrog 3.0, going straight to 4.0, so that would put us at either 5.0, 4.0, 3.1, 2.2, or 2.1, depending on how you feel about the crypto space and which groups you were validating.
ionwake · 1h ago
I think AI is 4.0

EDIT> Maybe it's like how some people call the 4th dimension time when there is in fact a 4th spatial dimension. So I guess if this is the 3rd data dimension, what is the 4th one?

adfm · 56m ago
You’re conflating concepts. FWIW, Web3 is snake oil, or wishful thinking at best. As much as people like to bang on the old Web 2.0, it still holds up conceptually. And if you only know it as a buzzword, I suggest you go back and familiarize yourself with it if you’re looking for incremental change.

Who knows, maybe a Web 3.1 will deliver us from Enshittification.

vouwfietsman · 35m ago
Although I welcome a Parquet successor, I am not particularly interested in a more complicated format. Random access time improvements are nice, but really what I would like is just the ability to store multiple tables in a single Parquet file.

When I read "possible extension through embedded wasm encoders" I can already imagine the C++ linker hell required to get this thing included in my project.

I also don't think a lot of people need "ai scale".

drdaeman · 24m ago
Storing multiple tables in a single file would be trivially solvable by storing multiple Parquet files in the most basic plain uncompressed tarball (to retain the ability to access any part of any file without downloading the whole thing). Or maybe ar or cpio - tar has too many features (such as support for links) that are unnecessary here. Basically, anything well-standardized that implements a very basic directory structure, with a simple index located at a predictable offset.

If only any tools actually supported that.
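
For what it's worth, the plumbing mostly exists already; here is a rough sketch of the idea (the file names and the ad-hoc index are made up for illustration) using Python's tarfile plus pyarrow:

```python
import io
import tarfile

import pyarrow as pa
import pyarrow.parquet as pq

# Write several Parquet "tables" into one plain, uncompressed tar.
tables = {
    "users.parquet": pa.table({"id": [1, 2], "name": ["a", "b"]}),
    "events.parquet": pa.table({"ts": [10, 20], "kind": ["x", "y"]}),
}
with tarfile.open("bundle.tar", "w") as tar:  # no compression, so offsets stay valid
    for name, table in tables.items():
        buf = io.BytesIO()
        pq.write_table(table, buf)
        info = tarfile.TarInfo(name)
        info.size = buf.tell()
        buf.seek(0)
        tar.addfile(info, buf)

# Build the "directory": tarfile records where each member's data starts.
with tarfile.open("bundle.tar") as tar:
    index = {m.name: (m.offset_data, m.size) for m in tar.getmembers()}

# Random access to one table: read exactly those bytes. Against S3 this
# would be a single ranged GET using the same offset/length.
offset, size = index["events.parquet"]
with open("bundle.tar", "rb") as f:
    f.seek(offset)
    print(pq.read_table(io.BytesIO(f.read(size))))
```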

nylonstrung · 23m ago
Lance already exists to solve Parquet's problems, with drastically faster random access times.
alfalfasprout · 33m ago
also what does "ai scale" even mean?
vouwfietsman · 30m ago
I think it's a bit markety, but they explain it rather well: because of AI, your data needs to be consumed by machines at an unprecedented scale, which requires new solutions. Historically we mostly did large input -> small output; now we're doing large input -> large output. The existing tools are (supposedly) not ready.
alfalfasprout · 25m ago
no, I read that. It doesn't really add any more practical detail.
aakkaakk · 30m ago
It’s obviously a jab at Mongo’s "web scale". https://youtube.com/watch?v=b2F-DItXtZs
raziel2p · 20m ago
> Vortex is designed to support decoding data directly from S3 to GPU, skipping the CPU bottleneck entirely.

how is this significant? surely either the network or the GPU computation is the bottleneck here?

cryptonector · 1h ago
I can't tell what this is about.
dkdbejwi383 · 1h ago
Do you remember the days of “mongodb is web-scale”? It’s that but “spiral is ai-scale”
nwhnwh · 55m ago
So it will be irrelevant after a few years?
steve_adams_86 · 1m ago
Mongo is still very relevant

For better or worse

zzzeek · 48m ago
maybe just a few months, AI scale is much faster than web scale of course
didibus · 40m ago
I think I understood it as: the database will basically store data in a binary format that can be fed into the GPU directly, and will also be optimized for streaming/batching large chunks of data at once.

So it's "optimized for machines to consume" meaning the GPU.

Their use case was training ML models where you need to feed the GPU massive datasets as part of training.

They seem to claim that training is now bottlenecked by how quickly you can feed the GPU - that otherwise the GPU is basically "waiting on IO" most of the time and not actually computing, because the time goes into grabbing the next piece of data, transforming it for GPU consumption, and then feeding it into the GPU.

But I'm not an expert, this is just my take from the article.
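
A crude way to picture that claim (a generic sketch of my own, not from the article; `load_batch` and `train_step` are hypothetical stand-ins) is to time how much of each training step goes to data loading versus compute:

```python
import time

def load_batch() -> None:
    time.sleep(0.08)  # pretend: S3 fetch + Parquet decode + host-to-GPU copy

def train_step() -> None:
    time.sleep(0.02)  # pretend: forward/backward pass on the GPU

load_t = compute_t = 0.0
for _ in range(10):
    t0 = time.perf_counter(); load_batch()
    t1 = time.perf_counter(); train_step()
    t2 = time.perf_counter()
    load_t += t1 - t0
    compute_t += t2 - t1

# With these made-up numbers the GPU computes only ~20% of the time;
# the rest is exactly the IO gap a faster S3-to-GPU path targets.
print(f"GPU busy {compute_t / (load_t + compute_t):.0%} of each step")
```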

znort_ · 1h ago
"I've been building data systems for long enough to be skeptical of “revolutionary” claims, and I’m uncomfortable with grandiose statements like “Built for the AI Era”. Nevertheless, ...

... i'm gonna make revolutionary claims and grandiose statements like "built for the ai era".

bee_rider · 1h ago
Probably either overcoming giant robots with the power of friendship and a giant drill, or a cursed village with an obsession-inducing whirlpool.
riku_iki · 1h ago
My reading is that it will be some hyper-performant DB, thanks to very low-level optimizations utilizing recent HW advancements plus unification and simplification of formats/pipelines.
djfobbz · 1h ago
So this Vortex engine is a combination of OLTP and OLAP on steroids?
didibus · 45m ago
It sounded OLAP-only from the article.
maxmcd · 1h ago
Do they mention transactions anywhere? Maybe it will be OLAP?
reactordev · 1h ago
Anyone who can improve upon the Parquet hell that is my life is gladly welcomed...
riku_iki · 1h ago
Why don't you like Parquet?
indoordin0saur · 18m ago
Parquet seems easy and straightforward. The only issue I see people having with it is if they aren't used to non-human-readable formats and have to use special tools to look at it (as opposed to something like CSV). In that case this new file format will absolutely be worse.
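
For what it's worth, "special tools" is a low bar these days - a few lines of pyarrow will peek at any Parquet file ("data.parquet" here is a hypothetical local file):

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("data.parquet")
print(pf.schema_arrow)                   # column names and types
print(pf.metadata.num_rows, "rows in", pf.metadata.num_row_groups, "row groups")
print(pf.read_row_group(0).slice(0, 5))  # first few rows, without a full scan
```
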
4ndrewl · 1h ago
The article's three eras of database systems start with client-server Postgres, but miss the daddy of the generation before that - xBase (i.e. dBase, FoxPro, etc.).
khaledh · 1h ago
It goes way before that. It starts with IDS (Integrated Data Store) from GE (1964), which was a network database system. Next was IBM's hierarchical database system IMS (Information Management System, 1966), still in use today. Then the CODASYL model (late 1960s), which was an effort to standardize the network model. And then Codd came up with the relational model in the early 70s, upon which an explosion of database systems was built (first IBM's System R and SQL, then Oracle, DB2, Ingres). Then came the PC-based database systems you mentioned.
4ndrewl · 57m ago
Oh for sure. To suggest we're only on generation 3 of "databases" is way off the mark.
all2 · 1h ago
Spelling error "sttill"

> P.S. If you're sttill managing data in spreadsheets, this post isn't for you. Yet.

---

Since I discovered the ECS (entity-component-system) pattern, I've been curious about backing it with a database. One of the big issues seems to be IO on the database side. I wonder if Spiral might solve this issue.

lordnacho · 1h ago
If the ECS data is grid-like, perhaps you could use a columnar database for time series?

Then you could save every single state change and scroll back and forth. But I'm not sure if you were looking for that.

harwoodr · 1h ago
Have a look at something like SpacetimeDB - caveat: I've only read about it and not directly used it:

https://github.com/ClockworkLabs/SpacetimeDB

holoduke · 1h ago
So basically this is a file system that runs on your GPU?
mlhpdx · 28m ago
I stopped reading at “new era”. At this point in time with the deluge of content, start with a problem and solution in a concise statement if you want my attention. I’m not reading your opinion piece.
zzzeek · 49m ago
This links to a super long-winded blog post that sounds more like a toast at a wedding, so I went to the main page to try to see what their product is, and you just get a blitz of fancy animations of table diagrams and things, and lots of very cheap-sounding slogans like "Works with any data! Fully XYZ 2.0 compliant! Ties your shoes!"

Basically I'm not sure where the product is hiding under all of this bluster, but this doesn't feel very "hacker"-y.

bflesch · 27m ago
Big ick from my side. Manifesto-style marketing blog post talking about revolutionary things, but it seems their main metric is in the image above the post: "hey, we've raised $22M in funding".

Landing pages of both Spiral and Vortex are GPU-hogging animations, devoid of any technical information. Empty nothing-statements like "machine scale". They claim 100x improvements but don't link any metrics.

Maybe this is a "don't hate the player, hate the game" situation, but somehow the collective of like-minded AI engineers decided to upvote this post to #1 on HN.

msteffen · 14m ago
There's this: https://bench.vortex.dev/, which links to https://github.com/vortex-data/vortex/tree/develop/bench-vor.... I haven't tried pulling the repo or anything but it seems like they might be runnable?

Of course I don't know what benchmarks or performance metrics they might have for the db layer, but it is something.

indoordin0saur · 22m ago
> Vortex is designed to support decoding data directly from S3 to GPU, skipping the CPU bottleneck entirely.

If this is true I'm inclined to believe their claims.

bflesch · 5m ago
MY PERSONAL BOTTLENECK between S3 and GPU is my credit card, not some new Cargo crate by some already-rich AI engineer and a fancy marketing website that must've cost a couple hundred grand.

And if this module provides a benefit I'm sure it will find its way into our stack, just like PostgreSQL did. And PostgreSQL never had $22M to begin with - no shiny marketing, just technological skills.

The whole "donated by spiral" on the vortex.dev website also gives big tax write-off vibes.

IMO best case is that this will be a mongodb scenario, but with the current track record of tech grifters enshittifying everything they might find a creative new way.

SomeHacker44 · 1h ago
"100KiB images"... This is odd. Most of my images are 2.5-4 MB. My raw images are 3-10x larger.
turnsout · 1h ago
I bet this refers to some common training use case that leverages 512px or 1024px images. Or it’s just Palantir scanning security camera frames.