Seems to be NFSv3 [0] - curious to test it out - the only userspace NFSv4 implementation I'm aware of is in Buildbarn (Go) [1]. The example of their NFSv3 implementation disables locking. Still pretty cool to see all the ways the Rust ecosystem is empowering stuff like this.
I'm kinda surprised someone hasn't integrated the Buildbarn NFSv4 stuff into docker/podman - the virtiofs stuff is pretty bad on macOS, and the Buildbarn NFSv4.0 stuff is a big improvement over NFSv3.
Anyhow, I digress. Can't wait to take it for a spin.
[0] https://github.com/Barre/zerofs_nfsserve
[1] https://github.com/buildbarn/bb-remote-execution/tree/master...
Seems like a really interesting project! I don't understand what's going on with latency vs durability here. The benchmarks [1] report ~1ms latency for sequential writes, but that's just not possible with S3. So presumably writes are not being persisted to S3 before being confirmed to the client.
What is the durability model? The docs don't talk about intermediate storage. SlateDB does confirm writes to S3 by default, but I assume that's not happening here?
[1] https://www.zerofs.net/zerofs-vs-juicefs
SlateDB offers different durability levels for writes. By default, writes are buffered locally and flushed to S3 when the buffer is full or the client invokes flush().
https://slatedb.io/docs/design/writes/
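For reference, a minimal sketch of that tradeoff against SlateDB's Rust API. Hedged: the exact Db::open / put_with_options / WriteOptions signatures have shifted between SlateDB releases, so treat the precise calls as illustrative rather than definitive.

    use std::sync::Arc;
    use object_store::{memory::InMemory, path::Path, ObjectStore};
    use slatedb::{config::WriteOptions, Db};

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Any object_store backend works; S3 in production, in-memory here.
        let store: Arc<dyn ObjectStore> = Arc::new(InMemory::new());
        let db = Db::open(Path::from("/tmp/demo"), store).await?;

        // Fast path: acknowledged once buffered in memory, written to S3 on
        // the next flush. Presumably the path behind the ~1ms write numbers.
        db.put_with_options(
            b"key",
            b"value",
            &WriteOptions { await_durable: false, ..Default::default() },
        )
        .await?;

        // Push the buffer out to object storage now.
        db.flush().await?;

        // Durable path: block until this write is persisted to S3.
        db.put_with_options(
            b"key2",
            b"value2",
            &WriteOptions { await_durable: true, ..Default::default() },
        )
        .await?;

        db.close().await?;
        Ok(())
    }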
I had to laugh out loud:
"In practice, you'll encounter other constraints well before these theoretical limits, such as S3 provider limits, performance considerations with billions of objects, or simply running out of money."
I am in no way affiliated with JuiceFS, but I have done a lot of benchmarking and testing of it, and the numbers claimed here for JuiceFS are suspicious (basically 5 ops/second with everything mostly broken).
moltar · 1h ago
Has anyone tried it as a cache dir in CI? I'm concerned that random reads look pretty slow in the benchmarks.
siliconc0w · 3h ago
Very cool to see NFS, NBD, or even 9P supported.
Looking at those benchmarks, I think you must be using a local disk to sync writes before uploading to S3?
akshayKMR · 3h ago
Incredibly cool! It shows Ubuntu and Postgres running on it, and it also supports full POSIX operations.
Questions:
- I see there is a diagram of multiple Postgres nodes backed by this store, very similar to a horizontally scaled web server. Doesn't Postgres use WAL replication? Or is that disabled, and are they running on the same "view" of the filesystem?
- What does this mean for services that handle geo-distribution at the app layer, e.g. CockroachDB?
Sorry if this sounds dumb.
monkaiju · 3h ago
Finally a way to get more than 2 GB of local storage on DigitalOcean's App Platform!
nodesocket · 2h ago
2 GB? They have storage-optimized NVMe droplets that at the high end offer 4.6 TB (though absurdly expensive at $2,000/mo).
jauntywundrkind · 6h ago
Built atop the excellent SlateDB! Breaks files down into 256K chunks (see the sketch at the end of this comment). Encrypted. Much, much better POSIX compatibility than most FUSE alternatives. SlateDB has snapshots & clones, so that could be another great superpower of ZeroFS.
Incredible performance figures, rocketing to probably the best way to use object storage in an fs-like way. There's a whole series of comparisons, and they probably need a logarithmic scale given the size of the lead ZeroFS has! https://www.zerofs.net/zerofs-vs-juicefs
Speaks 9P, NFS, or NBD. Some great demos of ZFS with L2ARC caches giving near-local performance while keeping S3 persistence.
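Regarding the 256K chunking: here's a hypothetical sketch of how a fixed-chunk scheme maps a file offset to an object-store key plus an intra-chunk offset. The key layout is invented for illustration - it is not ZeroFS's actual schema.

    // Hypothetical 256 KiB chunking: a file offset maps to a chunk index
    // (which becomes the KV key) plus an offset within that chunk.
    // NOT ZeroFS's actual key schema - purely illustrative.
    const CHUNK_SIZE: u64 = 256 * 1024;

    fn chunk_for(inode: u64, offset: u64) -> (String, u64) {
        let chunk_index = offset / CHUNK_SIZE;
        let within_chunk = offset % CHUNK_SIZE;
        (format!("inode/{inode}/chunk/{chunk_index}"), within_chunk)
    }

    fn main() {
        // A write at file offset 300_000 lands 37,856 bytes into chunk 1,
        // so only that one 256 KiB object needs rewriting, not the file.
        let (key, within) = chunk_for(42, 300_000);
        println!("{key} @ {within}"); // inode/42/chunk/1 @ 37856
    }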
Totally what I was thinking of when someone in the Immich thread mentioned wanting a way to run it on cheap object storage. https://news.ycombinator.com/item?id=45169036