Show HN: Read-Through Cache for S3

shikhar · 9/14/2025, 2:37:48 PM · github.com
Cachey (https://github.com/s2-streamstore/cachey) is an open source read-through cache for S3-compatible object storage.

It is written in Rust, with a hybrid memory+disk cache powered by foyer [1], accessed over a simple HTTP API. It runs as a self-contained single-node binary; the idea is that you handle distribution yourself, leaning on client-side logic for key affinity and load balancing.

If you are building something heavily reliant on object storage, the need for something like this is likely to come up! A bunch of companies have talked about their approaches to distributed caching atop S3 (such as Clickhouse [2], Turbopuffer [3], WarpStream [4], RisingWave [5]).

Why we built it:

Recent records in s2.dev are owned by a designated process for each stream, and once they were durable we could return them for reads with minimal latency overhead. However, this limited our scalability in terms of concurrent readers and throughput, and implied cross-zone network costs when the zones of the gateway and the stream-owning process did not align.

The source of durability was S3, so there was a path to slurping recently-written data straight from there (older data was already read directly) and taking advantage of free bandwidth. But even S3 has RPS limits [6], and most object storage is HDD-backed, so avoiding the latency overhead as much as possible is desirable.

Caching helps reduce S3 operation costs, improves the latency profile, and lifts the scalability ceiling. Now, regardless of whether records are recent or old, our reads always flow through Cachey.

Cachey internals:

- It borrows an idea from OS page caches by mapping every request onto page-aligned range reads. This did mean requiring the typically-optional Range header, with an exact byte range. Standard tradeoffs around picking a page size apply, and we fixed it at the high end of S3's recommendation (16 MB); the page math is sketched after this list. If multiple pages are accessed, some limited intra-request concurrency is used, and the sliced data is sent as a streaming response.

- It coalesces concurrent requests for the same page (another thing an OS page cache does). This was easy since foyer provides a native 'fetch' API that takes a key and a thunk; a conceptual sketch of the single-flight idea also follows this list.

- It mitigates the high tail latency of object storage by maintaining latency statistics and issuing a duplicate request when a configurable quantile is exceeded, taking whichever response arrives first (sketched below). Jeff Dean discussed this technique in "The Tail at Scale" [7], and the S3 docs also suggest such an approach.
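
To make the first point above concrete, here is a minimal sketch of the page math, assuming 16 MiB pages and an inclusive byte range as in an HTTP Range header; the helper name and exact slicing are illustrative, not Cachey's actual code.

    const PAGE_SIZE: u64 = 16 * 1024 * 1024; // assumed 16 MiB pages

    /// Map an inclusive byte range (Range: bytes=start-end) onto the cache
    /// pages that cover it, plus the offsets needed to slice the first and
    /// last page back down to exactly what the caller asked for.
    fn pages_for_range(start: u64, end: u64) -> (std::ops::RangeInclusive<u64>, u64, u64) {
        let first_page = start / PAGE_SIZE;
        let last_page = end / PAGE_SIZE;
        let skip_in_first = start % PAGE_SIZE;  // bytes to drop from the first page
        let keep_in_last = end % PAGE_SIZE + 1; // bytes to keep from the last page
        (first_page..=last_page, skip_in_first, keep_in_last)
    }

Each page in that range is then fetched from S3 (or served from cache) as a full aligned read, with limited concurrency across pages, and only the requested slice is streamed back.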
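
On the coalescing point, Cachey leans on foyer's fetch API for this; the sketch below only illustrates the single-flight idea in generic tokio/futures terms, and is not foyer's interface.

    use std::{collections::HashMap, sync::Arc};

    use futures::future::{BoxFuture, FutureExt, Shared};
    use tokio::sync::Mutex;

    type PageKey = (String, u64); // (object key, page index)
    type Page = Arc<Vec<u8>>;

    /// Concurrent callers asking for the same page share one in-flight fetch.
    #[derive(Clone, Default)]
    struct SingleFlight {
        inflight: Arc<Mutex<HashMap<PageKey, Shared<BoxFuture<'static, Page>>>>>,
    }

    impl SingleFlight {
        async fn fetch<F, Fut>(&self, key: PageKey, load: F) -> Page
        where
            F: FnOnce() -> Fut,
            Fut: std::future::Future<Output = Page> + Send + 'static,
        {
            let fut = {
                let mut map = self.inflight.lock().await;
                map.entry(key.clone())
                    .or_insert_with(|| load().boxed().shared())
                    .clone()
            };
            let page = fut.await;
            self.inflight.lock().await.remove(&key); // let later misses refetch
            page
        }
    }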
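
And for the hedging point, this is the rough shape of a hedged read with tokio; hedge_after would come from the latency quantile being tracked, and fetch_page stands in for an S3 page fetch (both are placeholder names, not Cachey's code).

    use std::time::Duration;

    use tokio::time::sleep;

    /// Fire a backup request if the primary has not completed within
    /// `hedge_after`, and return whichever response arrives first.
    async fn hedged_get<F, Fut, T>(hedge_after: Duration, fetch_page: F) -> T
    where
        F: Fn() -> Fut,
        Fut: std::future::Future<Output = T>,
    {
        let primary = fetch_page();
        tokio::pin!(primary);

        tokio::select! {
            res = &mut primary => res,
            _ = sleep(hedge_after) => {
                // Primary is slow: race it against a duplicate request.
                tokio::select! {
                    res = &mut primary => res,
                    res = fetch_page() => res,
                }
            }
        }
    }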

A more niche thing Cachey lets you do is specify more than one bucket an object may live in, and it will attempt up to two, prioritizing the client's preference blended with its own knowledge of recent operational stats. This is something we rely on, since we offer regional durability with low latency by writing recently-written data to a quorum of zonal S3 Express buckets, so the desired range may not exist in any particular bucket. This capability may also end up being reused for multi-region durability in the future.
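
As a very rough illustration of how that bucket choice might be ordered (all names here are hypothetical, and the real blend of client preference and operational stats is more involved):

    struct BucketStats {
        recent_error_rate: f64, // stand-in for whatever operational signal is tracked
    }

    /// Order candidate buckets by client preference, then health, and keep
    /// at most two to attempt.
    fn bucket_attempt_order<'a>(
        candidates: &'a [String],
        client_preference: Option<&str>,
        stats: impl Fn(&str) -> BucketStats,
    ) -> Vec<&'a str> {
        let mut order: Vec<&str> = candidates.iter().map(String::as_str).collect();
        order.sort_by(|&a, &b| {
            let preferred = |x: &str| (Some(x) != client_preference) as u8; // 0 if preferred
            (preferred(a), stats(a).recent_error_rate)
                .partial_cmp(&(preferred(b), stats(b).recent_error_rate))
                .unwrap_or(std::cmp::Ordering::Equal)
        });
        order.truncate(2); // attempt up to two buckets
        order
    }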

I'd love to hear your feedback and suggestions! Hopefully other projects will also find Cachey to be a useful part of their stack.

[1] https://foyer.rs
[2] https://clickhouse.com/blog/building-a-distributed-cache-for...
[3] https://turbopuffer.com/docs/architecture
[4] https://www.warpstream.com/blog/minimizing-s3-api-costs-with...
[5] https://risingwave.com/blog/risingwave-elastic-disk-cache
[6] https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimi...
[7] https://cacm.acm.org/research/the-tail-at-scale/#body-7

Comments (1)

shikhar · 23s ago
How we run it:

Auto-scaled Kubernetes deployments, one per availability zone, currently on m*gd instances, which give us local NVMe. The pods can easily push multiple GiB/s with 1-2 CPUs used; network is the bottleneck, so we made it a scaling dimension (thanks, KEDA).

On the client side, each gateway process uses kube.rs to watch ready endpoints in its own zone, and frequently polls the /stats endpoint exposed by Cachey for recent network throughput as a load signal.

To improve hit rates with key affinity, clients use rendezvous hashing with bounded load (https://arxiv.org/abs/1608.01350) to pick a node: if a node exceeds a predetermined throughput limit, the next choice for the key is used.
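
A sketch of that selection logic, assuming a throughput-based load signal (the hasher and names are placeholders; a real client would want a hash that is stable across processes and releases):

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn score(node: &str, key: &str) -> u64 {
        let mut h = DefaultHasher::new();
        (node, key).hash(&mut h);
        h.finish()
    }

    /// Rendezvous hashing with bounded load: rank nodes by hash score for the
    /// key, then take the highest-ranked node that is under its throughput
    /// limit, falling back to the top choice if every node is overloaded.
    fn pick_node<'a>(
        nodes: &'a [String],
        key: &str,
        under_limit: impl Fn(&str) -> bool, // e.g. recent throughput below a cap
    ) -> Option<&'a str> {
        let mut ranked: Vec<&str> = nodes.iter().map(String::as_str).collect();
        ranked.sort_by_key(|&n| std::cmp::Reverse(score(n, key)));
        ranked
            .iter()
            .copied()
            .find(|&n| under_limit(n))
            .or_else(|| ranked.first().copied())
    }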

We may move towards consistent hashing; it would be a great problem to have, if we needed so many Cachey pods in a zone that O(n) hashing was meaningful overhead! An advantage of the current approach is that it does not suffer from the cascaded overflow problem (https://arxiv.org/abs/1908.08762).