Show HN: Quack-Cluster – A serverless distributed SQL engine with DuckDB and Ray
I'm excited to share a project I've been working on: Quack-Cluster.
I love the speed and simplicity of DuckDB for analytics, but I often work with datasets spread across hundreds of files in object storage (like S3). I wanted a way to run distributed queries across all that data without the complexity of setting up and managing a full-blown Spark or Presto cluster. I'm also a big fan of Ray for its simplicity in distributed Python, so I decided to combine them.
How it works: You send a standard SQL query to a central coordinator. It uses SQLGlot to parse the query and identify the target files (e.g., s3://bucket/data/*.parquet). It then generates a distributed plan and sends tasks to a cluster of Ray actors. Each Ray actor runs an embedded DuckDB instance to process a subset of the files in parallel. The partial results (as Arrow tables) are then aggregated and returned to the user.
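To give a rough idea of the fan-out/fan-in pattern described above, here's a minimal sketch, assuming the coordinator rewrites the query into a per-worker template and simply concatenates the partial Arrow tables; the function names (process_partition, run_distributed) and the {files} placeholder are illustrative, not Quack-Cluster's actual API.

```python
import duckdb
import pyarrow as pa
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
def process_partition(files: list[str], query_template: str) -> pa.Table:
    """Worker: run an embedded DuckDB over one slice of the file list and
    return the partial result as an Arrow table."""
    file_list = ", ".join(f"'{f}'" for f in files)
    # query_template is the per-worker query rewritten by the coordinator,
    # e.g. "SELECT ... FROM read_parquet([{files}]) WHERE ..." (illustrative)
    sql = query_template.format(files=file_list)
    return duckdb.connect().execute(sql).arrow()

def run_distributed(all_files: list[str], query_template: str, num_workers: int = 4) -> pa.Table:
    """Coordinator: split the expanded file list into chunks, fan tasks out
    over Ray, then concatenate the partial Arrow tables."""
    chunks = [all_files[i::num_workers] for i in range(num_workers)]
    futures = [process_partition.remote(chunk, query_template) for chunk in chunks if chunk]
    return pa.concat_tables(ray.get(futures))
```

For scan-and-filter queries, concatenating the partials is enough; for aggregations or joins, the coordinator would run a final merge step over the partial results before returning them, as described above.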
The goal is to provide a lightweight, high-performance, and serverless alternative for interactive SQL analytics directly on a data lake.
The core tech stack is:
Backend: Python, FastAPI
Distributed Computing: Ray
Query Engine: DuckDB
SQL Parsing: SQLGlot
The project is open-source and I've tried to make it easy to get started locally with Docker and make. I'm here to answer any questions and would be grateful for any feedback on the architecture, use case, or the code itself.
Thanks for checking it out!
Also, have you thought about caching? My team is working on a similar problem and we have caches for everything from contents of S3 list_objects_v2 calls to Parquet metadata to blocks read from object storage.
Query Latency

Query latency is highly variable and depends on several factors:
Query Type: A simple SELECT with a WHERE clause on a single table will be much faster than a complex multi-table JOIN that requires shuffling data between workers.
Data Size: The total volume of data being scanned from disk or object storage is a primary driver of latency.
Execution Plan: The system chooses between different plans (see the sketch after this list). A LocalExecutionPlan that runs on a single node is fastest. A DistributedBroadcastJoinPlan is used when one table is small and is generally faster than a DistributedShuffleJoinPlan, which is the fallback for large tables and tends to have the highest latency.
Fault Tolerance: If a worker node fails, the system will automatically retry the task up to a configured maximum, which can add to the total execution time.
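To make the plan choice concrete, here's a hypothetical sketch of how a planner might pick between the three plan types named above; the size thresholds and the choose_join_plan helper are assumptions for illustration, not the project's actual heuristics.

```python
# Hypothetical planner heuristic; thresholds and size estimates are assumed.
BROADCAST_THRESHOLD_BYTES = 64 * 1024 * 1024   # assumed cutoff for a "small" table
LOCAL_THRESHOLD_BYTES = 256 * 1024 * 1024      # assumed cutoff for single-node execution

def choose_join_plan(left_bytes: int, right_bytes: int) -> str:
    """Pick a join plan from estimated input sizes in bytes."""
    # Both inputs fit comfortably on one node: skip distribution entirely.
    if left_bytes + right_bytes <= LOCAL_THRESHOLD_BYTES:
        return "LocalExecutionPlan"
    # One side is small: broadcast it to every worker, no shuffle required.
    if min(left_bytes, right_bytes) <= BROADCAST_THRESHOLD_BYTES:
        return "DistributedBroadcastJoinPlan"
    # Fallback for two large tables: repartition both sides on the join key.
    return "DistributedShuffleJoinPlan"
```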
Caching

Yes, caching is a key feature! Your team's approach sounds very thorough. Our current implementation focuses on caching the final results of queries to avoid re-computation.
Here’s how it works:
In-Memory TTL Cache: We use a simple time-to-live (TTL) in-memory cache for the /query endpoint. When a query is executed, a SHA256 hash of the SQL string and the requested format (e.g., "json" or "arrow") is used as the cache key (see the sketch after this list).
Cache Check: For every incoming query, we first check the cache. If a valid, non-expired result is found, we return it immediately, which is significantly faster than re-executing the query.
Cache Population: If it's a cache miss, the query is fully executed, and the final result is stored in the cache before being sent to the client. The TTL is configurable, defaulting to 300 seconds.
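Here's a minimal sketch of that key/TTL scheme, assuming a plain dict-backed cache; the helper names and the exact key encoding (format prefixed onto the SQL string) are illustrative rather than the project's actual code.

```python
import hashlib
import time

CACHE_TTL_SECONDS = 300  # default TTL described above; configurable in practice
_cache: dict[str, tuple[float, object]] = {}  # cache key -> (expiry time, result)

def cache_key(sql: str, fmt: str) -> str:
    # Key the cache on a SHA256 hash of the SQL text plus the requested format.
    return hashlib.sha256(f"{fmt}:{sql}".encode()).hexdigest()

def get_cached(sql: str, fmt: str):
    """Return a cached result if present and not expired, otherwise None."""
    entry = _cache.get(cache_key(sql, fmt))
    if entry is not None and entry[0] > time.monotonic():
        return entry[1]
    return None

def put_cached(sql: str, fmt: str, result) -> None:
    """Store the final query result under its key with a fresh TTL."""
    _cache[cache_key(sql, fmt)] = (time.monotonic() + CACHE_TTL_SECONDS, result)
```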
This approach caches the final output rather than lower-level data like file metadata or individual data blocks, but your point about caching Parquet metadata and S3 listings is excellent—that would be a great way to further optimize the planning phase.