Show HN: AnuDB – Built on RocksDB, 279x Faster Than SQLite in Parallel Workloads
GitHub Links:
AnuDBBenchmark: https://github.com/hash-anu/AnuDBBenchmark
AnuDB (Core): https://github.com/hash-anu/AnuDB
Why Compare AnuDB and SQLite?
SQLite is excellent for many embedded use cases — it’s simple, battle-tested, and extremely reliable. But it doesn't scale well when parallelism or concurrent writes are required.
AnuDB, built over RocksDB, offers better concurrency out of the box. We wanted to measure the practical differences using real benchmarks on a Raspberry Pi.
Benchmark Setup
Platform: Raspberry Pi 2 (ARMv7)
Benchmarked operations: Insert, Query, Update, Delete, Parallel
AnuDB uses RocksDB and MsgPack serialization
SQLite stores the raw data directly, with WAL mode enabled for fairness
Key Results
Insert:
AnuDB: 448 ops/sec
SQLite: 838 ops/sec
Query:
AnuDB: 54 ops/sec
SQLite: 30 ops/sec
Update:
AnuDB: 408 ops/sec
SQLite: 600 ops/sec
Delete:
AnuDB: 555 ops/sec
SQLite: 1942 ops/sec
Parallel (10 threads):
AnuDB: 412 ops/sec
SQLite: 1.4 ops/sec (!)
In the parallel case, AnuDB was over 279x faster than SQLite.
Why the Huge Parallel Difference?
SQLite, even with WAL mode, serializes writes behind a database-level lock: WAL allows many concurrent readers, but only one writer at a time. It’s not designed for high-concurrency write scenarios.
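To make the locking point concrete, here is a minimal sketch (illustrative only, not the benchmark code; the file and table names are made up) of what a writer has to deal with: while another connection holds the write lock, an INSERT fails with SQLITE_BUSY unless a busy handler or timeout is installed.

  // Minimal sketch: even in WAL mode, SQLite allows many concurrent readers
  // but only one writer at a time. If another connection holds the write
  // lock, this INSERT returns SQLITE_BUSY unless a busy timeout is set.
  // (Names are illustrative; build with -lsqlite3.)
  #include <sqlite3.h>
  #include <cstdio>

  int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("demo.db", &db) != SQLITE_OK) { sqlite3_close(db); return 1; }

    sqlite3_exec(db, "PRAGMA journal_mode=WAL;", nullptr, nullptr, nullptr);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(k TEXT, v TEXT);", nullptr, nullptr, nullptr);

    // Without this, a colliding writer fails immediately with SQLITE_BUSY;
    // with it, SQLite retries internally for up to 5 seconds.
    sqlite3_busy_timeout(db, 5000);

    int rc = sqlite3_exec(db, "INSERT INTO t VALUES('k1','v1');", nullptr, nullptr, nullptr);
    if (rc == SQLITE_BUSY)
      std::fprintf(stderr, "write lock contention: SQLITE_BUSY\n");

    sqlite3_close(db);
    return 0;
  }

With 10 threads all writing, this serialization (plus whatever retry/backoff happens around SQLITE_BUSY) is the most likely cause of the collapse to ~1.4 ops/sec.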
RocksDB (used in AnuDB) supports:
Fine-grained locking
Concurrent readers/writers
Better parallelism using LSM-tree architecture
This explains why AnuDB significantly outperforms SQLite under threaded workloads.
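As a rough illustration on the RocksDB side (paths and keys here are invented, and this is a plain RocksDB sketch rather than AnuDB code), a single rocksdb::DB handle can be shared across threads that all call Put() concurrently, with no external lock:

  // Rough sketch: rocksdb::DB is thread-safe, so 10 writer threads can share
  // one handle. Writes go to an in-memory memtable and are flushed/compacted
  // by background threads (the LSM-tree behavior mentioned above).
  // (Build roughly with: g++ -std=c++17 demo.cc -lrocksdb -lpthread, plus
  //  whatever compression libs your RocksDB build needs.)
  #include <rocksdb/db.h>
  #include <string>
  #include <thread>
  #include <vector>

  int main() {
    rocksdb::Options options;
    options.create_if_missing = true;

    rocksdb::DB* db = nullptr;
    if (!rocksdb::DB::Open(options, "/tmp/concurrency_demo", &db).ok()) return 1;

    std::vector<std::thread> writers;
    for (int t = 0; t < 10; ++t) {  // mirrors the 10-thread parallel test
      writers.emplace_back([db, t] {
        for (int i = 0; i < 1000; ++i) {
          db->Put(rocksdb::WriteOptions(),
                  "key:" + std::to_string(t) + ":" + std::to_string(i),
                  "value");
        }
      });
    }
    for (auto& w : writers) w.join();

    delete db;
    return 0;
  }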
Try It Yourself
Clone the repo and run the benchmark:
  git clone https://github.com/hash-anu/AnuDBBenchmark
  cd AnuDBBenchmark
  ./build.sh /path/to/AnuDB /path/to/sqlite
  ./benchmark
Results are saved to benchmark_results.csv.
When to Use AnuDB
Use AnuDB if:
You need embedded storage with high concurrency
You’re dealing with telemetry, sensor data, or parallel workloads
You want something lightweight and faster than SQLite under load
Stick with SQLite if:
You need SQL compatibility
You value mature ecosystem/tooling
Feedback Welcome
This is an early experiment. We’re actively developing AnuDB and would love feedback:
Is our benchmark fair?
Where could we optimize further?
Would this be useful in your embedded project?
First, RocksDB is built to function as a backend for high-throughput systems: it has a lot of high-complexity tuning parameters and background threads, and it is designed to work on bigger workloads (with more threads, obviously).
SQLite is built to be an easy option in "smaller" scenarios; in "larger" scenarios a common pattern is multiple SQLite databases (one per customer, for example).
Also, a dataset of 10,000 entries is too small to really matter for many more complicated scenarios (one can probably hold it all in memory and just use SQLite to persist it).
Does your document system handle indexing (or is there support for it)? An SQL user will often tune indexes (and the SQLite query planner will use properly set-up indexes). I'm evaluating RocksDB in a project, and from what I gathered it doesn't itself have a notion of indexes (but you can easily build them as separate column families).
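For what it's worth, a minimal sketch of that column-family approach (the database path, column family name, and key layout are invented for illustration): keep the secondary index in its own column family, keyed by indexed value plus primary key, and update it atomically with the document using a WriteBatch.

  // Hedged sketch of a secondary index in a separate column family
  // (illustrative names/paths; not AnuDB's implementation).
  #include <rocksdb/db.h>
  #include <rocksdb/write_batch.h>
  #include <string>
  #include <vector>

  int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    options.create_missing_column_families = true;

    std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
    cfs.emplace_back(rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions());
    cfs.emplace_back("idx_city", rocksdb::ColumnFamilyOptions());  // hypothetical index CF

    std::vector<rocksdb::ColumnFamilyHandle*> handles;
    rocksdb::DB* db = nullptr;
    if (!rocksdb::DB::Open(options, "/tmp/index_demo", cfs, &handles, &db).ok()) return 1;

    // Write the document and its index entry in one atomic batch, so the
    // index can never point at a document that was not written (or vice versa).
    rocksdb::WriteBatch batch;
    batch.Put(handles[0], "doc:42", R"({"name":"a","city":"Pune"})");
    batch.Put(handles[1], "Pune:doc:42", "");  // index key = indexed value + primary key
    db->Write(rocksdb::WriteOptions(), &batch);

    for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
    delete db;
    return 0;
  }

A lookup then becomes a prefix scan over the index column family (e.g. everything starting with "Pune:"), yielding the primary keys to fetch from the default column family.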
The version of your Raspberry Pi is not specified. I've used RPis for benchmarking, but the evolution of the CPUs (and, in later versions, of peripherals like NVMe disk support) makes each version behave differently, both from the others and from "real" machines. (I was able to use that to my advantage, since the benchmarking differences between versions gave information about the relative importance of code-generation strategies for newer vs. older CPUs.)
MOST importantly, if you want to gain traction for your project you should _focus_ on the use case that motivated you to build it (the entire MQTT thing mentioned on the GH page seems to point in some other direction) rather than doing a half-baked comparison to SQLite (which I guess you maybe used before, but which wasn't really suited to your use case).
A few clarifications:
Indexing: AnuDB supports indexing via an explicit API — the user needs to define indexes manually. Internally, it's backed by RocksDB and uses a prefix extractor to optimize lookups. While it's not a full SQL-style index planner, it's efficient for our document-store model (a rough sketch of this kind of prefix lookup appears after these points).
Parallel Writes: SQLite does well in many embedded use cases, but it struggles with highly parallel writes — even in WAL mode. RocksDB (and thus AnuDB) is built for concurrency and handles write-heavy parallel loads much better. That shows in our "Parallel" test.
Dataset Size: Agreed, 10K entries is small. We kept it modest to demonstrate behavior under low-latency edge conditions, but we’re planning larger-scale tests in follow-ups.
Hardware: The test was done on a Raspberry Pi 2 with 1GB RAM and microSD storage. Thanks for pointing out that CPU/peripheral differences could affect results — that’s something we’ll document better in future benchmarks.
Use Case Focus: You're spot on about the importance of use-case-driven evaluation. AnuDB was motivated by the need for a lightweight document database for IoT and edge scenarios with MQTT support — not as a direct SQLite replacement, but as an alternative where document flexibility and concurrent ingestion matter.
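On the indexing point above, here's a rough sketch of the general RocksDB prefix-lookup technique (the key layout and paths are invented; this is not the actual AnuDB API): with a prefix extractor configured, RocksDB can apply prefix-seek optimizations (such as prefix bloom filters, where enabled), and a lookup becomes a short iterator scan over keys sharing a prefix.

  // Rough sketch of prefix-based lookups in RocksDB (not AnuDB's key format).
  #include <rocksdb/db.h>
  #include <rocksdb/slice_transform.h>
  #include <iostream>
  #include <memory>

  int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    // "idx:city:" is 9 bytes; a fixed-length extractor is one simple choice
    // for enabling RocksDB's prefix-seek optimizations.
    options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(9));

    rocksdb::DB* db = nullptr;
    if (!rocksdb::DB::Open(options, "/tmp/prefix_demo", &db).ok()) return 1;

    db->Put(rocksdb::WriteOptions(), "idx:city:Pune:doc:42", "");
    db->Put(rocksdb::WriteOptions(), "idx:city:Pune:doc:77", "");
    db->Put(rocksdb::WriteOptions(), "idx:city:Oslo:doc:13", "");

    // Scan every entry for one city, stopping when keys stop sharing the prefix.
    const rocksdb::Slice prefix("idx:city:Pune:");
    std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(rocksdb::ReadOptions()));
    for (it->Seek(prefix); it->Valid() && it->key().starts_with(prefix); it->Next())
      std::cout << it->key().ToString() << "\n";

    delete db;
    return 0;
  }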
And the 279x number is for parallel deletes? If you have to do that many parallel deletes it's probably a maintenance operation and you might as well copy the remaining data out, drop the db and recreate it...
There goes any credibility.