Show HN: AnuDB – Built on RocksDB, 279x Faster Than SQLite in Parallel Workloads

22 points by hashmak_jsn | 11 comments | 5/6/2025, 10:13:31 AM | github.com
We recently benchmarked AnuDB, a lightweight embedded database built on top of RocksDB, against SQLite on a Raspberry Pi. The performance difference, especially for parallel operations, was dramatic.

GitHub Links:

AnuDBBenchmark: https://github.com/hash-anu/AnuDBBenchmark

AnuDB (Core): https://github.com/hash-anu/AnuDB

Why Compare AnuDB and SQLite?

SQLite is excellent for many embedded use cases — it’s simple, battle-tested, and extremely reliable. But it doesn't scale well when parallelism or concurrent writes are required.

AnuDB, built over RocksDB, offers better concurrency out of the box. We wanted to measure the practical differences using real benchmarks on a Raspberry Pi.

Benchmark Setup

Platform: Raspberry Pi 2 (ARMv7)

Benchmarked operations: Insert, Query, Update, Delete, Parallel

AnuDB uses RocksDB and MsgPack serialization

SQLite stores the same records as plain rows (no serialization layer), with WAL mode enabled for fairness
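
For reference, enabling WAL is a single pragma. Below is a minimal sketch of the SQLite side of such a setup (illustrative only, not the actual AnuDBBenchmark code; the file name and database path are made up):

    // wal_setup.cpp - illustrative sketch, not the actual benchmark code.
    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open("bench.db", &db) != SQLITE_OK) {
            std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        // Switch journaling to write-ahead logging so readers don't block the single writer.
        char* err = nullptr;
        if (sqlite3_exec(db, "PRAGMA journal_mode=WAL;", nullptr, nullptr, &err) != SQLITE_OK) {
            std::fprintf(stderr, "pragma failed: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }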

Key Results

Insert: AnuDB 448 ops/sec, SQLite 838 ops/sec
Query: AnuDB 54 ops/sec, SQLite 30 ops/sec
Update: AnuDB 408 ops/sec, SQLite 600 ops/sec
Delete: AnuDB 555 ops/sec, SQLite 1942 ops/sec
Parallel (10 threads): AnuDB 412 ops/sec, SQLite 1.4 ops/sec

In the parallel case, AnuDB was over 279x faster than SQLite.

Why the Huge Parallel Difference?

SQLite, even in WAL mode, serializes writes behind a single database-level lock: WAL allows readers to run alongside one writer, but concurrent writers still queue. It’s not designed for high-concurrency scenarios.

RocksDB (used in AnuDB) supports:

Fine-grained locking

Concurrent readers/writers

Better parallelism using LSM-tree architecture

This explains why AnuDB significantly outperforms SQLite under threaded workloads.
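
As a concrete illustration of that concurrency, here is a minimal multi-threaded write sketch against plain RocksDB (not AnuDB's actual code; the path, key layout, and thread count are assumptions):

    // concurrent_put.cpp - plain RocksDB sketch, not AnuDB source.
    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <string>
    #include <thread>
    #include <vector>

    int main() {
        rocksdb::Options options;
        options.create_if_missing = true;
        options.IncreaseParallelism();  // more background threads for flushes/compactions
        // allow_concurrent_memtable_write defaults to true with the skiplist memtable,
        // so multiple threads can insert without funneling through one global lock.

        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/concurrency_demo", &db);
        if (!s.ok()) return 1;

        std::vector<std::thread> writers;
        for (int t = 0; t < 10; ++t) {
            writers.emplace_back([db, t] {
                for (int i = 0; i < 1000; ++i) {
                    std::string key = "doc:" + std::to_string(t) + ":" + std::to_string(i);
                    db->Put(rocksdb::WriteOptions(), key, "{\"sensor\":42}");
                }
            });
        }
        for (auto& w : writers) w.join();

        delete db;
        return 0;
    }

All ten writers share one DB handle and proceed in parallel, which is the behavior the "Parallel (10 threads)" test exercises; ten SQLite connections doing the same work would contend for the single write lock.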

Try It Yourself

Clone the repo:

    git clone https://github.com/hash-anu/AnuDBBenchmark
    cd AnuDBBenchmark
    ./build.sh /path/to/AnuDB /path/to/sqlite
    ./benchmark

Results are saved to benchmark_results.csv.

When to Use AnuDB

Use AnuDB if:

You need embedded storage with high concurrency

You’re dealing with telemetry, sensor data, or parallel workloads

You want something lightweight and faster than SQLite under load

Stick with SQLite if:

You need SQL compatibility

You value mature ecosystem/tooling

Feedback Welcome

This is an early experiment. We’re actively developing AnuDB and would love feedback:

Is our benchmark fair?

Where could we optimize further?

Would this be useful in your embedded project?

Comments (11)

whizzter · 4h ago
Is your benchmark fair? No, it's really an apples to oranges comparison in many cases.

First, RocksDB is built to function as a backend for high-throughput systems: it has a lot of complex tuning parameters and background threads, and is built to work on bigger workloads (with more threads, obviously).

SQLite is built to be an easy option in "smaller" scenarios; in "larger" deployments a common pattern is multiple SQLite databases (one per customer, for example).

Also, a dataset of 10,000 entries is too small to really matter for many more complicated scenarios (one can probably hold it all in memory and just use SQLite to store things).

Does your document system handle indexing (or is there support for it)? An SQL user will often tune indexes (and the SQLite query planner will use properly set up indexes). I'm evaluating RocksDB in a project, and from what I gathered it has no notion of secondary indexes itself (but you can easily build them as separate column families).

The version of your Raspberry Pi is not specified. I've used RPis for benchmarking, but the evolution of CPUs (and, in later versions, of peripherals like NVMe disk support) makes each version behave differently, both from the others and from "real" machines (I was able to use that to my advantage, since the benchmarking differences between versions gave information about the relative importance of code-generation strategies for newer vs. older CPUs).

MOST importantly, if you want to gain traction for your project you should _focus_ on the use case that motivated you to build it (the entire MQTT thing mentioned on the GH page seems to point in some other direction) rather than doing a half-baked comparison to SQLite (which I guess you maybe used before but which wasn't really suited for your use case).

hashmak_jsn · 3h ago
Thanks for the thoughtful and constructive feedback — you're absolutely right that this isn't a strict apples-to-apples comparison. Our aim was to evaluate practical performance in edge workloads, especially for MQTT-style use cases on constrained devices like Raspberry Pi.

A few clarifications:

Indexing: AnuDB supports indexing via an explicit API — the user needs to define indexes manually. Internally, it's backed by RocksDB and uses a prefix extractor to optimize lookups. While it's not a full SQL-style index planner, it's efficient for our document-store model (a rough RocksDB-level sketch of the prefix idea follows these clarifications).

Parallel Writes: SQLite does well in many embedded use cases, but it struggles with highly parallel writes — even in WAL mode. RocksDB (and thus AnuDB) is built for concurrency and handles write-heavy parallel loads much better. That shows in our "Parallel" test.

Dataset Size: Agreed, 10K entries is small. We kept it modest to demonstrate behavior under low-latency edge conditions, but we’re planning larger-scale tests in follow-ups.

Hardware: The test was done on a Raspberry Pi 2 with 1GB RAM and microSD storage. Thanks for pointing out that CPU/peripheral differences could affect results — that’s something we’ll document better in future benchmarks.

Use Case Focus: You're spot on about the importance of use-case-driven evaluation. AnuDB was motivated by the need for a lightweight document database for IoT and edge scenarios with MQTT support — not as a direct SQLite replacement, but as an alternative where document flexibility and concurrent ingestion matter.
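
As referenced in the indexing point above, here is a rough sketch of the prefix-extractor idea at the plain RocksDB level (not AnuDB's actual index API; the key layout and 8-byte prefix length are assumptions for illustration):

    // prefix_lookup.cpp - plain RocksDB sketch of prefix-based lookups, not AnuDB's API.
    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <rocksdb/slice_transform.h>
    #include <iostream>
    #include <memory>

    int main() {
        rocksdb::Options options;
        options.create_if_missing = true;
        // Treat the first 8 bytes of every key as its prefix (e.g. "sensor01").
        options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(8));

        rocksdb::DB* db = nullptr;
        if (!rocksdb::DB::Open(options, "/tmp/prefix_demo", &db).ok()) return 1;

        db->Put(rocksdb::WriteOptions(), "sensor01:0001", "{\"temp\":21}");
        db->Put(rocksdb::WriteOptions(), "sensor01:0002", "{\"temp\":22}");
        db->Put(rocksdb::WriteOptions(), "sensor02:0001", "{\"temp\":30}");

        // Iterate only the keys sharing the "sensor01" prefix.
        rocksdb::ReadOptions ro;
        ro.prefix_same_as_start = true;
        std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
        for (it->Seek("sensor01"); it->Valid(); it->Next()) {
            std::cout << it->key().ToString() << " -> " << it->value().ToString() << "\n";
        }

        delete db;
        return 0;
    }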

winkeltripel · 4h ago
So it's only much faster at delete, which is the least used operation? Am I reading the results correctly?

hashmak_jsn · 4h ago
SQLite is faster at insert/update/delete operations.

fidotron · 4h ago
You are talking a lot about performance but using JSON everywhere. You would be much better off using protobuf or flatbuffers for this.

hashmak_jsn · 3h ago
Sure, that would be the next item. Thanks for the suggestion.

nottorp · 4h ago
Are they comparing a NoSQL database with no search/filtering with an SQL database that has those operations, by any chance?

And the 279x number is for parallel deletes? If you have to do that many parallel deletes it's probably a maintenance operation and you might as well copy the remaining data out, drop the db and recreate it...

There goes any credibility.

hashmak_jsn · 2h ago
Good point — just to clarify, the "279x" number isn’t about parallel deletes. The parallel test runs a mix of operations (insert, query, update, delete) across multiple threads. Each thread works on its own document range to simulate a real-world concurrent workload (like telemetry ingestion).

SQLite (even in WAL mode) hits write lock contention under concurrency, while AnuDB (using RocksDB) handles concurrent writes better due to its design.

Also, AnuDB supports indexing via an API using RocksDB's prefix extractor, so it’s not just a key-value store — basic filtering is supported.

Appreciate the feedback — will revise the post to make this clearer!

notpushkin · 4h ago
But is it web scale?

geodel · 4h ago
It's anuscale.

hashmak_jsn · 3h ago
yeah anu is everywhere