HorizonDB, a geocoding engine in Rust that replaces Elasticsearch

85 j_kao 26 8/8/2025, 12:57:50 PM radar.com ↗

Comments (26)

brunohaid · 1h ago
Bit thin on details, and it doesn't look like they'll open source it, but if someone clicked the post because they're looking for their own "replace ES" solution:

Both https://typesense.org/ and https://duckdb.org/ (with their spatial plugin) are excellent geo-performance-wise; the latter now seems really production-ready, especially when the data doesn't change that often. Both are fully open source, including clustered/sharded setups.

No affiliation at all, just really happy camper.

sureglymop · 34m ago
These are great. I am eternally grateful that projects like this are open source; I do, however, find it hard to integrate them into my own projects.

A while ago I tried to create something that had duckdb + its spatial and SQLite extensions statically linked and compiled in. I realized I was a bit in over my head when my build failed because both of them required SQLite symbols, but from different versions.

j_kao · 26m ago
These are great projects, we use DuckDB to inspect our data lake and for quick munging.

We will have some more blog posts in the future describing different parts of the system in more detail. We were worried too much density in a single post would make it hard to read.

jjordan · 40m ago
Typesense is an absolute beast, and it has a pretty great dev experience to boot.
maelito · 2h ago
I wonder if this could help Photon, the open-source ElasticSearch/OpenSearch-based search engine for OSM data.

It's a mini-revolution in the OSM world, where most apps have a bad search experience in which typos aren't handled.

https://github.com/komoot/photon
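For context on what "handling typos" involves mechanically, here's a minimal sketch of edit-distance-based fuzzy matching in plain Python (hypothetical place names; Photon's actual implementation is far more sophisticated, with n-gram indexing and ranking):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

def fuzzy_lookup(query: str, names: list[str], max_edits: int = 2) -> list[str]:
    """Return candidate names within max_edits typos of the query, best first."""
    q = query.lower()
    scored = [(edit_distance(q, n.lower()), n) for n in names]
    return [n for d, n in sorted(scored) if d <= max_edits]

places = ["Berlin", "Bergen", "Dublin", "Turin"]
print(fuzzy_lookup("Berlni", places))  # the transposed "Berlni" still finds Berlin
```

A brute-force scan like this is obviously too slow for planet-scale data; real engines prune candidates with an inverted or n-gram index first, then score the survivors.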

softwaredoug · 2h ago
It’s interesting, as someone in the search space, how many companies are aiming to “replace Elasticsearch”.
j_kao · 35m ago
Author here! We were really motivated to turn a "distributed systems" problem into a "monolithic system" one from an operations perspective, and felt this was achievable with current hardware, which is why we went with in-process, embedded storage systems like RocksDB and Tantivy.

Memory-mapping lets us get pretty far, even with global coverage. We are always able to add more RAM, especially since we're running in the cloud.

Backfills and data updates are also trivial and can be performed in an "immutable" way without having to reason about what's currently in ES/Mongo: we just re-index everything with the same binary on a separate node and ship the final assets to S3.
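To illustrate the build-once/mmap-and-serve pattern described above, here's a minimal sketch using only the Python standard library (a toy fixed-width record format, not HorizonDB's actual on-disk layout): the build step writes a sorted, immutable file, and the serve step memory-maps it and binary-searches, letting the OS page cache hold the hot pages in RAM.

```python
import mmap
import os
import struct
import tempfile

# Toy format: sorted fixed-width records of (key, value), both 8-byte big-endian.
RECORD = struct.Struct(">QQ")

def build_asset(path: str, pairs: dict[int, int]) -> None:
    """Build step: write keys in sorted order so the reader can binary-search."""
    with open(path, "wb") as f:
        for key in sorted(pairs):
            f.write(RECORD.pack(key, pairs[key]))

class AssetReader:
    """Serve step: mmap the immutable file; lookups touch only a few pages."""
    def __init__(self, path: str):
        self._f = open(path, "rb")
        self._mm = mmap.mmap(self._f.fileno(), 0, access=mmap.ACCESS_READ)
        self._n = len(self._mm) // RECORD.size

    def _key_at(self, i: int) -> int:
        return RECORD.unpack_from(self._mm, i * RECORD.size)[0]

    def get(self, key: int):
        lo, hi = 0, self._n
        while lo < hi:
            mid = (lo + hi) // 2
            k = self._key_at(mid)
            if k < key:
                lo = mid + 1
            elif k > key:
                hi = mid
            else:
                return RECORD.unpack_from(self._mm, mid * RECORD.size)[1]
        return None  # key absent

path = os.path.join(tempfile.mkdtemp(), "asset.bin")
build_asset(path, {42: 1000, 7: 99, 500: 3})
reader = AssetReader(path)
print(reader.get(42), reader.get(8))  # 1000 None
```

Because the file is never mutated in place, a backfill is just building a fresh file on another node and atomically swapping which file the server maps, which matches the "ship final assets to S3" workflow.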

mikeocool · 1h ago
In my experience, the care and feeding that goes into an Elasticsearch cluster is often substantially higher than that involved in the primary data store, which has always struck me as a little odd (particularly in cases where the primary data store is an RDBMS).

I'd be very happy to use simpler, more bulletproof solutions with a subset of ES's features for different use cases.

dewey · 53m ago
To add another data point: after working with ES in production for the past 10 years, I have to say that ES has never given us any headaches. We've had issues with ScyllaDB, Redis, etc., but ES just chugs along and works.

The one issue I remember: on ES 5 we had a problem early on where it regularly went down; it turned out that some _very long_ input was being passed into the search by a scraper, which killed the cluster.

itpragmatik · 20m ago
How many clusters, how many indexes, and how many documents per index? Do you use self-hosted ES or AWS-managed OpenSearch?
dewey · 13m ago
12 nodes, 200 million documents per node, and a very high volume of search and indexing operations. Self-hosted ES on GCP-managed Kubernetes.
everfrustrated · 29m ago
How big is the team that looks after it?
dewey · 18m ago
Nobody is actively looking after it. We have good alerting + monitoring, and if there's an alert, like a node going down because of some Kubernetes node shuffling, or a version upgrade that has to be performed, one of our few infra people will handle it.

It's really not something that needs much attention in my experience.

mexxixan · 21m ago
Would love to know how they scaled it. Also, what happens when you lose the machine and the local DB? I imagine there are backups, but they should have mentioned it. Even with backups, how do you ensure zero data loss?
trimbo · 1h ago
This article is lacking detail. For example: how is the data sharded, how much time passes between indexing and serving, and how does it handle node failure and other distributed-systems concerns? How does the latency compare? Etc.
pm90 · 1h ago
Slightly meta, but I find it's a good sign that we're back to designing and blogging about in-house data storage systems/query engines again. There was an explosion of these in the 2010s, which seemed to slow down/refocus on AI recently.
8n4vidtmkvmk · 1h ago
Is it good? What's left to innovate on in this space? I don't really want experimental data stores. Give me something rock solid.
cfors · 45m ago
I don't disagree that rock solid is a good choice, but there is a ton of innovation necessary for data stores.

Especially in the context of embedding search, which this article is also tackling. We need databases that can efficiently store/query high-dimensional embeddings and handle the nuances of real-world applications, such as filtered ANN. There is a ton of innovation happening in this space, and it's crucial to powering the next-generation architectures of just about every company out there. At this point, data stores are becoming a bottleneck for serving embedding search, and I cannot overstate how important advancements here are for enabling these solutions. This is why there is an explosion of vector databases right now.

This article is a great example of where the established data-store providers are not offering the solutions companies need right now, and there is so much room for improvement in this space.
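To make "filtered ANN" concrete: it means applying a metadata predicate during vector search rather than after it. A minimal exact (brute-force, pre-filtered) version in plain Python, with hypothetical documents; production systems do this inside an approximate index like HNSW, where filter-aware traversal is the hard part:

```python
import math

def cosine_sim(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filtered_search(query, docs, predicate, k=2):
    """Exact filtered nearest-neighbor: apply the metadata predicate first,
    then rank the surviving docs by cosine similarity to the query vector."""
    candidates = [d for d in docs if predicate(d["meta"])]
    candidates.sort(key=lambda d: cosine_sim(query, d["vec"]), reverse=True)
    return [d["id"] for d in candidates[:k]]

docs = [
    {"id": "a", "vec": [1.0, 0.0], "meta": {"country": "US"}},
    {"id": "b", "vec": [0.9, 0.1], "meta": {"country": "DE"}},
    {"id": "c", "vec": [0.0, 1.0], "meta": {"country": "US"}},
]
# Only US docs are eligible; "b" is closer to the query but gets filtered out.
print(filtered_search([1.0, 0.0], docs, lambda m: m["country"] == "US"))
```

The tension the comment alludes to shows up when the filter is very selective: pre-filtering plus exact search gets slow at scale, while post-filtering an ANN result set can return too few (or zero) hits, which is exactly what filter-aware vector indexes try to solve.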

weego · 27m ago
Agreed. The only caveat to treating that as a global rule is: "at scale in a particular niche, even an excellent generalist platform might not be good enough."

But then the follow-on question has to be asked: "Am I really suffering the same problems that a niche, already-scaled business is suffering?"

A question that is relevant to all decision making. I'm looking at you, people who use the entire react ecosystem to deploy a blog page.

jothirams · 1h ago
Is HorizonDB publicly available for us to try as well?
sophia01 · 2h ago
They're not open sourcing it though?
j_kao · 32m ago
It's a bit difficult at the moment, given that we have a lot of proprietary data and a lot of the logic follows from it. I'm hoping we can get it to a state where it can index and serve OSM data, but that is going to take some time.

That being said, we are currently working on open-sourcing our Google S2 Rust bindings. S2 is a geo-hashing library that makes it very easy to write a reverse geocoder, whether from a point-in-polygon or polygon-intersection perspective.
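The point-in-polygon building block mentioned above can be sketched with the standard ray-casting test (plain Python, toy polygon; S2's actual cell-covering machinery is much more involved, since it works on the sphere and pre-indexes polygons into hierarchical cells):

```python
def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: cast a ray to the right of (x, y) and count how many
    polygon edges it crosses. An odd count means the point is inside.
    Points exactly on an edge are implementation-defined, as in most
    simple versions of this test."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap around to close the polygon
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square), point_in_polygon(5, 2, square))  # True False
```

In an S2-style reverse geocoder, a test like this only runs as the final refinement step: the query point is first mapped to a cell ID, and the index narrows the candidates to the handful of polygons whose cell coverings contain that cell.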

pbowyer · 2h ago
Doesn't sound like it, but it's a nice writeup of the tools they stitched together. For someone to copy and open source... hopefully :)
cicloid · 1h ago
Tempted, especially to switch to H3 instead of S2… I prototyped a similar solution a couple of weeks ago, so I could probably do a second pass.
reactordev · 1h ago
I mean, anything could replace Elasticsearch, but can it actually?

It sounds like they had the wrong architecture to start with, and they built a database to handle it. Kudos. Most would have just thrown cache at it or fine-tuned a read-only PostGIS database for the geo lookups.

Without benchmarks, these are just bold claims we'll have to take on faith.

kosolam · 56m ago
Side note 1: ES can also be embedded in your app (on the JVM). Note 2: I have actually used RocksDB to solve many use cases, and it's quite powerful and very performant. If you take anything from this post, take that: it's open source and a very solid building block. Note 3: I would like to test-drive Quickwit as an ES replacement. Haven't found the time yet.