Show HN: GlassFlow – OSS streaming dedup and joins from Kafka to ClickHouse

45 points by super_ar on 5/11/2025, 1:33:54 PM | 11 comments | github.com
Hi HN! We are Ashish and Armend, founders of GlassFlow. We just launched our open-source streaming ETL that deduplicates and joins Kafka streams before ingesting them into ClickHouse: https://github.com/glassflow/clickhouse-etl

Why we built this: Dedup with batch data is straightforward. You load the data into a temporary table, find only the latest version of each record through hashes or keys, and keep it. After that, you move the clean data into your main table. But have you tried this with streaming data? Users of our previous product were running real-time analytics pipelines from Kafka to ClickHouse and noticed that their analyses were wrong due to duplicates. The source systems produced duplicates as they ingested similar user data from CRMs, shop systems and click streams.
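For illustration, here is a minimal sketch of that batch pattern in Python using clickhouse-connect; the table and column names (events, events_staging, event_id, updated_at) are hypothetical:

    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost")

    # Keep only the newest version of each record from the staging table
    # and move the clean rows into the main table.
    client.command("""
        INSERT INTO events
        SELECT * FROM events_staging
        ORDER BY updated_at DESC
        LIMIT 1 BY event_id
    """)

    # Reset the staging table for the next batch.
    client.command("TRUNCATE TABLE events_staging")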

We wanted to solve this issue for them with the existing ClickHouse options, but ClickHouse's ReplacingMergeTree has an uncontrollable background merging process. This means the new data is in the system, but you never know when the merge will finish, and until then your queries return incorrect results.

We looked into using FINAL but weren't happy with the speed for real-time workloads.
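For context, FINAL forces ClickHouse to merge row versions at read time, so a query like the sketch below (table name hypothetical) returns deduplicated results but pays the merge cost on every read:

    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost")

    # FINAL merges row versions at read time: results are deduplicated,
    # but every query pays the merge cost, which hurts real-time workloads.
    count = client.query("SELECT count() FROM events FINAL").result_rows[0][0]
    print(count)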

We tried Flink, but there is too much overhead in managing Java Flink jobs, and a self-built solution would have required us to set up and maintain a state store, possibly a very large one (one entry per unique key), to keep track of whether we had already encountered a record. And if your dedupe service fails, you need to rehydrate that state before processing new records. That would have been too much maintenance for us.

We decided to solve it by building a new product and are excited to share it with you.

The key difference is that the streams are deduplicated before ingesting to ClickHouse. So, ClickHouse always has clean data and less load, eliminating the risk of wrong results. We want more people to benefit from it and decided to open-source it (Apache-2.0).

Main components:

- Streaming deduplication: You define the deduplication key and a time window (up to 7 days), and it handles the checks in real time to avoid duplicates before they hit ClickHouse. The state store is built in (see the dedup sketch after this list).

- Temporal stream joins: You can join two Kafka streams on the fly with a few config inputs. You set the join key, choose a time window (up to 7 days), and you're good (see the join sketch after this list).

- Built-in Kafka source connector: There is no need to build custom consumers or manage polling logic. Just point it at your Kafka cluster, and it auto-subscribes to the topics you define. Payloads are parsed as JSON by default, so you get structured data immediately. Under the hood, we chose NATS to keep it lightweight and low-latency.

- ClickHouse sink: Data gets pushed into ClickHouse through a native connector optimized for performance. You can tweak batch sizes and flush intervals to match your throughput needs. It handles retries automatically, so you don't lose data on transient failures (see the sink sketch after this list).
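To make the dedup component concrete, here is a minimal sketch of time-windowed deduplication (illustrative only, not GlassFlow's actual implementation): remember each key for the window duration and drop any record whose key was already seen.

    import time

    class WindowedDeduper:
        """Remember each key for the window duration; drop repeats."""

        def __init__(self, window_seconds: float):
            self.window = window_seconds
            self.seen: dict[str, float] = {}  # key -> first-seen timestamp

        def is_duplicate(self, key: str) -> bool:
            now = time.monotonic()
            # Full-scan eviction keeps the sketch short; a real state store
            # would evict incrementally and persist across restarts.
            self.seen = {k: t for k, t in self.seen.items() if now - t < self.window}
            if key in self.seen:
                return True
            self.seen[key] = now
            return False

    deduper = WindowedDeduper(window_seconds=7 * 24 * 3600)  # up to 7 days
    for event in [{"event_id": "a"}, {"event_id": "a"}, {"event_id": "b"}]:
        if not deduper.is_duplicate(event["event_id"]):
            print("forward to ClickHouse:", event)  # "a" forwarded only once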
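The temporal join works on a similar principle. A hedged sketch, again with illustrative names and logic rather than GlassFlow's own: buffer events from each stream by join key and emit a joined record when the other side arrives within the window.

    import time

    WINDOW = 3600.0  # join window in seconds; GlassFlow allows up to 7 days

    left_buf: dict[str, tuple[float, dict]] = {}
    right_buf: dict[str, tuple[float, dict]] = {}

    def on_event(side: str, key: str, event: dict) -> None:
        now = time.monotonic()
        mine, other = (left_buf, right_buf) if side == "left" else (right_buf, left_buf)
        match = other.pop(key, None)
        if match and now - match[0] <= WINDOW:
            print("joined:", match[1], event)
        else:
            mine[key] = (now, event)  # buffer until the other stream catches up

    on_event("left", "user-1", {"name": "Ada"})
    on_event("right", "user-1", {"clicked": "checkout"})  # emits the joined pair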
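And a sketch of the sink pattern (batch sizes, flush intervals, retries with backoff); the parameter names and the deduplicated_rows() generator are illustrative, not GlassFlow's API:

    import time
    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost")

    BATCH_SIZE = 1000      # flush when this many rows are buffered...
    FLUSH_INTERVAL = 1.0   # ...or after this many seconds
    MAX_RETRIES = 5

    def flush(rows: list) -> None:
        for attempt in range(MAX_RETRIES):
            try:
                client.insert("events", rows, column_names=["event_id", "payload"])
                return
            except Exception:
                time.sleep(2 ** attempt)  # back off on transient failures
        raise RuntimeError("insert failed after retries")

    def deduplicated_rows():
        # Stand-in for the deduplicated stream; yields (event_id, payload) rows.
        for i in range(2500):
            yield [f"evt-{i}", "{}"]

    buffer: list = []
    last_flush = time.monotonic()
    for row in deduplicated_rows():
        buffer.append(row)
        if len(buffer) >= BATCH_SIZE or time.monotonic() - last_flush >= FLUSH_INTERVAL:
            flush(buffer)
            buffer, last_flush = [], time.monotonic()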

We'd love to hear your feedback and to know whether you've solved this nicely with existing tools. Thanks for reading!

Comments (11)

hodgesrm · 1h ago
How is this better than using ReplacingMergeTree in ClickHouse?

RMT dedups automatically, albeit with a potential cost at read time and extra work to design the schema for performance. The latter requires knowledge of the application to do correctly. You need to ensure that keys always land in the same partition, or dedup becomes incredibly expensive for large tables. These are real issues, to be sure, but they have the advantage that the behavior is relatively easy to understand.

Edit: clarity

brap · 13m ago
Just wanna say I dig the design. In-house or outsourced?
caust1c · 1h ago
How does the deduplication itself work? The blog didn't have many details.

I'm curious because it's no small feat to do scalable deduplication in any system. You have to worry about network latencies if your deduplication mechanism is not on localhost, the partitioning/sharding of data in the source streams, and handling failures when writing to the destination, all of which can cripple throughput.

I helped maintain the Segmentio deduplication pipeline so I tend to be somewhat skeptical of dedupe systems that are light on details.

https://www.glassflow.dev/blog/Part-5-How-GlassFlow-will-sol...

https://segment.com/blog/exactly-once-delivery/

saisrirampur · 1h ago
Neat project! Quick question: will this work only if the entire row is a duplicate? Or even if just a set of columns (e.g. the primary key) conflicts, with a guarantee that only the latest version of the conflicting row is present? I'm assuming the former, because you are deduping before data is ingested into ClickHouse. I could be missing something; just wanted to confirm.

- Sai from ClickHouse

super_ar · 30m ago
Thanks, Sai! Great question. The deduplication works based on the user-defined key, not the entire row. You can specify which field (e.g. a primary key like event_id) to use as the deduplication key. Within a defined time window, GlassFlow guarantees that only the first event with a given key will be forwarded to ClickHouse. Subsequent duplicates are rejected. Our idea was to keep ClickHouse as clean as possible.
maxboone · 3h ago
Very cool stuff, good luck!

I didn't quickly find this in the documentation, but given that you're using the NATS Kafka Bridge, would it be a lot of work to configure streaming from NATS directly?

ashishbagri · 2h ago
Yes, it would be easy to configure the tool to stream directly from NATS and skip Kafka completely. The reason we started with a managed Kafka connector (via the NATS Kafka Bridge) is that most of the early users sending data to ClickHouse in real time already had Kafka in place.
the_arun · 3h ago
Congratulations!!

Questions:

1. Why only to ClickHouse? Can’t we make it generic for any DB? Or is it a reference implementation for ClickHouse?

2. Similarly, why only from Kafka?

3. Any default load testing done?

ashishbagri · 2h ago
Thanks for taking a look! 1. The current implementation is just for ClickHouse, as we started with the segment of users building real-time analytics with ClickHouse in their stack. However, we learned along the way that streaming deduplication is a challenge for other destination databases as well. The architecture of our tool is designed so that we can extend the sinks and add additional destinations; we would just have to write the sink component specific to that database. Do you have a specific DB in mind that you would like to use?

2. Again, we started with Kafka because of our early target users. But the architecture inherently supports adding multiple sources, and we already have experience building source and sink connectors (from our previous project), so adding more sources would not be that challenging. Which source do you have in mind?

3. Yes. Running the tool locally in Docker on a MacBook Pro M2, it was able to handle 15k requests per second. We have built load-testing infrastructure and are happy to share the code if you are interested.

nine_k · 2h ago
AFAICT, there are native connector implementations for ClickHouse and Kafka, so it's plug-and-play with them specifically.

OTOH, for deduplication you mostly need timestamps and a good hash (like SHA-512); you don't need to store the actual messages, so a naive approach should work with basically any event source: all you need is to look up the hash, compare the timestamps, and skip the message if the hashes match. But you need to write your own ingestion and output logic, maybe emulating whatever protocol you're using if you want the whole thing to be a drop-in node in your pipeline.
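A naive sketch of that hash-based approach, under the stated assumptions (content hash plus timestamp, no message storage); all names here are illustrative:

    import hashlib
    import time

    WINDOW = 24 * 3600             # keep hashes for one day
    seen: dict[bytes, float] = {}  # SHA-512 digest -> last-seen timestamp

    def should_forward(message: bytes) -> bool:
        digest = hashlib.sha512(message).digest()
        now = time.time()
        last = seen.get(digest)
        seen[digest] = now
        # Skip the message if the same hash was seen inside the window.
        return last is None or now - last >= WINDOW

    print(should_forward(b'{"event_id": "a"}'))  # True: first sighting
    print(should_forward(b'{"event_id": "a"}'))  # False: duplicate in window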

ashishbagri · 2h ago
Yes, it's true that if you just want to send data from Kafka to ClickHouse and don't worry about duplicates, there are several ways to do it. We even covered them in a blog post -> https://www.glassflow.dev/blog/part-1-kafka-to-clickhouse-da...

However, the reason we started building this was that duplication is a sad reality in streaming pipelines, and the methods to clean up duplicates in ClickHouse are not good enough (again, covered extensively on our blog with references to the ClickHouse docs).

The approach you mention for deduplication is 100% accurate. The goal in building this tool is to provide a drop-in node for your pipeline (just as you said) with optimized source and sink connectors for reliability and durability.