Show HN: Pontoon – Open-source customer data syncs

36 points by alexdriedger | 9 comments | 8/1/2025, 3:28:53 PM | github.com
Hi HN,

We’re Alex and Kalan, the creators of Pontoon (https://github.com/pontoon-data/Pontoon). Pontoon is an open-source data export platform that makes it really easy to create data syncs and send data to your enterprise customers. Check out our demo here: https://app.storylane.io/share/onova7c23ai6 or try it out with docker: https://pontoon-data.github.io/Pontoon/getting-started/quick...

In our prior roles as data engineers, we both felt the pain of data APIs. We either had to spend weeks building data pipelines in house or pay a lot for ETL tools like Fivetran (https://www.fivetran.com/). However, a few companies offered data syncs directly into our data warehouse (eg. Redshift, Snowflake, etc.), and when that was an option, we always chose it. This led us to wonder: “Why don’t more companies offer data syncs?” It turns out that building reliable cross-cloud data syncs is difficult. That’s why we built Pontoon.

We designed Pontoon to be:

- Easy to deploy: we provide a single, self-contained Docker image for easy deployment, plus Docker Compose for larger workloads (https://pontoon-data.github.io/Pontoon/getting-started/quick...)

- Built for modern data warehouses: we support syncing to/from Snowflake, BigQuery, Redshift, and Postgres

- Cross-cloud: sync from BigQuery to Redshift, Snowflake to BigQuery, Postgres to Redshift, etc.

- Developer friendly: data syncs can also be created via the API

- Open source: Pontoon is free for anyone to use

Under the hood, we use Apache Arrow (https://arrow.apache.org/) to move data between sources and destinations. Arrow is very performant, and we wanted a library that could handle the scale of moving millions of records per minute.

In the short term, there are several improvements we want to make, like:

- Adding support for dbt models, to make adding data models easier

- UX improvements, like better error messages and monitoring of data syncs

- More sources and destinations (S3, GCS, Databricks, etc.)

- Improving the API for a more developer-friendly experience (it’s currently tied pretty closely to the front end)

In the long term, we want to make data sharing as easy as possible. As data engineers, we sometimes felt like second-class citizens with how we were told to get the data we needed - “just loop through this api 1000 times”, “you probably won’t get rate limited” (we did), “we can schedule an email to send you a csv every day”. We want to change how modern data sharing is done and make it simple for everyone.

Give it a try: https://github.com/pontoon-data/Pontoon. Cheers!

Comments (9)

conormccarter · 13h ago
Congrats on the launch! I'm one of the cofounders of Prequel (I saw our name in the feature grid - small nit: we do support self-hosting). This is definitely a problem worth solving - the market is still early and I'd bet the rising tide will help all of us convince more teams to support this capability. I'm not a lawyer, but the latest EU Data Act might even make it an obligation for some software vendors?

Maybe I can save you a headache: Snowflake is actively deprecating single-factor username/password auth in favor of key pair auth, so the faster you support that, the fewer mandatory migrations you'll be emailing users about.
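For reference, a minimal sketch of the key-pair setup (assumes the `cryptography` and `snowflake-connector-python` packages; account and user values are placeholders, not real credentials). The connector accepts the private key as DER-encoded bytes:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# In practice you'd load an existing key from disk; generating one
# here just keeps the example self-contained.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# snowflake-connector-python expects the private key as DER bytes
# (PKCS8, unencrypted here for brevity).
der_key = key.private_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

# import snowflake.connector
# conn = snowflake.connector.connect(
#     account="my_account",   # placeholder
#     user="SYNC_USER",       # placeholder
#     private_key=der_key,
# )
```

The public half of the key gets registered on the Snowflake user via `ALTER USER ... SET RSA_PUBLIC_KEY`.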

kalanm · 13h ago
Thanks! Kalan here. I appreciate the nit - PR is already merged. Definitely agreed on the market; it seems like there's a ton of opportunity. And thanks for the heads up re: Snowflake auth! We're actively working on that one, plus a few other auth modes for Redshift and BQ as well.
mdaniel · 3h ago
> Open source: Pontoon is free to use by anyone

And yet, the top level license file contains two licenses, a goddamn if statement, and finally a default to Elastic License which is not open source

Then, because of this trickery, lawyers get to make good money, because does "2. LICENSE file in the same directory as the work" combined with "4. Defaults to Elastic License 2.0 (ELv2)" mean that this zero-byte file NAMED LICENSE but devoid of any content matches clause 2 or 4?

https://github.com/pontoon-data/Pontoon/blob/v0.2.0/data-tra...

I hate cutesy licenses with all my heart. Just say you want to use Elastic, drop the Open Source pretense, and go back to selling software instead of trying to position yourself as "open"

No educated person is going to give you free commits in the current state so no need to be opaque in hopes of tricking them

hiatus · 11h ago
What does the row "First-class Data Products" in the comparison table entail?
alexdriedger · 10h ago
Great question. We think of data products as multi-tenant tables that are created with the intention of sending that data to a customer.

To compare with an ETL tool like Airbyte: it's really easy to sync a full table somewhere with Airbyte, but it gets more complicated with a multi-tenant table, where you want to sync only a subset of the data to each customer.

When you're setting up a data model with Pontoon, you just define which column has the customer id (we call it a tenant id) and it handles sending the right data to the right customer.
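The idea can be sketched in a few lines (illustrative only - sqlite3 and the `tenant_id` column name here stand in for the warehouse table and whatever column you'd point Pontoon at):

```python
import sqlite3

# Toy multi-tenant table: every row is tagged with the customer it
# belongs to via tenant_id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (tenant_id TEXT, event TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("acme", "login"), ("acme", "purchase"), ("globex", "login")],
)

def rows_for_tenant(tenant_id):
    # Each customer's sync only ever sees their slice of the table.
    return conn.execute(
        "SELECT event FROM events WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(rows_for_tenant("acme"))  # [('login',), ('purchase',)]
```

The sync engine applies that predicate per destination, so one shared table fans out into per-customer deliveries.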

a2128 · 11h ago
Not to be confused with Pontoon, a self-hostable translation platform made by Mozilla: https://github.com/mozilla/pontoon
alexdriedger · 10h ago
Another great self-hostable platform. I'm not sure where they got their name from though, translations don't have a connection to lakes like data does...
melson · 13h ago
Is it like an offline sync?
kalanm · 12h ago
Kalan here. Syncs are batch-based and scheduled, similar to conventional ETL / data pipelines