I don't really get it. If I'm understanding correctly, the goal of these CDC-to-Iceberg systems is to mirror, in near real-time, a Postgres table into an Iceberg table. The article states, repeatedly:
> In streaming CDC scenarios, however, you’d need to query Iceberg for the location on every delete: introducing random reads, latency, and drastically lowering throughput under high concurrency. On large tables, real-time performance is essentially impossible.
Let's consider the actual situation. There's a Postgres table that fits on whatever Postgres server is in use. It gets mirrored to Iceberg. Postgres is a full-fledged relational database and has indexes and such. Iceberg is not, although it can be scanned much faster than Postgres and queried by fancy Big Data tools (which, I agree, are really cool!). And, notably, there is no index mapping Postgres rows to Iceberg row positions.
But why isn't there? CDC is inherently stateful -- unless someone is going to build Merkle trees or similar to allow efficiently diffing table states (which would be awesome), the CDC process needs to keep enough state to know where it is. Maybe that state is O(1) in current implementations. But why not keep the entire mapping from Postgres rows to Iceberg positions? The Postgres table is roughly N rows times however wide a row is, and it fits on a Postgres server. The mapping needed would be about the size of a single index on that table. Why not store it somewhere? Updates to it will be faster than updates to the source Postgres table, so it will keep up. Is the problem that this is awkward to do in a "serverless" manner?
For extra fun, someone could rig up Postgres (via an extension or just some clever tables) so that the mapping is stored in Postgres itself. It would be, roughly, one small table holding CDC state and one moderate-size table per mirrored table storing the row-position mapping. It could live on the same server instance or a different one.
dkdcio · 2h ago
> Databricks recently spent $1 billion to acquire Neon, a startup building a serverless Postgres. Snowflake also spent about $250 million to acquire Crunchy Data, a veteran enterprise-grade Postgres provider.
Another chapter of the slowly-reimplementing-Vertica saga.
It's becoming clear that merge trees and compaction need to be addressed next, after delete vectors brought them onstage.
Vertica will actually look up the equality keys in a relevant projection if it exists, and then use the column values in the matching rows to equality-delete from the other projections; it's fairly good at avoiding table scans.
datadrivenangel · 3h ago
Change Data Capture is hard if you fall off the happy path, and data lakes won't save you.
ajd555 · 2h ago
> Postgres and Apache Iceberg are both mature systems
Apache Iceberg as mature? I mean, there's a lot of activity around it, but I remember that a year ago the Rust library didn't even have write capabilities. And it's not like the library is a client and there's an Iceberg server somewhere - the library literally is the whole product, interacting directly with the files in S3.
ajd555 · 2h ago
I suppose, in fairness, the Java library has been around for much longer
jsight · 1h ago
A lot of people will spend dozens of hours and tens of thousands of dollars of their company's money to avoid learning Java.
I'm not even sure if I'm joking. :)
kwillets · 57m ago
This is data engineering, where people spend thousands of dollars of their company's money to avoid learning SQL. The place with no Java is across the street (old Soviet joke, originally for meat/fish stores).
icedchai · 28m ago
Sad but true. Or they learn "something" about SQL but not about indexes, data types, joins, or even aggregate functions. I've seen Python horror shows that would `SELECT *` entire tables into lists of dicts, only to do the equivalent of a WHERE clause and a couple of sums.
pat2man · 42m ago
I mean, RisingWave, the solution mentioned in the article, is an entire startup rewriting things in Rust, mostly to avoid the larger Java solutions like Flink and Spark...
slt2021 · 1h ago
This use case of Postgres + CDC + Iceberg feels like the wrong architecture.
Postgres is for relational data, OK.
CDC is meant to capture changes and process only those changes (in isolation from all previous changes), not to reconstruct a snapshot of the original table by reimplementing Postgres's logic as merge-on-read.
Iceberg is columnar storage for large historical datasets for analytics; it's not meant for relational workloads, and certainly not for real-time.
It looks like they need a time-series-oriented DB, like Timescale, InfluxDB, etc.
nxm · 18m ago
The goal is data replication into the data lake, and it doesn't need to be real-time. CDC is just a means to an end.
It's kinda funny that the article doesn't mention that Databricks acquired Tabular, the Iceberg company, for a billion dollars: https://www.databricks.com/company/newsroom/press-releases/d...