Show HN: PgDog – Shard Postgres without extensions

242 points by levkk | 50 comments | 5/26/2025, 4:55:49 PM | github.com
Hey HN! Lev here, author of PgDog (https://github.com/pgdogdev/pgdog). I’m scaling our favorite database, PostgreSQL. PgDog is a new open source proxy, written in Rust, with first-class support for sharding — without changes to your app or needing database extensions.

Here’s a walkthrough of how it works: https://www.youtube.com/watch?v=y6sebczWZ-c

Running Postgres at scale is hard. Eventually, one primary isn’t enough, and at that point you need to split it up. Since there is currently no good tooling to do this, teams end up breaking their apps apart instead.

If you’re familiar with PgCat, my previous project, PgDog is its spiritual successor but with a fresh codebase and new goals. If not, PgCat is a pooler for Postgres also written in Rust.

So, what’s changed and why a new project? Cross-shard queries are supported out of the box. The new architecture is more flexible, completely asynchronous and supports manipulating the Postgres protocol at any stage of query execution. (Oh, and you guessed it — I adopted a dog. Still a cat person though!)

Not everything is working yet, but simple aggregates like max(), min(), count(*) and sum() are in. More complex functions like percentiles and average will require a bit more work. Sorting (i.e. ORDER BY) works, as long as the values are part of the result set, e.g.:

    SELECT id, email FROM users
    WHERE admin = true
    ORDER BY 1 DESC;
PgDog buffers and sorts the rows in memory before sending them to the client. Most of the time, the working set is small, so this is fine. For larger results, we'll need to build spill-to-disk, just like Postgres does, but for the OLTP workloads PgDog is targeting, we want to keep things fast. Sorting currently works for bigint, integer, and text/varchar. It's pretty straightforward to add the other data types; I just need to find the time and make sure to handle binary encoding correctly.
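The in-memory sort step can be sketched like this (a simplified illustration, not PgDog's actual code: real rows carry wire-format column values, and the hypothetical `Row` type here reduces them to a sort key plus one text column):

```rust
use std::cmp::Reverse;

/// Hypothetical simplification of a result row: (sort key, email column).
type Row = (i64, String);

/// Gather result sets from every shard and apply ORDER BY 1 DESC,
/// the way a proxy can buffer and sort in memory before replying.
fn merge_order_by_desc(shard_results: Vec<Vec<Row>>) -> Vec<Row> {
    let mut all: Vec<Row> = shard_results.into_iter().flatten().collect();
    // Descending sort on the first column, mirroring ORDER BY 1 DESC.
    all.sort_by_key(|(id, _)| Reverse(*id));
    all
}

fn main() {
    let shard0 = vec![(3, "c@x.com".into()), (1, "a@x.com".into())];
    let shard1 = vec![(2, "b@x.com".into())];
    let merged = merge_order_by_desc(vec![shard0, shard1]);
    assert_eq!(merged.iter().map(|r| r.0).collect::<Vec<_>>(), vec![3, 2, 1]);
}
```

Since each shard already returns its slice sorted, a k-way streaming merge would avoid buffering everything; the full sort above is the simpler first version.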

All standard Postgres features work as normal for unsharded and direct-to-shard queries. As long as you include the sharding key (a column like customer_id, for example) in your query, you won’t notice a difference.

How does this compare to Citus? In case you’re not familiar, Citus is an open source extension for sharding Postgres. It runs inside a single Postgres node (a coordinator) and distributes queries between worker databases.

PgDog’s architecture is fundamentally different. It runs outside the DB: it’s a proxy, so you can deploy it anywhere, including managed Postgres like RDS, Cloud SQL and others where Citus isn’t available. It’s multi-threaded and asynchronous, so it can handle thousands, if not millions, of concurrent connections. Its focus is OLTP, not OLAP. Meanwhile, Citus is more mature and has good support for cross-shard queries and aggregates. It will take PgDog a while to catch up.

My Rust has improved since my last attempt at this and I learned how to use the bytes crate correctly. PgDog does almost zero memory allocations per request. That results in a 3-5% performance increase over PgCat and a much more consistent p95. If you’re obsessed with performance like me, you know that small percentage is nothing to sneeze at. Like before, multi-threaded Tokio-powered PgDog leaves the single-threaded PgBouncer in the dust (https://pgdog.dev/blog/pgbouncer-vs-pgdog).

Since we’re using pg_query (which itself bundles the Postgres parser), PgDog can understand all Postgres queries. This is important because we can not only correctly extract the WHERE clause and INSERT parameters for automatic routing, but also rewrite queries. This will be pretty useful when we add support for more complex aggregates, like avg(), and cross-shard joins!
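To make the routing decision concrete, here is a toy extraction of a sharding key from a query's text. To be clear, PgDog walks the real Postgres AST via pg_query; this string scan is only to illustrate what "extract the WHERE clause for routing" buys you, and the column name is an example:

```rust
/// Toy lookup of `<column> = <number>` in a query string. Illustrative only:
/// a real implementation walks the parsed AST, not the raw SQL text.
fn sharding_key(query: &str, column: &str) -> Option<i64> {
    let lower = query.to_ascii_lowercase();
    let needle = format!("{} = ", column);
    let pos = lower.find(&needle)?;
    // Offsets are stable because ASCII lowercasing is byte-for-byte.
    let rest = &query[pos + needle.len()..];
    let digits: String = rest.chars().take_while(|c| c.is_ascii_digit()).collect();
    digits.parse().ok()
}

fn main() {
    // Key present: route the query directly to that key's shard.
    let q = "SELECT * FROM orders WHERE customer_id = 42";
    assert_eq!(sharding_key(q, "customer_id"), Some(42));
    // Key absent: the query has to fan out to all shards.
    assert_eq!(sharding_key("SELECT count(*) FROM orders", "customer_id"), None);
}
```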

Read/write traffic split is supported out of the box, so you can put PgDog in front of the whole cluster and ditch the code annotations. It’s also a load balancer, so you can deploy it in front of multiple replicas to get 4 9’s of uptime.
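The core routing decision is small. A minimal sketch, assuming prefix matching on the statement (PgDog has the full parser, so it wouldn't be fooled by, say, a CTE that writes, the way this naive version would be):

```rust
/// Where a statement goes in a read/write split setup.
#[derive(Debug, PartialEq)]
enum Route {
    Primary,
    Replica,
}

/// Naive sketch: read-only statements go to a replica, everything else
/// to the primary. Real routing inspects the parsed query, not prefixes.
fn route(query: &str) -> Route {
    let q = query.trim_start().to_ascii_uppercase();
    if q.starts_with("SELECT") || q.starts_with("SHOW") {
        Route::Replica
    } else {
        Route::Primary
    }
}

fn main() {
    assert_eq!(route("SELECT * FROM users"), Route::Replica);
    assert_eq!(route("UPDATE users SET admin = false"), Route::Primary);
    assert_eq!(route("  insert into users VALUES (1)"), Route::Primary);
}
```

With the proxy deciding this per statement, the app can keep a single connection string and drop its read-replica annotations.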

One of the coolest features so far, in my opinion, is distributed COPY. This works by hacking the Postgres network protocol and sending individual rows to different shards (https://pgdog.dev/blog/hacking-postgres-wire-protocol). You can just use it without thinking about cluster topology, e.g.:

    COPY temperature_records (sensor_uuid, created_at, value)
    FROM STDIN CSV;
The sharding function is straight out of Postgres partitions and supports uuid v4 and bigint. Technically, it works with any data type, but I just haven’t added all the wrappers yet. Let me know if you need one.
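Per-row routing during a distributed COPY boils down to hashing the sharding column of each row. A sketch of that loop, with one loud caveat: PgDog reuses Postgres's own hash-partition function so shards line up with real partitions, while this stand-in uses Rust's `DefaultHasher` and a bigint key, so the actual shard numbers will not match Postgres:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Pick a shard for a sharding-key value. Stand-in hash only: the real
/// implementation mirrors Postgres's hash partitioning.
fn shard_for(key: i64, num_shards: u64) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % num_shards
}

fn main() {
    // Each COPY row is routed independently by its sharding column.
    let rows: Vec<(i64, &str)> = vec![(1, "20.1"), (2, "19.8"), (3, "21.3")];
    for (sensor_id, value) in rows {
        let shard = shard_for(sensor_id, 4);
        println!("row ({sensor_id}, {value}) -> shard {shard}");
    }
}
```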

What else? Since we have the Postgres parser handy, we can inspect, block and rewrite queries. One feature I was playing with is ensuring that the app is passing in the customer_id in all queries, to avoid data leaks between tenants. Brain dump of that in my blog here: https://pgdog.dev/blog/multi-tenant-pg-can-be-easy.

What’s on the roadmap: (re)sharding Postgres using logical replication, so we can scale DBs without taking downtime. There is a neat trick on how to quickly do this on copy-on-write filesystems (like EBS used by RDS, Google Cloud volumes, ZFS, etc.). I’ll publish a blog post on this soon. More at-scale features like blocking bad queries and just general “I wish my Postgres proxy could do this” stuff. Speaking of which, if you can think of any more features you’d want, get in touch. Your wishlist can become my roadmap.

PgDog is being built in the open. If you have thoughts or suggestions about this topic, I would love to hear them. Happy to listen to your battle stories with Postgres as well.

Happy hacking!

Lev

Comments (50)

denysvitali · 4h ago
I know this is just a small feature and probably a less meaningful one compared to the rest of the project - but for me being able to use pgdog as a way to redirect reads to read replicas and writes to the primary (w/o doing that in code) is a huge plus. Many applications out there do not support R/W splits, and having something that does that for you (at the proxy level) has always brought speed improvements for me in the past.

Such a cool project, good job Lev!

jashmatthews · 7h ago
Hey Lev!

I've been looking into PgDog for sharding a 40TB Postgres database, versus building something ourselves. This could be a good opportunity to collaborate, because what we need is something more like Vitess for PostgreSQL. The scatter-gather stuff is great, but what we really need is config management via something like etcd, shard splitting, best-effort transactions for doing schema changes across all shards, etc.

Almost totally unrelated but have you had good success using pg_query.rs to re-write queries? Maybe I misunderstood how pg_query.rs works but re-writing an AST seems like a nightmare with how the AST types don't really support mutability or deep cloning. I ended up using the sqlparser crate which supports mutability via Visitors. I have a side project I'm chipping away at to build online schema change for PG using shadow tables and logical replication ala gh-ost.

Jake

levkk · 7h ago
Hey Jake!

I would love to collaborate. Email me: lev@pgdog.dev. Config management is a solved problem, we can use K8s or any number of CD tools. PgDog config reloading can be synchronized.

Best effort transactions for schema changes across shards are working today. Ideally, schema changes are idempotent so it's safe to retry in case of failure. Otherwise, we can try 2-phase commit. It'll need a bit of management to make sure they are not left uncommitted (they block vacuum).

Shard splitting can be done with logical replication. I've done this at Instacart with 10TB+ databases. At that scale, you need to snapshot it with a replication slot open, restore to N instances, delete whatever data doesn't match the shard #, and re-sync with logical replication. Another thing I wanted to try was using Pg 17 logical replication from streaming replicas. I feel like it's possible to parallelize resharding with like 16 replicas, without affecting the primary. In that situation, it might be feasible to just COPY tables through foreign tables with postgres_fdw or PgDog (my choice of sharding function was very intentional). Something to consider.

pg_query.rs seems to be mutable now, as far as I can tell. I've been rewriting and generating brand new queries. I like that it's 100% Postgres parser. That's super important. They have the "deparse" method (struct -> SQL) on pretty much every NodeEnum, so I think it's good to go for doing more complex things.

Lev

Existenceblinks · 10h ago
Looks neat, the first thing I search for in the docs is:

    Unique indexes  Not currently supported. Requires query rewriting and separate execution engine to validate uniqueness across all shards.
But still looks promising.
levkk · 9h ago
Small consolation prize is we can generate unique primary keys: https://docs.pgdog.dev/features/sharding/primary-keys/.

I would like to implement cross-shard unique indexes, but they are expensive to check for every query. Open to ideas!

guiriduro · 5h ago
Aren't UUIDs the a priori collision-free index of choice?
reactordev · 4h ago
The problem is which version? Also does the cluster index need to be embedded? Or shard index? Or whatever you are using to track the storage of the record? Should we care?
williamdclt · 12h ago
Really impressive stuff! Very interesting, well done!

I don’t know that I’d want my sharding to be so transparently handled / abstracted away. First, because usually sharding is on the tenancy boundary and I’d want friction on breaking this boundary. Second, because the implications of joining across shards are not the same as in-shard (performance, memory, cpu) and I’d want to make that explicit too

That takes nothing out of this project, it’s really impressive stuff and there’s tons of use cases for it!

levkk · 12h ago
Thanks! I'm curious:

> I’d want friction on breaking this boundary

Why do you want friction?

> implications of joining across shards are not the same

That's usually well understood and can be tracked with real time metrics. Ultimately, both are necessary and alternative solutions, like joining in the app code, are not great.

williamdclt · 17m ago
> Why do you want friction?

Because 99% of the time, breaking the tenancy boundary is not the right thing to do. Most likely it's a sign that the tenant ID has been lost along the way, and that it should be fixed. Or that the use-case is shady and should be thought about more carefully ("what are you _actually_ trying to do" type of thing).

A tenet I try to stick to is "make the right thing look different (and be easier) than the wrong thing": in this case I think that breaking the tenancy boundary should be explicit and more difficult than respecting it (ie sticking to one shard).

That's of course on the assumption that cross-shard queries mean (potentially) cross-tenancy, and that this isn't something that's usually desirable. That's the case in the apps I tend to work on (SaaS) but isn't always the case.

> That's usually well understood

By who? Certainly wouldn't be well-understood by the average dev in the average SaaS company I don't think! Especially if normal joins and cross-shard joins look the exact same, I don't think 90% of devs would even think about it (or know they should think about it).

---

This sounds like negative feedback: it's not! I fully believe that this is a really good tool, I'm really happy it exists and I'll absolutely keep it in my back pocket. I'm saying that the ergonomics of it aren't what I'd (ideally) want for the projects I work on professionally

paulryanrogers · 11h ago
>> I’d want friction on breaking this boundary

> Why do you want friction?

Probably because it makes accidental or malicious attempts to leak among tenants harder, therefore less likely.

levkk · 11h ago
Check this out and let me know what you think: https://pgdog.dev/blog/multi-tenant-pg-can-be-easy

I think there are a few good solutions for multi-tenant safety. We just need ergonomic wrappers at the DB layer to make them easy to use.

grncdr · 4h ago
It’s an interesting idea, but how would such a system handle queries that should cross tenant boundaries? (E.g. system-level reporting)
xnickb · 15h ago
Very interesting.

For me the key point in such projects is always handling of distributed queries. It's exciting that pgDog tries to stay transparent/compatible while operating on the network layer.

Of course the limitations that are mentioned in the docs are expected and will require trade-offs. I'm very curious to see how you will handle this. If there is any ongoing discussion on the topic, I'd be happy to follow and maybe even share ideas.

Good luck!

levkk · 15h ago
Absolutely. Join our Discord, if you'd like: https://discord.com/invite/CcBZkjSJdd
simplecto · 41m ago
very cool to see people go deep in the weeds to make it easier for lazy devs like me.
mijoharas · 17h ago
We've been keeping an eye on PgDog for a while, and it seems like very impressive stuff.

Congrats on the launch Lev, and keep it up!

levkk · 16h ago
Thanks! Will do. 15 year journey starts now.
aeyes · 13h ago
One of the most interesting Postgres projects I have seen in many years.

The benchmarks presented only seem to address standard pooling, I'd like to see what it looks like once query parsing and cross-shard join come into play.

levkk · 12h ago
Thank you! Good feedback. The query parser is cached, so if you're using prepared statements or just the extended protocol with placeholders, it's almost free: cost of a mutex + hash lookup.

Cross-shard joins will be interesting. I suspect the biggest cost there will be executing suboptimal queries without a good join filter: that's what it takes to compute a correct result sometimes. Also, the result set could get quite large, so we may need to page to disk.

I'm focusing on OLTP use cases first with joins pushed down as much as possible. I suspect cross-shard joins will be requested soon afterwards.

anurag · 16h ago
Much-needed innovation in scaling Postgres. Congratulations on the launch!
jimmyl02 · 13h ago
this is awesome but I'm wondering does pgdog plan to handle high availability scenarios (multiple frontend proxies)? I know this can lead to much more difficult problems with consensus and handling split brain scenarios.

if not, what is the approach to enable restarts without downtime? (let's say one node crashes)?

levkk · 13h ago
It's config driven, so no split brain. All proxies have the same config and deployments are synchronized:

1. Pause traffic
2. Reload config
3. Resume traffic

This can be done in under a second.

Restarts without downtime can be handled with blue/green deployments and a TCP load balancer or DNS.

rco8786 · 17h ago
This looks pretty amazing. Congrats on the launch.
levkk · 17h ago
Thank you. Years in the making.
ewalk153 · 12h ago
Is your plan to stick with a hashing algorithm for tenant sharding, or to allow more fine-grained control to shift large tenants between shards?

Hot shard management is a job in and of itself and adds a lot of operational complexity.

levkk · 12h ago
I've been thinking about other algorithms as well, like range-based. We can definitely override the algorithm for hot shards. Ideally, we match what Postgres does with partitions: range, hash, and list. That way, we can shard both inside Postgres (e.g. with postgres_fdw) and at the proxy.
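In Rust-flavored sketch form, the three partition-style strategies might be modeled like this (hypothetical names; the hash arm uses a multiplicative stand-in, not Postgres's actual partition hash):

```rust
/// Mirrors the three Postgres partitioning strategies a proxy could adopt.
enum Strategy {
    /// hash(key) % shards
    Hash { shards: u64 },
    /// Inclusive lower bounds of shards 1..n; keys below the first bound
    /// land on shard 0.
    Range { boundaries: Vec<i64> },
    /// Explicit key -> shard pinning, e.g. to isolate a hot tenant.
    List { mapping: Vec<(i64, u64)>, default: u64 },
}

fn pick(strategy: &Strategy, key: i64) -> u64 {
    match strategy {
        // Stand-in multiplicative hash, NOT Postgres's partition hash.
        Strategy::Hash { shards } => (key as u64).wrapping_mul(0x9E3779B97F4A7C15) % shards,
        // The shard is the number of boundaries at or below the key.
        Strategy::Range { boundaries } => {
            boundaries.iter().filter(|b| key >= **b).count() as u64
        }
        Strategy::List { mapping, default } => mapping
            .iter()
            .find(|(k, _)| *k == key)
            .map(|(_, s)| *s)
            .unwrap_or(*default),
    }
}

fn main() {
    let range = Strategy::Range { boundaries: vec![100, 200] };
    assert_eq!(pick(&range, 50), 0);
    assert_eq!(pick(&range, 150), 1);
    assert_eq!(pick(&range, 250), 2);
    // A hot tenant (key 7) pinned to its own shard; everyone else defaults.
    let list = Strategy::List { mapping: vec![(7, 3)], default: 0 };
    assert_eq!(pick(&list, 7), 3);
    assert_eq!(pick(&list, 8), 0);
}
```

The list strategy is what makes the "override for hot shards" idea cheap: it's just a lookup table consulted before the general algorithm.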

I think the first step is to add as much monitoring as possible to catch the problem early. There is also the dry-run mode [1] that you can try before sharding at all to make sure your traffic would be evenly split given a choice of sharding key.

[1] https://pgdog.dev/blog/sharding-a-real-rails-app#dry-run-mod...

sroussey · 17h ago
Nice! Reminds me of MySQLProxy back in 2007-2014 and later ProxySQL.

What’s the long term (business) plan to keep it updated?

levkk · 17h ago
Thanks!

Business plan is managed deployments and support, pretty standard for an infra product I believe.

achanda358 · 11h ago
This is very cool, congrats on the launch. Do you think making this CLI/API compatible with Vitess or Citus is worth it?
levkk · 11h ago
Thanks! I never considered this. What would that look like?
JyB · 13h ago
This looks amazing.
theusus · 17h ago
I wish there was something similar for SQL Server.
mdaniel · 15h ago
I believe this project was accelerated by having access to the actual SQL parser, the source code of both ends of the wire protocol, and (if necessary) the source of the engine. SQL Server is starting out underwater on a lot of those measures

That said, you probably aren't the first person to ask; so are there similar projects that don't meet all of your criteria or you've not seen anything in that space?

theusus · 9h ago
Haven't explored
mdaniel · 9h ago
Then hurray, your project could be out there just waiting for you to meet it! https://github.com/topics/sqlserver https://gitlab.com/explore/projects/topics/sqlserver
stackskipton · 13h ago
On top of the source-access concerns, licensing makes sharding less useful. MSSQL on Linux exists, but it comes with a bunch of caveats, so most people stick with Windows. Each instance of the server might therefore need a Windows license (depending on various factors) plus SQL Server licensing, and that licensing is not cheap. So with MSSQL, if at all possible, scaling up rather than out is generally preferred. You do have Availability Groups if a write primary plus read-only secondaries works for you.

Also, MSSQL is clearly in maintenance mode. Microsoft continues to support and sell it because $$$$, but it's not a focus.

iamdanieljohns · 15h ago
How does this compare to Supabase/Supavisor?
levkk · 15h ago
I don't think Supavisor actually does sharding.
deadbabe · 10h ago
Can someone give tangible real world metrics on when you should consider sharding a Postgres database? Thanks in advance.
iFire · 15h ago
is there a network agpl exception?
levkk · 15h ago
No. If you're using it internally though, there are no issues. You don't have to share anything. If you're using it externally, e.g. building a PgDog cloud service, you'll need to share any changes you make to PgDog _only_.
xyzzy_plugh · 15h ago
> If you're using it internally though, there are no issues.

Unfortunately this advice is incompatible with that of most legal departments.

I get that this is your interpretation, but your interpretation doesn't have any value when it comes to possible IP issues.

levkk · 15h ago
No issues. PgDog is a company, email me if you want to use it internally and we'll work it out.

lev@pgdog.dev

wredcoll · 7h ago
> Unfortunately this advice is incompatible with that of most legal departments.

I see this comment pop up every now and then on HN specifically, but I've never personally had a lawyer tell me this; is there any chance anyone could share an actual example of this?

qaq · 4h ago
Well, in all large orgs, legal has rules about which licenses are allowed for dependencies; generally it's MIT, Apache, BSD, and the like.
Y_Y · 17h ago
The name is dangerously close to the classic insult "pigdog"

https://en.wiktionary.org/wiki/pig_dog

michelpp · 15h ago
FRENCH GUARD: You don't frighten us, English pig-dogs! Go and boil your bottom, sons of a silly person. I blow my nose at you, so-called Arthur King, you and all your silly English k-nnnnniggets. Thpppppt! Thppt! Thppt!

GALAHAD: What a strange person.

bonki · 14h ago
Git would like to have a word with you.
underyx · 16h ago
What’s the danger exactly?