> By Jan 2024, our largest table had roughly 100 million rows.
I did a double take at this. At the outset of the article, the fact that they're using a distributed database and the mention of a "mid 6 figure" DB bill made me assume they had some obscenely large database, far beyond what a single node could handle. They don't detail the Postgres setup that replaced it, so I assume it's a fairly standard single primary, and a 100-million-row table is well within its abilities. I have a 150-million-row table happily plugging along on a 2vCPU+16GB instance. Apples and oranges, perhaps, but people shouldn't underestimate what a single modern server can do.
hliyan · 7m ago
Call me old fashioned, but when records start reaching the 100 million range, it's usually an indication that your dataset is either too wide (consider sharding) or too deep (consider time-based archival) to fit into a monolithic schema. For context, I dealt with multiple systems that generated this volume of data between 2003 and 2013 (mostly capital markets, but also some govt/compliance work) with databases and hardware from that era, and we rarely had an issue that could not be solved by query optimization, caching, sharding, or archival, usually in that order.
Secondly, we did most of this using SQL, Bash scripts, cron jobs, and some I/O logic built directly into the application code. It was robust enough to handle some extremely mission-critical systems (a failure could bring down a US primary market, and if it was bad enough, you'd hear about it on the news).
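For illustration, the time-based archival half of that maps neatly onto native range partitioning in modern Postgres (a sketch; the table and column names are hypothetical):

    -- Partition hot data by time so cold ranges can be detached cheaply.
    CREATE TABLE trades (
        id        bigint GENERATED ALWAYS AS IDENTITY,
        traded_at timestamptz NOT NULL,
        payload   jsonb
    ) PARTITION BY RANGE (traded_at);

    CREATE TABLE trades_2024 PARTITION OF trades
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

    -- Archival: detach the cold partition, dump it, drop it.
    ALTER TABLE trades DETACH PARTITION trades_2024;

In the 2003-2013 era described above, the same effect was achieved with cron jobs moving rows into archive tables.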
icedchai · 31m ago
You don't even need to be that "modern." Back in 2010 I was working on a MySQL 5.x system with about 300 million rows on a dual Xeon box with 16 gigs RAM and a few hundred gigs of RAID 10. This was before SSDs were common.
The largest table was over 100 million rows. Some migrations were painful, however. At that time, some of them would lock the whole table and we'd need to run them overnight. Fortunately, this was for an internal app so we could do that.
luhn · 26m ago
The improvements to migrations have been the biggest boon for running even modestly-sized Postgres DBs. It wasn't that long ago that you couldn't add a column with a default value without rewriting the whole table, or add NOT NULL without holding an exclusive lock while the whole table was scanned. That becomes infeasible pretty quickly.
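For example, both of those old pain points now have cheap workarounds (a sketch, assuming Postgres 11+ and a hypothetical users table):

    -- Postgres 11+: the default is stored in the catalog; no table rewrite.
    ALTER TABLE users ADD COLUMN plan text NOT NULL DEFAULT 'free';

    -- Add NOT NULL to an existing column without a long exclusive lock:
    ALTER TABLE users ADD CONSTRAINT users_email_not_null
        CHECK (email IS NOT NULL) NOT VALID;                     -- instant
    ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null;  -- scans with a weaker lock
    ALTER TABLE users ALTER COLUMN email SET NOT NULL;           -- PG 12+ reuses the validated check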
casper14 · 45m ago
Nice! What optimizations have you put in place to support 150 mil? Just some indexing or other fancy stuff?
luhn · 38m ago
You don't need to optimize anything beyond appropriate indices, Postgres can handle tables of that size out of the box without breaking a sweat.
thomasfromcdnjs · 37m ago
Does mid six figure mean ~$500k?
That sounds insane for a crud app with one million users.
What am I missing?
gbear605 · 9m ago
I’ve seen startups with a thousand active users paying $50k/month (though that’s overall costs, not just db). It’s really easy to waste a lot of money doing nothing.
sreekanth850 · 13m ago
It's wild and hilarious how often startups and companies go for distributed databases like CockroachDB/TiDB/Yugabyte before they actually need distribution; this trend sucks. 100 million rows is nothing a well-tuned Postgres or MySQL instance (or even a read-replicated setup) can't handle comfortably. Scale when you hit the wall.
esafak · 1h ago
I read it as: Why You Shouldn't Use Prisma and How Cockroach Hung Us Out To Dry
I'm not the most experienced in huge DBs and can't write anything off, but I've never seen a horizontally sharded DBMS work well, even Citus which allegedly does. There's always been a catch that seems worse than manually doing sharding at a higher level than your DB, not that that's easy either.
banashark · 1h ago
Vitess and PlanetScale seem to have quite a number of high-profile users who have lauded their capabilities. A search through HN history pops up a few.
As someone who has primarily worked with Postgres for relational concerns, I’ve envied the apparent robustness of the MySQL scaling solutions.
caffeinated_me · 1h ago
I'd argue that horizontally sharded databases can work well, but they do tend to have significant non-obvious tradeoffs that can be pretty painful.
There's a handful of companies that have scaled Citus past 1PB for production usage, but the examples I'm aware of all required more engineering to work around capability or architecture limitations than one might like. I'd love to see someone come back with a fresh approach that covered more use cases effectively.
Disclaimer: former Citus employee
pier25 · 1h ago
Prisma is so bad... can you believe it's by far the most downloaded ORM on NPM?
I already knew about Prisma from the infamous https://github.com/prisma/prisma/discussions/19748
VWWHFSfQ · 1h ago
Every ORM is bad. Especially the "any DB" ORMs. Because they trick you into thinking about your data patterns in terms of writing application code, instead of writing code for the database. And most of the time their features and APIs are abstracted in a way that basically means you can only use the least-common-denominator of all the database backends that they can support.
I've sworn off ORMs entirely. My application is a Postgres application first and foremost. I use PG-specific features extensively. Why would I sacrifice all the power that Postgres offers me just for some conveniences in Python, or Ruby, or whatever?
Nah. Just write the good code for your database.
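For instance, the kind of PG-specific power being traded away (a sketch; the table is hypothetical):

    -- Upsert plus RETURNING: one statement, one round trip.
    -- Assumes a unique constraint on page_id.
    INSERT INTO page_views (page_id, views)
    VALUES (42, 1)
    ON CONFLICT (page_id)
    DO UPDATE SET views = page_views.views + 1
    RETURNING views;

Least-common-denominator ORMs often can't express this without falling back to raw SQL.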
pier25 · 1h ago
I use PG with Entity Framework in .NET and at least 90% of my queries don't need any PG-specific features.
When I need something PG specific I have options like writing raw SQL queries.
Having most of my data layer in C# is fantastic for productivity, and in most cases the performance difference compared to raw SQL is negligible.
romanhn · 25m ago
Entity Framework really is such a time saver. The PG adapter makes it a breeze not just with common queries, but also more recent stuff, like working with embeddings for vector search.
reillyse · 10m ago
Every ORM except Active Record is awful. Active Record is amazing.
ScalaHanSolo · 46m ago
Author here. Yeah, that's not a bad takeaway either. I've also been really vocal in Prisma issues for all sorts of things. We are about to embark on a big migration away from Prisma and onto Drizzle once the Drizzle team lands 1.0.
We will absolutely share our findings when that migration happens!
coverj · 1h ago
I didn't mind Prisma for managing the schema etc., but I've also seen your linked GitHub issue. I found other people recommend combining Prisma with Kysely. I have only used this in toy projects, so take it with a grain of salt.
https://kysely.dev/ https://github.com/valtyr/prisma-kysely
> It's true that Prisma currently doesn't do JOINs for relational queries. Instead, it sends individual queries and joins the data on the application level.
..........I'm sorry, what? That seems........absurd.
edit: Might as well throw in: I can't stand ORMs, I don't get why people use it, please just write the SQL.
jjice · 1h ago
I believe it's either released now or at least behind a feature flag (maybe only on some systems). It's absolutely absurd it took so long. I can't believe it wasn't the initial implementation.
Funny relevant story: we got an OOM from a query that we used Prisma for. I looked into it - it was a simple select distinct. Turns out (I believe it was changed like a year ago, but I'm not positive), even distincts were done in memory! I can't fathom the decision making there…
etblg · 1h ago
> even distincts were done in memory! I can’t fathom the decision making there…
This is one of those situations where I can't tell if they're operating on some kind of deep insight that is way above my experience and I just don't understand it, or if they just made really bad decisions. I just don't get it, it feels so wrong.
Tadpole9181 · 55m ago
> I can't tell if they're operating on some kind of deep insight that is way above my experience and I just don't understand it
This is answered at the very top of the link on the post you replied to, in no unclear language, no less. Direct link here: https://github.com/prisma/prisma/discussions/19748#discussio...
> I want to elaborate a bit on the tradeoffs of this decision. The reason Prisma uses this strategy is because in a lot of real-world applications with large datasets, DB-level JOINs can become quite expensive...
> The total cost of executing a complex join is often higher than executing multiple simpler queries. This is why the Prisma query engine currently defaults to multiple simple queries in order to optimise overall throughput of the system.
> But Prisma is designed towards generalized best practices, and in the "real world" with huge tables and hundreds of fields, single queries are not the best approach...
> All that being said, there are of course scenarios where JOINs are a lot more performant than sending individual queries. We know this and that's why we are currently working on enabling JOINs in Prisma Client queries as well. You can follow the development on the roadmap.
Though this isn't a complete answer still. Part of it is that Prisma was, at its start, a GraphQL-centric ORM. This comes with its own performance pitfalls, and decomposing joins into separate subqueries with aggregation helped avoid them.
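Concretely, the two strategies look roughly like this (hypothetical schema):

    -- DB-level join: one query, the database assembles the result.
    SELECT u.id, u.name, p.id AS post_id, p.title
    FROM users u
    LEFT JOIN posts p ON p.author_id = u.id
    WHERE u.id IN (1, 2, 3);

    -- Prisma's historical default: simple queries per table,
    -- stitched together afterwards in the query engine.
    SELECT id, name FROM users WHERE id IN (1, 2, 3);
    SELECT id, title, author_id FROM posts WHERE author_id IN (1, 2, 3);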
ketzo · 38m ago
Not 100% parallel, but I was debugging a slow endpoint earlier today in our app which uses Mongo/mongoose.
I removed a $lookup (the mongodb JOIN equivalent) and replaced it with, as Prisma does, two table lookups and an in-memory join
p90 response times dropped from 35 seconds to 1.2 seconds
pier25 · 1h ago
> I can't stand ORMs, I don't get why people use it, please just write the SQL.
I used to agree until I started using a good ORM. Entity Framework on .NET is amazing.
bob1029 · 1h ago
> Entity Framework on .NET is amazing.
I disagree. It is probably one of the less terrible ORMs, but it is far from amazing. The object-relational impedance mismatch will always dominate for anything that isn't trivial. EF works great until you need different views of the model. It does support some kind of view mapping technique, but it's so much boilerplate I fail to see the point.
Dapper + SqlConnection is goldilocks once you get into the nasty edges. Being able to query a result set that always exactly matches your view models is pretty amazing. The idea of the program automagically upgrading and migrating the schemas is something that was interesting to me until I saw what you could accomplish with Visual Studio's SQL Compare tool & RedGate's equivalent. I feel a lot more comfortable running manual schema migrations when working with hosted SQL providers.
cyral · 1h ago
> It does support some kind of view mapping technique
Can you call .Select(entity => new SomeSmallerModel { Name = entity.Name }) or something like that to select what you need? If I am understanding your issue correctly.
I also agree that it's one of the least worst, but there are still things that annoy me.
neonsunset · 1h ago
> EF works great until you need different views of the model
You can easily project, or use views with raw SQL projected onto objects. It's very convenient with `.FromSql`:
https://learn.microsoft.com/en-us/ef/core/querying/sql-queri...
Doesn’t Entity Framework have a huge memory footprint too?
neonsunset · 1h ago
Do you have any links that note memory usage issues with any of the semi-recent EF Core versions?
compton93 · 1h ago
It is. But wait... it doesn't even join the data at the level of your application. You have to deploy their proxy service, which joins the data at its application level.
Tadpole9181 · 1h ago
It's pretty obvious when somebody has only heard of Prisma, but never used it.
- Using `JOIN`s (with correlated subqueries and JSON) has been around for a while now via a `relationLoadStrategy` setting.
- Prisma has a Rust service that does query execution & result aggregation, but this is automatically managed behind the scenes. All you do is run `npx prisma generate` and then run your application.
- They are in the process of removing the Rust layer.
The JOIN setting and the removing of the middleware service are going to be defaults soon, they're just in preview.
compton93 · 1h ago
They've been saying that for 3 years. We actually had a discount for being an early adopter. But hey, it's obvious I've never used it and only heard of it.
Tadpole9181 · 49m ago
The JOIN mode has been in preview for over a year and is slated for GA release within a few months, as per their roadmap.
The removal of the Rust service is available in preview for Postgres as of 6.7.[1]
Rewriting significant parts of a complex codebase used by millions is hard, and pushing it to defaults requires prolonged testing periods when the worst case is "major data corruption".
[1]: https://www.prisma.io/blog/try-the-new-rust-free-version-of-...
It's a very Uber thing to do to enter a one way from the wrong end.
I don't think I've once seen a "migrating away from Postgres" article.
psionides · 56m ago
Yeah so there's basically just that one ;)
yen223 · 7m ago
I have participated in a Postgres -> Clickhouse migration, but I haven't bothered writing an article about it.
delish · 55m ago
Related: Oxide's podcast, "Whither CockroachDB," which reflects on experience with Postgres at Joyent, then the choice to use Cockroach in response to prior experiences with Postgres.
https://www.youtube.com/watch?v=DNHMYp8M40k
I'm trying to avoid editorializing in my above summary, for fear of mischaracterizing their opinions or the current state of postgres. Their use of postgres was 10 years ago, they were using postgres for a high-availability use case -- so they (and I) don't think "postgres bad, cockroach good." But like Bryan Cantrill says, "No one cares about your workload like you do." So benchmark! Don't make technical decisions via "vibes!"
mmiao · 19m ago
A 100 million row table is fairly small, and you just don't need a distributed database for it. But you will need one if you hit 10 billion rows.
frollogaston · 1h ago
Did I miss something, or does the article not mention anything about sharding in Postgres? Was that just not needed?
Also, query planner maturity is a big deal. It's hard to get Spanner to use the indexes you want.
monkeyelite · 1h ago
There are probably fewer than 100 websites that couldn’t be a single Postgres instance on nice server hardware, with good caching.
kevml · 37m ago
Not everything on the internet is a “website” and then there are several website hosting platforms that aggregate the individual concerns.
monkeyelite · 35m ago
All true points. I guess I just want to hear more about why they think sharding is important to them.
ScalaHanSolo · 45m ago
Yeah, this is our read with Postgres here at Motion. I believe that Motion will easily be able to 10x on modern hardware along with various optimizations along the way.
moonikakiss · 1h ago
Great blog. It seems like you might benefit from columnar storage in Postgres for that slow query that took ~20 seconds.
It's interesting that people typically think of columnstores for strict BI / analytics. But there are so many App / user-facing workloads that actually need it.
ps: we're working on pg_mooncake v0.2: a columnstore in Postgres that's always consistent with your OLTP tables.
It might help for this workload.
I_am_tiberius · 27m ago
That sounds awesome. Are you saying you still use your normal OLTP table for writing data and the columnstore table is always in sync with that OLTP table (that's fantastic)? I read that it works with DuckDB - how does it work? I guess there's no chance this is going to be available on Azure Flexible Server anytime soon.
compton93 · 1h ago
What are your thoughts on Fujitsu's VCI? I typically work on ERPs, but I'm always advocating to offload the right queries to columnar DBs (not for DB performance but for end-user experience).
compton93 · 2h ago
I'm curious about Motion's experience with "Unused Indices". They suggest Cockroach's dashboard listed used indexes in the "Unused Indices" list.
My read is that those indexes really were unused: Motion didn't realize CockroachDB was doing zig-zag joins on other indexes to accomplish the same thing, leaving the indexes that would obviously be used as genuinely unused.
It's a great feature, but CRDB's optimizer would prefer a zig-zag join over a covering index, and getting around this required writing indexes in a way that persuades the optimizer not to plan a zig-zag join.
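As a sketch (hypothetical table), the covering-index shape in question, which lets the planner satisfy the query from one index instead of zig-zag joining two single-column indexes:

    -- Postgres syntax shown; CockroachDB's equivalent of INCLUDE is STORING.
    CREATE INDEX tasks_owner_status_idx
        ON tasks (owner_id, status) INCLUDE (due_at, title);

    SELECT due_at, title
    FROM tasks
    WHERE owner_id = 123 AND status = 'open';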
from-nibly · 1h ago
Feels like Postgres is always the answer. There's gotta be some edge case somewhere where Postgres just can't begin to compete with a more specialized database, but I'd think that going from Postgres to something else is much easier than the other way around.
999900000999 · 1h ago
Depends.
If you want to fully embrace the vibe, tables are difficult.
Even before LLMs, I was at a certain company that preferred MongoDB so we didn’t need migrations.
Sometimes you don’t care about data structure and you just want to toss something up there and worry about it later.
Postgres is the best answer if you have a solid team and you know what you’re doing.
If you want to ride solo and get something done fast, Firebase and its NoSQL cousins might be easier.
pojzon · 5m ago
I really enjoy this comment.
> Postgres is the best answer if you have a solid team and you know what you’re doing.
Not every type of data simply fits into the relational model.
Example: time series data.
So depending on your model - pick your poison.
But for relational models, there is hardly anything better than Postgres now.
It makes me happy because I've rooted for the project since the early 2000s.
coolcase · 27m ago
I hear MySQL can be better for some workloads?
coolcase · 51m ago
Why not optimise the bad queries first?
Aside: the jobs section says "not 9-5". What does that mean? Long hours? Or a not-9-5 attitude?
ScalaHanSolo · 48m ago
Author here. Optimizing bad queries was absolutely part of the issues with the performance. The issue with cockroach was that the visibility into those bad queries was not great. It wasn't until we had the superior tooling from the Postgres ecosystem that we were able to track them down more efficiently.
When I first stepped into a DBA role with CockroachDB, I was confused why indexes we obviously needed showed up as unused. It wasn't until I ran an EXPLAIN on the queries that I learned the planner was doing zig-zag joins instead.
Inviz · 1h ago
WHERE CONDITION AND 1=1 results in scanning the whole table? I don't think so...
hobs · 3h ago
It still makes me sad when half the queries I see are json_* - I know it's far too late, but a big sad trombone in query performance is constantly left joining to JSON-expanding subqueries that the planner will estimate at 100 rows forever.
panzi · 1h ago
Not sure why those are json_agg() instead of array_agg() in that example. Why would you use a JSON array instead of a properly typed native array? Yes, if you have complex objects for some reason, you can use JSON objects. But those were all just arrays of IDs. Also, why was it json_agg() and not jsonb_agg()? Is there any reason to use JSON over JSONB in PostgreSQL?
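For plain ID lists, the typed version is straightforward (a sketch; table and columns hypothetical):

    -- Native typed array: comes back as bigint[] end to end.
    SELECT author_id, array_agg(id) AS post_ids
    FROM posts
    GROUP BY author_id;

    -- jsonb_agg earns its keep only for genuinely nested objects:
    SELECT author_id,
           jsonb_agg(jsonb_build_object('id', id, 'title', title)) AS posts
    FROM posts
    GROUP BY author_id;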
renhanxue · 1h ago
If you, for whatever obscure reason, need to preserve whitespace and key ordering (that is, you want something that is effectively just a text column), then you should use JSON over JSONB.
I can't think of any case at all, no matter how contrived, where you'd want to use the non-B versions of the JSON aggregate functions though.
paulryanrogers · 44m ago
The non-B JSON can take up less space on disk and has lower write-time overhead.
bastawhiz · 2h ago
If the queries are sensible, you can always create indexes on the queried expressions.
https://www.postgresql.org/docs/current/indexes-expressional...
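For example (a sketch with a hypothetical orders table):

    -- Index the exact expression the query uses.
    CREATE INDEX orders_status_idx ON orders ((payload->>'status'));

    -- This predicate can now use the index:
    SELECT count(*) FROM orders WHERE payload->>'status' = 'shipped';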
Hoping for easier columnar support in databases - the lack of it is one of the things that can lead you to storing JSON in database columns (even when your data is truly columnar).
Currently the vendor lock-in or requirements for installing plugins make it hard to do with cloud sql providers. Especially hard since by the time it's a problem you're probably at enough scale to make switching db/vendors hard or impossible.
hobs · 2h ago
How does columnar = JSON? JSON isn't columnar at all... If you just want a schema in JSON instead of SQL, use a NoSQL DB. Postgres's NoSQL features are strong, but the DB features are actually much stronger.
niwtsol · 3h ago
That was an interesting read, seemed like an overwhelming amount of data for why they should move off cockroach. All of my db work has been read heavy and I’ve never had a need for super fast multi-region writes. Is a multi-region write architecture possible in Postgres? I’m trying to understand if GDPR was the requirement that resulted in cockroach or if the lackluster multi region write was the bigger driver.
sgarland · 2h ago
There are multi-master Postgres options like BDR (I think it’s since renamed; whatever EnterpriseDB calls it now), yes. Most people don’t need it, even if they think they do, and they also usually are in no way capable of dealing with the operational complexity it involves.
If you’ve ever administered Postgres at scale, multiply it by 10. That’s what dealing with multi-master is like. It’s a nightmare.
sroussey · 1h ago
Most people don’t need a multi-region read architecture, for that matter. SaaS app devs at 5-person companies really want to solve "Facebook scale" problems.
ketzo · 31m ago
I think this is kinda reductive. If you’ve got users all over the world, then global DB replicas are a sizable performance improvement, whether you’ve got 1 engineer or 1,000.
sgarland · 2h ago
It is forever enraging to me that ORMs turn SELECT * into each individual column, mostly because people then post the whole thing and it’s obnoxiously large.
Similarly maddening, the appalling lack of normalization that is simply taken for granted. “It’s faster, bro.” No, it is not. Especially not at the hundreds-of-millions or billions of rows scale. If you store something low-cardinality like a status column, with an average length of perhaps 7 characters, that’s 8 bytes (1 byte overhead assumed, but it could be 2). Multiply that by 2 billion rows, and you’re wasting 16 GB. Disk is cheap, but (a) memory isn’t, and (b) don’t be lazy. There’s a right way to use an RDBMS, and a wrong way. If you want a KV store, use a damn KV store.
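A sketch of the right way being described (hypothetical tables): the repeated text status becomes a 2-byte key into a lookup table, cutting those 16 GB down to roughly 4 GB:

    CREATE TABLE statuses (
        id   smallint PRIMARY KEY,
        name text NOT NULL UNIQUE
    );

    CREATE TABLE events (
        id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        status_id smallint NOT NULL REFERENCES statuses (id)
    );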
Finally, I’d be remiss if I failed to point out that Prisma is an unbelievably immature organization that launched without the ability to do JOINs [0]. They are forever dead to me for that. This isn’t “move fast and break things,” it’s “move fast despite having zero clue what we’re doing but convince JS devs that we do.”
[0]: https://github.com/prisma/prisma/discussions/19748
This is a safety feature. If my code expects columns A, B, and C, but the migration to add C hasn't run yet and I'm doing something that would otherwise `SELECT *`, my query should fail. If the ORM _actually_ does `SELECT *` I'll get back two columns instead of three and things can get spooky and bad real fast (unless the ORM manually validates the shape of the query response every time, which will come with a real runtime cost). If there are columns that the ORM doesn't know about, you could end up with _far more_ data being returned from the database, which could just as easily cause plenty of spooky issues—not the least of which being "overwhelming your network by flooding the client connections with data the application doesn't even know exists".
compton93 · 2h ago
I worked for a startup that did all of these things on CockroachDB. We could have used a single m5.xlarge PostgreSQL instance (1000 basic QPS on 150GB of data) if we had optimized our queries and gone back to basics; instead we had 1TB of RAM dedicated to Cockroach.
I added about 4 indexes and halved the resources overnight. But Prisma, SELECT *, GraphQL, and whatever other resume-building shit people implemented were the bane of my existence; engineers typically did this believing it would be faster. I remember one engineer got a standing ovation in Slack for a refactor that was supposedly going to save us $$$$$, except our DB CPU went up 30% because he decided to validate every company every second in every session. In his defense, he added 1 line of code that caused it, and it was obscured through Prisma and GraphQL into an inefficient query.
FWIW; I love CockroachDB but the price is directly linked to how much your software engineers shit on the database.
reissbaker · 1h ago
Eh, I've run applications on RDBMSes with multi-billion-row tables, and I've never found normalization/denormalization to be particularly impactful on performance except for in a few rare cases. The biggest impact came from sensible indexing + query patterns. Normalization vs denormalization had a big impact on convenience, though (not always favoring one way or the other!).
But I'm no fan of Prisma either. Drizzle has its own pain points (e.g. sequential numbers for its auto-generated migrations mean annoying merge conflicts if multiple people iterate on the schema at the same time), but it's much better than Prisma at sticking close to the metal and allowing good query performance and table design.
amazingamazing · 2h ago
I don't disagree with your point, but over-normalization and joining everywhere also isn't necessarily the answer, even with an index. There's no easy answer to this; it really depends on the performance characteristics the critical user journeys need.
With a little pain, if I had to pick an extreme, I'd pick extreme normalization with materialized views that are queried directly (i.e. no joins at query time), rather than joining all the time.
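Roughly (a sketch; names hypothetical):

    -- Precompute the join once; reads hit a flat, indexable relation.
    CREATE MATERIALIZED VIEW order_totals AS
    SELECT o.customer_id, sum(oi.qty * oi.unit_price) AS total
    FROM orders o
    JOIN order_items oi ON oi.order_id = o.id
    GROUP BY o.customer_id;

    CREATE UNIQUE INDEX ON order_totals (customer_id);

    -- Refresh without blocking readers (needs the unique index above).
    REFRESH MATERIALIZED VIEW CONCURRENTLY order_totals;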
sroussey · 2h ago
I typically go for third normal form, and selectively denormalize where it has true performance value.
spudlyo · 54m ago
“Normalize ’til it hurts, denormalize ’til it works.”
HappyJoy · 2h ago
Spelling out columns can help the query optimizer too