> - Usage-based pricing that punished our success, the more users chatted, the more we paid
This is such a strange position on usage-based pricing and seems telling.
anthonyronning · 20h ago
Yeah, at a certain point it's just always running 24/7, and they charge you usage-based rates once your company exceeds 750 compute hours in a month.
If you're running databases continuously, much of their original unique selling point is pretty moot, especially if you're paying them extra for it.
asib · 19h ago
Maybe you were referring to specifics of Neon's usage-based pricing.
The bullet I quoted makes it seem like you feel punished for having to pay more because you used more resources. That's, like, the fundamental idea of usage-based pricing. If you feel punished, it seems as though you misunderstood the whole idea.
anthonyronning · 19h ago
I see. Yeah, I'm not against usage-based in general. Just specific to databases, especially in my instance where it feels like I'm paying more for the luxury of a scale-to-zero feature that I've quickly grown beyond.
I'll reiterate that it's not the only reason why I'm moving off of them. Reliability, performance, insights, etc.
It just happens to be a lot more affordable too.
jorams · 17h ago
Having plugged your numbers into the pricing for both Neon and Planetscale I'm rather confused. At Planetscale, given the numbers cited in the post, you're paying for 4 servers (+ replicas) with one eighth of a vCPU each, running 24/7. That's equivalent to about 375 Neon compute-hours per month. Your $69 Neon plan included twice that. Neon only goes down to 1/4th of a vCPU, but that does include the same amount of memory as the 1/8th at Planetscale, so take that 4 times and you have 4 databases running all month for the price of your $69 plan at Neon. How did you get to $250?
anthonyronning · 14h ago
Honestly, I don't even know. Last month's bill was for 1947 compute hours, for a total of $260. I just have the 4 databases. It looks like two of them are at 0.5 instead of 0.25, maybe that's it? Unless they're autoscaling me up occasionally and I'm not aware?
jorams · 13h ago
Two of them being at 0.5 brings the total to 1.5 vCPU, which over an entire month adds about another 375 compute hours for an extra $60, which is still much lower. Indeed autoscaling seems like it could be the cause. According to the documentation that's a setting you can configure per "compute", but I don't know if it's the default.
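The billing arithmetic in this exchange can be sketched in a few lines. This is a back-of-the-envelope model built only from figures quoted in the thread (a $69 plan including 750 compute-hours, $0.16/hour overage, compute-hours = vCPU size × hours running), not from Neon's official pricing:

```python
HOURS_PER_MONTH = 730  # roughly 24/7 for one month

def compute_hours(vcpu_sizes, hours=HOURS_PER_MONTH):
    """Total compute-hours for a set of always-on computes."""
    return sum(size * hours for size in vcpu_sizes)

def neon_bill(total_compute_hours, base=69.0, included=750, overage_rate=0.16):
    """Monthly bill: base plan price plus per-compute-hour overage."""
    overage = max(0, total_compute_hours - included)
    return base + overage * overage_rate

# Four 0.25-vCPU databases running all month fit inside the included hours.
four_small = compute_hours([0.25] * 4)          # 730 compute-hours

# Two at 0.25 and two at 0.5, as described above: 1.5 vCPU total.
mixed = compute_hours([0.25, 0.25, 0.5, 0.5])   # 1095 compute-hours

# The ~1947 compute-hours actually billed reconcile to roughly $260,
# which is consistent with autoscaling pushing usage past the configured sizes.
actual = neon_bill(1947)                         # ~$260.52
```

Under these assumptions, neither configured setup explains 1947 compute-hours on its own, which supports the autoscaling hypothesis.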
bddicken · 20h ago
In the database world, serverless/autoscaling pricing is almost always more expensive for real workloads. The % of workloads where it makes sense is small: ones where 90% of the time there's little traffic and 10% of the time the DB sees large spikes. Otherwise, just pay a fixed cost for the hardware you need.
halfmatthalfcat · 20h ago
This pitfall of "serverless" has been widely known since people started abusing lambda to be "always on". Serverless is a PaaS gaslight to make you pay more for the perceived convenience.
swiftcoder · 18h ago
Serverless is often cheaper just so long as your workflows are bursty/infrequent. For example, we don't need to pay to permanently rent/colocate a beefy server, just to run a batch job once a week.
If you have a constant base load of requests, lambda is just the wrong tool for the job.
vasco · 19h ago
It's not a gaslight, but it's only cost effective for specific usage patterns. It's only a "gaslight" if you think you need to run every workload the same way and don't cost estimate before you roll it out.
gausswho · 20h ago
Not necessarily. Netlify told me, as I blew past 20 bucks for 1TB of traffic, that paying 50 bucks for every additional 100GB was 'a good problem to have'. Well no, not at all. If your project is one of love, the end game is not subjecting your audience to boatloads of ads.
thinkingtoilet · 20h ago
But if your project is getting more and more usage, surely that requires more expenses. What is your alternative?
abound · 20h ago
I think GP is calling out the highly non-linear nature of the pricing. $20 for the first TB and then $50/100 GB after is a 25x jump in pricing.
Linear usage cost makes sense, but the more common/sane thing is cheaper unit pricing as you hit scale.
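The 25x figure follows directly from the numbers quoted above ($20 for the first 1 TB, then $50 per additional 100 GB). A tiny sketch of that arithmetic:

```python
# Marginal cost per 100 GB inside vs. beyond the included tier,
# using the Netlify figures quoted in this thread.
included_price, included_gb = 20.0, 1000.0    # $20 for the first ~1 TB
overage_price_per_100gb = 50.0                # $50 per extra 100 GB

cost_per_100gb_included = included_price / (included_gb / 100)  # $2.00
jump = overage_price_per_100gb / cost_per_100gb_included        # 25.0x
```

So the effective unit price rises from $2 to $50 per 100 GB at the tier boundary, the opposite of the volume discount you'd normally expect at scale.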
mooreds · 19h ago
> the more common/sane thing is cheaper unit pricing as you hit scale.
Depends on the provider's business model.
Many devtools want to make it trivial to get started, and zero/low prices facilitate that. They know that once you are set up with the tool, the barrier to moving is high. They also know that devs are tinkerers who may take a free product discovered in their free time and introduce it to a workplace that will pay for it.
But someone has to pay for all those free users/plans (they aren't using zero resources). With this business model, the payer is the person/org with some level of success who is forced up into a more expensive plan.
This is a valid strategy for two reasons:
- such users/orgs are less likely to move because they already have working code using the system and moving introduces risk
- if they have high levels of traffic, they may (not certainly, but may) be a profit making enterprise and will do the cold hard calculus of "it costs me $50/100 GB but would take a dev N hours to move and will have X opportunity cost" and decide to keep paying
The successful "labor of love" project is an unfortunate casualty.
Illniyar · 18h ago
It's definitely a business model. Just like a dark pattern is a pattern :)
The counter to that argument is that it's creating an adverse effect on your most profitable customers, with an incentive to move to offerings that don't have free tiers (or where the free tiers are not considerably affecting your own costs).
If your free tier is so lucrative that you need to 25x the cost, then your free tier is too expansive and you need to tone it down until the economics make sense.
JavierFlores09 · 18h ago
> The counter to that argument is that it's creating an adverse effect on your most profitable customers, with an incentive to move to offerings that don't have free tiers (or where the free tiers are not considerably affecting your own costs).
> If your free tier is so lucrative that you need to 25x the cost, then your free tier is too expansive and you need to tone it down until the economics make sense.
It does make sense, though. That's how almost every subsidized system works, and the benefit applies to everyone until they scale to the point where they're no longer eligible for it. It does suck for the people who just began paying the actual price of the service instead of the subsidized one, and certainly more so if they're not actually profiting from it. But then again, it isn't like they weren't benefitting from the price up to that point, otherwise they wouldn't have chosen it. Luckily, as far as databases go, there are a gazillion options to choose from, and experiences like this are invaluable when it comes to picking one with a pricing model that fits the scaling requirements of a given project, and not only the technical merits.
Also, as a side rant, I honestly don't think "projects of love" are a good counterargument to anything. They're clearly not of love, because otherwise people would find a way to make them profitable. Most people are either too lazy to, or lack the knowledge to, turn their hobby into a marketable thing. Which is fine, nobody wants to deal with business when it comes to their hobbies, but one can't have it both ways. Either your hobby project gets successful and you find ways to cover its expenses, or you realize that your hobby project needs to be kept just a hobby project.
gausswho · 17h ago
> Also as a side rant, I honestly don't think "projects of love" are a good counter argument to anything. They're clearly not of love because otherwise they would find a way to make them profitable.
I appreciate the... tough love here, and also acknowledge that 'doing it for love' is ambiguous. But I strongly disagree that declining to make something profitable indicates that it's not out of love.
To clarify my own situation, it's more out of wanting to share knowledge with the world and build a community. It's a very popular site, ubiquitous in its niche, but that's about as much as I'll divulge.
I'll grant that we've been benefitting from the subsidy/hook up to now. But I'll also add the wrinkle that a substantial increase in bandwidth is due to AI harvesters. They are becoming an existential threat to projects like these.
robertlagrant · 19h ago
Yes - they've inverted it to allow smaller projects to onboard very cheaply.
thinkingtoilet · 17h ago
Ah. Makes sense. Thanks.
gausswho · 17h ago
For the moment, streamlining bandwidth delivery and distributing across other free/cheap tiers. After that, the plan is to find a sales team that'll discount/sponsor the site in exchange for putting their logo in the footer. After that, maybe self-host or close up.
asib · 19h ago
I hear you on that, but I would say "usage-based pricing" does not equate to "increasing marginal cost" at all. There are both usage-based providers that have increasing marginal cost and those that don't.
theamk · 18h ago
Seems reasonable to me? After all, they went to "predictable pricing" which seems to be generally better than usage-based one.
I think the only reason to go with usage-based pricing is if you want to take a risk to save money - you are getting unpredictable bills but hope that average is going to be cheaper. As any gamble, you can win or lose.
mrweasel · 19h ago
I sort of feel like their own product, Maple.AI, has the same issue. The more users use the product, the more they have to pay. So they clearly understand that the pricing model is problematic, but they still use it themselves?
clarkbw · 16h ago
I think this was actually trying to say that Neon's prices were high. Because otherwise I agree it doesn't make sense.
And Neon will be lowering prices dramatically... a day from now?
gr4vityWall · 19h ago
> We have thousands of users generating thousands of chats daily. Lawyers discussing sensitive cases. Executives planning strategy. Developers working on proprietary code. They trust us because their data is mathematically guaranteed to be private. But that trust evaporates quickly when the service is unavailable.
And 250 dollars/month was considered expensive for the infrastructure handling that? My first impression is that based on the stakes alone, that would warrant a full time dedicated engineer.
Not gonna lie, although I appreciate the comparison and that they shared their experience publicly, this post sounds like half a technical write-up, half an ad for both companies.
anthonyronning · 19h ago
> And 250 dollars/month was considered expensive
No, it was not my intention for price to be the main thing people hooked in on with this article. It's the combination of it all. Better reliability, performance, and infrastructure AND it's more affordable.
> warrant a full time dedicated engineer
TBH, it's basically my sole full time job as the CTO.
> half a technical write-up, half an ad for both companies
I've been frustrated by Neon for months now and excited about PlanetScale's new postgres offering. Was pleasantly surprised by it too and wanted to write about it. I do appreciate you writing this, and sorry if it comes across too much like an ad for us. Only meant to share our unique experiences and satisfaction with a new thing.
samlambert · 19h ago
if it's going down and causing reputation damage then $1 a month is too expensive.
markskram · 19h ago
we're a pre-seed startup with a team of two, so no money for a full time engineer babysitting the database. first impressions and reputation are big as we launch, so we need reliability
Insanity · 19h ago
We're experimenting with Lakebase now (Databricks' name for Neon).
Initial results are actually pretty cool when using the UI to spin it up. The API leaves some things to be desired (bad error messages that obfuscate the actual error, undocumented rate limiting, ..).
Plus, there's been quite a number of strange bugs we ran into, like tables that don't recreate correctly because of dangling resources.
Overall, I'm pretty excited about the product because it makes life a bit easier, but it's not really 'production quality yet'.
(Alternatively, maybe we're just doing things in a bad way while we're learning to use it, so it could be a PEBKAC type of issue, rather than a Lakebase issue).
vlucas · 19h ago
It's not about the $250 being too expensive. It's about feeling like you're not getting a good value - being overcharged for a subpar product.
Quote from the article:
> At $250/month for 4 databases without any replicas, we were paying premium prices for subpar reliability.
samlambert · 19h ago
I'd be happy to show you around PlanetScale if there is interest.
jmull · 18h ago
I realize this wasn't the main reason they switched, but from my perspective, $156/mo and $250/mo for db is essentially the same number:
Way too much for a project without a budget, and approximately zero for a project with a budget.
markskram · 18h ago
As with any product comparison, price is often the last thing to compete on. For us it was the reliability and debugging insights that mattered. The cost savings was just a bonus.
acedTrex · 18h ago
On any project I've ever worked on, 156 -> 250 is a rounding error lol. That's not even enough to bother thinking about.
beoberha · 20h ago
What exactly is PlanetScale Postgres? Is it plain managed Postgres ala RDS or something more bespoke like Neon? I know PlanetScale is working on a Vitess-like sharded Postgres (Neki?) but I’m guessing that is not yet running in the cloud?
kedihacker · 20h ago
It's more like RDS, but with a far better developer experience than RDS: dashboards and better migrations. Their unique offering is Metal, which uses local SSDs for superior performance https://planetscale.com/metal
gnaman · 18h ago
What is the pricing like? Unfortunately I can't go past 1500GB of storage on their pricing calculator. We have tons of data and I don't feel like scheduling a sales call just to estimate costs.
anthonyronning · 20h ago
Their dashboards and metrics are really good. Plus the performance is great even on their non-metal offering too. I hope to one day be able to justify the upgrade to it.
Here's their announcement blog post with a bit more info, like their benchmarks:
https://planetscale.com/blog/planetscale-for-postgres
https://planetscale.com/blog/benchmarking-postgres
Possibly silly question, and not specific to this DB offering, but why is it a good decision to connect to your DB over the Internet vs keeping it in the same data center as your db clients, given the network-latency based performance hit I believe it causes?
kedihacker · 18h ago
They're usually deployed in the same region, like any other database not colocated with the clients. With VPC peering there should be no difference.
bddicken · 18h ago
It's real Postgres operated by PlanetScale. You get HA by default, the best performance, query insights, etc.
I thought PlanetScale's postgres option was still in early access mode?
samlambert · 12h ago
It is but we've been giving people access. If you are interested then email postgres@planetscale.com
samlambert · 20h ago
this is great to see. thank you for the write up.
markskram · 19h ago
thanks for working with us, sam!
anthonyronning · 19h ago
Thank you for the great service!
cl0ckt0wer · 20h ago
They can't afford database administration? Is it that expensive to hire a DBA?
anthonyronning · 20h ago
Pre-seed stage startup of two people. No, we can't hire a DBA.
Insanity · 19h ago
Good luck with the startup. TBH, I think going for as many managed solutions as possible while you bootstrap makes sense. Something more tailor-made often doesn't make sense until you start scaling.
anthonyronning · 19h ago
Thank you! Yes, totally agree. I do hope to get there one day and would LOVE nothing more than to have a DBA take this to the next steps when we're bigger.
markskram · 19h ago
the temptation to do it all yourself is strong when you're technically capable. but offloading to managed solutions is a huge benefit
dizhn · 20h ago
They listed some prices. It's like $150 per month.
vmg12 · 20h ago
Neon doesn't have usage-based pricing, it has autoscaling; there is a difference.
anthonyronning · 20h ago
We get 750 compute hours per month across our entire org, then charged 16c per hour.
vmg12 · 19h ago
And they are charging per vcpu like every other company. You have control over the autoscaling parameters.
Usage based pricing typically implies paying per number of requests.
xmorse · 17h ago
The answer is an open secret
markskram · 13h ago
;)
unixhero · 18h ago
I am bullish on fly.io
xmorse · 17h ago
I am not. No one has ever heard of their managed postgres, and they've already started referring to it by an acronym.
unixhero · 12h ago
It could be that you have legitimate concerns regarding fly.io, things that I have not considered.
But I for one see zero acronyms on their managed postgres product information page: https://fly.io/docs/mpg/ *
* Except in the url