Ask HN: Is Kubernetes still a big no-no for early stages in 2025?
22 points by herval | 8/21/2025, 6:25:42 PM | 37 comments
It's a commonly-repeated comment that early stage startups should avoid K8s at all costs. As someone who had to manage it on bare-metal infrastructure in the past, I get where that comes from - Kubernetes has been historically hard to set up, you'd need to spend a lot of time learning the concepts and how to write the YAML configs, etc.
However, hosted K8s options have improved significantly in recent years (all cloud providers have Kubernetes options that are pretty much fully managed), and I feel like with LLMs, it's become extremely easy to read & write deployment configs.
What are your thoughts about adopting K8s as infrastructure early on (say, when you have initial customer fit and a team of 5+ engineers) and standardizing around it? How early is too early? What pitfalls do you think still exist today?
At early stage the product should usually be a monolith and there are a LOT of simple ways to deploy & manage 1 thing.
Probably not an issue for you, but costs will also tend to bump up quite a lot: you'll be ingesting way more logs and tons more metrics just for the cluster itself, and you may find yourself paying for more things to help manage & maintain your cluster(s). Security add-ons can quickly get expensive.
Same goes for all tech choices. If you already know it, you understand its pros and cons, and it still seems like the simplest, best option for the concrete thing you need to build right now, use it.
Otherwise, use whatever alternative tech fits that description instead!
For my own projects I use a managed Northflank cluster on my own AWS account and likewise... just a fantastic experience. Everything that Heroku could and should have been. Yes the cluster is a bit pricey to stand up both in terms of EC2 compute and management layer costs, but once it's there, it's there. And the costs scale much more nicely than shoving side projects onto Heroku.
At this stage I consider managed k8s my default go-to unless it's something so lightweight I just want to push it to Vercel and forget about it.
PS. Edit for clarity.
* how much downtime can be tolerated during a deploy or outage? load balancing and multi-region is more $$$.
* if you have a bunch of linux nerds and an efficient app -- an nginx web server + your app + Postgres DB and Ansible to manage a single VM with Cloudflare in front of it might be a good option. Portainer in the VM is nice if you want to go with containers.
* if you have a bunch of desktop devs, containers and build pipelines with PaaS are a good option. many are resilient and have HTTPS built in.
* the smaller your infra/devops team, the more i would leverage team knowhow and encourage PaaS offerings.
* the smaller your budget, the more creative you need to be (ec2/storage accounts as part of hosting, singular monolithic VM has relatively flat costs, what free stuff do i have on my cloud provider, etc)
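The single-VM Ansible option above could be sketched as a minimal playbook. All names here (the `webserver` host group, the `myapp` service and config file) are hypothetical placeholders, not a definitive setup:

```yaml
# Hypothetical single-VM playbook: nginx reverse proxy + your app + Postgres.
- hosts: webserver
  become: true
  tasks:
    - name: Install nginx and PostgreSQL
      ansible.builtin.apt:
        name: [nginx, postgresql]
        state: present
        update_cache: true

    - name: Copy nginx reverse-proxy config for the app
      ansible.builtin.copy:
        src: files/myapp.conf          # hypothetical config proxying to the app's port
        dest: /etc/nginx/conf.d/myapp.conf
      notify: reload nginx

    - name: Ensure the app service is running
      ansible.builtin.systemd:
        name: myapp                     # hypothetical systemd unit for your app
        state: started
        enabled: true

  handlers:
    - name: reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

With Cloudflare in front terminating TLS, a playbook on this order is often the entire deployment story for a monolith.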
The real challenge isn’t setting up the EKS cluster, but configuring everything around it: RBAC, secrets management, infrastructure-as-code, and so on. That part takes experience. If you haven’t done it before, you're likely to make decisions that will eventually come back to haunt you — not in a catastrophic way, but enough to require a painful redesign later.
P.S. If your needs are simple, consider starting with Docker Swarm. It's surprisingly low maintenance compared to Kubernetes, which has many moving parts and frequent deprecations from cloud providers. Feel free to drop me an email. I can share a custom Python tool I wrote a long time ago to automate the initial setup via the AWS API.
* Kubernetes is great for a lot of things, and I think there are many use cases for it where it's the best option bar none
* Particularly once you start piling on requirements - we need logging, we need metrics, we need rolling redeployments, we need HTTPS, we need a reverse proxy, we need a load balancer, we need healthchecks. Many (not all!) of these things are what mature services want, and k8s provides a standardized way to handle them.
* K8s IS complex. I won't lie. You need someone who understands it. But I do enjoy it, and I think others do too.
* The next best alternative in my opinion (if you don't want vendor lock-in) is docker-compose. It's easy to deploy locally or on a server
* If you use docker-compose, but you find yourself wanting more, migrating to k8s should be straightforward
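To illustrate the migration point: a compose file like the sketch below (image names hypothetical) maps almost one-to-one onto Kubernetes objects later - each service becomes a Deployment + Service, each named volume a PersistentVolumeClaim:

```yaml
# Minimal docker-compose.yml sketch; names and images are hypothetical.
services:
  web:
    image: myorg/myapp:1.0        # hypothetical app image
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Because the mental model (containers, env vars, ports, volumes) is the same, little of this work is thrown away if you do outgrow compose.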
So to answer your questions, I think you can adopt k8s whenever you feel like it, assuming you have the expertise and are willing to dedicate time to maintaining it. I use it in my home network with a 1 node "cluster". The biggest pitfalls are all related to vendor lock in - managed Redis, Azure Key Vault. Hyper specific config related to your managed k8s provider that might be tough to untangle. At the same time, you can just as easily start small with docker-compose and scale up later as needed.
On the other hand, if you are thinking of using bare VMs, then better go with managed K8s. I think in 2025 it's a draw in terms of initial setup complexity, but managed K8s doesn't require constant babysitting in my experience, unlike VMs, and you are not sinking hours into a bespoke throwaway setup.
But don’t discount bare-metal first! I see a lot of K8s or other cluster managers being used to manage underpowered cloud VMs, and while I understand the need for an orchestrator if you’re managing dozens of VMs, I wonder - why do you need multiple VMs in the first place if their total performance can be achieved by a handful of bare-metal machines?
The time sink required for the care and feeding just isn't worth it. I pretty much have to dedicate one engineer about 50% of the year to keeping the dang thing updated.
The folks who set it all up did a poor job. And it has been a mess to clean up. Not for lack of trying, but for lack of those same people being able to refine their work, getting pulled into the new hotness and letting the clusters rot.
Idk your workload, but mine is not even suited for K8s... The app doesn't like to scale. And if the leader node gets terminated from a scale down, or an EC2 fails, processing stops while the leader is reelected. Hopefully not another node that is going down in a few seconds... Most of the app teams stopped trying to scale their app up and down because of this ...
I would run on ECS if AWS was my cloud at a start up. Then if scaling was getting too crazy, move to EKS.
But for the love of God ... Keep your monitoring and logging separated from your apps. Give it its own ECS cluster, or buy a fully managed solution. It is hard to record downtime if your monitoring goes down during your K8s upgrade.
For hobby projects that I don't really plan to scale, I've recently gotten back into non-containerized workloads running as systemd services in an Ubuntu VM. It feels pretty freeing not to worry about all the cruft, but that will bite me if something ever does need to live on multiple servers.
Edit: you can run a lot on one or two hetzner servers for almost no money. Compare €60/month, vs about $1000/month for a couple of replicated fargate services.
(AWS employee, but opinions are my own)
Just pick a cloud provider and move on. All of the top-tier providers can scale. Choose the one you're most comfortable with and move on. Focus on the things that matter for your business like building the right product and getting customers. If you have regret later, it's going to be because you were so wildly successful that it is now an actual business risk and/or your customers demand it. But don't make this decision based on a problem you don't have and are unlikely to have in the next 12-24 months, if ever. Cloud agnosticism is rarely a functional or strategic requirement of any given business, and it's usually very expensive to implement--more than any savings you might achieve by pitting providers against each other (which you can't do anyway unless you are big enough to be a strategic customer).
What’s your business? Do you have product market fit? Do you benefit enough from the three things K8S does well to pay the cost in increased complexity, reduced visibility, and increased toil? If you can’t immediately rattle off those three things, you don’t need it.
Don’t you want to just focus on the problem you’re solving for your customers and not the infrastructure that makes your app go? Every startup I’ve seen doing k8s should not have been. Every startup I’ve seen not using k8s didn’t need it. (Except a startup who moved from beanstalk which nobody should ever use, and they could have done better by moving to something like ecs)
I’ve seen a startup lose their entire DevOps team and successfully go a year without it because their core app was on heroku. What’s that worth in dollars? What are those dollars worth in opportunity cost?
Either you go all in on someone's setup or you get to do it all yourself.
That's true for any service. Either you drink the AWS/GCP/Azure koolaid or you make your own. Whether it's k8s or Swarm or whatever doesn't matter.
Disclosure: I’m the author of skate
I think using Kubernetes effectively in 2025 is more about what you _don't_ use than what you _do_ use. As an early stage startup you can get a long way with no RBAC, no network policies, no auto-scalers, and even no stateful workloads. You can use in-cluster metrics and logging before you need to turn to Prometheus, Loki, etc. Use something managed like AWS EKS.
Try to solve your problems first by taking away, and only if that isn't feasible should you start adding. Plain old Deployments will get you a long way.
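A "plain old Deployment" in this spirit might look like the sketch below: rolling updates and a readiness probe, but no RBAC, no autoscaler, no network policies. The image name, port, and `/healthz` path are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep serving during deploys
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:1.0       # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz           # hypothetical health endpoint
              port: 8080
```

That one object already covers rolling redeploys, healthchecks, and replica management; everything else can be layered on when a concrete problem demands it.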
Now this next bit is going to sound like a pitch, and that's because it is – but when those free credits start running out, your bill starts reaching mid-four-figures, and you start thinking about your first DevOps hire, _call us_. Just for 30 minutes. We can migrate you off your cloud infra and onto a nice spacious bare metal k8s cluster, and we'll become your 24/7 on-call DevOps team. We'll get woken up in the night when things break, not you. And core-for-core it will cost a lot less than AWS.
The fact that we can do all that is a testament to how expensive AWS really is. K8s is a good choice now if you keep it simple, positions you well for growth in the future, and for a cluster under a couple of hundred cores it is going to be pretty economical to run it in the public cloud.
PS. Link in bio
Even in some very non-startup enterprises, Cloud Foundry and OpenShift get adopted for a reason: some teams don’t need the overhead.
For startups there’s fly.io, render.com, and of course Heroku, but really — you can get from MVP to pretty decent scale on AWS or GCP with some scripts or Ansible.
Use k8s if you need it. It’s pretty well-proven. But it’s not something you need to have FOMO about.
Maybe you need a cluster per client and k8s is the only option.
Maybe you literally only need a few docker services and swarm/ecs/etc are fine forever.
What is the problem that K8s solves for you?
In general - scaling up a small number of microservices + their associated infra (redis/rabbitmq/etc)
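That scaling story is the kind of thing a HorizontalPodAutoscaler sketches in a few lines. This assumes a hypothetical Deployment named `worker` and a CPU-utilization target; the numbers are illustrative, not recommendations:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker          # hypothetical microservice Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```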
Ideally start with an existing kube stack and slowly make it your own.
Operationalizing across heterogeneous clusters will be an unfortunate source of excitement.
But I agree if you're doing something simpler, sticking to a single provider is fine
And regarding Kubernetes migrations, once you've made sure you have network and DNS connectivity cross-cluster, it's essentially just replacing the CSI and LoadBalancer controllers. For the actual data migration there's no magic bullet; it depends on what you run.
The USP for Kubernetes is that it's essentially the same no matter where you run it since everything conforms to the Kubernetes API spec.
If you don't want or need local development LARPing prod then anything goes.
I would provision buckets with Terraform/tofu, we just use ingress so idk about API gateways.
The eye opener for me was "I can just do this in Kubernetes", which is pretty much always true (though not always right).
With Kubernetes + Prometheus + Grafana (and friends), cert-manager, a CSI driver, a load balancer, and some CNI, you have something resembling what I'd use from $cloud provider.
Deploying K3s is really easy; it can definitely be a time-sink when you're learning, but the knowledge transfers really well.
You also don't really need all Kubernetes features to use it, you can deploy K3s on a single VM and run your pods with hostnetworking and local path mounts, essentially turning it into a fancy docker-compose which you can grow with instead of throw out.
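That "fancy docker-compose" setup could look like the sketch below on a single-node K3s install: a pod on the host network plus a claim against K3s's built-in `local-path` StorageClass. The image and sizes are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # ships with K3s by default
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  hostNetwork: true              # the container binds ports directly on the VM
  containers:
    - name: myapp
      image: myorg/myapp:1.0     # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

When you later need real load balancing or multiple nodes, the same manifests grow into Deployments and Services instead of being thrown out.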
I value FOSS and being able to run "anywhere" with the same tools. K8s and Postgres get me there; I haven't worked on any "web scale" projects, but I know both can scale pretty high.