Ask HN: How Do You Deploy?
12 points by canto on 3/26/2025, 9:54:16 PM · 21 comments
If you're just starting out, or you're a YC startup or similar, where and how do you deploy/ship your apps?
Say you have your GitHub repo and you've set up some pipelines for CI (or not ;), but what about deployment?
You'll need some storage, maybe a DB of some sort, and some compute or serverless.
Do you use AWS Lambda, Beanstalk, EKS/AKS/etc., raw VMs, API Gateway, or something like Railway or Heroku on your own?
Or do you hire a devops or product engineer with some cloud experience to handle this?
No right or wrong answers here :-) I appreciate all input!
Your job at this stage isn’t to build something technically awesome and beautifully architected or even production-ready by most standards. It’s to figure out what people will pay for. Focus your energy on that.
Once you’ve got your first hundred customers/thousand users and are serving >100k requests/day, think about what it will take to scale infra affordably.
(YMMV if your product is exceptionally compute-heavy, such as a custom ML model, or involves complex data pipelines out of the gate or other edge cases.)
1. Set up the server once, including the app's Docker Compose file.
2. Upload a container archive to the server.
3. A systemd path unit extracts the archive and loads it into the container runtime.
4. An orchestration tool like Docker Compose runs or re-runs the services, using the Compose file from step 1 (sketched below).
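Here's a minimal sketch of steps 3-4; the unit names, paths, and archive name are my own placeholders, not from the repo:

    # deploy-watch.path fires whenever a fresh archive lands on the server (step 2).
    cat > /etc/systemd/system/deploy-watch.path <<'EOF'
    [Path]
    PathChanged=/srv/deploy/app.tar

    [Install]
    WantedBy=multi-user.target
    EOF

    # The matching service (activated by the path unit) handles steps 3-4.
    cat > /etc/systemd/system/deploy-watch.service <<'EOF'
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/redeploy.sh
    EOF

    cat > /usr/local/bin/redeploy.sh <<'EOF'
    #!/bin/sh
    set -e
    docker load -i /srv/deploy/app.tar                     # load archive into the runtime
    docker compose -f /srv/app/docker-compose.yml up -d    # (re)run services from step 1's file
    EOF
    chmod +x /usr/local/bin/redeploy.sh

    systemctl daemon-reload
    systemctl enable --now deploy-watch.path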
The DB is usually SQLite in WAL mode, replicated with Litestream.
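That part is tiny; roughly (paths and bucket are made up):

    sqlite3 /srv/app/db.sqlite3 'PRAGMA journal_mode=WAL;'      # enable WAL mode once
    litestream replicate /srv/app/db.sqlite3 s3://my-bucket/db  # stream changes to S3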
Check out the repo at https://github.com/confuzeus/simple-django for a full example.
In the case of my major current personal project, I do it with a GitHub Actions workflow: https://github.com/JonLatane/jonline/blob/main/.github/workf...
A deploy looks like this: https://github.com/JonLatane/jonline/actions/runs/1346474905...
Here, I do a canary deploy to a dev server (jonline.io), then cut a GitHub release and deploy to production servers (bullcity.social, oakcity.social). (All really live on the same single-box dinky K8s cluster.)
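Boiled down to shell, the flow is roughly the following; the container/deployment names and contexts here are made up, and the real steps live in the workflow linked above:

    # Canary: roll the dev server first and make sure it comes up healthy.
    kubectl --context dev set image deployment/jonline jonline=ghcr.io/jonlatane/jonline:"$TAG"
    kubectl --context dev rollout status deployment/jonline --timeout=300s
    curl -fsS https://jonline.io/ > /dev/null   # basic smoke test

    # Then cut the release and roll the production servers the same way.
    gh release create "$TAG" --generate-notes
    kubectl --context prod set image deployment/jonline jonline=ghcr.io/jonlatane/jonline:"$TAG"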
In my own projects, I have stayed the longest with Ansible. Once the scripts are built, you can use them to deploy most web apps in the same way, and stuff rarely breaks.
For websites, I have switched away from Ansible to simple shell scripts ("npm run build && scp ..."). I have also done this for web apps, but it starts getting a bit more complex once you add health checks and rollbacks.
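For reference, a sketch of what such a script tends to grow into once health checks and rollbacks show up (host, paths, and port are hypothetical):

    #!/bin/sh
    set -e
    npm run build
    REL=/srv/app/releases/$(date +%s)
    scp -r dist "deploy@myhost:$REL"
    ssh deploy@myhost "ln -sfn $REL /srv/app/current && systemctl restart app"

    # Health check; point the symlink back at the previous release on failure.
    sleep 3
    if ! curl -fsS http://myhost:8080/health > /dev/null; then
        echo "health check failed, rolling back" >&2
        PREV=$(ssh deploy@myhost 'ls -dt /srv/app/releases/* | sed -n 2p')
        ssh deploy@myhost "ln -sfn $PREV /srv/app/current && systemctl restart app"
        exit 1
    fi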
In general, most of my work involves web apps, and I start with this and grow from there:
- Monolith backend + Postgres + same language for backend and frontend with shared code.
- A small Linux server at a fixed-price cloud provider (like DigitalOcean), with backups enabled.
- When the project allows it, Postgres is installed on the VM itself (backups help to recover data and keep the price small).
- Use nginx as the entrypoint to the app. This is very flexible once you are used to it; for example, you can do caching + rate limiting with simple configuration (see the sketch after this list).
- Use certbot to get the SSL certificate.
- Use systemd to keep the app running.
- A cheap monitoring service to keep pinging my app.
- Deploys are triggered from my computer unless it is justified to delegate this to the CI.
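Here's roughly what the nginx caching + rate-limit part looks like (a minimal sketch; domain, port, and zone names are made up):

    cat > /etc/nginx/conf.d/app.conf <<'EOF'
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    proxy_cache_path /var/cache/nginx keys_zone=appcache:10m max_size=1g;

    server {
        listen 80;
        server_name example.com;

        location / {
            limit_req zone=perip burst=20 nodelay;  # throttle per client IP
            proxy_cache appcache;                   # cache upstream responses
            proxy_cache_valid 200 10m;
            proxy_pass http://127.0.0.1:8000;
        }
    }
    EOF
    nginx -t && systemctl reload nginx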
For a while now I have found Ansible too slow, and I have been meaning to finish building my own general-purpose tool for deploying web apps this way, but I have no idea if I'll ever be done with it.
Perhaps the most important project I ran with this approach was a custom block-explorer API that indexed Bitcoin and a few other cryptocurrencies. It scaled well on a single VM (aggressive nginx caching of immutable data helped a lot), even though the Postgres storage grew past 1 TB.
Cert-Manager is the Certbot equivalent for K8s and "just works" with deployed services. Nginx ingresses are a pretty standard thing there too. Monitoring is built-in. And with a few API keys, it's easy to do things like deploy from GitHub Actions when you push a commit to main, after running tests.
And perhaps most importantly, managed Kubernetes services let you attach storage for DB and clusters with standard K8s APIs (the only thing provider-/DigitalOcean-specific is the names of the storage service tiers). Also the same price as standard DigitalOcean storage with all their standard backups… but again, easier to set up, and standardized so that if DigitalOcean ever gets predatory, it’s easy enough to migrate to any of a dozen other managed K8s services.
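As a rough sketch of how little it takes to get that Certbot equivalent running on such a cluster (issuer name and email are placeholders):

    helm repo add jetstack https://charts.jetstack.io && helm repo update
    helm install cert-manager jetstack/cert-manager \
        --namespace cert-manager --create-namespace --set installCRDs=true

    # A cluster-wide Let's Encrypt issuer; ingresses pick it up via annotation.
    kubectl apply -f - <<'EOF'
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: you@example.com
        privateKeySecretRef:
          name: letsencrypt-account-key
        solvers:
          - http01:
              ingress:
                class: nginx
    EOF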
I take a "power pack" approach. Everything - content, code, infra - is packed together and deployed together. It runs in docker compose on DigitalOcean.
To deploy, I just push to GitHub. A service on the server side rebuilds whenever it sees new commits. It's also part of the power pack. I don't like having things spread across multiple services. A pre-push hook lints and builds the static site before deployments, so failed builds are very rare.
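The rebuild service can be as small as a polling loop; a sketch, with the paths made up:

    # Poll for new commits; rebuild the whole power pack when one lands.
    while true; do
        git -C /srv/site fetch origin main
        if [ "$(git -C /srv/site rev-parse HEAD)" != "$(git -C /srv/site rev-parse origin/main)" ]; then
            git -C /srv/site merge --ff-only origin/main
            docker compose -f /srv/site/docker-compose.yml up -d --build
        fi
        sleep 60
    done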
This works well for me because I often work offline or with bad internet. It's important for me to run everything locally. I can also run just the static site generator if all I do is edit the content.
I also find this a lot easier to reason about than a bunch of scattered services talking to each other through APIs. In the end, it's just docker plus a bunch of scripts.
git merge -> CI -> OCI artifact -> CD -> cloud.
Every deployable is packaged as an image, and can be deployed to serverless runtimes available on many clouds, VMs, and k8s (I assume other orchestrators too, but haven't tried).
My goal is to commoditize my cloud provider, while minimizing my costs. Everything is configured through terraform, so standing up an equivalent environment on a new cloud is pretty trivial.
I've tried to be very mindful about what I depend on from the provider (e.g., using provider-specific SDKs), with mixed results at sticking to it. I would like to improve this to the point where I could automatically fail over to other providers.
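In shell terms the pipeline is roughly this (registry and names are placeholders):

    # CI: package each deployable as an OCI image and push it.
    docker build -t registry.example.com/app:"$GIT_SHA" .
    docker push registry.example.com/app:"$GIT_SHA"

    # CD: point whichever runtime is in use at the new tag, e.g. on k8s:
    kubectl set image deployment/app app=registry.example.com/app:"$GIT_SHA"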
If the product gains traction, you can either scale up at Fly, choose to move to a different cloud offering, or even decide to self-host.
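With flyctl, that scale-up is about two commands (assuming a fly.toml is already in place):

    fly deploy          # build and ship the app described by fly.toml
    fly scale count 2   # run a second machine for redundancy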
Too many people overcomplicate everything with an elastic-cloud-distributed-load-balancer on Vercel and end up overpaying and not controlling their infrastructure.
You can achieve redundancy by spinning up two instances of the same container and putting a Caddy reverse proxy in front of them. You don't need k8s for that.
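A sketch of that setup, with the image name and port made up:

    docker network create appnet
    docker run -d --name app1 --network appnet myapp:latest
    docker run -d --name app2 --network appnet myapp:latest

    cat > Caddyfile <<'EOF'
    example.com {
        # Caddy load-balances across both upstreams (round robin by default)
        # and handles TLS certificates automatically.
        reverse_proxy app1:8080 app2:8080
    }
    EOF

    docker run -d --name caddy --network appnet -p 80:80 -p 443:443 \
        -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" caddy:latest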