We moved from AWS to Hetzner, saved 90%, kept ISO 27001 with Ansible
We rebuilt key AWS features ourselves, using Terraform for VPS provisioning and Ansible for everything from hardening (auditd, ufw, SSH policies) to rolling deployments (with Cloudflare integration). Our Prometheus + Alertmanager + Blackbox setup monitors infra, apps, and SSL expiry, with ISO 27001-aligned alerts. Loki + Grafana Agent ship logs to S3-compatible object storage.
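For the SSL expiry part, the alert is essentially one PromQL expression over the blackbox exporter's probe metric. A minimal sketch (rule names and thresholds are illustrative, not our exact config):

    # prometheus rule file -- illustrative names and thresholds
    groups:
      - name: ssl-expiry
        rules:
          - alert: SSLCertExpiringSoon
            # blackbox exporter exposes the cert's notAfter as a unix timestamp
            expr: probe_ssl_earliest_cert_expiry - time() < 14 * 24 * 3600
            for: 1h
            labels:
              severity: warning
            annotations:
              summary: "TLS cert for {{ $labels.instance }} expires in under 14 days"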
The stack includes:
• Ansible roles for PostgreSQL (with automated s3cmd backups + Prometheus metrics)
• Hardening tasks (auditd rules, ufw, SSH lockdown, chrony for clock sync)
• Rolling web app deploys with rollback + Cloudflare draining
• Full monitoring with Prometheus, Alertmanager, Grafana Agent, Loki, and exporters
• TLS automation via Certbot in Docker + Ansible
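To give a flavour of the hardening tasks, the SSH lockdown boils down to a few lineinfile edits validated by sshd before writing. A minimal sketch, not our exact role (file layout and handler name are assumptions):

    # roles/hardening/tasks/ssh.yml -- illustrative sketch
    - name: Harden sshd configuration
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^#?{{ item.key }}"
        line: "{{ item.key }} {{ item.value }}"
        validate: /usr/sbin/sshd -t -f %s   # refuse to write a broken config
      loop:
        - { key: PermitRootLogin, value: "no" }
        - { key: PasswordAuthentication, value: "no" }
        - { key: PubkeyAuthentication, value: "yes" }
      notify: Restart sshd   # handler not shown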
I wrote up the architecture, challenges, and lessons learned: https://medium.com/@accounts_73078/goodbye-aws-how-we-kept-i...
I’m happy to share insights, diagrams, or snippets if people are interested — or answer questions on pitfalls, compliance, or cost modeling.
At what cost? People usually exclude the cost of DIY-style hosting, which is usually the most expensive part. Providing 24x7 support for the stuff you've home-grown is, on its own, probably going to make a large dent in any savings you got by not outsourcing that to Amazon.
> $24,000 annual bill felt disproportionate
That's around 1-2 months of time for a decent devops freelancer. If you underpay your devs, about 1/3rd of an FTE per year. And you are not going to get 24x7 support with such a budget.
This still could make sense. But you aren't telling the full story here. And I bet it's a lot less glamorous when you factor in development time for this.
Don't get me wrong; I'm actually considering making a similar move, but more for business reasons (some of our German customers really don't like US hosting companies) than for cost savings. But this will raise cost and hassle for us, and I will probably need some reinforcements on my team. As the CTO, my time is a very scarce commodity, so the absolute worst use of my time would be doing this myself. My focus should be making our company and product better. Your tech stack is fine. Been there, done that. IMHO Terraform is overkill for small setups like this; it fits solidly in the YAGNI category. But I like Ansible.
I don’t understand why people keep propagating this myth which is mostly pushed by the marketing department of Azure, AWS and GCP.
The truth is that a cloud provider doesn't actually provide 24/7 support for your app. They only ensure that their infrastructure is mostly running, for a very loose definition of 24/7.
You still need an expert on board to ensure you are using them correctly and are not going to be billed a ton of money. You still need people to ensure that your integration with them doesn’t break on you and that’s the part which contains your logic and is more likely to break anyway.
The idea that your cloud bill is your TCO is a complete fabrication and that’s despite said bill often being extremely costly for what it is.
But the idea that AWS provides some sort of white glove 24/7 support is laughable for anyone that's ever run into issues with one of their products...
I guess a lot depends on size, diversity and dynamics of the demand. Not every nail benefits from contact with the biggest hammer in the toolbox.
Presumably they are in Europe? Labour is a few times cheaper here.
> Providing 24x7 support
They are not maintaining the hardware itself, and it's not like Amazon is providing devops for free. Unless you are using mainly serverless stuff, the difference might not be that significant.
>> That's around 1-2 months of time for a decent devops freelancer. If you underpay your devs, about 1/3rd of an FTE per year. And you are not going to get 24x7 support with such a budget.
In terms of absolute savings, we’re talking about 90% of 24k, that’s about 21.6k saved per year. A good amount, but you cannot hire an SRE/DevOps Engineer for that price; even in Europe, such engineers are paid north of 70k per year.
I personally think the TCO (total cost of ownership) will be higher in the long run, because now every little bit of the software stack has to be managed by their infra team/person, and things are getting more and more complex over time, with updates and breaking changes to come. But I wish them well.
In my experience, in the long run this "managed AWS saved us because we didn't need people" line always feels like the typical argument made by SaaS sales people. In reality, many services/SaaS are really expensive, and you probably only need a few features, which you can sometimes roll out yourself.
The initial investment might be higher, but in the long run I think it's worth it. It's a lot like Heroku vs AWS: super expensive, but it allows you to push a POC into production with little knowledge. In this case, it's AWS vs self-hosted or whatever.
Finally, can we quantify the cost of data/information? This company seems to be really "using" this strategy (= everything home-made, you're safe with us) for sales purposes. And it might work, although the final consumer might pay a higher price, which ultimately funds the additional devops needed to maintain the system. So who cares?
How important is it for companies not to be subject to the CLOUD Act or funny stuff like that?
Unless by Europe you mean the Apple feature availability special of UK/Germany/France/Spain/Italy
I’m an SWE with a background in maths and CS in Croatia, and my annual comp is less than what you claim here. Not drastically, but comparing my comp to the rest of the EU it’s disappointing, although I am very well paid compared to my fellow citizens. My SRE/devops friends are in a similar situation.
I am always surprised to see such a lack of understanding of economic differences between countries. Looking through Indeed, a McDonald’s manager in the US makes noticeably more than anyone in software in southeast Europe.
There will be a new AWS European Sovereign Cloud[1] with the goal of being completely US independent and 100% compliant with EU law and regulations.
[1]: https://www.aboutamazon.eu/news/aws/aws-plans-to-invest-7-8-...
The idea that anything branded AWS can possibly be US independent when push comes to shove is of course pure fantasy.
The US clearly states that extraterritoriality is fine with it. Depending on the company, one gag order is enough to sabotage the whole business.
The ICC move by MS made hospitals shift into an even higher gear to prepare off-ramp plans, from private Azure cloud to "let's get out".
Two reasons for this stick out:
- Are the multi-million dollar SV seed rounds distorting what real business costs are? Counting dev salaries etc. (if there is at least one employee) it doesn't seem worth the effort to save $20k - i.e., 1/5 of a dev salary? But for a bootstrapped business $20k could definitely be existential.
- The important number would be the savings as percent of net revenue. Is the business suddenly 50% more profitable? Then it's definitely worth it. But in terms of thinking about positively growing ARR doing cost/benefit on dropping AWS vs. building a new (profitable) feature I could see why it might not make sense.
Edit to add: it's easy to offhand say "oh yeah easy, just get to $2M ARR instead of saving $20k- not a big deal" but of course in the real world it's not so simple and $20k is $20k. The prevalent SV mindset of just spending without thinking too much about profitability is totally delusional except for like 1 out of 10000 startups.
If I generalize, I see two groups for whom this cost reduction does not matter. The first group is VC-funded, and the second is in charge of a million-plus AWS bill. We do not have anything in common with these companies, but we have something in common with 80% of the readers on this forum and 80% of AWS clients.
We're also bootstrapped and use Hetzner, not AWS (except for the occasional test), for very much the same reasons as you.
And we are also fully infrastructure as code using Ansible.
We used to be a pure software vendor, but are bringing out a devtool where the free tier runs on Hetzner. But with traction, as we build out higher tier services, it's an open question on what infrastructure to host it on.
There are a kazillion things to consider, not the least of which is where the user wants us to be.
• We heavily invested upfront in infrastructure-as-code (Terraform + Ansible) so that infra is deterministic, repeatable, and self-healing where possible (e.g. auto-provisioning, automated backup/restore, rolling updates).
• Monitoring + alerting (Prometheus + Alertmanager) means we don’t need to watch screens — we get woken up only if there’s truly a critical issue (see the routing sketch after this list).
• We don’t try to match AWS’s service level (e.g. RTO of minutes for every scenario) — we sized our setup to our risk profile and customers’ SLAs.
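The "only page on critical" point above is just Alertmanager routing. A rough sketch with placeholder receivers and addresses, not our production config:

    # alertmanager.yml -- placeholder receivers/addresses, illustrative only
    global:
      smtp_smarthost: "mail.example.com:587"   # hypothetical relay
      smtp_from: "alerts@example.com"
    route:
      receiver: ops-mail            # default: non-urgent, read during work hours
      group_by: [alertname, instance]
      routes:
        - matchers:
            - severity="critical"
          receiver: ops-pager       # only severity=critical wakes anyone up
          repeat_interval: 1h
    receivers:
      - name: ops-mail
        email_configs:
          - to: "ops@example.com"
      - name: ops-pager
        webhook_configs:
          - url: "https://example.com/pager"   # hypothetical paging webhook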
> True cost comparison:
• The migration was done as part of my CTO role, so no external consulting costs. The time investment paid back within months because the ongoing cost to operate the infra is low (we’re not constantly firefighting).
• I agree that if you had to hire more people just to manage this, it could negate the savings. That’s why for some teams, AWS is still a better fit.
> Business vs. cost drivers: Honestly, our primary driver was sovereignty and compliance — cost savings just made the business case easier to sell internally. Like you, our European customers were increasingly skeptical of US cloud providers, so this aligned with both compliance and go-to-market.
> Terraform / YAGNI: Fair point! Terraform probably is more than we need for the current scale. I went with it partly because it fits our team’s skillset and lets us keep options open as we grow (multi-provider, DR regions, etc).
And finally, because of this, I am posting about it. I am sharing as much as I can and just spreading the word about it. I am just sharing my experience and knowledge. If you have any questions or want to discuss further, feel free to reach out at jk@datapult.dk!
Given the existence of these tools, which are fantastic, I'm often stunned at how sluggish and expensive the AWS monitoring stack is, and how lacklustre its UX is.
Monitoring quickly became the most expensive and most unpleasant part of our AWS experience.
However in the US it's not very relevant or even interesting to companies, and some European companies fail to understand that.
SOC 2 is the default and the preferred standard in the US - it's more domestic and less rigid than ISO 27001.
Checking for evidence that you are doing those things is what I would call rigid. SOC 2, as an attestation, doesn't require so much documentation.
Also, Loki! How do you handle memory hunger on loki reader for those pesky long range queries, and are there alternatives?
Failures/upgrades: We provision with Terraform, so spinning up replacements or adding capacity is fast and deterministic.
We monitor hardware metrics via Prometheus and node exporter to get early warnings. So far (9 months in) no hardware failure, but it’s a risk we offset through this automation + design.
Apps are mostly data-less and we have (frequently tested) disaster recovery for the database.
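For the database, the nightly job is basically pg_dump + s3cmd, with a success timestamp dropped where the node exporter's textfile collector can pick it up, so a "backup too old" alert is trivial. A sketch with made-up database name, bucket, and paths:

    # roles/postgres/tasks/backup.yml -- db name, bucket and paths are illustrative
    - name: Install nightly pg_dump + s3cmd backup job
      ansible.builtin.cron:
        name: pg-backup
        user: postgres
        hour: "2"
        minute: "15"
        job: >-
          pg_dump -Fc exampledb > /var/backups/exampledb.dump &&
          s3cmd put /var/backups/exampledb.dump s3://example-backups/exampledb-$(date +\%F).dump &&
          echo "pg_backup_last_success_timestamp $(date +\%s)"
          > /var/lib/node_exporter/textfile/pg_backup.prom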
Loki: We’re handling the memory hunger by (see the config sketch after this list):
• Distinguishing retention limits and index retention
• Tuning query concurrency and max memory usage via Loki’s config + systemd resource limits
• Using Promtail-style labels + structured logging so queries can filter early rather than regex the whole log content
• Offloading true deep-history search to object store access tools or simple grep of backups — we treat Loki as operational logs + nearline, not as an archive search engine
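Concretely, most of that tuning lives in the limits/querier sections of Loki's config, plus a MemoryMax cap on the systemd unit. A sketch with made-up numbers (tune to your box and Loki version):

    # loki config fragment -- numbers are illustrative, not our production values
    limits_config:
      retention_period: 744h          # ~31 days of queryable logs
      query_timeout: 2m
      max_query_parallelism: 8
      max_query_series: 5000
    querier:
      max_concurrent: 4               # fewer concurrent big queries = bounded memory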
We used AWS EKS in the old days and we never liked the extreme complexity of it.
With two Spring Boot apps, a database and Redis running across Ubuntu servers, we found simpler tools to distribute and scale workloads.
Since compute is dirt cheap, we over-provision and sleep well.
We have live alerts and quarterly reviews (just looking at a dashboard!) to assess if we balance things well.
K8s on EKS was not pleasant; I want to make sure I never learn how much worse it can get across European VPS providers.
Just remember: their interest is that you buy their cloud service, not in giving an out-of-the-box great experience on their open source stuff.
One of the advantages of more expensive providers seems to be that they have good reputation due to a de facto PoW mechanism.
The only potential indirect risk is if your Hetzner VPS IP range gets blacklisted (because some Hetzner clients abuse it for Sybil attacks or spam).
Or if Hetzner infrastructure was heavily abused, their upstream or internal networking could (in theory) experience congestion or IP reputation problems — but this is very unlikely to affect your individual VPS performance.
This depends on what you are doing on Hetzner and how you restrict access but for an ISO-27001 certified enterprise app, I believe this is extremely unlikely.
No matter the load, there is a need for this complexity for the certification.
Not all employees log in daily. For a scheduling app, most people check a few times a week, but not every day.
Daily active users (DAU) = around 10,000 to 20,000
Peak concurrency (users on at the exact same time) = generally between 1,500 to 2,000 at busy times (like when new schedules drop or at shift start/end times)
Average concurrent users at any random time = maybe 50 to 150
Why cloud costs can add up even for us:
Extensive use of real-time features and complex labour rules mean the app needs to handle a lot of data processing and ultimately sync into salary systems.
An example:
Being assigned to a shift has different implications for every user. It may trigger a nuisance bonus, and such a bonus might only be triggered in certain cases, for example depending on when the shift was assigned relative to its start time.
Lastly, there is the optimization of a schedule, which is computationally expensive.
That is not to say that this aspect alone justifies huge fees, but it does have significant value.
AWS RDS does not upgrade major or minor versions of Postgres or, as you mentioned, MySQL; at most it applies patch updates. But those patch updates are easy to do yourself, and it does not take long to be reminded of them in your ISMS and then carry them out.
The purpose of this post is not to justify cloud hyperscalers versus European servers. It is actually a post on how to manage a highly regulated, compliant, and certified server setup yourself outside AWS, because so many people build their ISO certification on AWS infrastructure, and once they have it they are never able to leave AWS again.
If you have no client demand and no real need to work on updating your infrastructure yourself, then you can skip an ISO 27001 certification and let AWS RDS update as it pleases. But if you operate a complex beast in a regulated industry such as employment law or finance, then you get some more fun challenges and a higher need for control.
It is a great big-cloud play to make enterprises reliant on competency in their weird service abstractions, which slowly drains away the quite simple ops story an enterprise usually needs.
Might throw together a post on it eventually:
https://news.ycombinator.com/context?id=43216847
And also lacking a bit in details:
- both technical (e.g. how are you dealing with upgrades or multi-data center fallback for your postgresql), and
- especially business, e.g. what's the total cost analysis including the supplemental labor cost to set this up but mostly to maintain it.
Maybe if you shared your scripts and your full cost analysis, that would be quite interesting.
I'm trying to share as much technical detail as I can across this thread; as for your two examples:
System upgrades:
Keep in mind that, as per the ISO specification, system upgrades should be applied, but in a controlled manner. This lends itself perfectly to the following flow, which is manually triggered.
Since we take steps to make applications stateless, and Ansible scripts are immutable:
We spin up a new machine with the latest packages, and once ready it joins the Cloudflare load balancer. The old machines are drained and deprovisioned.
We have a playbook that iterates through our machines and handles one machine at a time before proceeding. Since we have redundancy on components, this creates no downtime. The redundancy in the web application is easy to achieve using the load balancer in Cloudflare. For the Postgres database, it does require that we switch the read-only replica to become the main database.
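In Ansible terms, that playbook is shaped roughly like this. A sketch only: the Cloudflare pool endpoint, the request body, and the health URL are assumptions about how the draining could be scripted, not our exact integration:

    # deploy.yml -- serial rolling deploy; Cloudflare and health-check details are placeholders
    - hosts: webapps
      serial: 1                       # one machine at a time, the rest keep serving
      pre_tasks:
        - name: Drain this origin in the Cloudflare load balancer pool
          ansible.builtin.uri:
            # assumed endpoint shape; check the Cloudflare LB pools API docs
            url: "https://api.cloudflare.com/client/v4/accounts/{{ cf_account }}/load_balancers/pools/{{ cf_pool }}"
            method: PATCH
            headers:
              Authorization: "Bearer {{ cf_api_token }}"
            body_format: json
            body: "{{ pool_body_with_origin_disabled }}"   # hypothetical, built elsewhere
          delegate_to: localhost
      roles:
        - webapp                      # ship the new release, restart the service
      post_tasks:
        - name: Wait for the app health check before re-enabling the origin
          ansible.builtin.uri:
            url: "http://{{ inventory_hostname }}:8080/healthz"   # hypothetical endpoint
          register: health
          retries: 30
          delay: 5
          until: health.status == 200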
DB failover:
The database is only written and read from by our web applications. We have a second VM on a different cloud that has a streaming replication of the Postgres database. It is a hot standby that can be promoted. You can use something like PgBouncer or HAProxy to route traffic from your apps. But our web framework allows for changing the database at runtime.
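Promotion itself is tiny once the standby is healthy; wrapped in a playbook it could look like this (a sketch assuming PostgreSQL 16 on Ubuntu's default paths, which is an assumption rather than a statement of our exact setup):

    # failover.yml -- promote the hot standby; version and paths are assumptions
    - hosts: pg_standby
      become: true
      become_user: postgres
      tasks:
        - name: Confirm this node is currently a standby
          ansible.builtin.command: psql -tAc "SELECT pg_is_in_recovery()"
          register: in_recovery
          changed_when: false
          failed_when: (in_recovery.stdout | trim) != "t"

        - name: Promote the standby to primary
          ansible.builtin.command: >-
            /usr/lib/postgresql/16/bin/pg_ctl promote -D /var/lib/postgresql/16/main

Afterwards the apps are pointed at the new primary (in our case at the framework level, as noted above).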
> Business
Before migration (AWS): We had about 0.1 FTE on infra — most of the time went into deployment pipelines and occasional fine-tuning (the usual AWS dance). After migration (Hetzner + OVHCloud + DIY stack): After stabilizing it is still 0.1 FTE (but I was 0.5 FTE for 3-4 months), but now it rests with one person. We didn’t hire a dedicated ops person. On scaling — if we grew 5-10×: * For stateless services, we’re confident we’d stay DIY — Hetzner + OVHCloud + automation scales beautifully. * For stateful services, especially the Postgres database, I think we'd investigate servicing clients out of their own DBs in a multi-tenant setup, and if too cumbersome (we would need tenant-specific disaster recovery playbooks), we'd go back to a managed solution quickly.
I can't speak to the FTE toll of cloud vs a series of VPS servers in the big boys' league (a million-plus in monthly consumption) or in the tiny league, but in our league it turns out the FTE requirement is the same.
Anyone want to see my scripts, hit me up at jk@datapult.dk. I'm not sure it'd be great security posture to hand it out on a public forum.
Everyone talks about it, but no one wants to be the first mover.
There's also a lot of FUD regarding hiring more staff. My observed experience is that hyperscalers need an equivalent number of people on rotation; it's just a different skill set (learning the intricacies/quirks of the hyperscaler's product offerings vs CS/operational fundamentals). So everyone is scared to overload their teams with work and potentially need to hire people. Couple this with the fact that all migrations are up-front expensive and change is bad by default.
There will come a day when there simply isn't enough money to spend 10x the cost on these systems. It will be a sad day for everyone, because salaries will be depressed too, and we will pine for the days of shiny tools where we could make lots of work disappear by saying that our time was too expensive for such peasant issues.
If I manage to get https://uncloud.run/ or something similar up & running, the platform will no longer matter, whether it's OVH, Hetzner, Azure, AWS, GCP, ... It should all be possible & easy to switch... #FamousLastWords
Most of our customers have a hard requirement on ISO 9001. Many on ISO 27001, too. The rest strongly prefer a partner with a plan to get ISO 27001.
The Medium post is mostly fluff and a lead generator.
I’m happy to share specific configs, diagrams, or lessons learned here on HN if people want — and actually I’m finding this thread a much better forum for that kind of deep dive.
I'll dive into other aspects elsewhere; you can't doubt that, given what I am sharing here.
Any particular area you’d like me to expand on? (e.g. how we structured Terraform modules, Ansible hardening, Prometheus alerting, Loki tuning?)
A.5.25 Security in development and support processes:
Safe rolling deploy, rollback mechanisms, NGINX health checks, code versioning, Prometheus alerting for deployment issues
A.6.1.2 Segregation of duties:
Separate roles for database, monitoring, web apps; distinct system users
A.8.1.1 Inventory of assets:
Inventory management through Ansible inventory.ini and groups
A.8.2.3 Handling of assets:
Backup management with OVH S3 storage; retention policy for backups
A.8.16 Monitoring activities (audit logging, monitoring):
auditd installed with specific rule sets (a sample subset is sketched after this list); Prometheus + Grafana Agent + Loki for system/application/audit log monitoring
A.9.2.1 User registration and de-registration:
ansible_user, restricted SSH access (no root login, pubkey auth), AllowUsers, DenyUsers enforced
A.9.2.3 Management of privileged access rights:
Controlled sudo, audit rules track use of sudo/su; no direct root access
A.9.4.2 Secure log-on procedures:
SSH hardening (no password login, no root, key-based access)
A.9.4.3 Password management system:
Uses Ansible Vault and variables;
A.10.1.1 Cryptographic controls policy:
SSL/TLS certificate generation with Cloudflare DNS-01 challenge, enforced TLS on Loki, Prometheus
A.12.1.1 Security requirements analysis and specification:
Tasks assert required variables and configurations before proceeding
A.12.4.1 Event logging:
auditd, Prometheus metrics, Grafana Agent shipping logs to Loki
A.12.4.2 Protection of log information:
Logs shipped securely via TLS to Loki, audit logs with controlled permissions
A.12.4.3 Administrator and operator logs:
auditd rules monitor privileged command usage, config changes, login records
A.12.4.4 Clock synchronization:
chrony installed and enforced on all hosts
A.12.6.1 Technical vulnerability management:
Lynis, Wazuh, vulnerability scans for Prometheus metrics
A.13.1.1 Network controls:
UFW with strict defaults, Cloudflare whitelisting, inter-server TCP port controls
A.13.1.2 Security of network services:
SSH hardening, NGINX SSL, Prometheus/Alertmanager access control
A.13.2.1 Information transfer policies and procedures:
Secure database backups to OVH S3 (HTTPS/S3 API)
A.14.2.1 Secure development policy:
Playbooks enforce strict hardening as part of deploy processes
A.15.1.1 Information security policy for supplier relationships:
OVH S3, Cloudflare services usage with access key/secret controls; external endpoint defined
A.16.1.4 Assessment of and decision on information security events:
Prometheus alert rules (e.g., high CPU, low disk, instance down, SSL expiry, failed backups)
A.16.1.5 Response to information security incidents:
Alertmanager routes critical/security alerts to email/webhook; plans for security incident log webhook
A.17.1.2 Implementing information security continuity:
Automated DB backups, Prometheus backup job monitoring, retention enforcement
A.18.1.3 Protection of records:
Loki retention policy, S3 bucket storage with rotation; audit logs secured on disk
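To make one of those concrete: for A.8.16 / A.12.4.3, the auditd rule set is dropped in by Ansible. A small illustrative subset (the rule keys and watched files here are not the full set):

    # roles/hardening/tasks/auditd.yml -- illustrative subset of the rules
    - name: Install auditd rules for privileged commands and config changes
      ansible.builtin.copy:
        dest: /etc/audit/rules.d/hardening.rules
        owner: root
        group: root
        mode: "0640"
        content: |
          ## privileged command use (A.12.4.3)
          -a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=unset -k privileged
          -a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=unset -k privileged
          ## configuration and identity changes
          -w /etc/ssh/sshd_config -p wa -k sshd_config
          -w /etc/passwd -p wa -k identity
          -w /etc/sudoers -p wa -k sudoers
      notify: Reload auditd rules   # handler runs augenrules --load (not shown)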
Have you looked into others as well, like IONOS and Scaleway?
Scaleway came up but is more expensive. IONOS did not come up in our research.
Part of what we tried to do was to make ourselves independent from traditional cloud services and be really good at doing stuff on a VPS. Once you start doing that, you can actually allow yourself to look more at uptimes and at costs. Also, since we wanted everything to be fully automated, Terraform support was important for us, and OVHcloud and Hetzner had that.
I'm sure there are many great cloud providers in Europe, but it's hard to vet them to understand whether they can meet demand and whether they are financially stable. We wouldn't want to keep switching cloud providers, so picking two of the major ones seemed like a safe choice.
I don't remember a single such case. I remember reading a lot of speculation like "it's highly likely that it was done by Russians" every single time, without a trace of evidence.
It's undeniable that core European infrastructure is currently being targeted.
Personally I think the amount of special pleading required to imagine that it is _not_ Russia is a bit much (particularly around the deep sea cable cuts; at that point you’re really claiming that Russia is deniably pretending that it is them, but really it’s someone else), but you do you. It doesn’t change the overarching point; both Hetzner and OVH would be obvious targets for, ah, whoever it is.
I once worked at a quite small company (around 100 employees) that hosted everything on AWS. Due to high bills (it was a small company based in Asia) and other problems, I migrated everything to DigitalOcean (we still used AWS for things like SES), and the monthly bill for hosting became about 10 times lower. With no other consequences (in other words, it hasn't become less reliable).
I still wonder who calculated that AWS is cheaper than everything else. It's definitely one of the most expensive providers.