Show HN: Canine – A Heroku alternative built on Kubernetes
I've been working on Canine for about a year now. It started when I got sick of paying the overhead of platforms like Heroku, Render, Fly, etc. to host some web apps I've built. At one point I was paying over $400 a month to host these in the cloud. Last year I moved all my stuff to Hetzner.
For a 4GB machine, the cost of various providers:
- Heroku = $260
- Fly.io = $65
- Render = $85
- Hetzner = $4
(This problem gets a lot worse when you need > 4GB)
The only downside of using Hetzner is that there isn’t a super straightforward way to do stuff like:
- DNS management / SSL certificate management
- Team management
- GitHub integration
But I figured it should be easy to quickly build something like Heroku for my Hetzner instance. Turns out it was a bit harder than expected, but after a year, I've made some good progress.
The best part of Canine is that it also makes it trivial to host any Helm chart, and there's a chart available for basically any open source project, so everything from databases (e.g. Postgres, Redis) to random stuff like torrent tracking servers and VPN endpoints can be deployed the same way.
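For reference, deploying one of those charts by hand with plain Helm looks roughly like this; the Bitnami Postgres chart and the values below are just an illustrative example, not something Canine requires:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    # Illustrative values only; any chart's values can be set the same way
    helm install my-postgres bitnami/postgresql \
      --set auth.postgresPassword=changeme \
      --set primary.persistence.size=10Gi

That's the kind of flow Canine is automating.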
Open source: https://github.com/czhu12/canine
Cloud hosted version: https://canine.sh
Also, your docs on how K8s works look really good, and might be the most approachable docs I've seen on the subject. https://canine.gitbook.io/canine.sh/technical-details/kubern...
Question: when I read the pitch, I assumed I could spin up a managed K8s somewhere, like in Digital Ocean, and use this somehow. But after reading the docs and comments, it sounds like this needs to manage my K8s for me? I guess my questions are:
1) When I spin up a "Cluster" on Hetzner, is that just dividing up a single machine, or is it a true K8s cluster that spans multiple machines?
2) If I run this install script on another server, does it join the cluster, giving me true distributed servers to host the pods?
3) Is there a way to take an existing managed K8s and have Canine deploy to it?
I usually use #1 for staging / development apps, and then #2 for production apps. For #2, I manage the number of nodes on the Digital Ocean side, and kubernetes just magically reschedules my workload accordingly (also can turn on auto scaling).
I think the thing you're getting at that isn't supported is having Canine create a multi-node cluster directly within Hetzner.
There is a Terraform module to create a Kubernetes cluster on Hetzner, but it isn't currently integrated into Canine.
I'm not opposed to trying it out; there were a few UI improvements I wanted to take a shot at first. At the moment, Canine assumes you either have a cluster ready to go, or it can walk you through a K3s installation on a single VPS.
https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne...
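For reference, vanilla K3s itself does support multi-node clusters; the usual join flow from the K3s docs looks roughly like this (server address and token are placeholders to fill in):

    # On the first VPS (control plane / server):
    curl -sfL https://get.k3s.io | sh -
    # Grab the join token it generates:
    sudo cat /var/lib/rancher/k3s/server/node-token

    # On each additional VPS, install as an agent pointed at the first one:
    curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -

Whether Canine drives that for you is a separate question; today it only walks you through the single-VPS case.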
- Rails app
- Canine infra
- Raspberry Pi server
- My own ISP

That was a tech stack I managed to get an app running on for some projects I've kicked around.
Small feedback: your "Why you should NOT use Canine" section is actually a net negative for me. I thought it was cool that it might genuinely list downsides, but then you did a sarcastic thing that was annoying. I think you should just be frank: you'll have to purchase and manage servers, you'll be on the hook to get them back up if they go down, this is an early product made by one person, etc.
I know this is just a general description, but re: "10,000 servers": Kubernetes actually only claims support for up to 5,000 nodes: https://kubernetes.io/docs/setup/best-practices/cluster-larg...
Plenty of larger clusters exist, but this usually requires extensive tuning (such as entirely replacing the API registry). And obviously the specific workload plays a large role. Kubernetes is actually quite far from supporting larger clusters out of the box, though most releases include some work in that direction.
I'll walk that back
Still in active development but the goal is to keep it simple enough that you can easily understand what's happening at each layer and can troubleshoot.
I’ve got a spare N100 NUC at home that’s languishing with an unfinished Microcloud install; thinking of yanking that off and giving Canine a try instead!
But, I've always found core kubernetes to be a delight to work with, especially for stateless jobs.
Not long ago, I was using Google Kubernetes Engine when DNS started failing inside the k8s cluster on a routine deploy that didn't touch the k8s config.
I hacked on it for quite some time before I gave up and decided to start a whole new cluster. At which point I decided to migrate to Linode if I was going to go through the trouble. It was pretty sobering.
Kubernetes has many moving parts, and they live inside your part of the stack. That's one of the things that makes it complex compared to things like Heroku or Google Cloud Run, where the moving parts run on the provider's side of the stack.
It's also complex because it does a lot compared to pushing a container somewhere. You might be used to it, but that doesn't mean it's not complex.
The kubernetes iceberg is 3+ years old but still fairly accurate.
https://www.reddit.com/r/kubernetes/comments/u9b95u/kubernet...
I was able to create a new service and deploy it with a couple of simple, ~8-line YAML files, and the cluster takes care of setting up DNS on a subdomain of my main domain, wiring up Let's Encrypt, and deploying the container. Deploying the latest version of my built container image was one kubectl command. I loved it.
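For anyone curious, a rough sketch of what one of those manifests looks like (the app name, image, and port are made up here, and the DNS/TLS part assumes an ingress controller with something like cert-manager/external-dns already running in the cluster):

    # deployment.yaml (hypothetical app)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels: {app: myapp}
      template:
        metadata:
          labels: {app: myapp}
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:v1
            ports:
            - containerPort: 8080

and rolling out a new image really is a single command:

    kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2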
Even then, though, it's more that complex needs are complex, and not so much that k8s is the thing driving the complexity.
If your primary complexity is k8s, you either are doing it wrong or chose the wrong tool.
Bingo! Managed K8s on a hyperscaler is easy mode, and a godsend. I’m speaking from the cluster admin and bare metal perspectives, where it’s a frustrating exercise in micromanaging all these additional abstraction layers just to get the basic “managed” K8s functions in a reliable state.
If you're using managed K8s, then don't @ me about "It'S nOt CoMpLeX", because we're not even in the same book, let alone the same chapter. Hypervisors can deploy to bare metal and shared storage without much in the way of additional configuration, but K8s requires defining PVs, storage classes, network layers, local DNS, local firewalls and routers, etc., most of which pre-1.20 K8s did not play nicely with out of the box. It's gotten better these past two years for sure, but it's still not as plug-and-play as something like ESXi+vSphere/RHEL+Cockpit/PVE, and that's a damn shame.
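To make "defining PVs and storage classes" concrete, here's a minimal local-storage sketch of the kind of boilerplate involved on bare metal (names, path, and the node name are made up):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-storage
      local:
        path: /mnt/data
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]

A hypervisor stack gives you the equivalent of this out of the box.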
Hence why I’m always eager to drive something like Canine!
(EDIT: and unless you absolutely have a reason to do bare metal self-hosted K8s from binaries you should absolutely be on a managed K8s cluster provider of some sort. Seriously, the headaches aren’t worth the cost savings for any org of size)
Nutanix and others are helping a lot in this area. Also really like Talos and hope they keep growing.
K8s’ ecosystem is improving by the day, but I’m still leaning towards a managed K8s cluster from a cloud provider for most production workloads, as it really is just a few lines of YAML to bootstrap new clusters with automated backups and secrets management nowadays - if you don’t mind the eye-watering bill that comes every month for said convenience.
Kinda hard to control real-world things that have no Internet connection with something that relies on an Internet connection.
Note: Nutanix made some interesting k8s-related acquisitions in the last few years. If interested, you should take a look at some of the things they are working on.
I like what you're doing. But, to be honest, it's a tough market. While the promise of $265 vs $4 might seem like a no-brainer, you're comparing apples to oranges.
- Your DX will most likely be far from Heroku's. Their developer experience has been refined by hundreds of thousands of developers. It's hard to think through everything, and you're very unlikely to get anywhere close once you go beyond simple use cases.
- A "single VM" setup is not really production-grade. You're lacking reliability, scalability, redundancy and many more features that these platforms have. It definitely works for low-traffic side-projects. But people or entities that actually have a budget for something like this, and are willing to pay, are usually looking for a different solution.
That being said, I wish you all the luck. Maybe things change in the AI-generated apps era.
If so, props to you.
My original idea behind https://holos.run was to create a Heroku-like experience for k8s, so I'm super happy to see this existing in the world. I'd love to explore an integration, potentially spinning up the single or multi node clusters with Cluster API.
The problem is that Kubero, I don't know, just did not gain any traction.
Maybe most users want simple tools like Coolify.
edit: looks like POST https://canine.sh/projects is returning 422.
At the very least, there's a bug where we should be showing a better error message, so I'll fix that now!
For production, I usually use Digital Ocean, so I get a managed Kubernetes but also a managed Postgres within the same data center for latency needs. Lets me sleep easier at night :)
Can Canine automatically upgrade my helm charts? That would be killer. I usually stay on cloud-hosted paid plans because remembering to upgrade is not fun. The next reason is that I often need to recall the ops knowledge just after I've forgotten it.
Upgrading helm charts without some manual monitoring seems like it might still be an unsolved problem :(
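The manual flow is at least short, for what it's worth; a sketch with placeholder release and chart names:

    helm repo update                        # refresh chart indexes
    helm search repo bitnami/postgresql     # check what version is available
    helm upgrade my-postgres bitnami/postgresql --reuse-values

But you still have to remember to run it and read the chart's release notes, which is exactly the part that isn't fun.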
You deserve an award for building this, thank you.
Also, having seen the demo video, it's a happy-path thing (public repo, has Dockerfiles, etc.). What about private code and images?
Given that the hardware itself now costs less than 8k to purchase outright, it just seemed ridiculous. Granted, we did have SOC 2, an enterprise plan, etc., but it was a painful bill every quarter.
Chrome 137. Android 13.
Other than that... I'll give it a shot. Have three N100 NUCs. Two are currently unused after failed attempts to learn to use k8s.
Maybe this'll do the trick.
Good work either way!
In my head, I put Coolify and Vercel higher in the stack, Fly.io, Heroku, and Canine closer to the metal, and on the metal itself it's k8s and friends.
Though I do believe that Coolify will “eventually” support Kubernetes, I think what you’re saying feels right to me.
This way, it doesn't result in having to host a separate app within your cluster, which saves resources. I was kind of imagining this deploying to fairly small, indie-hacker-type setups, so making sure it was resource efficient was relatively important.
The docker compose setup is just for local development. I'll make that more clear, thanks for the feedback!
K3s already takes up about 100MB, which only leaves ~400MB for the application, so trying to run Canine on that machine would probably create too much bloat.
I think the landing page fails at answering the two most basic questions:
1. Can I deploy via a stupid simple "git push"?
2. Can I express what my workloads are via a stupid simple Procfile? (i.e. something like the sketch below)
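For reference, a Heroku-style Procfile is just a list of process types and commands; the names and commands here are an illustrative Rails-ish example, not anything from Canine's docs:

    web: bundle exec puma -C config/puma.rb
    worker: bundle exec sidekiq
    release: bundle exec rails db:migrate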