Show HN: Unregistry – “docker push” directly to servers without a registry
324 points by psviderski | 75 comments | 6/18/2025, 11:17:10 PM | github.com
I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.
In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts: Docker's own image storage.
So I built Unregistry [1], which exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH, transferring only the missing layers, which makes it fast and efficient.
docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done. I built it as a byproduct of working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.
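Roughly, the manual equivalent would look like this (the unregistry container image, ports, and flags below are illustrative assumptions, not the project's actual values):

```shell
# Sketch of what `docker pussh` automates; UNREGISTRY_IMAGE is a placeholder.
ssh user@server 'docker run -d --name unregistry -p 127.0.0.1:5000:5000 UNREGISTRY_IMAGE'
ssh -fN -L 55000:localhost:5000 user@server        # tunnel a local port to the remote registry
docker tag myapp:latest localhost:55000/myapp:latest
docker push localhost:55000/myapp:latest           # only missing layers are transferred
ssh user@server 'docker rm -f unregistry'          # clean up the temporary container
```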
Would love to hear your thoughts and use cases!
> Linux via Homebrew
Please don't encourage this on Linux. Homebrew happens to offer a Linux setup as an afterthought, but it behaves like a pigeon on a chessboard rather than a package manager.
Does it integrate cleanly with OCI tooling like buildah etc., or do you need a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.
You can use skopeo, crane, regclient, BuildKit, anything that speaks the OCI registry protocol on the client, although you'll need to run unregistry on the remote host manually to use them. The `docker pussh` command just automates that workflow using the local Docker.
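For example, with skopeo against a manually started unregistry (host and port are illustrative):

```shell
# Copy an image from the local Docker daemon straight into the remote
# unregistry endpoint (plain HTTP, hence --dest-tls-verify=false):
skopeo copy --dest-tls-verify=false \
  docker-daemon:myapp:latest \
  docker://server:5000/myapp:latest
```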
Just check it out, it's a bash script: https://github.com/psviderski/unregistry/blob/main/docker-pu...
You can hack your own way pretty easily.
EDIT: why I think it's important: in automations that are developed collaboratively, "pussh" could be mistaken for a typo by someone unfamiliar with the feature and cause unnecessary confusion, whereas "push-over-ssh" is clearly deliberate. Think of them maybe as short-hand/full flags.
Rename the file to whatever you like, e.g. to get `docker pushoverssh`:
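Something along these lines, assuming the plugin was installed to Docker's standard CLI-plugins directory (the `rename_plugin` helper is just for illustration):

```shell
# Docker resolves `docker <cmd>` by looking for an executable named
# docker-<cmd> in the CLI-plugins directory, so renaming the file
# renames the subcommand.
rename_plugin() {
  local dir="$1" old="$2" new="$3"
  mv "$dir/docker-$old" "$dir/docker-$new"
}

# e.g.:
# rename_plugin "$HOME/.docker/cli-plugins" pussh pushoverssh
# then: docker pushoverssh myapp:latest user@server
```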
Note that Docker doesn't allow dashes in plugin commands.
> What's that extra 's' for?
> That's a typo
Docker registries have their place but are overall over-engineered and an antithesis to the hacker mentality.
[1]: https://zotregistry.dev
Saving as an archive looks like this: `docker save -o my-app.tar my-app:latest`
And loading it looks like this: `docker load -i /path/to/my-app.tar`
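The two combine into a single pipeline over SSH, with no intermediate file (server name illustrative; gzip helps since image tars compress well):

```shell
docker save my-app:latest | gzip | ssh user@server 'gunzip | docker load'
```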
Using a tool like Ansible, you can easily achieve what Unregistry does automatically. According to the GitHub repo, save/load has the drawback of transferring the whole image over the network, which could be an issue, that's true. And managing images instead of archive files seems more convenient.
Hence the value.
Edit: that thing exists, it's Uncloud. Just found out!
That said, it's a tradeoff. If you're small, have one Hetzner VM, and are happy with simplicity (and don't mind building images locally), it's great.
Currently, I need to use a Docker registry for my Kamal deployments. Are you familiar with it, and would this remove the third-party dependency?
I built Unregistry for Uncloud, but I believe Kamal could also benefit from using it.
What would be nicer instead is some variation of `docker compose pussh` that pushes the latest versions of local images to the remote host based on the remote docker-compose.yml file. The alternative would be docker pusshing the affected containers one by one and then triggering a `docker compose restart`. Automating that would be useful and probably not that hard.
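A rough sketch of that automation, assuming the remote compose project lives at a known path (`/srv/myapp` is made up):

```shell
# Push every image referenced by the local compose file, then recreate the
# remote stack. `docker compose config --images` lists the resolved images.
for img in $(docker compose config --images); do
  docker pussh "$img" user@server
done
ssh user@server 'cd /srv/myapp && docker compose up -d'
```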
My plan is to integrate Unregistry in Uncloud as the next step to make the build/deploy flow super simple and smooth. Check out Uncloud (link in the original post), it uses Compose as well.
FWIW I've been saving, then using mscp to transfer the file. It basically does multiple scp connections to speed things up, and it works great.
"docker save | ssh | docker load transfers the entire image, even if 90% already exists on the server"
The unregistry container provides a standard registry API you can pull images from as well. This could be useful in a cluster environment where you upload an image over ssh to one node and then pull it from there to other nodes.
This is what I'm planning to implement for Uncloud. Unregistry is so lightweight that we can embed it in every machine daemon. This will allow machines in the cluster to pull images from each other.
https://github.com/containers/skopeo
I would presume it's something akin to `ssh -fNL /tmp/d.sock:/var/run/docker.sock user@host` followed by `docker save my-app | docker -H unix:///tmp/d.sock load` type deal
You should be able to run unregistry as a standalone service on one of the nodes. Kubernetes uses containerd for storing images on nodes, so unregistry will expose that node's images as a registry. You should then be able to create k8s deployments using an 'unregistry.NAMESPACE:5000/image-name:tag' image reference; kubelets on other nodes will pull the image from unregistry.
You may want to take a look at https://spegel.dev/ which works similarly but was created specifically for Kubernetes.
The whole reason I didn't end up using Kamal was the 'need a docker registry' thing, when I can easily push a Dockerfile / compose file to my VPS, build the image there, and restart to deploy via a make command.
I'm most familiar with on-prem deployments and quickly realised that it's much faster to build once, push to a registry (e.g. GitHub's), and `docker compose pull` during deployments.
Both approaches are inferior to yours because of the load on the server (one way or another).
Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app, and run it with LXC. Get rid of Docker entirely.
My images are tiny; the extra complexity is unwarranted.
Then of course I'm not a 1000-person company with 1 GB Docker images.
[0] https://docs.podman.io/en/stable/markdown/podman-image-scp.1...
Docker and containerd also store their images using a specific filesystem layout plus a boltdb for metadata, but I was afraid to access them directly. The owners and coordinators are still Docker/containerd, so proper locking should be handled through them. As a result, we're limited to the APIs the docker/containerd daemons provide.
For example, the Docker daemon API doesn't provide a way to get or upload a particular image layer. That's why unregistry uses the containerd image store, not the classic Docker image store.
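The registry API that unregistry exposes does address individual layers; for instance, a client can check whether a layer blob already exists before uploading it (registry host, repo, and digest below are placeholders):

```shell
registry="server:5000"; repo="myapp"; digest="sha256:DIGEST"   # placeholders
blob_url="http://$registry/v2/$repo/blobs/$digest"
curl -sfI "$blob_url" >/dev/null && echo "layer already present" || echo "layer missing"
```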
I am a bystander to these technologies. I've built and debugged the rare image, and I use Docker Desktop on my Mac to isolate db images.
When I see things like these, I'm always curious why Docker, which seems so much more bureaucratic/convoluted, prevails over Podman. I totally admit this is a naive impression.
First mover advantage and ongoing VC-funded marketing/DevRel
> Save/Load - `docker save | ssh | docker load` transfers the entire image, even if 90% already exists on the server