Show HN: Unregistry – “docker push” directly to servers without a registry

249 points by psviderski | 60 comments | 6/18/2025, 11:17:10 PM | github.com
I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.

In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts: Docker's own image storage.

So I built Unregistry [1] that exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.

  docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.
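End to end, a deploy looks like this (same placeholder image and host as above):

  docker build -t myapp:latest .                # build locally as usual
  docker pussh myapp:latest user@server         # send only the missing layers over SSH
  ssh user@server docker run -d myapp:latest    # the image is already in the remote Docker storage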

I've built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.

Would love to hear your thoughts and use cases!

[1]: https://github.com/psviderski/unregistry

[2]: https://github.com/psviderski/uncloud

Comments (60)

alisonatwork · 2h ago
This is a cool idea that seems like it would integrate well with systems already using push deploy tooling like Ansible. It also seems like it would work as a good hotfix deployment mechanism at companies where the Docker registry doesn't have 24/7 support.

Does it integrate cleanly with OCI tooling like buildah etc., or do you need to have a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.

psviderski · 1h ago
You need containerd on the remote end (Docker and Kubernetes use containerd) and anything that speaks the registry API (OCI Distribution Spec: https://github.com/opencontainers/distribution-spec) on the client. Unregistry reuses the official Docker registry code for the API layer, so it looks and feels like https://hub.docker.com/_/registry

You can use skopeo, crane, regclient, BuildKit, anything that speaks the OCI registry protocol on the client, although you will need to manually run unregistry on the remote host to use them. The 'docker pussh' command just automates that workflow using the local Docker.

Just check it out, it's a bash script: https://github.com/psviderski/unregistry/blob/main/docker-pu...

You can hack your own way pretty easily.
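For example, a rough sketch with skopeo (the unregistry image name and port here are assumptions, see the repo for the actual run command):

  # run unregistry on the remote host and tunnel its port to localhost
  ssh -L 5000:localhost:5000 user@server docker run --rm -p 5000:5000 unregistry
  # in another terminal, copy an image from the local Docker daemon through the tunnel
  skopeo copy --dest-tls-verify=false docker-daemon:myapp:latest docker://localhost:5000/myapp:latest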

0x457 · 1h ago
It needs a Docker daemon on both ends. This is just a clever way to share layers between two daemons via ssh.
nine_k · 4h ago
Nice. And the `pussh` command definitely deserves the distinction of one of the most elegant puns: easy to remember, self-explanatory, and just one letter away from its sister standard command.
gchamonlive · 2h ago
It's fine, but it wouldn't hurt to have a more formal alias like `docker push-over-ssh`.

EDIT: the reason I think it's important is that in automation developed collaboratively, "pussh" could be seen as a typo by someone unfamiliar with the feature and cause unnecessary confusion, whereas "push-over-ssh" is clearly deliberate. Think of them maybe as short-hand/full flags.

psviderski · 55m ago
That's a valid concern. You can very easily give it whatever name you like. Docker looks for `docker-COMMAND` executables in the ~/.docker/cli-plugins directory, making COMMAND a `docker` subcommand.

Rename the file to whatever you like, e.g. to get `docker pushoverssh`:

  mv ~/.docker/cli-plugins/docker-pussh ~/.docker/cli-plugins/docker-pushoverssh
Note that Docker doesn't allow dashes in plugin commands.
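After that, the renamed subcommand works the same way (image and host are placeholders):

  docker pushoverssh myapp:latest user@server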
EricRiese · 3h ago
> The extra 's' is for 'sssh'

> What's that extra 's' for?

> That's a typo

someothherguyy · 2h ago
and prone to collision!
nine_k · 1h ago
Indeed so! Because it's art, not engineering. The engineering approach would require a recognizably distinct command, eliminating the possibility of such a pun.
metadat · 3h ago
This should have always been a thing! Brilliant.

Docker registries have their place but are overall over-engineered and an antithesis to the hacker mentality.

scott113341 · 3h ago
Neat project and approach! I got fed up with expensive registries and ended up self-hosting Zot [1], but this seems way easier for some use cases. Does anyone else wish there was an easy-to-configure, cheap & usage-based, private registry service?

[1]: https://zotregistry.dev

modeless · 2h ago
It's very silly that Docker didn't work this way to start with. Thank you, it looks cool!
TheRoque · 1h ago
You can already achieve the same thing by saving your image as an archive, pushing it to your server, and then loading it from the archive on your server.

Saving as an archive looks like this: `docker save -o my-app.tar my-app:latest`

And loading it looks like this: `docker load -i /path/to/my-app.tar`

Using a tool like Ansible, you can easily automate what "Unregistry" is doing. According to the GitHub repo, save/load has the drawback of transferring the whole image over the network, which could be an issue, that's true. And managing images instead of archive files seems more convenient.

nine_k · 1h ago
If you have an image with 100MB worth of bottom layers, and only change the tiny top layer, the unregistry will only send the top layer, while save / load would send the whole 100MB+.

Hence the value.

fellatio · 2h ago
Neat idea. This probably has the disadvantage of coupling deployment to a service. For example, how do you scale up or do red/green deploys (you'd need the thing that does this to be aware of the push)?

Edit: that thing exists, it is Uncloud. Just found out!

That said it's a tradeoff. If you are small, have one Hetzner VM and are happy with simplicity (and don't mind building images locally) it is great.

psviderski · 45m ago
For sure, it's always a tradeoff and it's great to have options so you can choose the best tool for every job.
layoric · 2h ago
I'm so glad there are tools like this and a swing back to self-hosted solutions, especially ones leveraging SSH tooling. Well done and thanks for sharing, will definitely be giving it a spin.
lxe · 4h ago
Ooh this made me discover uncloud. Sounds like exactly what I was looking for. I wanted something like dokku but beefier for a sideproject server setup.
psviderski · 35m ago
I'm glad the idea of uncloud resonated with you. Feel free to join our Discord if you have questions or need help
vhodges · 3h ago
There is also https://skateco.github.io/ which (at quick glance) seems similar
nodesocket · 4h ago
A recommendation for Portainer if you haven't used or considered it. I'm running two EC2 instances on AWS using Portainer community edition and the Portainer agent, and it works really well. The stack feature (which is just docker compose) is also super nice. One of the EC2 instances, the one running the Portainer agent, runs Caddy in a container, which acts as the load balancer and reverse proxy.
mountainriver · 1h ago
I’ve wanted unregistry for a long time, thanks so much for the awesome work!
psviderski · 32m ago
Me too, you're welcome! Please create an issue on GitHub if you find any bugs.
cultureulterior · 56m ago
This is super slick. I really wish there was something that did the same, but using torrent protocol, so all your servers shared it.
bradly · 4h ago
As a long ago fan of chef-solo, this is really cool.

Currently, I need to use a Docker registry for my Kamal deployments. Are you familiar with it, and would this remove the 3rd-party dependency?

psviderski · 20m ago
Yep, I'm familiar with Kamal and it actually inspired me to build Uncloud using similar principles but with more cluster-like capabilities.

I built Unregistry for Uncloud, but I believe Kamal could also benefit from using it.

esafak · 4h ago
You can do these image acrobatics with the dagger shell too, but I don't have enough experience with it to give you the incantation: https://docs.dagger.io/features/shell/
throwaway314155 · 4h ago
I assume you can do these "image acrobatics" in any shell.
actinium226 · 4h ago
This is excellent. I've been doing the save/load and it works fine for me, but I like the idea that this only transfers missing layers.

FWIW I've been saving then using mscp to transfer the file. It basically does multiple scp connections to speed it up and it works great.

koakuma-chan · 5h ago
This is really cool. Do you support or plan to support docker compose?
psviderski · 5h ago
Thank you! Can you please clarify what kind of support you mean for docker compose?
fardo · 5h ago
I assume that he means "rather than pushing up each individual container for a project, it could take something like a compose file over a list of underlying containers, and push them all up to the endpoint."
koakuma-chan · 4h ago
Yes, pushing all containers one by one would not be very convenient.
baobun · 4h ago
The right yq|xargs invocation on your compose file should get you to a oneshot.
koakuma-chan · 4h ago
I would prefer docker compose pussh or whatever
psviderski · 24m ago
That's an interesting idea. I don't think you can create a subcommand/plugin for compose but creating a 'docker composepussh' command that parses the compose file and runs 'docker pussh' should be possible.
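A rough sketch of that, assuming Compose v2 (which supports 'docker compose config --images') and a placeholder host:

  docker compose config --images | xargs -I{} docker pussh {} user@server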

My plan is to integrate Unregistry in Uncloud as the next step to make the build/deploy flow super simple and smooth. Check out Uncloud (link in the original post), it uses Compose as well.

czhu12 · 44m ago
Does this work with Kubernetes image pulls?
yjftsjthsd-h · 4h ago
What is the container for / what does this do that `docker save some:img | ssh wherever docker load` doesn't? More efficient handling of layers or something?
psviderski · 3h ago
Yeah exactly, which is crucial for large images if you change only the last few layers.

The unregistry container provides a standard registry API you can pull images from as well. This could be useful in a cluster environment where you upload an image over ssh to one node and then pull it from there to other nodes.

This is what I’m planning to implement for Uncloud. Unregistry is so lightweight that we can embed it in every machine's daemon. This will allow machines in the cluster to pull images from each other.
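Pulling from a peer node would then look roughly like this (the port and hostnames are assumptions; a plain-HTTP registry would also need to be listed under insecure-registries or fronted with TLS):

  # on node2, pull the image straight from node1's unregistry
  docker pull node1:5000/myapp:latest
  docker tag node1:5000/myapp:latest myapp:latest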

nothrabannosir · 5h ago
What’s the difference between this and skopeo? Is it the ssh support? I’m not super familiar with skopeo, forgive my ignorance

https://github.com/containers/skopeo

yibers · 4h ago
"skopeo" seems to related to managing registeries, very different from this.
NewJazz · 4h ago
Skopeo manages images, copies them and stuff.
remram · 4h ago
Does it start an unregistry container on the remote/receiving end or the local/sending end? I think it runs remotely. I wonder if you could go the other way instead?
selcuka · 3h ago
You mean ssh'ing into the remote server, then pulling the image from local? That would require your local host to be accessible from the remote host, or setting up some kind of ssh tunneling.
mdaniel · 2h ago
`ssh -R` and `ssh -L` are amazing, and I just learned that -L and -R both support unix sockets on either end and also unix socket to tcp socket https://manpages.ubuntu.com/manpages/noble/man1/ssh.1.html#:...

I would presume it's something akin to $(ssh -L /var/run/docker.sock:/tmp/d.sock sh -c 'docker -H unix:///tmp/d.sock save | docker load') type deal
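For the reverse direction remram asked about, something like this might work (a sketch, assuming a registry such as unregistry is listening locally on :5000; Docker allows plain HTTP to localhost registries by default):

  ssh -R 5000:localhost:5000 user@server docker pull localhost:5000/myapp:latest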

armx40 · 4h ago
How about using docker context. I use that a lot and works nicely.
Snawoot · 4h ago
How do docker contexts help with the transfer of image between hosts?
jokethrowaway · 1h ago
Very nice! I used to run a private registry on the same server to achieve this - then I moved to building the image on the server itself.

Both approaches are inferior to yours because of the load on the server (one way or another).

Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app and run it with lxc. Get rid of docker entirely.

My images are tiny; the extra complexity is unwarranted.

Then of course I'm not a 1000-person company with 1GB docker images.

dzonga · 4h ago
this is nice, hopefully DHH and the folks working on Kamal adopt this.

the whole reason I didn't end up using Kamal was the 'need a docker registry' thing, when I can easily push a Dockerfile / compose file to my VPS, build an image there, and restart to deploy via a make command

psviderski · 9m ago
I don't see a reason to not adopt this in Kamal. I'm also building Uncloud that took a lot of inspiration from Kamal, please check it out. I will integrate unregistry into uncloud soon to make the build/deploy process a breeze.
rudasn · 4h ago
Build the image on the deployment server? Why not build somewhere else once and save time during deployments?

I'm most familiar with on-prem deployments and quickly realised that it's much faster to build once, push to registry (eg github) and docker compose pull during deployments.

quantadev · 1h ago
I always just use "docker save" to generate a TAR file, then copy the TAR file to the server, and then run "docker load" (on the server) to install the TAR file on the target machine.
s1mplicissimus · 4h ago
very cool. now let's integrate this such that we can do `docker/podman push localimage:localtag ssh://hostname:port/remoteimage:remotetag` without extra software installed :)
isaacvando · 3h ago
Love it!
jlhawn · 5h ago
A quick and dirty version:

    docker -H host1 image save IMAGE | docker -H host2 image load
note: this isn't efficient at all (no compression or layer caching)!
alisonatwork · 2h ago
On podman this is built in as the native command podman-image-scp [0], which perhaps could be more efficient with SSH compression.

[0] https://docs.podman.io/en/stable/markdown/podman-image-scp.1...
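Usage is roughly like this, from memory, so check the linked man page for the exact syntax (connection name and image are placeholders):

  podman system connection add server ssh://user@server   # one-time setup of an SSH destination
  podman image scp myapp:latest server::                  # copy the local image to that host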

travisgriggs · 17m ago
So with Podman, this exists already, but for docker, this has to be created by the community.

I am a bystander to these technologies. I’ve built and debugged the rare image, and I use docker desktop on my Mac to isolate db images.

When I see things like these, I’m always curious why docker, which seems so much more bureaucratic/convoluted, prevails over podman. I totally admit this is a naive impression.

selcuka · 3h ago
That method is actually mentioned in their README:

> Save/Load - `docker save | ssh | docker load` transfers the entire image, even if 90% already exists on the server

rgrau · 4h ago
I use a variant with ssh and some compression:

    docker save $image | bzip2 | ssh "$host" 'bunzip2 | docker load'
selcuka · 3h ago
If gzip-level compression is enough for you, you could also use `ssh -C` to compress the stream automatically.
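For example, a shorter variant of the pipeline above:

    docker save $image | ssh -C "$host" docker load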