We tried to use this on our compute cluster for silicon design/verification. We gave up in the end and just went with the traditional TCL (now Lua) modules.
The problems are:
1. You can't have apptainers that use each other. The most common case was things like Make, GCC, Git etc. If Make is in a different apptainer from GCC, it won't work, because as soon as you're inside the Make container it can't see GCC any more.
2. It doesn't work if any of your output artefacts depend on things inside the container. For example, you use your GCC apptainer to compile a program. It appears to work, but when you run it you find it actually linked against something in the apptainer that isn't visible any more (see the sketch below). This is also a problem for C headers.
3. We had constant issues with PATH getting messed up, so you couldn't see things outside the apptainer that should have been available.
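To illustrate problem 2, a rough sketch of the failure mode (image and file names are hypothetical):

    # gcc.sif is an image containing the whole toolchain
    apptainer exec gcc.sif gcc -o hello hello.c   # compiles fine inside the container
    ./hello        # back on the host, this can fail to load shared
                   # libraries that only exist inside the image
    ldd ./hello    # shows which libraries the binary actually resolved against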
All in all it was a nice idea but ended up causing way more hassle than it was worth. It was much easier just to use an old OS (RHEL8) and get everything to work directly on that.
mbreese · 52m ago
I think of using Apptainer/Singularity as more like Docker than anything else (without the full networking configs). These are all issues with traditional Docker containers as well, so I’m not sure how you were using the containers or what you were expecting. For my workflows on HPC, I use apptainers as basically drop-in replacements for Docker, and for that, they work quite well. The biggest benefit is that the containers are unprivileged. This means you can’t do a lot of things (in particular complex networking), but it also makes it much more secure for multi-tenant systems (like HPC).
(I know Docker and Apptainer are slightly different beasts, but I’m speaking in broad strokes in a general sense without extra permissions).
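For instance, a drop-in swap looks roughly like this (image and tool names hypothetical):

    # typical Docker invocation:
    docker run --rm -v "$PWD":/data myimage:latest mytool /data/input.txt
    # near-equivalent with Apptainer: it can pull straight from a Docker
    # registry, and the current directory is bind-mounted automatically
    apptainer exec docker://myimage:latest mytool input.txt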
harel · 47m ago
Not a constructive comment, but I find the name "Apptainer" doesn't really work. Rolls funny on the tongue and feels just "wrong" to me.
cs_throwaway · 1h ago
Funny this is here. Apptainer is Singularity, described here:
https://journals.plos.org/plosone/article?id=10.1371/journal...
If you ever use a shared cluster at a university or run by the government, Apptainer will be available, and Podman / Docker likely won't be.
In these environments, it is best not to use containers at all, and instead get to know your sysadmin and understand how he expects the cluster to be used.
shortrounddev2 · 43m ago
Why are docker/podman less common? And why do you say it's better not to use containers? Performance?
kgxohxkhclhc · 25m ago
docker and podman expect to extract images to disk, then use fancy features like overlayfs, which doesn't work on network filesystems -- and in hpc, most filesystems users can write to persistently are network filesystems.
apptainer images are straight filesystem images with no overlayfs or storage driver magic happening -- just a straight loop mount of a disk image.
this means your container images can now live on your network filesystem.
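for example (paths hypothetical):

    # convert an oci image to a single sif file once...
    apptainer pull ubuntu.sif docker://ubuntu:22.04
    # ...then park it on the shared filesystem and run it from any node
    apptainer exec /lustre/project/images/ubuntu.sif cat /etc/os-release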
kinow · 3h ago
Apptainer and singularity ce are quite common in HPC. While both implementations fork the old singularity project, they are not really identical.
We use singularity in the HPCs (like Leonardo, LUMI, Fugaku, NeSI NZ, Levante) but some devs and researchers have apptainer installed locally.
We found a timezone bug a few days ago in our Python code (matplotlib, xarray, etc.), but it didn't happen with apptainer.
As the code bases are still fairly similar, I could confirm apptainer fixed it but singularity ce was still affected by the bug -- singularity replaces the UTC timezone file with the user's timezone, Helsinki EEST in our case on the LUMI HPC.
https://github.com/sylabs/singularity/issues/3686
> Apptainer and singularity ce are quite common in HPC. While both implementations fork the old singularity project, they are not really identical.
Apptainer is not a fork of the old Singularity project: Apptainer is the original project, but the community voted to change its name. It also came under the umbrella of the Linux Foundation:
* https://apptainer.org/news/community-announcement-20211130/
Sylabs (where the original Singularity author first worked) was the one that forked off the original project.
markus92 · 3h ago
Luckily they’re still compatible with each others containers. Can use Apptainer to build the container then run it on Singularity and vice-versa.
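E.g. (file names hypothetical):

    apptainer build tool.sif tool.def      # build with one implementation...
    singularity exec tool.sif tool --help  # ...run the same SIF with the other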
Simran-B · 3h ago
Flatpak is considering a move from OSTree to containers, citing the well-maintained tooling as a major plus point. How would that differ from Apptainer?
Imustaskforhelp · 57m ago
Maybe the idea is that flatpak can have better sandbox control over applications running in flatpak using xdg-dbus, i.e. you can select the permissions that you want to give to a flatpak application, so sometimes it can act near-native and not be completely isolated like containers.
Also I am not sure if apptainers are completely isolated.
Though I suppose through tools like https://containertoolbx.org/ that point also becomes moot, and then I guess if they move to containers, doesn't it sort of become like toolbx?
To be honest, I think a lot of tools can have a huge overlap between them, and I guess that's okay too
Tepix · 1h ago
I agree with Havoc, the message is unclear: is Apptainer a replacement for Flatpak on the desktop, or is it targeting the server?
MillironX · 44m ago
Server - but this is kind of the wrong question. Apptainer is for running CLI applications in immutable, rootless containers. The closest tool I can think of is Fedora Toolbx [1]. Apptainer is primarily used for distributing and reusing scientific computing tools because it doesn't allow root, doesn't allow changes to the rootfs in each container, automatically mounts the working directory, and works well with GPUs (that last point I can't personally attest to).
[1]: https://docs.fedoraproject.org/en-US/fedora-silverblue/toolb...
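Most of that is visible from a single interactive session (image name hypothetical):

    $ apptainer shell tool.sif
    Apptainer> whoami              # same unprivileged user as on the host
    Apptainer> ls                  # host working directory, mounted automatically
    Apptainer> touch /usr/bin/x    # fails: the SIF rootfs is read-only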
I have yet to see a container technology that doesn't break a myriad of things.
Hilift · 1h ago
I thought the "hardened images" were a step in the right direction. It's a pain to have to deal with vulnerabilities on ephemeral short-lived containers/instances. Having something hyper up to date is welcome.
https://www.docker.com/blog/introducing-docker-hardened-imag...
To some extent I understand the problem that these solutions are trying to address, I'm just not sure that simply stuffing things into containers is really the right solution.
Perhaps the problems need to be addressed on a more fundamental level.
This paper might help
It is very handy in SLURM clusters and servers where you have no sudo.
Some friction using it though: is there a good in-depth book about it?
exabrial · 1h ago
Honestly switched to systemd isolation features (chroot, ro/rw mounts, etc) and never looked back.
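For reference, a minimal sketch of the kind of unit options meant here (service name and paths hypothetical):

    [Service]
    ExecStart=/opt/myapp/bin/server
    RootDirectory=/srv/myapp-root    # chroot-style isolation
    ProtectSystem=strict             # read-only /usr, /etc, ...
    ReadWritePaths=/var/lib/myapp    # explicit rw carve-out
    PrivateTmp=yes
    NoNewPrivileges=yes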
aa-jv · 3h ago
Very interesting .. I was recently tasked with getting a bespoke AI/ML environment ready to ship/deploy to what can only be considered foreign environments .. and this has proven to be quite a hassle because, of course: python.
So I guess Apptainer is the solution to this use case - anyone had any experience with using it to bundle up an AI/ML application for redistribution? Thoughts/tips?
SirHumphrey · 3h ago
I did start to use them for AI development on the HPC I have access to and it worked well (GPU pass-through basically automatic, the performance seemed basically the same) - but I mostly use them because I do not want to argue with administrators anymore that it's probably time they update CUDA 11.7 (as well as Python 3.6) - the only version of CUDA currently installed on the cluster.
aa-jv · 2h ago
Ah, right. So, no matter what container comes along to solve this problem, there's still the BOFH factor to deal with ..
Curious though, how are you doing this work without admin privs?
SirHumphrey · 1h ago
It's a bit annoying, but you can install conda without admin privileges, and apptainer was installed for compliance with some EuroHPC project and luckily made accessible to all users. The container allows me to have an environment where I have "root" access and can install software.
The most annoying thing is not the lack of privileges, but that the compute nodes have no internet access (because "security") besides connecting to the headnode, so there is the whole song and dance of running the container (or installing conda packages) on the headnode so I can download everything I need, then saving the state and running it on the compute node.
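In practice the dance looks something like this (image and script names hypothetical):

    # on the headnode, which has internet access: fetch the image once
    apptainer pull torch.sif docker://pytorch/pytorch:latest
    # on a compute node, via the scheduler: run with GPU pass-through
    srun apptainer exec --nv torch.sif python train.py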
ethan_smith · 2h ago
Apptainer excels for AI/ML distribution because it handles GPU access and MPI parallelization natively, with better performance than Docker in HPC environments. The --fakeroot feature lets you build containers without sudo, and the SIF file format makes distribution simpler than managing Docker layers.
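A minimal definition file, just to show the shape (contents hypothetical):

    Bootstrap: docker
    From: python:3.11-slim

    %post
        pip install --no-cache-dir numpy

    %runscript
        exec python "$@"

Then it builds and ships as one file:

    apptainer build --fakeroot myapp.sif myapp.def
    apptainer run myapp.sif script.py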
Havoc · 3h ago
Wish these sorts of projects would do a better job articulating what the value proposition is over leading existing ones.
Like why should I put time into learning this instead of rootless podman? Aside from this secret management thing it sounds like the same feature set.
kitd · 1h ago
From the Introduction [1]
Many container platforms are available, but Apptainer is focused on:
* Verifiable reproducibility and security, using cryptographic signatures, an immutable container image format, and in-memory decryption.
* Integration over isolation by default. Easily make use of GPUs, high speed networks, parallel filesystems on a cluster or server by default.
* Mobility of compute. The single file SIF container format is easy to transport and share.
* A simple, effective security model. You are the same user inside a container as outside, and cannot gain additional privilege on the host system by default. Read more about Security in Apptainer.
You should put time into learning this if you are going to be running HPC jobs on clusters, because some HPC clusters support this for jobs and not much else.
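The signing/verification part is concretely just this (key setup is a one-time step):

    apptainer key newpair        # one-time: generate a PGP keypair
    apptainer sign mytool.sif    # embed a signature in the SIF
    apptainer verify mytool.sif  # check it before running on the cluster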
tecleandor · 2h ago
So is this popular in science or data analysis / forecasting or something like that?
I'm not familiar with it (I don't know if it changed names or just didn't notice)
misnome · 1h ago
Used to be called “Singularity”
maxnoe · 2h ago
This project is way older than (rootless) podman.
Imustaskforhelp · 54m ago
But aren't their premises just the same though? I wonder how different "learning" apptainer is compared to "learning" podman, given that, at least with podman-compose and many other such things, podman really is equivalent to docker in a lot of scenarios, with a mostly 1:1 mapping.
eraser215 · 3h ago
Isn't this another fork by the same douche who claims he created centos and started rocky? His track record as an open source person is pretty atrocious, contrary to popular belief.
eisbaw · 3h ago
Argh, yet another way to distribute userland images.
AppImage does it right by including the runtime in the image itself - no prior installation needed.
More nix less containers, btw.
E.g. for on-the-fly containers based on Nix:

    docker run -ti nixery.dev/shell/cowsay bash
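And since nixery serves standard OCI images, I'd expect the same trick to work with apptainer too (untested sketch):

    apptainer exec docker://nixery.dev/shell/cowsay cowsay moo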
Imustaskforhelp · 46m ago
Of course, setting the taking-shots-at-a-project-just-for-its-existence thing aside:
I actually really like the nixery.dev idea. Sounds kinda neat.
If I am being really honest, there are a lot of ways to go about this tbh; there are ways to run nix inside of docker and docker inside of nix too.
There are ways to convert docker images into an OS too, and there are tools like coreos.
There is nix-shell, and someone on hackernews told me about comma, which I am still figuring out (haha! Thanks to them!)
And if one just wants isolation, they can use bubblewrap or pledge (by jart), and I guess there is complete beauty and art in this container-esque space; I truly love it a lot.
I am actually wondering right now whether traefik (as load balancer) + nats (for a modular monolith) + podman/coreos + (cloudflare tunnels?) + any VPS could be a really good alternative to kubernetes. You can use nix to build those containers too, or go the other way around and have NixOS on the VPS with traefik + nats.
I mean, there is docker swarm too if you don't want any of that complexity, though people say it's less worked on. Still, I guess there is a sort of fun in reinventing the wheel of kubernetes. But I don't have too many problems with kubernetes, I suppose, because of the existence of helm charts (I haven't used kubernetes). Helm charts are written in go templates and I think they are a bit clunky, but I love golang and I feel like I would be okay with writing them. I guess I am one of the people who just believes in scaling horizontally first rather than vertically, until the economics break and it's cheaper to use / learn kubernetes than not.