Apptainer: Application Containers for Linux
88 points by cl3misch on 6/26/2025, 9:45:21 AM (apptainer.org) | 51 comments
The problems are:
1. You can't have apptainers that use each other. The most common case was things like Make, GCC, Git, etc. If Make is in a different apptainer from GCC, it breaks: as soon as you're inside the Make container, GCC is no longer visible.
2. It doesn't work if any of your output artefacts depend on things inside the container. For example, you use your GCC apptainer to compile a program. It appears to work, but when you run it you find it actually linked against something in the apptainer that is no longer visible (see the sketch after this list). The same problem applies to C headers.
3. We had constant issues with PATH getting mangled, so things outside the apptainer that should have been available couldn't be seen.
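Problem 2 in particular is easy to hit and hard to notice. A rough sketch of how it shows up (paths and library names are made up for illustration):

    # inside the GCC apptainer: compiles and links fine
    apptainer exec gcc.sif gcc -o myprog myprog.c

    # back outside the container: the binary references a library
    # that only existed inside the image
    ldd ./myprog
    #   libfoo.so.1 => not found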
All in all it was a nice idea, but it ended up causing far more hassle than it was worth. It was much easier to just use an old OS (RHEL 8) and get everything working directly on that.
For my workflows on HPC, I use apptainers as basically drop-in replacements for Docker, and for that they work quite well. The biggest benefit is that the containers are unprivileged. This means you can’t do a lot of things (in particular complex networking), but it also makes them much more secure for multi-tenant systems (like HPC).
(I know Docker and Apptainer are slightly different beasts, but I’m speaking in broad strokes about running without extra permissions.)
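To give a sense of how drop-in it is: apptainer can pull and run images straight from Docker registries (image and file names here are just examples):

    # pull a Docker Hub image and convert it to a single SIF file
    apptainer pull ubuntu.sif docker://ubuntu:24.04

    # run a command in it, unprivileged; $HOME is bind-mounted by default
    apptainer exec ubuntu.sif cat /etc/os-release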
You can use a container as a single environment in which to do development, and that works fine. But they are by definition an isolated environment with different dependencies than other containers. The result of compiling something in a container necessarily needs to end up in its own container.
...that said, you could use the exact same container base image, and make many different container images from it, and those files would be compatible (assuming you shipped all needed dependencies).
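For example, a minimal shared base that everyone on a project builds from, so compiled artifacts stay compatible across the derived images (a sketch; package choices are illustrative):

    # dev.def -- shared base definition (sketch)
    cat > dev.def <<'EOF'
    Bootstrap: docker
    From: ubuntu:24.04

    %post
        apt-get update && apt-get install -y build-essential git

    %runscript
        exec "$@"
    EOF

    apptainer build dev.sif dev.def

Every image built on the same base ships the same libraries and headers, which is what makes their outputs interchangeable.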
- Need to run more than one activity in a single container (this is an anti-pattern in other container technologies)
- HPC (and sometimes college) environments
- Want a single-file distribution model (although it doesn't support deltas)
- Cryptographically sign a SIF file without an external server (see the example after this list)
- Robust GPU support
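The signing point is nice in practice, since it's built into the CLI and needs no registry (a sketch; the file name is illustrative):

    # generate a PGP keypair in apptainer's local keyring
    apptainer key newpair

    # sign the image in place, then verify it -- no external server needed
    apptainer sign container.sif
    apptainer verify container.sif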
We use singularity on HPC systems (Leonardo, LUMI, Fugaku, NeSI NZ, Levante), but some devs and researchers have apptainer installed locally.
We found a timezone bug a few days ago in our Python code (matplotlib, xarray, etc.) that didn't happen with apptainer.
As the code bases are still fairly similar, I could confirm apptainer had fixed it while Singularity CE was still affected by the bug -- singularity replaces the UTC timezone file with the user's timezone, Helsinki EEST in our case on the LUMI HPC.
https://github.com/sylabs/singularity/issues/3686
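If you want to check your own images, a rough way to compare what each runtime presents as the timezone file (a sketch, not the exact reproduction from the issue):

    # if singularity has swapped the UTC file for the host timezone,
    # the checksums will differ between the two runtimes
    apptainer exec image.sif md5sum /etc/localtime
    singularity exec image.sif md5sum /etc/localtime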
Apptainer is not a fork of the old Singularity project: Apptainer is the original project; the community voted to change its name, and it came under the umbrella of the Linux Foundation:
* https://apptainer.org/news/community-announcement-20211130/
Sylabs (where the original Singularity author worked at the time) is the one that forked off from the original project.
https://journals.plos.org/plosone/article?id=10.1371/journal...
If you ever use a shared cluster at a university or run by the government, Apptainer will be available, and Podman / Docker likely won't be.
In these environments, it is best not to use containers at all, and instead get to know your sysadmin and understand how they expect the cluster to be used.
apptainer images are straight filesystem images with no overlayfs or storage driver magic happening -- just a straight loop mount of a disk image.
this means your container images can now live on your network filesystem.
If there's a local disk on the compute nodes, then you just run the container from the remote image registry, and it downloads and extracts the image temporarily to disk. No need for a network filesystem.
If the containerized apps want to then work on common/shared files, they can still do that. You just mount the network filesystem on the host, then volume-mount that into the container's runtime. Now the containerized apps can access the network filesystem.
This is standard practice in AWS ECS, where you can mount an EFS filesystem inside your running containers in ECS. (EFS is just NFS, and ECS is just a wrapper around Docker)
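The apptainer equivalent is just a bind mount (paths here are illustrative):

    # mount the cluster's network filesystem into the container at /data
    apptainer exec --bind /nfs/shared:/data image.sif ls /data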
There is also the problem of simply distributing the image and mounting it up. You don't want to waste cluster time at the start of your job pulling down an entire image to every node and then extracting the layers -- it is far faster to put a filesystem image in your home directory and loop-mount that image.
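Which is exactly the apptainer workflow -- materialize the image once as a single file on the shared filesystem, then every job loop-mounts it directly (names are illustrative):

    # one-time: pull the image into a single SIF file in $HOME
    apptainer pull ~/images/app.sif docker://python:3.12-slim

    # per job: no pull, no layer extraction -- the .sif is loop-mounted
    apptainer exec ~/images/app.sif python --version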
Also, I am not sure apptainers are completely isolated.
Though I suppose with tools like https://containertoolbx.org/ that point becomes moot, and if they move to containers, doesn't it sort of become like toolbx?
To be honest, I think a lot of these tools can have huge overlap between them, and I guess that's okay too.
Find the code at https://github.com/evertheylen/probox or read my blog post at https://evertheylen.eu/p/probox-intro/
[1]: https://docs.fedoraproject.org/en-US/fedora-silverblue/toolb...
What's the appeal of using this over unshare + chroot to a mounted tarball with a tmpfs union mount where needed? Saner default configuration? Saner interface to cgroups?
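For reference, the DIY version being described looks roughly like this (a sketch; /proc and /dev setup, and the union mount via overlayfs, are omitted):

    # unpack a rootfs tarball, enter new user+mount namespaces, chroot in
    mkdir rootfs && tar -xf image.tar -C rootfs
    unshare --map-root-user --mount sh -c '
        mount -t tmpfs tmpfs rootfs/tmp   # tmpfs where needed
        chroot rootfs /bin/sh'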
There is some friction in using it, though: is there a good in-depth book about it?
https://www.docker.com/blog/introducing-docker-hardened-imag...
Perhaps the problems need to be addressed on a more fundamental level.
This paper might help
So I guess Apptainer is the solution to this use case -- has anyone had experience using it to bundle up an AI/ML application for redistribution? Thoughts/tips?
Curious though, how are you doing this work without admin privs?
The most annoying thing is not the lack of privileges, but that the compute nodes have no internet access (because "security") apart from connecting to the head node. So there is a whole song and dance: run the container (or install conda packages) on the head node so I can download everything I need, save the state, and then run it on the compute nodes.
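Concretely, the dance looks something like this (assuming Slurm; the image name is just an example):

    # on the head node (has internet): download everything into one file
    apptainer pull app.sif docker://pytorch/pytorch:latest

    # on a compute node (no internet): the .sif already contains it all
    srun apptainer exec --nv app.sif python train.py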
Like, why should I put time into learning this instead of rootless podman? Aside from this secret-management thing, it sounds like the same feature set.
I'm not familiar with it (I don't know if it changed names or if I just didn't notice).
More nix, less containers, btw.
E.g. docker run -ti nixery.dev/shell/cowsay bash for on-the-fly containers based on Nix.
I actually really like the nixery.dev idea. Sounds kinda neat.
To be really honest, there are a lot of ways to go here: there are ways to run nix inside of Docker, and Docker inside of Nix too.
There are ways to convert Docker images into an OS too, and there are tools like CoreOS.
There is nix-shell, and someone on Hacker News told me about comma, which I am still figuring out (haha! Thanks to them!)
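comma is a small idea once you see it -- it runs a program out of nixpkgs on demand, without installing it:

    # run cowsay straight from nixpkgs, no install step
    , cowsay "hello from nixpkgs"

    # roughly equivalent to:
    nix-shell -p cowsay --run 'cowsay "hello from nixpkgs"'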
And if one just wants isolation, they can use bubblewrap (or jart's pledge), and I guess there is complete beauty and art in this container-esque space -- I truly love it a lot.
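bubblewrap in particular is tiny and composable; a minimal sandbox looks something like this (a sketch -- the bind choices depend on what the sandboxed program needs):

    # read-only view of the host, fresh /proc and /dev, no shared namespaces
    bwrap --ro-bind / / \
          --proc /proc \
          --dev /dev \
          --unshare-all \
          --die-with-parent \
          /bin/sh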
I am actually wondering right now whether traefik (as load balancer) + NATS (for a modular monolith) + podman/CoreOS + (Cloudflare tunnels?) on any VPS could be a really good alternative to Kubernetes -- and you can use Nix to build those containers too, or go the other way around and put NixOS on the VPS with traefik + NATS.
I mean, there is Docker Swarm too if you don't want any of that complexity, but people say it's less actively worked on. Still, I guess there is a sort of fun in reinventing the Kubernetes wheel. I don't have too many problems with Kubernetes, I suppose, because of the existence of Helm charts (I haven't used Kubernetes). Helm charts are written in Go templates, and I think they are a bit clunky, but I still love Go and I feel like I would be okay writing them. I guess I am one of those people who believes in scaling horizontally first rather than vertically, until the economics break and it's cheaper to use (and learn) Kubernetes than not.