I've been running testing/trixie since the end of 2023 or so. (I generally always run testing, but stick with stable for ~6 months after stabilization, in order to avoid lots of package churn in new-testing.)
It's been what I expect from Debian: boring and functional. I've never run into an issue where the system wouldn't boot after an update (I usually update once every 2-4 weeks when on testing), and for the most part everything has worked without the need to fix broken packages or utter magic apt incantations.
Debian has always been very impressive to me. They're certainly not perfect, but what they can do based on volunteers, donations, and sponsors, is amazing.
progmetaldev · 1d ago
This is exactly why I use Debian when I install Linux. I want something that will keep chugging along, yet may not have the most cutting edge software. I can take my time with the system, and know that it is solid.
If I need newer software that isn't in their package repository, I understand that I have the ability to compile what I need, or at least make an active decision to modify my system to run what I want. Basically, the possibility of instability is a conscious choice for me, which I do sometimes take.
BrandoElFollito · 7h ago
Same here. I run Debian and Docker on top to have current services that do not depend on the host's freshness.
I just need boring stability to wildly experiment in isolation
rurban · 22h ago
Exactly what I expect from the latest Debian. Boring and not working. Too many hacks by people who cannot work with upstream and have no idea what they are doing. But they are having their own good idea of a "proper" layout and package management.
progmetaldev · 20h ago
Curious what you run, instead of Debian. I haven't had the same experience as you've had, but for myself, Debian has just worked other than having to provide my own wireless adapter driver .so file when I installed off a thumb drive without an Ethernet connection. While I have more experience than the average Linux user, and got started with Slackware in the 90's and then moved to Red Hat in the very late 90's, it's been a good 20 years since I used a Linux system full time. That's as a desktop, I've had no issues running as an application server for 2 decades.
I haven't run into a scenario where the desktop has caused me issues, only with Windows-only software that I sometimes require. What software has caused you issues that doesn't play nicely with Debian? What hacks are in place to mitigate upstream issues? I'm honestly curious, and if you don't use Debian, what distribution do you use regularly?
abenga · 22h ago
Exactly where is Debian "not working"?
rurban · 19h ago
Almost daily at work. Always have to verify with my fedora machines that it is indeed debian/ubuntu and not upstream.
gpderetta · 15h ago
debian or ubuntu? I have had terrible experience in the past with ubuntu breaking randomly, but debian has been fairly stable for a desktop machine
exiguus · 1d ago
This could be me. I do the same, and i already plan to update to Forky at the beginning of 2026.
_fat_santa · 14h ago
> plan to update to Forky
Why do you want to switch to Ubuntu?
I'm sorry I had to, I'll show myself out
eklavya · 15h ago
Want to do that but there is no official nvidia driver and cuda installation support for testing. Only major releases.
tguvot · 1d ago
i've been running unstable since 2004 or so. i think it broke only once, when I skipped a year of updates
blueflow · 1d ago
TIL there are 14 subtly different naming schemes for network interfaces[1]. "predictable" my ass.
14 different schemes, multiplied by some acting slightly differently in every version. Sure, you can pin it, but that only fixes their internal back and forth, it's only possible via the kernel cmdline, and there is no guarantee for how long the old versions will stay available. As they deprecated much more invasive things in the past (e.g., cgroupv1), I'd expect them to also drop older versions here, breaking one's naming again.
And sure, one can pin interfaces to custom names, but why should anybody have to bother with such things?!
I like systemd a lot, but this is one of the things they fumbled big time, and seemingly they still aren't done.
Pinning interfaces by their MAC to a short and usable name would, e.g., have been much more stable than doing it by PCI slot, which changes rather often with firmware updates, new hardware, newer kernels exposing newer features, and so on.
This works well for all but virtual functions, but those are sub-devices of their parent interface anyway and can just get named with a suffix added to the parent name.
woleium · 1d ago
I imagine they went against mac address because it is not immutable, some folks rotate mac addresses for privacy/security reasons.
tlamponi · 1d ago
The original one is still there. Systemd even knows about that; it's differentiated as MAC vs PermanentMAC.
duskwuff · 1d ago
There are, unfortunately, some older devices (like some Sun systems) which use the same MAC address for every network interface on the device.
em-bee · 1d ago
i thought about that, but couldn't you access the hardcoded address to identify the card?
but you also want to be able to change a card in a server without the device name changing. at least that used to be an issue in the past.
grantla · 1d ago
> as they deprecated much more invasive things in the past (e.g., cgroupv1) I'd expect them to also drop older versions here, breaking ones naming again
Note that the naming scheme is in control of systemd, not the kernel. Even if it is passed on the kernel commandline.
tlamponi · 1d ago
Yeah, I know; I spent more than a week looking for options to reduce the impact for all of our users.
And note that cgroupv1 also still works just fine in the kernel; only the part that systemd controlled was removed from systemd. You can still boot with cgroupv1 support on, e.g., Alpine Linux with OpenRC as init 1. So I'm not sure that lessens my concerns about there being no guarantees for older naming-scheme versions; maintaining triple digits of them sure has its cost too.
And don't get me wrong, sunsetting cgroupv1 was reasonable, and while it was a lot of churn, at least it was a one-time thing. The network interface naming situation is periodic churn, guaranteed to bite you every now and then just by using the defaults.
Scramblejams · 1d ago
Can you tell me why NamePolicy=keep doesn't do the trick?
I'm looking for options myself to keep a Debian bare metal server I admin from going deaf and mute the next time I upgrade it... It still uses an /etc/network/interfaces file that configures a bridge for VMs to use, and the bridge_ports parameter requires an interface name which, when I upgraded to Bookworm, changed.
At this rate maybe I'll write a script that runs on boot and fixes up that file with whatever interface it finds, then restarts the network.
bbarnett · 1d ago
This worked brilliantly in Debian for more than a decade, had almost zero downside, and just did what was asked. I went through 3+ dist-upgrades, for the first time in my life, without a NIC change.
It was deprecated for this nonsense in systemd.
Yes, there were edge cases in the Debian scheme. Yet it did work with VMs (as most VMs kept the same MAC in config files), and it was easy to maintain if you wanted 'fresh'. Just rm the pin file in the udev dir. Done.
Again it worked wonderful on every VM, every bare metal system I worked with.
One of the biggest problems with systemd, is it seems to be developed by people that have no real world, industrial scale admin experience. It's almost like a bunch of DEVs got together, couldn't understand why things were "so confusing", and just figured "Oh, it must be a mistake".
Nope.
It's called covering edge cases, ensuring things are stable for decades, because Linux and the init system are the bottom of the stack. The top of the stack changes like the wind in spring, but the bottom of the stack must be immensely stable, with consensus-driven and, I repeat, stable change.
Systemd just doesn't "get" that.
mjg59 · 20h ago
systemd's design choices here were influenced by a lot of bugs Red Hat received where failed hardware was swapped out and interface names changed as a result. Real world enterprise users wanted this, it wasn't an arbitrary design choice.
throw0101d · 4h ago
> systemd's design choices here were influenced by a lot of bugs Red Hat received where failed hardware was swapped out and interface names changed as a result.
Under RH-based systems the ifcfg-* files had a HWADDR variable, so if you swapped a card you could get the new MAC address and plug it in there and get the same interface name. There were also udev rules where you could map names to particular hardware, including particular MACs.
> Real world enterprise users wanted this, it wasn't an arbitrary design choice.
As a real world sysadmin, now working a few decades in this field (starting with non-EL-RH, then BSD, then Solaris, then RHEL, Debian, and now Ubuntu), I have never wanted this.
bbarnett · 11h ago
That's quite the jump.
Some real world users asked for a fix. That does not mean they asked for this specific fix.
There were other ways to handle this.
With Debian's system, you could wipe the state files, and for example eth0/etc would be reassigned per initialization order. Worked fine.
Even if you didn't like that, pre-Systemd udev allowed assigning names by a variety of properties, including bus identifiers.
It was merely that Redhat, as usual, was so lacking in sophistication, unlike Debian.
foresto · 1d ago
I dislike systemd's Predictable Network Interface Names, so I disable them with this kernel command line option: net.ifnames=0
Welcome back, eth0. :)
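To make that stick on a typical Debian install with GRUB, the rough recipe (adjust for your bootloader) is:

    # /etc/default/grub -- append to any options already there
    GRUB_CMDLINE_LINUX="net.ifnames=0"
    # then regenerate the bootloader config and reboot
    update-grub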
sigio · 1d ago
Yup.. use this default on all my systems. Did a bookworm->trixie upgrade today on my mailserver, and everything worked, as it still just has eth0 ;)
lynx97 · 1d ago
The "stable" interface naming scheme is a scam. And I have proof. Test upgraded a VM today, from bookworm to trixie. And guess what. Everything worked, except after reboot the network interface was unconfigured? Guess what. The name changed...
pferde · 1d ago
That can only happen if the emulated hardware layout presented to the VM changes. I'd look at that before calling anything a scam.
tlamponi · 1d ago
Scam is probably the wrong word, and its choice might be a bit feelings-fueled, but it's really not true that this only depends on the HW.
systemd also changes behavior regarding which naming policies are the default and what it considers as input; it has done that since forever, but started versioning it with v238 [0].
Due to that, the HW can stay exactly the same but names still change. I see this in VMs that stay exactly the same: no software update, no change in how the QEMU CLI gets generated, really nothing changed from the outside virtual HW POV, and the interface name still changes.
The underlying problem was a real one, but the solution seems like a bit of a sunk cost fallacy, and it added more problem dimensions than previously existed.
Besides, even if the HW did change, shouldn't a _predictable_ naming scheme be robust enough not to care about that, as long as the same NIC is still plugged in somewhere?
Disclaimer, as stated elsewhere: I really like systemd, I'm not one that speaks out against it lightly, but the IF naming is not something they got right, but rather made worse for the default case.
Being able to easily pin interface names through .link files is great, but requiring users to do that or have no network after an upgrade, especially for simple one-NIC use cases in a completely controlled environment like a VM, is just bonkers.
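For reference, a pin-by-permanent-MAC .link file is only a few lines; a sketch with made-up values (PermanentMACAddress= needs a reasonably recent systemd):

    # /etc/systemd/network/10-lan0.link  (hypothetical name)
    [Match]
    PermanentMACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=lan0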
Ah, ok, I didn't think of systemd version changes. Thanks.
Regarding your rhetorical question about "the same NIC", I think the problem is in determining whether the NIC is the same, and it is not an easy one to solve. I remember that older Suse Linux versions used to pin the interface name to the NIC's MAC address in a udev rule file that got autogenerated when a NIC with a given MAC first appeared on the system, but they stopped doing that.
tlamponi · 1d ago
Yeah, the permanent MAC address (i.e., the one the card actually reports to the system, not the dynamic one it can use) would be the safest bet, as that is the most stable thing there is. More importantly, it is very relevant for switches and firewalls in enterprise settings, so if it changes, network access is often likely to be broken anyway. One basically can only win by using the MAC as the main identifier IMO, at least compared to the current status quo.
As long as you only have NICs with different permanent MAC addresses installed, that does not matter for getting actually long-term stable names.
And for the other case you can still fall back to the other policies; it will still be much more stable by default.
Please note that I'm not saying MAC is perfect, but using something that is actually tied to the NIC itself would fare much better by default than the NIC's position, which is determined by a bunch of volatile information and normally does not matter to me at all; e.g., I will always use a 100G NIC for the Ceph private network and the 25G ones as the public one, no matter where they are plugged in. That someone configures something by location is the exception, not the norm.
dpkirchner · 22h ago
I don't know if this is still the case but the last time I went without ifnames=0 adding a GPU would cause all the network interfaces to get new names. Junk.
hsbauauvhabzb · 1d ago
That’s not a scam and that’s not proof. That’s an upgrade problem. Stop misusing the word and devaluing it.
unethical_ban · 1d ago
The best use of AI I've gotten so far is having it explain to me how to manage a Fedora Server's core infrastructure "the right way". Which files, commands, etc. to permanently or temporarily change network, firewall, DNS, NTP settings.
sherr · 1d ago
Looking forward to the release.
I use Debian Stable on almost all the systems I use (one is stuck on 10/Buster due to MoinMoin). I installed Trixie in a container last week, using an LXC container downloaded from linuxcontainers.org [1].
Three things I noted on the basic install :
1) Ping didn't work due to changed security settings (iputils-ping) [2]
2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config*. Maybe this is something specific to the container downloaded.
3) Systemd-resolved uses LLMNR as a name lookup alternative to DNS, and pinging a firewalled host failed because the lookup seemed to be LLMNR accessing TCP port 5355. I disabled LLMNR.
Generally, Debian version updates have been successful for me for a few years now, but I always have a backup, and always read the release notes.
> 2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config.
sshd still reads /etc/ssh/sshd_config at startup. As far as I know, this is hard-coded in the executable.
What Debian has changed happens before the daemon is launched: the service is socket activated.
So, _if you change the default port of sshd_ in its config, then you have to change the activation:
- either enable the sshd@service without socket activation,
- or modify the sshd.socket file (`systemctl edit sshd.socket`) which has the port 22 by default.
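For the second option, a drop-in override is usually enough; a minimal sketch (the unit may be called ssh.socket or sshd.socket depending on the distribution):

    # systemctl edit ssh.socket  -> /etc/systemd/system/ssh.socket.d/override.conf
    [Socket]
    # the empty assignment clears the default port 22 before adding the new one
    ListenStream=
    ListenStream=2222

then restart the socket unit for it to take effect.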
Since Debian already has an environment file (/etc/default/ssh), which is loaded by this service, the port could be set in a variable there and loaded by the socket activation. But then it would conflict with OpenSSH's own files. This is why I've always disliked /etc/default/ as a second level of configuration in Debian.
JdeBP · 1d ago
I suspect that systemd people are looking at this thread in perplexity, and probably doing their thing (that I've seen over the years) of regarding the world of Debian as being amazingly behind the times in places.
The SSH server being a socket unit with systemd doing all of the socket parallelism-limiting and accepting was one of the earliest examples of socket activation ever given in systemd. It was in one of Lennart Poettering's earliest writings on the subject back in 2011.
And even that wasn't the earliest discussion of this way of running SSH by a long shot, as this was old news even before systemd was invented. One can go back years earlier than even Barrett's, Silverman's, and Byrnes's SSH: The Secure Shell: The Definitive Guide published in 2005, which is one of many places explaining that lots of options in sshd_config get ignored when SSH is started by something else that does all of the socket stuff.
Like inetd.
This has been the case ever since it has been possible to put an SSH server together with inetd in "nowait" mode. Some enterprising computer historian might one day find out when the earliest mention of this was. It might even be in the original 1990s INSTALL file for SSH.
rodgerd · 1d ago
As you say, network programs being activated by a master network manager is, of course, part of the history of Unix. It's ironic to see knee-jerk complaints about it.
LooseMarmoset · 1d ago
systemd-resolved is an effing nightmare when combined with network-manager. these two packages consistently manage to stomp all over DNS resolution in their haste to be the one true source of name resolution. i tried enabling systemd-resolved as part of an effort to do dns over https and i ended up with zero dns. i swear that /etc/resolv.conf plus helper scripts is more consistent and easy.
wpm · 1d ago
It’s why I always say in the typical “systemd bad” threads that systemd the init system is great; it’s all the systemd-* everything-else components that give it a bad name.
I want systemd nowhere fucking near my NTP or DNS config.
blueflow · 1d ago
Thank god you can enable and disable each of these components in complete isolation, so you don't suffer any kind of lock-in from systemd.
dijit · 1d ago
fighting your distro in practice is a total nightmare.
pavon · 1d ago
I've not had any problems with swapping out systemd-* with other packages, including -coredump, -cron, -oomd, -resolved or -timesyncd. Even journald was fairly painless to swap out. Unlike systemd itself, the distros approach them not so much as a core part of the userland, but as lightweight basic implementations that meet many users' needs, but which are in no way a replacement for more fully-functional implementations.
dijit · 17h ago
it’s not possible to swap out journald.
yehoshuapw · 1d ago
I ended up in gentoo mostly just to avoid systemd
(used it before, mostly to learn. went to debian for a new laptop. gave up after fighting systemd.
I'm aware of devuan and artix, but gentoo just worked (after all the time spent))
tpoindex · 13h ago
MX Linux for the win. Debian based, but defaults to 'init'. Booting with systemd is an option. Just enough systemd-* running to make things easy and seamless.
$ ps agxf|grep 'systemd'
607 ? S 0:00 /lib/systemd/systemd-udevd
2201 ? S 0:00 /sbin/cgmanager --daemon -m name=systemd
2726 ? S 0:00 /lib/systemd/systemd-logind
Also can install Nvidia or AMD video drivers.
bbarnett · 1d ago
Debian, at least up to bookworm, works perfectly without systemd. The easiest way to make this transition is to install Debian with nothing but 'standard system utilities' and 'SSH server' (if you want) selected during install.
There are a few edge cases, packages which require systemd, but I've been running thousands of systems including desktops this way for a decade.
Yes, I also run thousands of systems with systemd too.
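Roughly, the switch looks like this (a sketch, assuming sysvinit-core is still installable in your release):

    apt install sysvinit-core sysvinit-utils
    # optionally pin the systemd init so it can't be pulled back in
    # /etc/apt/preferences.d/no-systemd
    Package: systemd-sysv
    Pin: release *
    Pin-Priority: -1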
spookie · 22h ago
Have you looked at Devuan? Genuinely a good experience.
JdeBP · 1d ago
It's important to remember that it has to be this aggressive in excluding systemd, too. There is quite a lot of strong coupling amongst the parts of systemd, so there are very few half-measure scenarios, where one can have only some systemd stuff, that will actually function correctly in toto.
Moreover, there are the odd one or two unrelated packages that just happen to have the string "systemd" in their names. (-:
tremon · 12h ago
Note that this will still allow systemd-udevd because it's packaged under its original name, udev.
LooseMarmoset · 5h ago
yeah. you need eudev instead. still, I appreciate the guide!
beagle3 · 1d ago
I’ve been dropping systemd-timesyncd and using chrony since forever, and it works well. I’m sure some systemd-* things are harder to replace, but not every replacement is a fight against your distro.
spookie · 22h ago
There are reasonable ones out there. Just use a well maintained one that aligns with you.
LooseMarmoset · 1d ago
devuan saved just enough of my sanity for me to function
spookie · 22h ago
Yeah, pretty incredible distro. Some rough edges here and there but man, even maintenance is a dream. It's just plain simpler to diagnose, and actually control what the system does. Instead of having to go down abstraction hell land.
Don't get me wrong, systemd is cool. But my god, people really abuse it. Especially distros. Some really make it hard to understand what service is in actual control. Why wrap daemons in daemons in daemons? With the worst possible names and descriptions to boot.
chrsw · 1d ago
Yes! I'm hoping for another Devuan release after this.
tremon · 12h ago
chattr +i /etc/resolv.conf
It's the only long-term solution to that problem that I endorse. Every attempt of working with the system, whether via systemd.network, resolved.conf or resolvconf, has always eventually bit me one way or another.
RVuRnvbM2e · 1d ago
I've been using this combination successfully for a long time with no issues. In fact it is the only way to handle complex DNS setup on Linux.
If you have specific issues, please file them over at systemd's GitHub issue tracker.
e2le · 1d ago
What is the rationale for changing OpenSSH into a socket activated service? Given that it comes with issues, I assume the benefits outweigh the downsides.
idoubtit · 1d ago
> Given that it comes with issues, I assume the benefits outweigh the downsides.
Any change can introduce regressions or break habits. The move toward socket activation for sshd is part of a larger change in Debian. I don't think the Debian maintainers changed that just for the fun of it. I can think of two benefits:
+ A service can restart without interruption, since the socket will buffer the requests during the restart.
+ Dependencies are simpler and faster (waiting for a service to start and accept requests is costly).
My experience is that these points largely outweigh the downsides (the only one I can think of is that the listening port could end up being configured in two places).
zbentley · 15h ago
I also have a hunch that socket activation allows for more predictability around what a not-running/listening-for-activation service's port is doing at the syscall/kernel level, which in turn makes power saving and/or sleep states a little more predictably efficient.
Total guess, mind you.
blueflow · 1d ago
> Dependencies are simpler and faster (waiting for a service to start and accept requests is costly)
Yeah, but requiring a service's response is why it's a dependency in the first place, no?
TacticalCoder · 1d ago
> Given that it comes with issues, I assume the benefits outweigh the downsides.
I think it doesn't outweigh the downsides. Let's not forget this:
"OpenSSH normally does not load liblzma, but a common third-party patch used by several Linux distributions causes it to load libsystemd, which in turn loads lzma."
The "XZ utils backdoor" nearly backdoored every single distro running systemd.
People (including those who tried to plant this backdoor) are going to say: "systemd has nothing to do with the backdoor" but I respectfully disagree.
systemd is one heck of a Rube Goldberg piece of machinery: the attack surface is gigantic given that systemd's tentacles reach everywhere.
With a tinfoil hat on one could think the goal of systemd was, precisely, to make sure the most complicated backdoors could be inserted here and there: "Let's have openssh use a lib it doesn't need at all because somehow we'll call libsystemd from openssh".
Genius idea if you ask me.
What could possibly go wrong with systemd "now just opening a port for openssh" uh? Nothing I'm sure.
Now that said I'm very happy that we've now got stuff like the Talos Linux distribution (ultra minimal, immutable, distro meant to run Kubernetes with as few executables as possible and of course no systemd) and then containers using Alpine Linux or, even if Debian based, minimal system with (supposedly) only one process running (and, once again, no systemd).
Containerization is one way out of systemd.
I can't wait for a good systemd-less hypervisor: then I can kiss Microsoft goodbye (systemd is a Microsoft technology, based on Microsoftism, by a now Microsoft employee).
Thanks but no thanks.
Talos distro, systemd-less containers: I want more of this kind of mindset.
The future looks very nice.
systemd lovers should just throw the towel in and switch to Windows: that's what they actually really want and it's probably no better than they deserve.
jimmaswell · 1d ago
The OpenSSH change is madness if it's not a bug. I hope it's not intentional.
tharos47 · 1d ago
It was done in Ubuntu a few versions ago. Afaik only the listening port config is ignored and is instead set up in systemd
hsbauauvhabzb · 1d ago
Which is exactly the problem. It's not obvious, it's misleading, and its cause is not easily determined just by looking at the config.
What else is lurking that you and I aren’t aware of?
tremon · 11h ago
The problem is that SSH is usually the admin interface to the system. You don't want unnecessary moving parts between you and your remote shell access when you need to troubleshoot a half-running system.
hsbauauvhabzb · 3h ago
I agree with everything but your indication that the problem is limited to only SSH.
imoverclocked · 1d ago
> What else is lurking that you and I aren’t aware of?
All of systemd... and network manager/modem manager/...
I grew up with static configuration files, init scripts, inetd, etc... then grew into Solaris smf, dtrace, zones ... and then Linux implemented systemd. I really miss smf and related tools but systemd is still just meh to me. The implementation feels like a somewhat half-in-each-world brainchild.
Evidlo · 1d ago
> 2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config*. Maybe this is something specific to the container downloaded.
Doesn't the .socket unit point to a .service unit? Why would using a socket be connected to which config sshd reads?
edoceo · 1d ago
I'm assuming this means it doesn't respect the socket config in sshd_config, so you configure the listening port in systemd.
juujian · 1d ago
Have been using Trixie on my laptop for a year (?) now, it has been a very positive experience. I had bought a brand new, very recent ThinkPad, not considering that the relevant drivers would not be in Debian Stable yet. Now on Trixie, having a relatively recent version of everything KDE Plasma is a blessing. Things have changed so much, for the better, particularly regarding Wayland. The experience with Trixie is already better than it ever was for me with Ubuntu (good riddance!), and I cannot believe that this is supposed to be an unstable release. I broke stuff once, and that was my own fault (forcing an update when not all necessary packages were staged yet, learned my lesson on that!).
I am not a fan of that as a default. I'd rather default to cheaper disk space than more limited and expensive memory.
chromakode · 1d ago
For users with SSDs, saving the write wear seems like a desirable default.
Aachen · 1d ago
I have yet to hear of someone wearing out an SSD on a desktop/laptop system (not server, I'm sure there's heavy applications that can run 24/7 and legitimately get the job done), even considering bugs like the Spotify desktop client writing loads of data uselessly some years ago
Making such claims on HN attracts edge cases like nobody's business but let's see
progmetaldev · 1d ago
I think you're 100% correct in that this isn't a normal event to occur. I believe it's probably one of those things where someone felt that setting it to memory is just more efficient in the general case, and they happened to be skilled in that part of development, and felt it added value.
Maybe the developer runs a standard desktop version, but also uses it personally as a server for some kind of personal itch or software, on actual desktop hardware? Maybe I'm overthinking it, or the developer that wrote this code has the ability to fix more important issues, but went with this instead. I've tackled optimization before where it wasn't needed at the time, but it happened to be something I was looking into, and I felt my time investment could pay off in cases where resources were being pushed to their limits. I work with a lot of small to mid-sized businesses that can actually gain from seemingly small improvements like this.
archargelod · 18h ago
I'm using OpenSUSE Tumbleweed that has this option enabled by default.
Until about a year ago, whenever I would try to download moderately large files (>4GB) my whole system would grind to a halt and stop responding.
It took me MONTHS to figure out what the problem was.
Turns out that a lot of applications use /tmp for storing files while they're downloading. And a lot of these applications don't clean up on failure; some don't even move files after success, but extract and copy the extracted files to the destination, leaving even more stuff in temp.
Yeah, this is not a problem if you have 4X more RAM than the size of the files you download. Surely, that's the case for most people. Right?
hiq · 11h ago
How did you figure that this was the problem?
If it's easily reproducible, I guess checking `top` while downloading a large file might have given a clue, since you could have seen that you're running out of memory?
archargelod · 2h ago
I was trying to solve another problem related to mounting, ran `df -h` a couple times and noticed that:
1) tmpfs is mounted to /tmp
2) available size on /tmp is very low
3) my free ram indicator in status bar is red
And then I tried downloading some files while looking at htop. It was immediately obvious that this was the problem causing the hangs.
pluto_modadic · 1d ago
finally they're using a tmpfs. thank goodness <3
dlachausse · 1d ago
> You can return to /tmp being a regular directory by running systemctl mask tmp.mount as root and rebooting.
>The new filesystem defaults can also be overridden in /etc/fstab, so systems that already define a separate /tmp partition will be unaffected.
Seems like an easy change to revert from the release notes.
As far as the reasoning behind it, it is a performance optimization since most temporary files are small and short lived. That makes them an ideal candidate for being stored in memory and then paged out to disk when they are no longer being actively utilized to free up memory for other purposes.
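If you want to keep the tmpfs but bound it, an /etc/fstab line along these lines overrides the default (the 2G cap is just an example):

    # /etc/fstab sketch: /tmp in RAM, capped at 2 GiB
    tmpfs  /tmp  tmpfs  size=2G,mode=1777,nosuid,nodev  0  0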
hsbauauvhabzb · 1d ago
That seems like a bug with those applications which make use of the filesystem instead of performing in-memory operations or using named pipes.
esseph · 22h ago
"bug", more of a chosen design considering hardware constraints when things were designed.
hsbauauvhabzb · 19h ago
So send a PR or have a distro level patch if that’s the issue.
api · 1d ago
Wait... that means a misbehaving program can cause out of memory errors easily by filling up /tmp?
That's a very bad default.
CamouflagedKiwi · 1d ago
A misbehaving program can cause out of memory errors already by filling up memory. It wouldn't persist past that program's death but the effect is pretty catastrophic on other programs regardless.
o11c · 1d ago
That actually is a pretty big difference.
Assuming you're sane and have swap disabled (since there is no way to have a stable system with swap enabled), a program that tries to allocate all memory will quickly get OOM killed and the system will recover quickly.
If /tmp/ fills up your RAM, the system will not recover automatically, and might not even be recoverable by hand without rebooting. That said, systemd-managed daemons using a private /tmp/ in RAM will correctly clear it when killed.
tsimionescu · 1d ago
The sane thing is to have swap enabled. Having swap "disabled" forces your system to swap out executables to disk, since these are likely the only memory-mapped files you have. So, if your memory fills up, you get catastrophic thrashing of the instruction cache. If you're lucky, you really go over available memory, and the OOMKiller kills some random process. But if you're not, your system will keep chugging along at a snail's pace.
Perhaps disabling overcommit as well as swap could be safer from this point of view. Unfortunately, you get other problems if you do so - as very little Linux software handles errors returned by malloc, since it's so uncommon to not have overcommit on a Linux system.
I'd also note that swap isn't even that slow for SSDs, as long as you don't use it for code.
tremon · 11h ago
> Having swap "disabled" forces your system to swap out executables to disk
Read-only pages are never written to swap, because they can be retrieved as-is from the filesystem already. Binaries and libraries are accounted as buffer cache, not used memory, and under memory pressure those pages are simply dropped, not swapped out. Whether you have swap enabled or disabled doesn't change that.
Still, I hope that Debian does the sane thing and sets proper size limits. I recall having to troubleshoot memory issues on a system (Ubuntu IIRC) a decade ago where they also extensively used tmpfs: /dev, /dev/shm, /run, /tmp, /var/lock -- except that all those were mounted with the default size, which is 50% of total RAM. And the size limit is per mountpoint...
tsimionescu · 9h ago
> under memory pressure those pages are simply dropped, not swapped out
This is just semantics. The pages are evicted from memory, knowing that they are backed by the disk, and can be swapped back in from disk when needed - behavior that I called "swapping out" since it's pretty similar to what happens with other memory pages in the presence of swap.
Regardless of the naming, the important part is what happens when the page is needed again. If your code page was evicted, when your thread gets scheduled again, it will ask for the page to be read back into memory, requiring a disk read; this will cause some other code page to be evicted; then a new thread will be scheduled - worst case, one that uses the exact code page that just got evicted, repeating the process. And since the scheduler will generally try to execute the thread that has been waiting the most, while the VMM will prefer to evict the oldest read pages, there is actually a decent chance that this exact worst case will happen a lot. This whole thing will completely freeze the system to a degree that is extremely unlikely for a system with a decent amount of swap space.
pmontra · 1d ago
I've been running with swap off since my first SSD in 2015 or 2016. 16 GB RAM, then 32. No problems at all.
If I see RAM close to 30 GB I restart my browser and go back to 20 GB or less. Not every month.
progmetaldev · 1d ago
Are you running your own system for personal use, a service available to the public, or both? Do you normally see your system used consistently, or does it get used differently (and in random ways)?
Since you state you're running a browser, I assume you mean for personal use. Unfortunately, when you run a service open to the public, you can find all kinds of odd traffic even for normal low-memory services. Sometimes you'll get hit with an aggressive bot looking for an exploit, and a lot of those bots don't care if they get blocked, because they are built to absolutely crush a system with exploits or login attempts where they are only blocked by the system crashing.
I'd say that most bots are this aggressive, because the old school "script kiddies", or now it's just AI-enabled aggressors, just run code without understanding things. It's easier than ever to run an attack against a range of IP addresses looking for vulnerabilities, that can be chained into a LLM to generate code that can be run easily.
pmontra · 22h ago
That's my laptop. I'll check what my customers do on their servers, but all of them have a login screen on the home page of their services. Only one of them have a registration screen. Those are the servers I have access to. Their corporate sites run on WordPress and I don't know how those servers are configured.
Anyway, I'd also enable swap on public facing servers.
tsimionescu · 22h ago
Sure, if your working set always fits in RAM, you won't have problems. You wouldn't have problems with swap enabled, either.
It's only when you're consistently at the limit of how much RAM you have available that the differences start to matter. If you want to run a ~30GB +- 10% workload on a system with 32GB of RAM, then you'll get to find out how stable it is with vs. without swap.
o11c · 1d ago
People keep saying this, yet infinite real-world experience shows that systems perform far better if the OOM Killer actually gets to kill something, which is only possible with swap disabled. In my experience, the OOM killer picks the right target first maybe 70% of the time, and the rest of the time it kills some other large process and allows enough progress for the blameworthy process to either complete or get OOM'ed in turn. In either case, all is good - whoever is responsible for monitoring the process notices its death and is able to restart it (automatically or manually - the usual culprits are: children of a too-parallel `make`, web browsers, children of systemd, or parts of the windowing environment [the WM and Graphical Shell can easily be restarted under X11 without affecting other processes; Wayland may behave badly here]). If you are launching processes without resilient management (this includes "bubble the failure up unto my nth-grandparent handles it") you need to fix that before anything else.
With swap enabled, it is very, very, VERY common for the system to become completely unresponsive - no magic-sysrq, no ctrl-alt-f2 to login as root, no ssh'ing in ...
You also have some misunderstandings about overcommit. If you aren't checking `malloc` failure you have UB, but hopefully you will just crash (killing processes is a good thing when the system fundamentally can't fulfill everything you're asking of it!), and there's a pretty good chance the process that gets killed is blameworthy. The real problems are large processes that call `fork` instead of `vfork` (which is admittedly hard to use) or `posix_spawn` (which is admittedly limited and full of bugs), and processes that try to be "clever" and cache things in RAM (for which there's admittedly no good kernel interface).
===
"Swap isn't even that slow for SSDs" is part of the problem. All developers should be required to use an HDD with full-disk encryption, so that they stop papering over their performance bugs.
CoolCold · 1d ago
+1 from me for
> With swap enabled, it is very, very, VERY common for the system to become completely unresponsive - no magic-sysrq, no ctrl-alt-f2 to login as root, no ssh'ing in ...
It's usually enough to have a couple of times when you need to get into a distant DC / wait a couple of hours for some IPMI to be connected, to learn "let it fail fast and gimme ssh back" in practice vs the theory of "you should have swap on"
tsimionescu · 22h ago
Conversely, having critical processes get OOMKilled in critical sections can teach you the lesson that it's virtually impossible to write robust software with the assumption that any process can die at any instruction because the kernel thought it's not that important. OOM errors can be handled; SIGKILL can't.
tsimionescu · 22h ago
My only point is that you should have at least a few gigs of swap space to smooth out temporary memory spikes, possibly avoiding random processes getting killed at random times, and making it very unlikely that the system will evict your code pages when it's running close to, but below, the memory limit. The OOMKiller won't kick in if you're below the limit, but your system will freeze completely - virtually every time the scheduler runs, one core will stall on a disk read.
Conversely, with a few GB of old data paged out to disk, even to a slow HDD, there is going to be much, much less thrashing going on. Chances are, the system will work pretty normally, since it's most likely that memory that isn't being used at all is what will get swapped out, so it's unlikely to need to be swapped in any time soon. The spike that caused you to go over your normal memory usage will die down, memory will get freed naturally, and the worst you'll see is that some process will have a temporary spike in latency some time later when it actually needs those swapped-out pages.
Now, if the spike is too large to fit even in RAM + swap, the OOMKiller will still run and the system will recover that way.
The only situation where you'll get in the state you are describing is if virtually all of your memory pages are constantly getting read and written to, so that the VMM can't evict any "stale" pages to swap. This should be a relatively rare occurrence, but I'm sure there are workloads where this happens, and I agree that in those cases, disabling swap is a good idea.
> If you aren't checking `malloc` failure you have UB, but hopefully you will just crash (killing processes is a good thing when the system fundamentally can't fulfill everything you're asking of it!), and there's a pretty good chance the process that gets killed is blameworthy.
This is a very optimistic assumption. Crashing is about as likely as some kind of data corruption for these cases. Not to mention, crashing (or getting OOMKilled, for that matter) are very likely to cause data loss - a potentially huge issue. If you can avoid the situation altogether, that's much better. Which means overprovisioning and enabling some amount of swap if your workload is of a nature that doesn't constantly churn the entire working memory.
> "Swap isn't even that slow for SSDs" is part of the problem. All developers should be required to use an HDD with full-disk encryption, so that they stop papering over their performance bugs.
You're supposed to design software for the systems you actually target, not some lowest common denominator. If you're targeting use cases where the software will be deployed on 5400 RPM HDDs with full disk encryption at rest running on an Intel Celeron CPU with 512 MB of RAM, then yes, design your system for those constraints. Disable swap, overcommit too, probably avoid any kind of VM technology, etc.
But don't go telling people who are designing for servers running on SSDs to disable swap because it'll make the system unusably slow - it just won't.
NekkoDroid · 1d ago
> If /tmp/ fills up your RAM
tmpfs by default only uses up to half your available RAM unless specified otherwise. So this isn't really a consideration unless you configure it to be a consideration you need to take into account.
(Systemd also really recently (v258) added quotas to tmpfs, and IIRC it's set by default to 80% of the tmpfs, so it is even less of a problem)
If each of those can take up 50% of ram, this is still a big problem. I don't know what defaults Debian uses nowadays, because I have TMPFS_SIZE=1% in /etc/default/tmpfs so my system is explicitly non-default.
NekkoDroid · 6h ago
Sure, but counterpoint: if a process is already writing that much into multiple of those directories, who knows what it's writing in other directories that aren't backed by RAM.
All those arguments would be useful if we somehow could avoid the fact that the system will use it as "emergency memory" and become unresponsive. The kernel's OOM killer is broken for this, and userland OOM daemons are unreliable. `vm.swappiness` is completely useless in the worst case, which is the only case that matters.
With swap off, all the kernel needs to do is reserve a certain threshold for disk cache to avoid the thrashing problem. I don't know what the kernel actually does here (or what its tunables are), because systems with swap off have never caused problems for me the way systems with swap on inevitably do. The OOM killer works fine with swap off, because a system must always be resilient to unexpected process failure.
And worst of all - the kernel requires swap (and its bugs) to be enabled for hibernation to work.
It really wouldn't be hard to design a working swap system (just calculate how much swap to keep for its different purposes, and launch the OOM killer earlier), but apparently nobody in kernel-land understands the real-world problems enough to bother.
em-bee · 1d ago
> the kernel requires swap (and its bugs) to be enabled for hibernation to work
this one gets me irritated every time i think about it. i don't want to use swap, but i do want hibernation. why is there no way to disable swap without that?
hmm, i suppose one could write a script that enables an inactive swap partition just before shutdown, and disables it again after boot.
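a crude sketch of that idea, assuming a dedicated swap partition labelled "hibswap" that stays inactive day to day (the kernel still needs resume= pointing at it):

    swapon /dev/disk/by-label/hibswap && systemctl hibernate
    # ...after resume:
    swapoff /dev/disk/by-label/hibswap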
sigio · 1d ago
I never want to use hibernation, since then I have to re-enter my disk encryption passphrase at resume time, and have to wait longer for both suspend and resume because it needs to sync up to 48GB to/from disk (and I don't want to waste 48GB of disk space for swap space/hibernation). Suspend to RAM is fine, I can keep the system suspended for a couple of days without issues, but it only needs to survive a long weekend at most.
Resume from RAM is about instant, and then just needs a screensaver unlock to get back to work.
chronid · 1d ago
And i want to use hibernation, as I don't mind putting my disk encryption passphrase once a day as the price of not risking having my laptop with a completely drained battery on Monday morning due to 1% battery drain/h of s2idle in my 64GB RAM configuration.
You can use suspend+hibernate to accomplish that and it works well. Unless the gods of kernel lockdown decide you cannot for your own good (and it doesn't matter if your disk is fully encrypted, you're not worthy anyway) of course. It's their kernel running on your laptop after all.
bigstrat2003 · 1d ago
> Assuming you're sane and have swap disabled (since there is no way to have a stable system with swap enabled)
What the heck are you talking about? Swap is enabled on every Linux system I manage (servers, desktop etc) and it's perfectly stable.
nonameiguess · 1d ago
User paulv already posted this 3 hours ago in a comment currently lower than this one, but tmpfs by default can't use all of your RAM. /tmp can get filled up and be unavailable for anything else to write to, but you'll still have memory. It won't crash the entire system.
paulv · 1d ago
The default configuration for tmpfs is to "only" use 50% of physical ram, which still isn't great, but it's something.
foresto · 1d ago
To be clear, that 50% (or whatever you configure) is a limit, not a constant.
willemlaurentz · 1d ago
Warning for those running Debian and Dovecot under stable.
Not only that: Dovecot 2.4 will also remove the functionalities of dsync, replicator and director [1]. This is frustrating and a big loss as these enabled e.g. very simple and reliable two-node (active-active) redundant setups, which will not be possible anymore with 2.4.
I have used it for years to achieve HA for personal mail servers and will now have to look for alternatives -- until then I will stick with Debian Bookworm and its Dovecot 2.3.
> I use it for years to achieve HA for personal mail servers and will now have to look for alternatives
Yeah, Dovecot seems to be going hardline commercial. Basically the open-source version will be maintained as a single-server solution. Looks like if you want supported HA etc. you'll have to pay the big bucks.
There is a video up on YouTube[1] where their Sales Director is speaking at a conference and he said (rough transcript):
"there will be an open source version, but that open source version will be maintained for single server use only. we are actually taking out anything any actually kinda' involves multiple servers, dsync replication and err some other stuff. so dovecot will be a fully-featured single node server"
Have you looked at Stalwart[2] as an alternative ?
I usually fix those kind of problems by running the offending software in a docker container, with the correct version. Sometimes the boundaries of the container create their own problems. Dovecot 2.3 is at https://hub.docker.com/r/dovecot/dovecot/tags?name=2.3
(I love Debian) It's going to take a bit for me to get used to having a current version of Python on the system by default.
doctoboggan · 1d ago
Probably still good practice to use a venv and a Python version manager (uv could be used for both).
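For example (a sketch; whether uv reuses the system interpreter or fetches a managed one depends on its python-preference settings):

    uv venv --python 3.13      # create .venv with a matching interpreter
    uv pip install requests    # example package, installed into that .venv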
tcdent · 1d ago
Obvs uv, but I'm not going to install a dupe version of python with pyenv if the system version matches my target.
seabrookmx · 13h ago
Why use pyenv if you have uv?
CopyOnWrite · 17h ago
Tried to upgrade from Bookworm to Trixie for my desktops end of April.
The only thing that broke was the desktop background; everything else worked great w/o any issue, and the upgrade even solved some trouble I'd had to fix by hand on Bookworm (WiFi sleep mode), so I upgraded all my physical and virtual machines.
Had no issues at all; the only annoying thing compared to running stable was the amount of updated packages, which again ran through w/o any hitch, and for which I have to take full responsibility. ;-)
Highly recommended if you want a Linux distribution for a server or a desktop which simply works and keeps working.
sugarpimpdorsey · 1d ago
Fair warning: the Trixie update does not allow you to roll back. It is in theory possible but practically it not only fails every single time, but leaves the system in an inconsistent and broken state. (Code for 'soon to be unbootable').
What this means is when you find out stuff breaks, like drivers and application software, and decide the upgrade was a bad idea, you are fucked.
More notably, some of the upgrade is irreversible - like MySQL/MariaDB. The database binary format is upgraded during the upgrade. So if you discover something else broke, and you want to go back, it's going to take some work.
Ask me how I know.
gilbertbw · 1d ago
The page about upgrading [0] does have this warning:
Back up your data
Performing a release upgrade is never without risk. The upgrade may fail, leaving the system in a non-functioning state. USERS SHOULD BACKUP ALL DATA before attempting a release upgrade. DebianStability contains more information on these steps.
Yet Windows will let you roll back an upgrade with a single click within 10 days.
Of course anyone can restore from backups. It's a pain and it's time consuming.
My post serves more as a warning to those who may develop buyer's remorse.
sellmesoap · 1d ago
I always find the rough edges on upgrading Windows (and macOS); I've had several computers that take 3-4 hours to hit a roadblock, give an inscrutable error message and roll back. I feel spoiled using NixOS (once you get over the learning curve)
a5c11 · 19h ago
If you keep your /home on a separate partition, you can basically reinstall the whole system without much effort. It's good to do that from time to time. etckeeper is very helpful too. Lots of desktop apps are AppImages nowadays, so if you keep them in the home directory, they'll persist.
necheffa · 1d ago
This is what LVM/btrfs/ZFS snapshots were invented for.
Windows is using Volume Shadow Copies, which for the purposes of this discussion, you can think of as roughly equivalent.
jraph · 1d ago
You might like snapshot based solutions like Snapper
aitchnyu · 22h ago
I once tried Spiral Linux, a light fork of Sid with a bundled Snapper stack. I switched to a giant-userbase distro, Fedora, mostly because Plasma was bad with 5k screens. Are there mainstream distros with easy rollback?
jraph · 21h ago
openSUSE should be one of them.
wiz21c · 1d ago
as much as I love Debian (been a faithful user for 25 years or so, no more Windows at home since then), that Windows ability is just really cool and Debian is still not on par, I believe...
42lux · 1d ago
You know imaging your machine is still an option...
mikae1 · 1d ago
But you can't do that on a live system as you can with Windows or macOS. Not a problem for a pre-release-upgrade image, perhaps. But I'm really missing this feature from macOS.
pak9rabid · 1d ago
You can if you're using LVM. Take a snapshot of the logical volume your system is on, then run `dd' against the snapshot, as it's essentially a frozen point-in-time.
I've used this trick many times in a live, rw environment.
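Roughly (a sketch; VG/LV names and sizes are placeholders, and the VG needs free extents for the snapshot):

    lvcreate --snapshot --size 10G --name root-snap /dev/vg0/root
    dd if=/dev/vg0/root-snap of=/backup/root.img bs=4M status=progress
    lvremove /dev/vg0/root-snap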
hysan · 1d ago
Depends on your filesystem. For example, I certainly can as I’m using btrfs. I’m also using Timeshift for easy management of snapshots. As others have mentioned, there are other choices too like Snapper that all work well.
SAI_Peregrinus · 1d ago
You can snapshot the filesystem if you're using BTRFS, ZFS, or another Copy-on-Write filesystem.
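e.g. on btrfs, a read-only pre-upgrade snapshot is a one-liner (assuming / is a btrfs subvolume and /.snapshots exists):

    btrfs subvolume snapshot -r / /.snapshots/pre-trixie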
crtasm · 1d ago
Linux Mint offers rollbacks, I have snapshots going back a point release and a major version.
Too many bits of 'advice' on Stack Overflow, etc. claiming it's possible as top Google results.
I'm here to say unequivocally: it does not work, will not work, and will leave the system in an irreversibly broken state. Do not attempt.
sigio · 1d ago
Well... it's always been possible: if you use LVM and create a snapshot before upgrading, you can revert to your snapshot in case it breaks.
But this isn't something that works 'out of the box'... that's why we make backups, though I can't remember a dist-upgrade ever significantly bricking my systems.
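The revert itself is a snapshot merge (a sketch; names and sizes are placeholders, and merging a mounted root only completes on the next activation, i.e. after a reboot):

    lvcreate --snapshot --size 10G --name pre-upgrade /dev/vg0/root
    # ...dist-upgrade, decide it went badly...
    lvconvert --merge vg0/pre-upgrade
    reboot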
yjftsjthsd-h · 1d ago
In all fairness... How would that work? Not even just on Debian; in the general case, I don't see how to avoid that other than full filesystem snapshots or backups of some sort. Even on, say, a NixOS system where rolling back all the software and config (basically, /, /usr, and /etc) to exactly its old config is as easy as rebooting and picking the old generation, databases will still have migrated their on-disk format.
JdeBP · 1d ago
Indeed. Snapshots. And they are a breeze on operating systems where ZFS for everything is available. It's not like the Windows feature of the same name, which I suspect is in part what makes people wary of the idea. That works rather differently. A ZFS snapshot completes in seconds.
paulv · 1d ago
> Ask me how I know.
What problems did you have that made you want to roll back the update?
sugarpimpdorsey · 1d ago
I had some containerized application software break and start misbehaving in odd ways which was indicative of a deeper incompatibility issue. Possibly GPU related. No time to debug, had to roll it back.
This was complicated by the fact that the machine also hosted a MySQL database which could not be easily rolled back because the database was versioned up during the upgrade.
tremon · 11h ago
Wait, the containerized application broke because of a host upgrade? That's some leaky container you have there.
This sounds like a business setting, so this sounds like a good opportunity to advocate for testing hardware, a testing budget, a rollout plan, and a sound backup strategy.
bobmcnamara · 1d ago
For me it's usually been GPU driver compatibility.
esaym · 1d ago
You should be using an lvm snapshot. You are not even making a valid complaint.
dsr_ · 16h ago
A few months prior to every Stable release, people start raving about how awesome running Testing is, how it has up-to-date packages and fixes their kernel issues and solves world hunger and capitalism.
This is because Testing has done a soft freeze, then a hard freeze, then is prepped to become the new Stable. During that process, nothing new can be added to Testing.
Then, one day, Stable is released and the floodgates on Testing re-open. The people who specified "Trixie" are fine: they are now running Stable. The people who specified "testing" in their apt sources, or are installing Testing based on the wonderful reports of just a month ago, are in for a terrible experience. And... anyone who installed Stable as "stable" instead of "bookworm" is now getting upgraded shortly after release day, instead of at their convenience.
This happens every cycle.
Never recommend that anyone new to Debian should install Testing, even if it's about to become Stable. Unless you are working on throwaway systems, always specify a codename for release, not "stable" or "testing" or "unstable".
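Concretely, that means writing the codename into your sources rather than a suite name; a sketch for trixie (components trimmed):

    deb http://deb.debian.org/debian trixie main
    deb http://deb.debian.org/debian trixie-updates main
    deb http://security.debian.org/debian-security trixie-security main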
hiq · 11h ago
Good advice!
I'm on stable like 3/4 of the time, until there's some newer package version I want and that happens to be in testing, at which point I switch (using the codename as you suggest instead of Testing). If I don't have a specific need, I tend to switch during the soft or hard freeze, out of curiosity, because I never had problems doing that.
kristianp · 7h ago
One problem I have with major version upgrades of Linux distributions is the number of changes to different programs/systems. Every 18 months I have to learn how to do a number of things differently. Program x disappears, you are told you have to use y now. Linux has been around for decades at this point. Something that old should be considered mature and not need major changes.
Do we really need a new major version of gtk/qt or a different firewall program, when all those things have existed for many years?
elcapitan · 1d ago
I've been running Trixie since I bought my Framework laptop last September, and it has been great. First Linux experience after 20 years of Mac, and everything has been incredibly stable.
Now I need to figure out what happens when my testing suddenly is stable, and how to get on the next testing, I guess.
juujian · 1d ago
There are basically two different configurations. If your `sources.list` is explicitly on Trixie, it will stay there. If it is on testing, then you will get the next testing release in time.
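Roughly, the difference in /etc/apt/sources.list (one-line style; mirror and components trimmed for brevity):

    # stays on trixie even after it becomes stable and forky opens up
    deb http://deb.debian.org/debian trixie main

    # follows whatever is currently "testing", so it rolls on to forky automatically
    deb http://deb.debian.org/debian testing main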
bootsmann · 1d ago
Debian upgrading Podman to a version above 4.3.1 hopefully also means we get Quadlet support on raspberry pis. Took them forever to add this.
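For reference, a Quadlet unit is just a small file that podman (4.4+) turns into a systemd service; a sketch with placeholder image and port:

    # ~/.config/containers/systemd/web.container
    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

    # then:
    systemctl --user daemon-reload && systemctl --user start web.service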
ducktective · 1d ago
Can anyone experienced with debian package development, point me to some valid, recent and Best Practice™ guides or blog posts explaining how to package stuff for Debian?
For actually uploading new packages to the archive you need to be a "DD" (Debian Developer), which is a somewhat more involved process to go through. "DM" (Debian Maintainer) is easier and can already do lots of things. It's also possible to start out by finding an existing DD that sponsors your upload, i.e. checks your final packaging work and, if it looks alright, will upload it in your name to the repositories.
You might also check out the wiki of Debian, it's sometimes a bit dated and got lots of info packed in, but can still be valuable if you're willing to work through the outdated bits. E.g.:
https://wiki.debian.org/DebianMentorsFaq
o11c · 1d ago
The native Debian package tooling is very far from sane, even compared to other distros - and they actively refuse to make it saner (instead just adding layers of cruft without addressing the core problems). You're probably best off using `checkinstall` or similar, and adding dependencies by hand.
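If you go the checkinstall route, the flow is roughly this (package name and version are whatever you pick):

    ./configure && make
    sudo checkinstall --pkgname=myapp --pkgversion=1.0   # runs `make install`, but wraps the result in a .deb

The resulting package can then be removed or reinstalled through dpkg/apt like anything else, which is the main point over a raw `make install`.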
em-bee · 1d ago
is RPM that much saner? which RPM based distribution comes with long term support suitable for servers that also includes btrfs? (i used to use centos, but since red hat removed btrfs from the kernel, refusing to support it, i had to switch to debian, because i depend on btrfs support)
o11c · 1d ago
Note that I'm only talking about package-building itself here, not the practical distributions built thereupon; if that is considered, the tradeoffs are quite different. IMO the deb-using world is more useful than the rpm-using world, especially for non-server environments in particular. This is also where Nix beat Guix despite Nix's packaging language making TeX look sane.
But yes, RPM itself is better than Deb if only because there's a single .spec file rather than a sea of embedded nonsense. It's still not as nice as many "port" packaging systems (e.g. the BSDs, but also Gentoo), but most of those cheat by not having to deal with arbitrary binary packages. Still, binary packages are hardly an excuse for the ludicrous contortions the standard deb-building tools choose to require.
yjftsjthsd-h · 1d ago
> which RPM based distribution comes with long term support suitable for servers that also includes btrfs?
Sounds like OpenSUSE to me. I tend to favor the fast-updating versions, but I'm pretty sure openSUSE Leap is exactly what you're asking for.
spookie · 22h ago
You can also use opensuse slowroll. Like Tumbleweed but only monthly updates.
homebrewer · 1d ago
Oracle Linux (gasp). They employ some of the main developers of btrfs, "their" distribution is just a RHEL rebuild with some patches (including btrfs), and it is very quick at delivering updates (they're usually several hours behind RHEL, while the next best — AlmaLinux — takes a day or two. Other rebuilds, very much including the somehow heavily hyped Rocky, are much slower).
I don't think there are many alternatives. OpenSUSE isn't supported for very long, and there really isn't anything else if you want btrfs, no Debian or its derivatives, and fire & forget kind of distribution.
Edit: Also look at Alpine Linux, it supports btrfs and has one of the best package formats that is an absolute joy to write (way easier than rpm or deb).
It's pretty different in some areas (no systemd and musl being two examples), check if that's fine for you.
spookie · 22h ago
Well, openSUSE Leap 16.0 will be launched on October 1st, and it will be supported for some time. At least 7 years.
homebrewer · 16h ago
I don't think they promise more than three years? (Not a criticism in the slightest, I don't demand anything from unpaid volunteers.)
"RHEL" is supported for 10, and if Oracle screws us over (I can believe in that possibility), ELevate lets you migrate sideways to any supported alternative.
It really depends on the use case.
spookie · 6h ago
It uses semantic versioning. Maybe that's where our different assessments lie.
For example, 15.6 was still stuck on GCC 7. Guess when 15.0 was released. Also, stuck on Python 3.6, which was released around the same time.
em-bee · 1d ago
thank you for that answer. yes oracle is well hmm :-o
Pumped for this. I was (and am) massively impressed with Debian 12. I've been an on-again off-again Linux user since around 2003, but this release was the one that finally got me to switch completely. The jank factor actually seems to be less than that of Windows and macOS at this point, which I never thought I'd say.
move-on-by · 9h ago
I’m most excited about wcurl and the --update flag for apt. Everything is fine without them, but they're nice QoL enhancements.
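If I read the changelogs right, the day-to-day usage is roughly this (treat the apt invocation as an assumption about where the new flag goes):

    wcurl https://example.org/some-image.iso   # curl wrapper with sane download defaults
    apt upgrade --update                       # assumed usage: refresh package lists and upgrade in one step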
endorphine · 1d ago
So hyped for this release, since it will result in nice bugfixes (autorandr, Polybar) and me simplifying my dotfiles. Many thanks to all Debian developers!
bondant · 18h ago
Does someone know if there's an increase in memory usage for Gnome or if it's about the same? I am wondering if I can update an old computer with only 4GB of memory.
CopyOnWrite · 17h ago
Running Trixie/Gnome on a machine with 4GB (3.6GB usable) w/o any trouble.
Gnome runs better than ever, the problem is in my experience usually the web browser when it comes to RAM usage. Install ZRAM and the machine should be perfectly usable, if your usage patterns are similar to mine.
(Only thing which annoys me is that I cannot find an excuse to buy a better computer, because everything simply works on the machine...)
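The ZRAM part, for anyone wanting to replicate it, is roughly this with Debian's zram-tools (values are illustrative):

    apt install zram-tools

    # /etc/default/zramswap
    ALGO=zstd       # compression algorithm
    PERCENT=50      # size of the zram device as a share of RAM

    systemctl restart zramswap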
bondant · 16h ago
Thanks for the feedback!
h4kunamata · 1d ago
What to expect??
The usual: light, stable and functional.
I run the older version as my DNS servers and homelab stuff, open-source 3D printer, etc.
Debian just works with no dramas.
I run it in text mode only so boot takes what ~3-5 seconds.
traceroute66 · 17h ago
I'm very fond of Debian, fantastic OS.
But what I really wish is that they would have some more automation options suitable for the modern cloud world, e.g. something similar to Butane/Ignition as used on CoreOS.
I know there's cloud-init but that's hacky, verbose and opinionated compared to something like Butane/Ignition.
I also know there's ansible etc. in Debian distro but that's kind of yesterday's solution.
unixhero · 21h ago
Debian has two long-running bugs.
First, when you su to root, a different PATH is loaded. One has to use `su -` instead of plain `su` in order to get the regular bin and sbin directories on the PATH. It is flagged, but the Debian team won't fix it. As a user it is not great.
Secondly, Debian now ships the Raspberry Pi firmware package even on Intel installs. When you enable backports to install a newer kernel, the upgrade fails due to this package being there. It is a major hassle to fix, and without ChatGPT/competitors it is very easy to get lost troubleshooting this.
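For anyone hitting the first issue, the difference looks roughly like this on a stock Debian install (the PATH values shown are the typical defaults):

    $ su              # keeps the calling user's PATH; no sbin directories
    # echo $PATH
    /usr/local/bin:/usr/bin:/bin:/usr/games

    $ su -            # login shell; root's own profile sets the full PATH
    # echo $PATH
    /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin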
demetris · 1d ago
Trixie is SUPER for desktop use!
I’ve been on sid for the last 10 months for my laptop (old T450s) and my secondary desktop, and it is really fun.
There are annoyances but they are not related to Debian itself.
FIRST
I decided it is time to switch to Wayland. Now my favorite run-or-raise app (Kupfer) cannot do run-or-raise. But there is a really nice extension to do run-or-raise on GNOME without the aggressive disruption of the Activities overview: Switcher. The other thing that is difficult on Wayland is text expansion. I have not found a solution for that part.
SECOND
The annoying to infuriating things that GNOME likes doing sometimes. But that is a constant. Nothing new.
Congrats and thanks to all the Debian people!
vanviegen · 1d ago
Just using KDE solves both your problems.
demetris · 1d ago
Ahahahaha!
The reply writes itself, doesn’t it? :-D
I like GNOME more though, in general. I just want GNOME without some of the unfathomable stuff, and with the progress KDE has made with Wayland.
spookie · 22h ago
Fair, but give it a try in the meantime. Start by using the minimal one. It has its upsides and downsides. I really just wish I didn't rely so heavily on extensions in GNOME. That, and being able to add things to the Nautilus context menu in a sane manner.
snvzz · 1d ago
This is the first release with support for RISC-V, at the same level as arm64 or amd64.
For half a year now, I've run trixie on RISC-V (VisionFive 2 board), with ZFS root, without issue.
josteink · 1d ago
Just upgraded my laptop the other day.
sway and/or libinput now supports touchpad gestures, so you can configure three-finger swiping between workspaces.
Very much appreciated.
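In sway that's a couple of config lines (sway 1.8+; swap the directions if they feel backwards to you):

    # ~/.config/sway/config
    bindgesture swipe:3:right workspace prev
    bindgesture swipe:3:left  workspace next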
swayvil · 1d ago
I installed it 5 days ago. Changed my sources and upgraded.
It's got the latest Angband (4.2.5). Homestyle SDL ui.
anthk · 1d ago
Upgrade to trixie and enable backports. You'll maybe get the latest kernel, Mesa, LibreOffice, browsers and whatnot without hurting the rest of the system.
1oooqooq · 1d ago
why even include Intel xeon cpu drm? are xeons still available?
seabrookmx · 13h ago
Lol what?
Yes, every data center in the world still runs on Intel Xeon because AMD can't get a big enough allocation at TSMC to meet demand.
The reports of Intel's death are greatly exaggerated.
[1] https://manpages.debian.org/testing/systemd/systemd.net-nami...
And sure, one can pin interfaces to custom names, but why should anybody have to bother with such things?!
I like systemd a lot, but this is one of the things they fumbled big time, and seemingly they still aren't done.
Pinning interfaces by their MAC to a short and usable name would, e.g., have been much more stable than doing it by PCI slot, which firmware updates, new hardware, a newer kernel exposing newer features, ... change rather often. This works well for all but virtual functions, but those are sub-devices of their parent interface anyway and can just be named with a suffix added to the parent name.
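For what it's worth, that kind of MAC-based pin is exactly what a .link file can express today; a minimal sketch (MAC and name are placeholders):

    # /etc/systemd/network/10-lan0.link
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=lan0

After a reboot (plus an initramfs update if the NIC matters during early boot), the interface shows up as lan0 no matter which slot it sits in.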
but you also want to be able to change a card in a server without the device name changing. at least that used to be an issue in the past.
Note that the naming scheme is in control of systemd, not the kernel. Even if it is passed on the kernel commandline.
And note that cgroupv1 also still works in the kernel just fine; only the part that systemd controlled was removed from systemd. You can still boot with cgroupv1 support on, e.g., Alpine Linux and OpenRC as init 1. So I'm not sure that lessens my concerns about there being no guarantees for older naming-scheme versions; maintaining triple digits of them sure has its cost too.
And don't get me wrong, sunsetting cgroupv1 was reasonable, and while it was a lot of churn, it at least was a one-time thing. The network interface naming situation is periodic churn, guaranteed to bite you every now and then just by using the defaults.
Looking myself for options to keep a Debian bare metal server I admin from going deaf and mute the next time I upgrade it... It still uses an /etc/network/interfaces file that configures a bridge for VMs to use, and the bridge_ports parameter requires an interface name which, when I upgraded to Bookworm, changed.
At this rate maybe I'll write a script that runs on boot and fixes up that file with whatever interface it finds, then restarts the network.
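A rough sketch of that boot-time hack, assuming a single physical NIC and ifupdown (a .link pin is the cleaner fix, since then the name never changes in the first place):

    #!/bin/sh
    # point bridge_ports at whatever en*/eth* interface exists, then restart networking
    iface=$(ls /sys/class/net | grep -E '^(en|eth)' | head -n 1)
    [ -n "$iface" ] || exit 1
    sed -i "s/^\([[:space:]]*bridge_ports[[:space:]]\+\).*/\1$iface/" /etc/network/interfaces
    systemctl restart networking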
It was deprecated for this nonsense in systemd.
Yes, there were edge cases in the Debian scheme. Yet it did work with VMs (as most VMs kept the same MAC in config files), and it was easy to maintain if you wanted 'fresh'. Just rm the pin file in the udev dir. Done.
Again it worked wonderful on every VM, every bare metal system I worked with.
One of the biggest problems with systemd is that it seems to be developed by people who have no real-world, industrial-scale admin experience. It's almost like a bunch of DEVs got together, couldn't understand why things were "so confusing", and just figured "Oh, it must be a mistake".
Nope.
It's called covering edge cases, ensuring things are stable for decades, because Linux and the init system are the bottom of the stack. The top of the stack changes like the wind in spring, but the bottom of the stack must change in an immensely stable, consensus-driven way -- I repeat, stable.
Systemd just doesn't "get" that.
Under RH-based systems the ifcfg-* files had a HWADDR variable, so if you swapped a card you could get the new MAC address and plug it in there and get the same interface name. There was also udevd rules where you map names to particular hardware, including particular MACs.
> Real world enterprise users wanted this, it wasn't an arbitrary design choice.
As a real world sysadmin, working now a few decades in this field (starting with non-EL-RH, then BSD, then Solaris, then RHEL, Debian, and now Ubuntu), I have never wanted this.
Some real world users asked for a fix. They did not mean they asked specifically for this fix.
There were other ways to handle this.
With Debian's system, you could wipe the state files, and for example eth0/etc would be reassigned per initialization order. Worked fine.
Even if you didn't like that, pre-Systemd udev allowed assigned by a variety of properties, including bus identifiers.
It was merely that Redhat, as usual, was so lacking in sophistication, unlike Debian.
Welcome back, eth0. :)
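For reference, the pre-systemd persistent-net rule looked roughly like this (MAC is a placeholder); deleting the generated file and rebooting reassigned names in probe order, which is the "wipe the state files" reset mentioned above:

    # /etc/udev/rules.d/70-persistent-net.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"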
systemd also changes which naming policies are the default and what it considers as input; it has always done that, but started versioning it with v238 [0]. Due to that, the HW can stay exactly the same but names still change. I see this in VMs that stay exactly the same: no software update, no change in how the QEMU CLI gets generated, really nothing changed from the outside virtual-HW POV, yet the interface name still changes.
The underlying problem was a real one, but the solution seems like a bit of a sunk cost fallacy, and it added more problem dimensions than previously existed.
Besides, even if the HW did change, shouldn't a _predictable_ naming scheme be robust enough not to care about that, as long as the same NIC is still plugged in somewhere?
Disclaimer, as stated elsewhere: I really like systemd, I'm not one that speaks out against it lightly, but the IF naming is not something they got right, but rather made worse for the default case. Being able to easily pin interface names through .link files is great, but requiring users to do that or have no network after an upgrade, especially for simple one-NIC use cases in a completely controlled environment like a VM is just bonkers.
[0]: https://www.freedesktop.org/software/systemd/man/latest/syst...
Regarding your rhetorical question about "the same NIC", I think the problem is in determining whether the NIC is the same, and it is not an easy one to solve. I remember that older Suse Linux versions used to pin the interface name to the NIC's MAC address in an udev rule file that got autogenerated when a NIC with a given MAC first appeared on the system, but they stopped doing that.
And for the other case you can still fall back to the other policies; it will still be much more stable by default.
Please note that I don't say that MAC is perfect, but using something that is actually tied to the NIC itself would fare much better by default compared to the NIC's position as determined by a bunch of volatile information, which normally does not matter to me at all, as e.g. I will always use a 100G NIC as the ceph private network and the 25G ones as the public one, no matter where they are plugged in. That someone configures something by location is the exception, not the norm.
I use Debian Stable on almost all the systems I use (one is stuck on 10/Buster due to MoinMoin). I installed Trixie in a container last week, using an LXC container downloaded from linuxcontainers.org [1].
Three things I noted on the basic install :
1) Ping didn't work due to changed security settings (iputils-ping) [2]
2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config*. Maybe this is something specific to the container downloaded.
3) Systemd-resolved uses LLMNR as a name lookup alternative to DNS, and pinging a firewalled host failed because the lookup seemed to be LLMNR accessing TCP port 5355. I disabled LLMNR.
Generally, Debian version updates have been successful with me for a few years now, but I always have a backup, and always read the release notes.
[1] https://linuxcontainers.org
[2] https://www.debian.org/releases/trixie/release-notes/issues....
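Regarding point 3, disabling LLMNR system-wide is a one-line change in resolved.conf on a stock install:

    # /etc/systemd/resolved.conf
    [Resolve]
    LLMNR=no

    # apply it
    systemctl restart systemd-resolved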
sshd still reads /etc/ssh/sshd_config at startup. As far as I know, this is hard-coded in the executable.
What Debian has changed happens before the daemon is launched: the service is socket activated. So, _if you change the default port of sshd_ in its config, then you have to change the activation:
- either enable the sshd@service without socket activation,
- or modify the sshd.socket file (`systemctl edit sshd.socket`) which has the port 22 by default.
Since Debian already has an environment file (/etc/default/ssh), which is loaded by this service, the port could be set in a variable there and loaded by the socket activation. But then it would conflict with OpenSSH's own files. This is why I've always disliked /etc/default/ as a second level of configuration in Debian.
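Concretely, the drop-in route looks something like this; on Debian the units are named ssh.socket/ssh.service (elsewhere possibly sshd.*), and port 2222 is only an example:

    # create a drop-in for the socket unit
    systemctl edit ssh.socket

    # drop-in contents: the empty ListenStream= clears the default port 22,
    # the second line adds the new port
    [Socket]
    ListenStream=
    ListenStream=2222

    # apply
    systemctl restart ssh.socket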
The SSH server being a socket unit with systemd doing all of the socket parallelism-limiting and accepting was one of the earliest examples of socket activation ever given in systemd. It was in one of Lennart Poettering's earliest writings on the subject back in 2011.
* https://0pointer.de/blog/projects/inetd.html
And even that wasn't the earliest discussion of this way of running SSH by a long shot, as this was old news even before systemd was invented. One can go back years earlier than even Barrett's, Silverman's, and Byrnes's SSH: The Secure Shell: The Definitive Guide published in 2005, which is one of many places explaining that lots of options in sshd_config get ignored when SSH is started by something else that does all of the socket stuff.
Like inetd.
This has been the case ever since it has been possible to put an SSH server together with inetd in "nowait" mode. Some enterprising computer historian might one day find out when the earliest mention of this was. It might even be in the original 1990s INSTALL file for SSH.
I want systemd nowhere fucking near my NTP or DNS config.
(used it before, mostly to learn. went to debian for new laptop. gave up after fighting systemd. I'm aware of devuan and artix, but gentoo just worked (after all the time spent))
https://forum.qubes-os.org/uploads/db3820/original/2X/c/c774...
Once install is done, login and save this file:
After that: reboot. Then: there are a few edge cases, packages which require systemd, but I've been running thousands of systems including desktops this way for a decade. Yes, I also run thousands of systems with systemd too.
Moreover, there are the odd one or two unrelated packages that just happen to have the string "systemd" in their names. (-:
Don't get me wrong, systemd is cool. But my god, people really abuse it. Especially distros. Some really make it hard to understand what service is in actual control. Why wrap daemons in daemons in daemons? With the worst possible names and descriptions to boot.
If you have specific issues, please file them over at systemd's GitHub issue tracker.
Any change can introduce regressions or break habits. The move toward socket activation for sshd is part of a larger change in Debian. I don't think the Debian maintainers changed that just for the fun of it. I can think of two benefits:
+ A service can restart without interruption, since the socket will buffer the requests during the restart.
+ Dependencies are simpler and faster (waiting for a service to start and accept requests is costly).
My experience is that these points largely outweigh the downsides (the only one I can think of is that the socket could be written in two places).
Total guess, mind you.
Yeah, but requiring a service's response is why it's a dependency in the first place, no?
I think it doesn't outweight the downside. Let's not forget this:
"OpenSSH normally does not load liblzma, but a common third-party patch used by several Linux distributions causes it to load libsystemd, which in turn loads lzma."
The "XZ utils backdoor" nearly backdoored every single distro running systemd.
People (including those who tried to plant this backdoor) are going to say: "systemd has nothing to do with the backdoor" but I respectfully disagree.
systemd is one heck of a Rube-Goldberg piece of machinery: the attack surface is gigantic seen that systemd's tentacles reaches everywhere.
With a tinfoil hat on one could think the goal of systemd was, precisely, to make sure the most complicated backdoors could be inserted here and there: "Let's have openssh use a lib it doesn't need at all because somehow we'll call libsystemd from openssh".
Genius idea if you ask me.
What could possibly go wrong with systemd "now just opening a port for openssh" uh? Nothing I'm sure.
Now that said I'm very happy that we've now got stuff like the Talos Linux distribution (ultra minimal, immutable, distro meant to run Kubernetes with as few executables as possible and of course no systemd) and then containers using Alpine Linux or, even if Debian based, minimal system with (supposedly) only one process running (and, once again, no systemd).
Containerization is one way out of systemd.
I can't wait for a good systemd-less hypervisor: then I can kiss Microsoft goodbye (systemd is a Microsoft technology, based on Microsoftism, by a now Microsoft employee).
Thanks but no thanks.
Talos distro, systemd-less containers: I want more of this kind of mindset.
The future looks very nice.
systemd lovers should just throw the towel in and switch to Windows: that's what they actually really want and it's probably no better than they deserve.
What else is lurking that you and I aren’t aware of?
All of systemd... and network manager/modem manager/...
I grew up with static configuration files, init scripts, inetd, etc... then grew into Solaris smf, dtrace, zones ... and then Linux implemented systemd. I really miss smf and related tools but systemd is still just meh to me. The implementation feels like a somewhat half-in-each-world brainchild.
Doesn't the .socket unit point to a .service unit? Why would using a socket be connected to which config sshd reads?
I am not a fan of that as a default. I'd rather default to cheaper disk space than more limited and expensive memory.
Making such claims on HN attracts edge cases like nobody's business but let's see
Maybe the developer runs a standard desktop version, but also uses it personally as a server for some kind of personal itch or software, on actual desktop hardware? Maybe I'm overthinking it, or the developer that wrote this code has the ability to fix more important issues, but went with this instead. I've tackled optimization before where it wasn't needed at the time, but it happened to be something I was looking into, and I felt my time investment could pay off in cases where resources were being pushed to their limits. I work with a lot of small to mid-sized businesses that can actually gain from seemingly small improvements like this.
Until about a year ago, whenever I would try to download moderately large files (>4GB) my whole system would grind to a halt and stop responding.
It took me MONTHS to figure out what's the problem.
Turns out that a lot of applications use /tmp for storing files while they're downloading. And a lot of these applications don't clean up on failure; some don't even move files after success, but extract and copy the extracted files to the destination, leaving even more stuff in /tmp.
Yeah, this is not a problem if you have 4x more RAM than the size of the files you download. Surely that's the case for most people. Right?
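A quick way to check whether /tmp is a tmpfs on a given box, how big it is allowed to get, and what is currently filling it:

    findmnt /tmp                                   # filesystem type and size option
    df -h /tmp                                     # how full it currently is
    du -xsh /tmp/* 2>/dev/null | sort -h | tail    # the biggest leftovers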
If it's easily reproducible, I guess checking `top` while downloading a large file might have given a clue, since you could have seen that you're running out of memory?
>The new filesystem defaults can also be overridden in /etc/fstab, so systems that already define a separate /tmp partition will be unaffected.
Seems like an easy change to revert from the release notes.
As far as the reasoning behind it, it is a performance optimization since most temporary files are small and short lived. That makes them an ideal candidate for being stored in memory and then paged out to disk when they are no longer being actively utilized to free up memory for other purposes.
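Per the release notes, an /etc/fstab entry takes precedence over the new default, so you can either cap the tmpfs or skip it entirely; a sketch (sizes are illustrative):

    # /etc/fstab: keep the tmpfs but limit how much RAM it may use
    tmpfs  /tmp  tmpfs  size=2G,mode=1777,nosuid,nodev  0  0

    # or skip the tmpfs entirely and leave /tmp as a plain directory on /
    systemctl mask tmp.mount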
That's a very bad default.
Assuming you're sane and have swap disabled (since there is no way to have a stable system with swap enabled), a program that tries to allocate all memory will quickly get OOM killed and the system will recover quickly.
If /tmp/ fills up your RAM, the system will not recover automatically, and might not even be recoverable by hand without rebooting. That said, systemd-managed daemons using a private /tmp/ in RAM will correctly clear it when killed.
Perhaps disabling overcommit as well as swap could be safer from this point of view. Unfortunately, you get other problems if you do so - as very little Linux software handles errors returned by malloc, since it's so uncommon to not have overcommit on a Linux system.
I'd also note that swap isn't even that slow for SSDs, as long as you don't use it for code.
Read-only pages are never written to swap, because they can be retrieved as-is from the filesystem already. Binaries and libraries are accounted as buffer cache, not used memory, and under memory pressure those pages are simply dropped, not swapped out. Whether you have swap enabled or disabled doesn't change that.
Still, I hope that Debian does the sane thing and sets proper size limits. I recall having to troubleshoot memory issues on a system (Ubuntu IIRC) a decade ago where they also extensively used tmpfs: /dev, /dev/shm, /run, /tmp, /var/lock -- except that all those were mounted with the default size, which is 50% of total RAM. And the size limit is per mountpoint...
This is just semantics. The pages are evicted from memory, knowing that they are backed by the disk, and can be swapped back in from disk when needed - behavior that I called "swapping out" since it's pretty similar to what happens with other memory pages in the presence of swap.
Regardless of the naming, the important part is what happens when the page is needed again. If your code page was evicted, when your thread gets scheduled again, it will ask for the page to be read back into memory, requiring a disk read; this will cause some other code page to be evicted; then a new thread will be scheduled - worst case, one that uses the exact code page that just got evicted, repeating the process. And since the scheduler will generally try to execute the thread that has been waiting the most, while the VMM will prefer to evict the oldest read pages, there is actually a decent chance that this exact worst case will happen a lot. This whole thing will completely freeze the system to a degree that is extremely unlikely for a system with a decent amount of swap space.
If I see RAM close to 30 GB I restart my browser and go back to 20 GB or less. Not every month.
Since you state you're running a browser, I assume you mean for personal use. Unfortunately, when you run a service open to the public, you can find all kinds of odd traffic even for normal low-memory services. Sometimes you'll get hit with an aggressive bot looking for an exploit, and a lot of those bots don't care if they get blocked, because they are built to absolutely crush a system with exploits or login attempts where they are only blocked by the system crashing.
I'd say that most bots are this aggressive, because the old school "script kiddies", or now it's just AI-enabled aggressors, just run code without understanding things. It's easier than ever to run an attack against a range of IP addresses looking for vulnerabilities, that can be chained into a LLM to generate code that can be run easily.
Anyway, I'd also enable swap on public facing servers.
It's only when you're consistently at the limit of how much RAM you have available that the differences start to matter. If you want to run a ~30GB +- 10% workload on a system with 32GB of RAM, then you'll get to find out how stable it is with VS without swap.
With swap enabled, it is very, very, VERY common for the system to become completely unresponsive - no magic-sysrq, no ctrl-alt-f2 to login as root, no ssh'ing in ...
You also have some misunderstandings about overcommit. If you aren't checking `malloc` failure you have UB, but hopefully you will just crash (killing processes is a good thing when the system fundamentally can't fulfill everything you're asking of it!), and there's a pretty good chance the process that gets killed is blameworthy. The real problems are large processes that call `fork` instead of `vfork` (which is admittedly hard to use) or `posix_spawn` (which is admittedly limited and full of bugs), and processes that try to be "clever" and cache things in RAM (for which there's admittedly no good kernel interface).
===
"Swap isn't even that slow for SSDs" is part of the problem. All developers should be required to use an HDD with full-disk encryption, so that they stop papering over their performance bugs.
> With swap enabled, it is very, very, VERY common for the system to become completely unresponsive - no magic-sysrq, no ctrl-alt-f2 to login as root, no ssh'ing in ...
It's usually enough to have a couple of times when you need to get into a distant DC / wait a couple of hours for some IPMI to be connected, to learn "let it fail fast and gimme ssh back" in practice vs. the theory of "you should have swap on".
Conversely, with a few GB of old data paged out to disk, even to a slow HDD, there is going to be much, much less thrashing going on. Chances are, the system will work pretty normally, since it's most likely that memory that isn't being used at all is what will get swapped out, so it's unlikely to need to be swapped in any time soon. The spike that caused you to go over your normal memory usage will die down, memory will get freed naturally, and the worst you'll see is that some process will have a temporary spike in latency some time later when it actually needs those swapped-out pages.
Now, if the spike is too large to fit even in RAM + swap, the OOMKiller will still run and the system will recover that way.
The only situation where you'll get in the state you are describing is if virtually all of your memory pages are constantly getting read and written to, so that the VMM can't evict any "stale" pages to swap. This should be a relatively rare occurrence, but I'm sure there are workloads where this happens, and I agree that in those cases, disabling swap is a good idea.
> If you aren't checking `malloc` failure you have UB, but hopefully you will just crash (killing processes is a good thing when the system fundamentally can't fulfill everything you're asking of it!), and there's a pretty good chance the process that gets killed is blameworthy.
This is a very optimistic assumption. Crashing is about as likely as some kind of data corruption for these cases. Not to mention, crashing (or getting OOMKilled, for that matter) are very likely to cause data loss - a potentially huge issue. If you can avoid the situation altogether, that's much better. Which means overprovisioning and enabling some amount of swap if your workload is of a nature that doesn't constantly churn the entire working memory.
> "Swap isn't even that slow for SSDs" is part of the problem. All developers should be required to use an HDD with full-disk encryption, so that they stop papering over their performance bugs.
You're supposed to design software for the systems you actually target, not some lowest common denominator. If you're targeting use cases where the software will be deployed on 5400 RPM HDDs with full disk encryption at rest running on an Intel Celeron CPU with 512 MB of RAM, then yes, design your system for those constraints. Disable swap, overcommit too, probably avoid any kind of VM technology, etc.
But don't go telling people who are designing for servers running on SSDs to disable swap because it'll make the system unusably slow - it just won't.
tmpfs by default only uses up to half your available RAM unless specified otherwise. So this isn't really a consideration unless you configure it to be a consideration you need to take into account.
(Systemd also really recently (v258) added quotas to tmpfs, and IIRC it's set by default to 80% of the tmpfs, so it is even less of a problem)
All those arguments would be useful if we somehow could avoid the fact that the system will use it as "emergency memory" and become unresponsive. The kernel's OOM killer is broken for this, and userland OOM daemons are unreliable. `vm.swappiness` is completely useless in the worst case, which is the only case that matters.
With swap off, all the kernel needs to do is reserve a certain threshold for disk cache to avoid the thrashing problem. I don't know what the kernel actually does here (or what its tunables are), because systems with swap off have never caused problems for me the way systems with swap on inevitably do. The OOM killer works fine with swap off, because a system must always be resilient to unexpected process failure.
And worst of all - the kernel requires swap (and its bugs) to be enabled for hibernation to work.
It really wouldn't be hard to design a working swap system (just calculate how much to keep of different purposes of swap, and launch the OOM killer earlier), but apparently nobody in kernel-land understands the real-world problems enough to bother.
this one gets me irritated every time i think about it. i don't want to use swap, but i do want hibernation. why is there no way to disable swap without that?
hmm, i suppose one could write a script that enables an inactive swap partition just before shutdown, and disables it again after boot.
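A manual sketch of that idea (the swap device is a placeholder, and the initramfs still needs resume= pointing at it for the image to be found at boot):

    # hibernate on demand: bring the swap partition up only for the occasion
    swapon /dev/vg0/swap && systemctl hibernate

    # after resume, turn it back off so it never acts as overflow RAM
    swapoff /dev/vg0/swap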
You can use suspend+hibernate to accomplish that and it works well. Unless the gods of kernel lockdown decide you cannot for your own good (and it doesn't matter if your disk is fully encrypted, you're not worthy anyway) of course. It's their kernel running on your laptop after all.
What the heck are you talking about? Swap is enabled on every Linux system I manage (servers, desktop etc) and it's perfectly stable.
In this new stable release, an update to Dovecot will break your configuration: https://willem.com/blog/2025-06-04_breaking-changes/
I've used it for years to achieve HA for personal mail servers and will now have to look for alternatives -- until then I'll stick with Debian Bookworm and its Dovecot 2.3.
[1] https://doc.dovecot.org/2.4.0/installation/upgrade/2.3-to-2....
Yeah, Dovecot seem to be going hardline commercial. Basically the open-source version will be maintained as a single server solution. Looks like if you want supported HA etc. you'll have to pay the big bucks.
There is a video up on YouTube[1] where their Sales Director is speaking at a conference and he said (rough transcript):
"there will be an open source version, but that open source version will be maintained for single server use only. we are actually taking out anything any actually kinda' involves multiple servers, dsync replication and err some other stuff. so dovecot will be a fully-featured single node server"
Have you looked at Stalwart[2] as an alternative ?
[1] https://youtu.be/s-JYrjCKshA?t=912 [2] https://stalw.art/
(I love Debian) It's going to take a bit for me to get used to having a current version of Python on the system by default.
Only thing that was broken was the desktop background, everything else worked great w/o any issue and even solved some trouble I had to fix by hand for Bookworm (WiFi sleep mode), so I upgraded all my physical and virtual machines.
Had no issues at all; the only thing annoying compared to running stable was the amount of updated packages, which again ran through w/o any hitch, and I have to take full responsibility. ;-)
Highly recommended if you want a Linux distribution for a server or a desktop which simply works and keeps working.
What this means is when you find out stuff breaks, like drivers and application software, and decide the upgrade was a bad idea, you are fucked.
More notably, some of the upgrade is irreversible - like MySQL/MariaDB. The database binary format is upgraded during the upgrade. So if you discover something else broke, and you want to go back, it's going to take some work.
Ask me how I know.
Of course anyone can restore from backups. It's a pain and it's time consuming.
My post serves more as a warning to those who may develop buyer's remorse.
Windows is using Volume Shadow Copies, which for the purposes of this discussion, you can think of as roughly equivalent.
I've used this trick many times in a live, rw environment.
Too many bits of 'advice' on Stack Overflow etc. claiming it's possible show up as top Google results.
The policy manual serves as the ruleset, but it also explains lots of things w.r.t. packaging, as that's part of the ruleset: https://www.debian.org/doc/debian-policy/index.html#
alpine feels a bit too opinionated for my taste. i just found this though: https://mrgecko.org/blog/2024/add-btrfs-support-to-rocky-and... centos has a SIG that provides kernels with btrfs which can be used with alma and rocky. that sounds promising.
Best Packaging Practices https://www.debian.org/doc/manuals/developers-reference/best...
There's two tutorials/walkthroughs linked from there:
- how to build an existing Debian package: https://wiki.debian.org/BuildingTutorial
- how to package new software for Debian: https://www.debian.org/doc/manuals/debmake-doc/ch05.en.html