I've been running testing/trixie since the end of 2023 or so. (I generally always run testing, but stick with stable for ~6 months after stabilization, in order to avoid lots of package churn in new-testing.)
It's been what I expect from Debian: boring and functional. I've never run into an issue where the system wouldn't boot after an update (I usually update once every 2-4 weeks when on testing), and for the most part everything has worked without the need to fix broken packages or utter magic apt incantations.
Debian has always been very impressive to me. They're certainly not perfect, but what they can do based on volunteers, donations, and sponsors, is amazing.
exiguus · 39m ago
This could be me. I do the same, and I already plan to update to Forky at the beginning of 2026.
sherr · 5h ago
Looking forward to the release.
I use Debian Stable on almost all the systems I use (one is stuck on 10/Buster due to MoinMoin). I installed Trixie in a container last week, using an LXC container downloaded from linuxcontainers.org [1].
Three things I noted on the basic install :
1) Ping didn't work due to changed security settings (iputils-ping) [2]
2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config*. Maybe this is something specific to the container downloaded.
3) Systemd-resolved uses LLMNR as a name lookup alternative to DNS, and pinging a firewalled host failed because the lookup seemed to be LLMNR accessing TCP port 5355. I disabled LLMNR.
Generally, Debian version updates have been successful with me for a few years now, but I always have a backup, and always read the release notes.
[1] https://linuxcontainers.org
[2] https://www.debian.org/releases/trixie/release-notes/issues....
> 2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config.
sshd still reads /etc/ssh/sshd_config at startup. As far as I know, this is hard-coded in the executable.
What Debian has changed happens before the daemon is launched: the service is socket activated.
So, _if you change the default port of sshd_ in its config, then you have to change the activation:
- either enable the sshd@service without socket activation,
- or modify the sshd.socket file (`systemctl edit sshd.socket`) which has the port 22 by default.
Since Debian already has an environment file (/etc/default/ssh), which is loaded by this service, the port could be set in a variable there and loaded by the socket activation. But then it would conflict with OpenSSH's own files. This is why I've always disliked /etc/default/ as a second level of configuration in Debian.
LooseMarmoset · 5h ago
systemd-resolved is an effing nightmare when combined with network-manager. these two packages consistently manage to stomp all over DNS resolution in their haste to be the one true source of name resolution. i tried enabling systemd-resolved as part of an effort to do dns over https and i end up with zero dns. i swear that /etc/resolv.conf plus helper scripts is more consistent and easy.
wpm · 4h ago
It’s why I always say in the typical “systemd bad” threads that systemd the init system is great, it’s the systemd-* everything else’s that give it a bad name.
I want systemd nowhere fucking near my NTP or DNS config.
blueflow · 4h ago
Thank god you can enable and disable each of these components in complete isolation, so you don't suffer any kind of lock-in from systemd.
dijit · 3h ago
fighting your distro in practice is a total nightmare.
yehoshuapw · 2h ago
I ended up in gentoo mostly just to avoid systemd
(used it before, mostly to learn. went to debian for new laptop. gave up after fighting systemd.
I'm aware of devuan and artix, but gentoo just worked (after all the time spent))
bbarnett · 1h ago
Debian, at least until bookworm, works perfectly without systemd. The easiest way to make this transition is to install Debian with nothing but 'standard system utilities' and 'SSH server' (if you want) during install.
There are a few edge cases, packages which require systemd, but I've been running thousands of systems including desktops this way for a decade.
Yes, I also run thousands of systems with systemd too.
beagle3 · 1h ago
I’ve been dropping systemd-timesyncd and using chrony since forever, and it works well. I’m sure some systemd-* things are harder to replace, but not every replacement is a fight against your distro.
LooseMarmoset · 3h ago
devuan saved just enough of my sanity for me to function
RVuRnvbM2e · 2h ago
I've been using this combination successfully for a long time with no issues. In fact it is the only way to handle complex DNS setup on Linux.
If you have specific issues, please file them over at systemd's GitHub issue tracker.
jimmaswell · 5h ago
The OpenSSH change is madness if it's not a bug. I hope it's not intentional.
tharos47 · 5h ago
It was done in Ubuntu a few versions ago. Afaik only the listening port config is ignored and is instead set up in systemd.
hsbauauvhabzb · 14m ago
Which is exactly the problem. It's not obvious, it's misleading, and its cause is not easily determined just by looking at the config.
What else is lurking that you and I aren’t aware of?
e2le · 4h ago
What is the rationale for changing OpenSSH into a socket activated service? Given that it comes with issues, I assume the benefits outweigh the downsides.
idoubtit · 4h ago
> Given that it comes with issues, I assume the benefits outweigh the downsides.
Any change can introduce regressions or break habits. The move toward socket activation for sshd is part of a larger change in Debian. I don't think the Debian maintainers changed that just for the fun of it. I can think of two benefits:
+ A service can restart without interruption, since the socket will buffer the requests during the restart.
+ Dependencies are simpler and faster (waiting for a service to start and accept requests is costly).
My experience is that these points largely outweigh the downsides (the only one I can think of is that the socket could be written in two places).
blueflow · 1h ago
> Dependencies are simpler and faster (waiting for a service to start and accept requests is costly)
Yeah, but requiring a service's response is why it's a dependency in the first place, no?
Evidlo · 5h ago
> 2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config*. Maybe this is something specific to the container downloaded.
Doesn't the .socket unit point to a .service unit? Why would using a socket be connected to which config sshd reads?
edoceo · 5h ago
I'm assuming this means it doesn't respect the socket config in sshd_config, so you configure the listening port in systemd.
blueflow · 6h ago
TIL there are 14 subtly different naming schemes for network interfaces[1]. "predictable" my ass.
[1] https://manpages.debian.org/testing/systemd/systemd.net-nami...
14 different schemes, multiplied by some acting slightly differently in every version. Sure you can pin it, but that fixes only their internal back and forth, is only possible via the kernel cmdline, and there is no guarantee for how long the old versions will stay available; as they deprecated much more invasive things in the past (e.g., cgroupv1), I'd expect them to also drop older versions here, breaking one's naming again.
And sure, one can pin interfaces to custom names, but why should anybody have to bother with such things?!
I like systemd a lot, but this is one of the things they fumbled big time and seemingly still aren't done with.
Pinning interfaces by their MAC to a short and usable name would e.g. have been much more stable than doing it by PCI slot, which firmware updates, new hardware, a newer kernel exposing newer features, ... change rather often.
This works well for all but virtual functions, but those are sub-devices of their parent interface anyway and can just get named with a suffix added to the parent name.
woleium · 4h ago
I imagine they went against MAC addresses because they are not immutable; some folks rotate MAC addresses for privacy/security reasons.
tlamponi · 3h ago
The original one is still there. Systemd even knows about that; it's differentiated as MAC vs PermanentMAC.
duskwuff · 2h ago
There are, unfortunately, some older devices (like some Sun systems) which use the same MAC address for every network interface on the device.
em-bee · 3h ago
i thought about that, but couldn't you access the hardcoded address to identify the card?
but you also want to be able to change a card in a server without the device name changing. at least that used to be an issue in the past.
grantla · 5h ago
> as they deprecated much more invasive things in the past (e.g., cgroupv1) I'd expect them to also drop older versions here, breaking ones naming again
Note that the naming scheme is in control of systemd, not the kernel. Even if it is passed on the kernel commandline.
tlamponi · 4h ago
Yeah, I know, I spent more than a week looking for options to reduce the impact for all of our users.
And note that cgroupv1 also still works in the kernel just fine; only the part that systemd controlled was removed from systemd. You can still boot with cgroupv1 support on, e.g., Alpine Linux with OpenRC as init 1. So I'm not sure that lessens my concerns about there being no guarantees for older naming-scheme versions; maintaining triple digits of them sure has its cost too.
And don't get me wrong, sunsetting cgroupv1 was reasonable, and while it was a lot of churn, it at least was a one-time thing. The network interface naming situation is periodic churn, guaranteed to bite you every now and then just by using the defaults.
Scramblejams · 4h ago
Can you tell me why NamePolicy=keep doesn't do the trick?
Looking myself for options to keep a Debian bare metal server I admin from going deaf and mute the next time I upgrade it... It still uses an /etc/network/interfaces file that configures a bridge for VMs to use, and the bridge_ports parameter requires an interface name which, when I upgraded to Bookworm, changed.
At this rate maybe I'll write a script that runs on boot and fixes up that file with whatever interface it finds, then restarts the network.
bbarnett · 1h ago
This worked brilliantly in Debian for more than a decade, had almost zero downside, and just did what was asked. I went through 3+ dist-upgrades, for the first time in my life, without a NIC change.
It was deprecated for this nonsense in systemd.
Yes, there were edge cases in the Debian scheme. Yet it did work with VMs (as most VMs kept the same MAC in config files), and it was easy to maintain if you wanted 'fresh'. Just rm the pin file in the udev dir. Done.
Again it worked wonderful on every VM, every bare metal system I worked with.
One of the biggest problems with systemd, is it seems to be developed by people that have no real world, industrial scale admin experience. It's almost like a bunch of DEVs got together, couldn't understand why things were "so confusing", and just figured "Oh, it must be a mistake".
Nope.
It's called covering edge cases, ensuring things are stable for decades, because Linux and the init system are the bottom of the stack. The top of the stack changes like the wind in spring, but the bottom of the stack must be immensely stable, consensus driven, and, I repeat, stable in how it changes.
Systemd just doesn't "get" that.
foresto · 2h ago
I dislike systemd's Predictable Network Interface Names, so I disable them with this kernel command line option: net.ifnames=0
Welcome back, eth0. :)
lynx97 · 5h ago
The "stable" interface naming scheme is a scam. And I have proof. Test upgraded a VM today, from bookworm to trixie. And guess what. Everything worked, except after reboot the network interface was unconfigured? Guess what. The name changed...
pferde · 2h ago
That can only happen if the emulated hardware layout presented to the VM changes. I'd look at that before calling anything a scam.
tlamponi · 1h ago
Scam is probably the wrong word, and its choice might be a bit feeling-fueled, but it's really not true that this only depends on the HW.
systemd also changes behavior in what naming policies are the default and what it considers as input; it has done that since forever but started versioning it with v238 [0].
Due to that, the HW can stay exactly the same but names still change. I see this in VMs that stay exactly the same: no software update, no change in how the QEMU CLI gets generated, really nothing changed from the outside virtual HW POV, and the interface name still changes.
The underlying problem was a real one, but the solution seems like a bit of a sunk cost fallacy, and it added more problem dimensions than previously existed.
Besides, even if the HW did change, shouldn't a _predictable_ naming scheme be robust enough to not care about that as long as the same NIC is still plugged in somewhere?
Disclaimer, as stated elsewhere: I really like systemd, I'm not one that speaks out against it lightly, but the IF naming is not something they got right, but rather made worse for the default case.
Being able to easily pin interface names through .link files is great, but requiring users to do that or have no network after an upgrade, especially for simple one-NIC use cases in a completely controlled environment like a VM, is just bonkers.
[0]: https://www.freedesktop.org/software/systemd/man/latest/syst...
Ah, ok, I didn't think of systemd version changes. Thanks.
Regarding your rhetorical question about "the same NIC", I think the problem is in determining whether the NIC is the same, and it is not an easy one to solve. I remember that older Suse Linux versions used to pin the interface name to the NIC's MAC address in a udev rule file that got autogenerated when a NIC with a given MAC first appeared on the system, but they stopped doing that.
tlamponi · 59m ago
Yeah, the permanent MAC address (i.e., the one the card actually reports to the system, not the dynamic one it can use) would be the safest bet, as that is the most stable thing there is. More importantly, it is very relevant for switches and firewalls in enterprise settings, so if it changes it's often likely that network access will be broken anyway; one can basically only win by using the MAC as the main identifier IMO, at least compared to the current status quo.
hsbauauvhabzb · 12m ago
That’s not a scam and that’s not proof. That’s an upgrade problem. Stop misusing the word and devaluing it.
unethical_ban · 2h ago
The best use of AI I've gotten so far is having it explain to me how to manage a Fedora Server's core infrastructure "the right way". Which files, commands, etc. to permanently or temporarily change network, firewall, DNS, NTP settings.
tcdent · 5h ago
Woah Python 3.13 in stable?!
(I love Debian) It's going to take a bit for me to get used to having a current version of Python on the system by default.
doctoboggan · 3h ago
Probably still good practice to use a venv and a Python version manager (uv could be used for both).
tcdent · 54m ago
Obvs uv, but I'm not going to install a dupe version of python with pyenv if the system version matches my target.
I am not a fan of that as a default. I'd rather default to cheaper disk space than more limited and expensive memory.
chromakode · 5h ago
For users with SSDs, saving the write wear seems like a desirable default.
Aachen · 1h ago
I have yet to hear of someone wearing out an SSD on a desktop/laptop system (not a server; I'm sure there are heavy applications that can run 24/7 and legitimately get the job done), even considering bugs like the Spotify desktop client writing loads of data uselessly some years ago.
Making such claims on HN attracts edge cases like nobody's business but let's see
pluto_modadic · 5h ago
finally they're using a tmpfs. thank goodness <3
dlachausse · 5h ago
> You can return to /tmp being a regular directory by running systemctl mask tmp.mount as root and rebooting.
>The new filesystem defaults can also be overridden in /etc/fstab, so systems that already define a separate /tmp partition will be unaffected.
Seems like an easy change to revert from the release notes.
As far as the reasoning behind it, it is a performance optimization since most temporary files are small and short lived. That makes them an ideal candidate for being stored in memory and then paged out to disk when they are no longer being actively utilized to free up memory for other purposes.
hsbauauvhabzb · 10m ago
That seems like a bug with those applications which make use of the filesystem instead of performing in-memory operations or using named pipes.
api · 6h ago
Wait... that means a misbehaving program can cause out of memory errors easily by filling up /tmp?
That's a very bad default.
CamouflagedKiwi · 5h ago
A misbehaving program can cause out of memory errors already by filling up memory. It wouldn't persist past that program's death but the effect is pretty catastrophic on other programs regardless.
o11c · 3h ago
That actually is a pretty big difference.
Assuming you're sane and have swap disabled (since there is no way to have a stable system with swap enabled), a program that tries to allocate all memory will quickly get OOM killed and the system will recover quickly.
If /tmp/ fills up your RAM, the system will not recover automatically, and might not even be recoverable by hand without rebooting. That said, systemd-managed daemons using a private /tmp/ in RAM will correctly clear it when killed.
NekkoDroid · 14m ago
> If /tmp/ fills up your RAM
tmpfs by default only uses up to half your available RAM unless specified otherwise. So this isn't really a consideration unless you configure it to be one.
(systemd also only recently (v258) added quotas to tmpfs, and IIRC it's set by default to 80% of the tmpfs, so it is even less of a problem.)
tsimionescu · 2h ago
The sane thing is to have swap enabled. Having swap "disabled" forces your system to swap out executables to disk, since these are likely the only memory-mapped files you have. So, if your memory fills up, you get catastrophic thrashing of the instruction cache. If you're lucky, you really go over available memory, and the OOMKiller kills some random process. But if you're not, your system will keep chugging along at a snail's pace.
Perhaps disabling overcommit as well as swap could be safer from this point of view. Unfortunately, you get other problems if you do so - as very little Linux software handles errors returned by malloc, since it's so uncommon to not have overcommit on a Linux system.
I'd also note that swap isn't even that slow for SSDs, as long as you don't use it for code.
pmontra · 1h ago
I've been running with swap off since my first SSD in 2015 or 2016. 16 GB RAM, then 32. No problems at all.
If I see RAM close to 30 GB I restart my browser and go back to 20 GB or less. Not every month.
All those arguments would be useful if we somehow could avoid the fact that the system will use it as "emergency memory" and become unresponsive. The kernel's OOM killer is broken for this, and userland OOM daemons are unreliable. `vm.swappiness` is completely useless in the worst case, which is the only case that matters.
With swap off, all the kernel needs to do is reserve a certain threshold for disk cache to avoid the thrashing problem. I don't know what the kernel actually does here (or what its tunables are), because systems with swap off have never caused problems for me the way systems with swap on inevitably do. The OOM killer works fine with swap off, because a system must always be resilient to unexpected process failure.
And worst of all - the kernel requires swap (and its bugs) to be enabled for hibernation to work.
It really wouldn't be hard to design a working swap system (just calculate how much to keep of different purposes of swap, and launch the OOM killer earlier), but apparently nobody in kernel-land understands the real-world problems enough to bother.
em-bee · 1h ago
the kernel requires swap (and its bugs) to be enabled for hibernation to work
this one gets me irritated every time i think about it. i don't want to use swap, but i do want hibernation. why is there no way to disable swap without that?
hmm, i suppose one could write a script that enables an inactive swap partition just before shutdown, and disables it again after boot.
nonameiguess · 2h ago
User paulv already posted this 3 hours ago in a comment currently lower than this one, but tmpfs by default can't use all of your RAM. /tmp can get filled up and be unavailable for anything else to write to, but you'll still have memory. It won't crash the entire system.
paulv · 6h ago
The default configuration for tmpfs is to "only" use 50% of physical ram, which still isn't great, but it's something.
foresto · 2h ago
To be clear, that 50% (or whatever you configure) is a limit, not a constant.
juujian · 4h ago
Have been using Trixie on my laptop for a year (?) now, and it has been a very positive experience. I had bought a brand new, very recent ThinkPad, not considering that the relevant drivers would not be in Debian Stable yet. Now on Trixie, having a relatively recent version of everything KDE Plasma is a blessing. Things have changed so much, for the better, particularly regarding Wayland. The experience with Trixie is already better than it ever was for me with Ubuntu (good riddance!), and I cannot believe that this is supposed to be an unstable release. I broke stuff once, and that was my own fault (forcing an update when not all necessary packages were staged yet; learned my lesson on that!).
willemlaurentz · 4h ago
Warning for those running Debian and Dovecot under stable.
In this new stable release, an update to Dovecot will break your configuration: https://willem.com/blog/2025-06-04_breaking-changes/
Not only that: Dovecot 2.4 will also remove the functionalities of dsync, replicator and director [1]. This is frustrating and a big loss as these enabled e.g. very simple and reliable two-node (active-active) redundant setups, which will not be possible anymore with 2.4.
I have used it for years to achieve HA for personal mail servers and will now have to look for alternatives -- until then I will stick with Debian Bookworm and its Dovecot 2.3.
[1] https://doc.dovecot.org/2.4.0/installation/upgrade/2.3-to-2....
I usually fix those kinds of problems by running the offending software in a docker container, with the correct version. Sometimes the boundaries of the container create their own problems. Dovecot 2.3 is at https://hub.docker.com/r/dovecot/dovecot/tags?name=2.3
Fair warning: the Trixie update does not allow you to roll back. It is in theory possible but practically it not only fails every single time, but leaves the system in an inconsistent and broken state. (Code for 'soon to be unbootable').
What this means is when you find out stuff breaks, like drivers and application software, and decide the upgrade was a bad idea, you are fucked.
More notably, some of the upgrade is irreversible - like MySQL/MariaDB. The database binary format is upgraded during the upgrade. So if you discover something else broke, and you want to go back, it's going to take some work.
Ask me how I know.
yjftsjthsd-h · 18m ago
In all fairness... How would that work? Not even just on Debian; in the general case, I don't see how to avoid that other than full filesystem snapshots or backups of some sort. Even on, say, a NixOS system where rolling back all the software and config (basically, /, /usr, and /etc) to exactly its old config is as easy as rebooting and picking the old generation, databases will still have migrated their on-disk format.
gilbertbw · 6h ago
The page about upgrading [0] does have this warning:
Back up your data
Performing a release upgrade is never without risk. The upgrade may fail, leaving the system in a non-functioning state. USERS SHOULD BACKUP ALL DATA before attempting a release upgrade. DebianStability contains more information on these steps.
Yet Windows will let you roll back an upgrade with a single click within 10 days.
Of course anyone can restore from backups. It's a pain and it's time consuming.
My post serves more as a warning to those who may develop buyer's remorse.
sellmesoap · 1h ago
I always find the rough edges on upgrading Windows (and macOS); I've had several computers that take 3-4 hours to hit a roadblock, give an inscrutable error message, and roll back. I feel spoiled using NixOS (once you get over the learning curve).
jraph · 5h ago
You might like snapshot based solutions like Snapper
42lux · 5h ago
You know imaging your machine is still an option...
mikae1 · 4h ago
But you can't do that on a live system as you can with Windows or macOS. Not a problem for a pre-release-upgrade image, perhaps. But I'm sorely missing this feature from macOS.
pak9rabid · 3h ago
You can if you're using LVM. Take a snapshot of the logical volume your system is on, then run `dd' against the snapshot, as it's essentially a frozen point-in-time.
I've used this trick many times in a live, rw environment.
hysan · 3h ago
Depends on your filesystem. For example, I certainly can as I’m using btrfs. I’m also using Timeshift for easy management of snapshots. As others have mentioned, there are other choices too like Snapper that all work well.
SAI_Peregrinus · 3h ago
You can snapshot the filesystem if you're using BTRFS, ZFS, or another Copy-on-Write filesystem.
wiz21c · 3h ago
as much as I love Debian (been a faithful user for 25 years or so, no more Windows at home since then), that Windows ability is just really cool and Debian is still not on par, I believe...
Too many bits of 'advice' on Stack Overflow, etc. claiming it's possible as top Google results.
I'm here to say unequivocally: it does not work, will not work, and will leave the system in an irreversibly broken state. Do not attempt.
paulv · 6h ago
> Ask me how I know.
What problems did you have that made you want to roll back the update?
sugarpimpdorsey · 4h ago
I had some containerized application software break and start misbehaving in odd ways which was indicative of a deeper incompatibility issue. Possibly GPU related. No time to debug, had to roll it back.
This was complicated by the fact that the machine also hosted a MySQL database which could not be easily rolled back because the database was versioned up during the upgrade.
esaym · 4h ago
You should be using an lvm snapshot. You are not even making a valid complaint.
elcapitan · 4h ago
I've been running Trixie since I bought my Framework laptop last September, and it has been great. First Linux experience after 20 years of Mac, and everything has been incredibly stable.
Now I need to figure out what happens when my testing suddenly is stable, and how to get on the next testing, I guess.
juujian · 3h ago
There are basically two different configurations. If your `sources.list` is explicitly on Trixie, it will stay there. If it is on testing, then you will get the next testing release in time.
bootsmann · 3h ago
Debian upgrading Podman to a version above 4.3.1 hopefully also means we get Quadlet support on raspberry pis. Took them forever to add this.
olivierestsage · 2h ago
Pumped for this. I was (and am) massively impressed with Debian 12. I've been an on-again off-again Linux user since around 2003, but this release was the one that finally got me to switch completely. The jank factor actually seems to be less than that of Windows and macOS at this point, which I never thought I'd say.
ducktective · 5h ago
Can anyone experienced with Debian package development point me to some valid, recent, and Best Practice™ guides or blog posts explaining how to package stuff for Debian?
The policy manual serves as the ruleset but also explains lots of things w.r.t. packaging, as that's part of the ruleset: https://www.debian.org/doc/debian-policy/index.html#
For actually uploading new packages to the archive you need to be a "DD" (Debian Developer), which is a somewhat more involved process to go through. Becoming a "DM" (Debian Maintainer) is easier and already lets you do lots of things. It's also possible to start out by finding an existing DD who sponsors your upload, i.e. checks your final packaging work and, if it looks alright, uploads it in your name to the repositories.
You might also check out the wiki of Debian, it's sometimes a bit dated and got lots of info packed in, but can still be valuable if you're willing to work through the outdated bits. E.g.:
https://wiki.debian.org/DebianMentorsFaq
Best Packaging Practices: https://www.debian.org/doc/manuals/developers-reference/best...
o11c · 3h ago
The native Debian package tooling is very far from sane, even compared to other distros - and they actively refuse to make it saner (instead just adding layers of cruft without addressing the core problems). You're probably best off using `checkinstall` or similar, and adding dependencies by hand.
em-bee · 46m ago
is RPM that much saner? which RPM based distribution comes with long term support suitable for servers that also includes btrfs? (i used to use centos, but since red hat removed btrfs from the kernel, refusing to support it, i had to switch to debian, because i depend on btrfs support)
yjftsjthsd-h · 15m ago
> which RPM based distribution comes with long term support suitable for servers that also includes btrfs?
Sounds like OpenSUSE to me. I tend to favor the fast-updating versions, but I'm pretty sure openSUSE Leap is exactly what you're asking for.
homebrewer · 5m ago
Oracle Linux (gasp). They employ some of the main developers of btrfs, "their" distribution is just a RHEL rebuild with some patches (including btrfs), and it is very quick at delivering updates (they're usually several hours behind RHEL, while the next best — AlmaLinux — takes a day or two. Other rebuilds, very much including the somehow heavily hyped Rocky, are much slower).
I don't think there are many alternatives. OpenSUSE isn't supported for very long, and there really isn't anything else if you want btrfs, no Debian or its derivatives, and fire & forget kind of distribution.
So hyped for this release, since it will result in nice bugfixes (autorandr, Polybar) and me simplifying my dotfiles. Many thanks to all Debian developers!
demetris · 1h ago
Trixie is SUPER for desktop use!
I’ve been on sid for the last 10 months for my laptop (old T450s) and my secondary desktop, and it is really fun.
There are annoyances but they are not related to Debian itself.
FIRST
I decided it is time to switch to Wayland. Now my favorite run-or-raise app (Kupfer) cannot do run-or-raise. But there is a really nice extension to do run-or-raise on GNOME without the aggressive disruption of the Activities overview: Switcher. The other thing that is difficult on Wayland is text expansion. I have not found a solution for that part.
SECOND
The annoying to infuriating things that GNOME likes doing sometimes. But that is a constant. Nothing new.
Congrats and thanks to all the Debian people!
vanviegen · 1h ago
Just using KDE solves both your problems.
josteink · 4h ago
Just upgraded my laptop the other day.
sway and/or libinput now support mouse-pad gestures, so you can configure three-finger swiping between workspaces.
Very much appreciated.
anthk · 3h ago
Upgrade to trixie and enable backports. You'll maybe get the last kernel, MESA, libreoffice, browsers and whatnot without hurting the rest of the system.