“Storage is cheap” goes the saying. Other people’s storage has a cost of zero, so why not just fill it up with 100 copies of the same dependency?
These package formats (I’m looking at you too, snap) are disrespectful of users’ computers to the point of creating a problem where, due to sheer size, things take so long and bog the machine down so much that the resource being consumed is no longer storage but time (system and human time). And THAT is not cheap at all.
Don’t believe me? Install a few dozen snaps, turn the computer off for a week, and watch in amazement as you turn it back on and see it brought to its knees, your computer and network taxed to the max downloading and applying updates.
wtarreau · 6h ago
Not to mention the catastrophic security that comes with these systems. On a local Ubuntu box, I've had exactly 4 different versions of the sudo binary: one in the host OS and 3 in different snaps (some were identical, but there were 4 distinct versions in total). If they had a reason to differ, it's likely bug fixes, yet not all of them were updated, meaning that even after my main OS was patched, there were still 3 stale binaries exposed to users and waiting for an exploit to happen. I find this the most shocking aspect of these systems (and I'm really not happy with the disrespect of my storage, like you mention).
brlin2021 · 4h ago
The sudo binaries in the snaps are likely to have their SUID bit stripped, so they won't cause any trouble even if they have known vulnerabilities.
yjftsjthsd-h · 4h ago
Why do snaps have sudo at all?
m4rtink · 5h ago
Snaps do zero deduplication and bundle everything AFAIK - flatpak at least does some deduplication on file level and has shared runtimes.
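(If you want to check the file-level dedup yourself: Flatpak's store is an OSTree repo that keeps each unique file once and checks apps/runtimes out as hardlinks. A rough sketch, assuming a system-wide install:)

    # content objects live under repo/objects; a link count > 2 means the
    # same file is hardlinked into more than one app/runtime checkout
    find /var/lib/flatpak/repo/objects -name '*.file' -links +2 | head
    # total on-disk size of the whole store, dedup already included
    du -sh /var/lib/flatpak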
brlin2021 · 4h ago
This statement is false as snaps also have shared runtimes known as "content snaps".
A common example is the ones with the gnome- prefix and the ones that end with -themes suffix.
loloquwowndueo · 3h ago
Wherein snaps found themselves reinventing shared libraries - at which point, what’s really the point?
Seattle3503 · 3h ago
I think the point is that maintainers and developers now have a choice of whether they want to share libraries or not. Before, the only choice was to share dependencies.
hedora · 1h ago
There has been a choice between shared and static linking since before the Linux kernel existed.
What system are you talking about?
neuroelectron · 4h ago
For a long time storage kept getting cheaper, but we've hit scaling walls in both CPUs and drives. I remember when I was a kid and bought MechWarrior 2, a game that could use up to 500 MB! The guy working the video game locker warned me, "are you sure you have enough hard drive space?", right after I had bought a 2 GB drive for like $60, or something, I don't remember exactly. A concern that would have been valid maybe a year earlier.
musicnarcoman · 3h ago
"Storage is cheap" if you do not have to pay for it. It is not so cheap when you are the one paying for the organizations storage.
api · 3h ago
There are things like content defined chunking and content based lookup. Evidently that’s too hard.
XorNot · 2h ago
The problem on Linux is that hard links are exactly what you don't want.
If hard links from the get go were copy on write, then I suspect content defined storage would've become the standard because it would be easy.
Instead we have this construct which makes it hard and dangerous (hard links hide data dependencies) on most Linux filesystems and no good answers (even ZFS can't easily handle a cp --reflink operation, and the problem is it's not the default anyway).
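(For anyone who hasn't played with the difference, a quick sketch on a filesystem that supports reflinks - btrfs or XFS; this is exactly what ext4 can't do, and what OpenZFS only gained recently as block cloning:)

    dd if=/dev/zero of=a.bin bs=1M count=16     # test file
    ln a.bin hard.bin                           # hardlink: same inode
    cp --reflink=always a.bin cow.bin           # CoW clone: shared extents
    echo x >> hard.bin                          # "modifies" a.bin too
    echo x >> cow.bin                           # a.bin untouched; only the
                                                # changed blocks get copied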
zdragnar · 6h ago
It would be fantastic if there was a way for these to declare what libraries they needed bundled, and a manager that would install the necessary dependencies into a shared location, so that only what wasn't already installed got downloaded.
Oh wait...
Gigachad · 1h ago
Flatpaks do that. The difference is that they let you pick any version of libraries rather than locking everything to fixed versions. So you end up with software that’s less broken and updates sooner, but has multiple copies of libraries on your computer.
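(You can see the version spread directly; a sketch using the current flatpak CLI:)

    # every branch listed here is a runtime some app pinned; two apps
    # wanting different GNOME runtime versions simply pull two branches
    flatpak list --runtime --columns=application,branch,size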
eikenberry · 2h ago
It would be even more fantastic if there was a way to compile everything into a single binary and distribute that so that there are no dependencies (other than the kernel).
Oh wait...
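(A minimal sketch of the "oh wait", since the flag has been there all along:)

    cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("hello"); return 0; }
    EOF
    cc -static -o hello hello.c   # one self-contained binary
    ldd ./hello                   # prints "not a dynamic executable"
    ./hello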
hedora · 1h ago
Yeah, but what if you wanted multiple copies of a library and also wanted to let more than one program share each version?
You’d need some sort of system that stores files and lets you name those files and also makes it possible for software to look them up by name.
Oh wait…
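(Which is, of course, sonames plus the dynamic linker; a sketch - exact library names and paths vary per distro:)

    # several major versions coexisting, each found "by name"
    ldconfig -p | grep 'libssl\.so'
    ls -l /usr/lib/x86_64-linux-gnu/libssl.so.*   # e.g. .so.1.1 and .so.3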
eikenberry · 31m ago
Shared libraries suck. It is much better to have a single binary w/ zero library dependencies. The only reason we can't have this is because of sub-standard tooling that makes compiling so expensive that we must suffer shared libraries.
gjsman-1000 · 6h ago
Sure, but we’ve tried that technique for about 20 years.
We learned that most app developers hate it; to the point they don’t even bother supporting the platform unless they are FOSS diehards.
Those that do screech about not using the packaged version on almost all of their developer forums, most often because the packages are out of date and users blame them for bugs that were already fixed.
This actually is infuriating - imagine fixing a bug, but 2 years later, the distribution isn’t shipping your update, and users are blaming you and still opening bug reports. The distribution also will not be persuaded, because it’s the “stable” branch for the next 3 years.
Basically, Linux sucks terribly, either way, with app distribution. Linux distributions have nobody to blame but themselves for being ineffectual here.
dredmorbius · 3h ago
> ...imagine fixing a bug, but 2 years later, the distribution isn’t shipping your update...
This grossly misstates the concept of a stable distribution (e.g., Debian stable, with which I'm most familiar).
Debian stable isn't "stable" in that packages don't change, to the point that updates aren't applied at all, it's stable in that functionality and interfaces are stable. The user experience (modulo bugs and security fixes) does not change.
Stable does receive updates that address bugs and security issues. What Stable does not do is radically revise programs, applications, and libraries.
Though it's more nuanced than that even: stable provides several options for tracking rapidly-evolving software, the most notorious and significant of which are Web browsers, with the major contenders updating quite frequently (quarterly or monthly, for example, for Google Chrome "stable" and "dev" respectively). That's expanded further with Flatpak, k8s, and other options in recent years.
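For concreteness, the stock route for tracking a faster-moving package on Debian stable is backports; a sketch, assuming the bookworm release and cockpit as an example of a package that actually has a backport:

    echo 'deb http://deb.debian.org/debian bookworm-backports main' \
        | sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt update
    sudo apt install -t bookworm-backports cockpit   # example package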
The catch is that updates require package maintainers to work on integrating and backporting fixes to code. More prominent and widely-used packages do this. The issue of old bugs being reported to upstream ... is a breakage of the system in several ways: distros' bug-tracking systems (BTSes) should catch (and be used by) their users, and upstream BTSes arguably should reject tickets opened on older (and backported) versions. The solutions are neither purely technical nor social, which makes them challenging. But in reality we should admit that:
- Upstream developers don't like dealing with the noise of stale bugs.
- Users are going to rant to upstream regardless of distro-level alternatives.
- Upstreams' BTSes should anticipate this and automate redirection of bugs to the appropriate channel with as little dev intervention as possible. Preferably none.
- Distros should increase awareness and availability of their own BTS systems to address bugs specific to the context of that distro.
- Distro maintainers should be diligent about being aware of and backporting fixes, and only fixes.
- Distros should increase awareness and availability of alternatives for running newer versions of software which aren't in the distro's own stable repos.
Widespread distance technological education is a tough nut regardless; there will be failings. The key is that, to the extent possible, those shouldn't fall on upstream devs. Though part of that responsibility, and awareness of the overall problem, *does* fall on those upstream devs.
rlpb · 5h ago
> The distribution also will not be persuaded, because it’s the “stable” branch for the next 3 years.
This is exactly what users want, though. E.g. if they want to receive updates more frequently on Ubuntu they can use the six-monthly releases, but most Ubuntu users deliberately choose the LTS over that option because they don't want everything updated.
martinald · 3h ago
At the end of the day the 'traditional' Linux packaging system, where distributions do it all for you, is totally outdated. Tbh I can remember being extremely annoyed with this in the early/mid 2000s, so I don't know if it was ever a good model.
On SaaS/mobile apps you often have new versions of software shipping daily. That's what users/developers want. They do not want 3-year-plus stale versions of their software being 'supported' by a third-party distro. I put 'supported' in quotes as it only really applies to security and whatnot; not terrible bugs in the software that are fixed in later versions.
Even on servers, where it arguably makes more sense, it has been entirely supplanted by Docker, which ships the _entire OS_ more or less as the 'app'. And even more damningly, most/nearly all people will use a third-party Docker repo to manage updates of the Docker 'core' software itself.
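(That third-party repo dance, condensed from Docker's own Ubuntu instructions - treat it as a sketch and check their docs for the current form:)

    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
        | sudo tee /etc/apt/keyrings/docker.asc > /dev/null
    echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
    sudo apt update && sudo apt install docker-ce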
And the reason no one uses the six-monthly releases is that the upgrade process is too painful and regresses too much. But even if it were 100% bulletproof, no one wants to be running 6-12 month out-of-date software on that either. Chrom(ium) is updated monthly and has a lot of important new features. You don't really want to be running 6-9 months out of date on that.
HdS84 · 2h ago
Exactly. In theory, the original Windows 10 model is the one most users want: a perpetually up-to-date OS which also runs up-to-date software. Yes, there might be reasons to pin something to an older version, but if this PC is on a network, security alone tells you to update ASAP.
Don't get me wrong, a working package manager is a very good addition to this model. But currently, most of the time I spend setting up an LTS Linux system goes on updating ancient git/docker/whatever versions.
hedora · 1h ago
Which users want UIs to change (and often break) multiple times a month?
Do you have any evidence to back that statement up?
gjsman-1000 · 5h ago
But if you’re a developer, that doesn’t change that many users do not understand, will not understand, and will open bug reports regularly.
When that happens, guess what you do? You trademark your software’s name and use the law to force distributions to not package unless concessions are granted. We’re beginning to see this with OBS, but Firefox also did this for a while.
As Fedora quickly found, when trademark law gets involved, any hope of forcing developers to allow packaging through a policy or opinion vote becomes hilariously, comically ineffectual.
The other alternative is to just not support Linux. Almost all major software has been happily taking that path, and the whole packaging mess gives no incentive to change.
rlpb · 2h ago
> You trademark your software’s name and use the law to force distributions to not package unless concessions are granted.
It isn't clear if this behaviour is legally enforceable. Distributions typically try to avoid the conflict. But they could argue that "we modified Firefox to meet our standards and here is the result" is a legally permitted use of that trademark. To my knowledge, this has never been tested.
mananaysiempre · 5h ago
> When that happens, guess what you do?
Ban the user that did not read the notice to go to the distro’s maintainers first.
dredmorbius · 3h ago
What's the Fedora trademark issue?
mook · 2h ago
Fedora had their own OBS Flatpak that had known-buggy (but newer and "supported") dependencies.
https://lwn.net/Articles/1011511/
That is how Flatpak works right now? If you read the article you can read about two different ways of handling it: runtimes and deduplication.
The problem is the applications have to use the exact same version of a library to get the benefits. With traditional package managers they usually only have 1 version available. With Flatpak you can choose your own version which results in many versions and as such they do not share dependencies. If distros had multiple versions of libraries you would end up with the exact same problem.
throwaway314155 · 3h ago
> watch in amazement as you turn it back on and see it brought to its knees as your computer and network are taxed to the max downloading and applying updates.
A touch overly dramatic...
dheera · 5h ago
To be fair, shared libraries have been problematic since the beginning of time.
In the Python world, something wants numpy>=2.5.13, another wants numpy<=2.5.12, yet Python has still not come up with a way to just do "import numpy==2.5.13" and have it pluck exactly that version and import it.
In the C++ world, I've seen code that spits out syntax errors if you use a newer version of gcc, others that spit out syntax errors if you use an older version of gcc, apt-get overwriting the shared library you depended on with a newer version, and lots of other issues. Install CUDA 11.2 and it tries to uninstall CUDA 11.1, never mind that you had something linked against it, and everything else in that ecosystem disobeys semantic versioning and doesn't work with later minor revisions.
It's such a shitshow that it fully makes sense to bundle all your dependencies if you want to ship something that "just works".
For your customer, storage is cheaper than employee time wasted getting something to work.
loloquwowndueo · 3h ago
Right but snaps don’t solve dependency hell (see content snaps which are shared library bundles).
o11c · 3h ago
That's what everybody uses `venv` for. Or `virtualenv` if you're stuck on old Python.
But as a rule, `<=` dependencies mean there's either a disastrous fault with the library, or else the caller is blatantly ignoring all the "do not enter" signs. `!=` dependencies, by contrast, are meaningful just to avoid a particular bug.
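(A sketch of the venv answer, reusing the made-up numpy versions from above - each tree gets its own private copy, which is exactly the disk-for-sanity trade this thread is about:)

    python3 -m venv env-old && python3 -m venv env-new
    env-old/bin/pip install 'numpy<=2.5.12'     # hypothetical versions
    env-new/bin/pip install 'numpy>=2.5.13'
    env-old/bin/python -c 'import numpy; print(numpy.__version__)'
    env-new/bin/python -c 'import numpy; print(numpy.__version__)'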
int_19h · 1h ago
Virtual environments don't solve the problem of two dependencies that you need having conflicting requirements.
2OEH8eoCRo0 · 3h ago
Devil's advocate: we have hundreds of stupid distros making choices, and the less I need to deal with their builds the better.
api · 3h ago
Containerization is and always was the ultimate “fuck it” answer to these problems.
“Fuck it, just distribute software in the form of tarballs of the entire OS.”
delusional · 3h ago
Yeah, I only trust the random developers that are probably running windows to package my Linux software.
The people making those "stupid distros" are (most likely by number) volunteers working hard to give us an integrated experience, and they deserve better than to be called "stupid".
INTPenis · 3h ago
I use flatpaks daily, but not many apps. Because I've been on Atomic Linux for a couple of years now, flatpak has become part of my daily life.
On this work laptop I have three flatpaks, Signal, Chromium and Firefox. They all take 1.6GiB in total.
On my gaming PC I have Signal, Flatseal, Firefox, PrismLauncher, Fedora MediaWriter and Steam, and obviously they take over 700G because of the games in Steam, but if I count just the other flatpaks they're 2.2GiB.
So yeah, not great, but on the other hand I don't care because I love the packaging of cgroups based software and I don't need many of them. I mean my container images take up a lot more space than my flatpaks.
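(If anyone wants to reproduce those numbers: flatpak reports nominal per-ref sizes, while du on the store shows what the disk actually loses after OSTree's hardlink dedup. A sketch:)

    flatpak list --columns=application,size   # nominal per-app sizes
    du -sh /var/lib/flatpak                   # actual on-disk total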
qbane · 6h ago
I hope articles like this can at least provide some hints when the size of a flatpak store grows without bound. It is definitely more involved than "it bundles everything like a node_modules directory hence..."
[Bug]: /var/lib/flatpak/repo/objects/ taking up 295GB of space: https://github.com/flatpak/flatpak/issues/5904
Why flatpak apps are so huge in size: https://forums.linuxmint.com/viewtopic.php?t=275123
Flatpak using much more storage space than installed packages: https://discussion.fedoraproject.org/t/flatpak-using-much-mo...
Your comment probably took more effort than the prompt given to the AI that produced said article.
Conclusion: Thank you for the links
massysett · 2h ago
> The name "Flatpak" is even a nod to IKEA's flatpacking
Which is hilarious: an IKEA flat pack takes up less space than the finished product. Linux Flatpak is the exact opposite.
jasonpeacock · 7h ago
The article mentions that Flatpak is not suitable for servers because it uses desktop features.
Does anyone know what those features are or have more details?
Linux generally draws a thin line between server and desktop; having “desktop only” dependencies is unusual unless it’s something like needing the KDE or GNOME GUI libraries.
mananaysiempre · 6h ago
This may refer to xdg-desktop-portal[1], which is usable without Flatpak, but Flatpak forces you to go through it to access anything outside the app’s private sandbox. In particular, access to user files is mediated through a powerbox (trusted file dialog) [2] provided by the desktop environment. In a sense, Flatpak apps are normal Linux apps to about the same extent that WinRT/UWP apps are normal Windows apps—close, but more limited, and you’re going to need significant porting in either direction.
(This has also made an otherwise nice music player[3] unusable to me other than by dragging and dropping individual files from the file manager, as all of my music lives in git-annex, and accesses through git-annex symlinks are indistinguishable from sandbox escape attempts. On one hand, understandable; on the other, again, the software is effectively useless because of this.)
[1] https://wiki.archlinux.org/title/XDG_Desktop_Portal
[2] https://wiki.c2.com/?PowerBox
[3] https://apps.gnome.org/Amberol
> On one hand, understandable; on the other, again, the software is effectively useless because of this.
Just in case you didn't already know, you can use Flatseal[1] to add the symlinked paths outside of those in the default whitelisted paths.
I think it's a good thing Flatpak has followed a security permissions system similar to Android's, as I think it's great for security, but I definitely think they need to make this process more integrated and user-friendly.
[1] https://flathub.org/apps/com.github.tchx84.Flatseal
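(Flatseal is a GUI over the same overrides you can set from the shell; a sketch for the git-annex case above - the app ID is Amberol's from the link, the path is just an example:)

    # let the app follow symlinks into the real store (read-only)
    flatpak override --user --filesystem=~/Music:ro io.bassi.Amberol
    flatpak override --user --show io.bassi.Amberol   # inspect the result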
It assumes that you have a DE running and depends on features like D-Bus. So it's not designed to run headless except for building flatpak packages.
LtWorf · 6h ago
AFAIK it cannot do CLI applications at all.
jeroenhd · 6h ago
It can, but because the Flatpak system depends on APIs like D-Bus, getting those to work in headless environments (SSH, framebuffer console, raw TTY) is a pain.
Flatpak will even helpfully link binaries you install to a directory you can add to your $PATH to make command line invocation easy.
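(Concretely - paths are for a system-wide install; per-user installs export under ~/.local/share/flatpak instead:)

    ls /var/lib/flatpak/exports/bin           # one wrapper per app ID
    export PATH="$PATH:/var/lib/flatpak/exports/bin"
    org.mozilla.firefox --version             # now invokable like any command
    flatpak run org.mozilla.firefox           # the equivalent long form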
_Soulou · 2h ago
Something I have been wondering about with Flatpak is RAM usage. Sharing dynamic libraries allows loading each into RAM only once, whereas if I use Signal, Chromium and various other Flatpaks, all the libs will be loaded multiple times (often each with its own version). So maybe disk is cheap, but RAM may be more limited, which looks like a limit to the generalization of this method of distribution. (You could tell me it's the same with containers.)
Am I right to think that? Has someone measured that difference on their workstation?
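You can measure it yourself: Pss in /proc charges shared pages fractionally, so the gap between Rss and Pss is roughly what a process shares with others. A sketch for one app (the process name is just an example):

    pid=$(pgrep -f chromium | head -n1)
    grep -E '^(Rss|Pss):' /proc/$pid/smaps_rollup
    # Rss >> Pss: lots of pages shared with other processes
    # Rss ~= Pss: libraries loaded privately - the duplicate-copy
    #            cost asked about above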
account-5 · 7h ago
I can't really comment about snap since I don't use Ubuntu, but I thought flatpaks would work similarly to how portable apps on Windows do. Clearly I'm wrong, but how is it that Windows can have portable apps of a similar size to their installable versions and Linux cannot? I know I'm missing something fundamental here, like how people blame Linux for lack of hardware support without acknowledging that hardware vendors do the work for Windows to work correctly.
Either way, disk space is cheap and abundant now. If I need the latest version of something I will use flatpaks.
blahaj · 7h ago
Just a guess, but Windows executables probably depend on a bunch of Windows APIs that are guaranteed to be there, while Linux systems are much more modular and do not have a common, let alone stable, userspace ABI. You can probably get small graphically capable binaries if you depend on Qt and just assume it to be present, but Flatpak precisely does not do that and bundles all the dependencies to be independent from shared dependencies of the OS outside of its control.
The article also mentions that AppImages can be smaller probably because they assume some common dependencies to be present.
And of course there are also tons of huge Windows software that come with all sorts of their own dependencies.
Edit: I think I somewhat misread your comment and progval is more spot on. On Linux you usually install software with a package manager that resolves dependencies and only installs the unsatisfied dependencies resulting in small install size for many cases while on Windows that is not really a thing and installers just package all the dependencies they cannot expect to be present and the portable version just does the same.
badsectoracula · 7h ago
The equivalent of "Windows portable apps" on Linux isn't flatpaks (these add a bunch of extra stuff and need some sort of support from the OS) but AppImages[0]. AppImages are still not 100% the same (and can never be as Windows applications can rely on A LOT more stuff to be there than Linux desktop apps) but functionally/UX-wise they're the closest: you download some program, chmod +x it and run it like any other binary you'd have on your PC.
Personally I vastly prefer AppImages to flatpaks (in fact I do not use flatpaks at all; I'd rather build the program from source - or not use it if the build process is too convoluted - instead).
[0] https://appimage.org/
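(The whole UX, for reference - the URL is a placeholder:)

    wget https://example.org/SomeApp-x86_64.AppImage   # hypothetical URL
    chmod +x SomeApp-x86_64.AppImage
    ./SomeApp-x86_64.AppImage     # no install step, no daemon, one file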
It's a matter of standardization and ABI stability. Linux itself promises an eternally stable syscall ABI, but everything else around it changes constantly. Windows is basically the opposite: no public syscall ABI, but you can always get a window on screen by linking USER32.dll and poking it with the correct structures. As a result, Windows apps can assume more, while desktop Linux apps have to ship more.
int_19h · 1h ago
If you're targeting Windows, you can assume the following things to be present:
- the entirety of Win32 API
- all the Windows Runtime APIs
- .NET Framework 4.7+
This is a lot of functionality. For example, the list above includes four different widget toolkits alone (Win32, WinForms, WPF, WinRT XAML), several libraries to handle networking (including HTTP), USB, 2D and 3D graphics including text rendering, HTML renderer etc.
And all of this has a highly stable ABI, so long as you do everything by the book. COM/WinRT and .NET provide a stable ABI for high-level object-oriented APIs above and beyond what the basic C ABI can offer.
johnny22 · 2h ago
> Clearly I'm wrong, but how is it that windows can have portable apps of a similar size to their installable versions and Linux cannot
They can't depend on many APIs existing, or existing at the right version. Linux distros are made from a collection of various third-party projects, and distros just integrate those. Each of these third-party projects has its own development speed and its own ABI and API stability policies.
Each distro also has its own development speed and release policy, which means they might have things that are either too new or too old. Most distros also try to avoid packaging multiple versions of the same project when they can, to ease maintenance.
Heck, you can't even guarantee that you have the exact same libc. Most distros use glibc, but there are plenty of systems that use musl.
progval · 7h ago
Installable versions of Windows apps still bundle most of the libraries like portable apps do, because Windows does not have a package manager to install them.
maccard · 7h ago
Windows does have a package manager and has for the last 5 years.
kbolino · 6h ago
Apart from the Microsoft Visual C++ Runtime, there's not much in the way of third-party dependencies that you as a developer would want to pull in from there. Winget is great for installing lots of self-contained software that you as an end user want to keep up to date. But it doesn't really provide a curated ecosystem of compatible dependencies in the way that the usual Linux distribution does.
maccard · 6h ago
Ok but that’s a different argument to “windows doesn’t have a package manager”
kbolino · 4h ago
No, this is directly relevant to the comparison, especially since the original context of this discussion is about how Windows portable apps are no bigger than their locally installed counterparts.
A typical Linux package manager provides applications and libraries. It is very common for a single package install with yum/dnf, apt, pacman, etc. to pull in dozens of dependencies, many of which are shared with other applications. Whereas, a single package install on Windows through winget almost never pulls in any other packages. This is because Windows applications are almost always distributed in self-contained format; the aforementioned MSVCRT is a notable exception, though it's typically bundled as part of the installer.
So yes, Windows has a package manager, and it's great for what it does, but it's very different from a Linux package manager in practice. The distinction doesn't really matter to end users, but it does to developers, and it has a direct effect on package sizes. I don't think this situation is going to change much even as winget matures. Linux distributions carefully manage their packages, while Microsoft doesn't (and probably shouldn't).
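(Easy to see side by side; package names are illustrative:)

    # Linux: one install request resolves a tree of shared dependencies
    apt-get install --simulate vlc | grep -c '^Inst'   # typically dozens
    # Windows: one winget package is one self-contained installer
    winget install VLC.VLC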
maccard · 2h ago
I never said that WinGet was a drop-in replacement for yum - but the parent's claim that Windows doesn't have a package manager isn't true.
There are plenty of packages that require you to add extra sources to your package manager, that are not maintained by the distro. Docker [0] has official instructions to install via their package source. WinGet allows third-party sources, so there’s no reason you can’t use it. It natively supports dependencies too. The fact that applications are packaged in a way that doesn’t utilise this for WinGet is true - but again, I was responding to the claim that Windows doesn’t have a package manager.
[0] https://docs.docker.com/engine/install/fedora/#install-using...
Not as understood by users of every other operating system, even macOS. It's more of an "application manager". Microsoft has a history of developing something and reusing the well-understood term to mean something completely different.
keyringlight · 5h ago
Assuming you're talking about winget, that seems to operate either as an alternative CLI interface to the MS Store, with a separate database developers would need to add their manifests to, or by downloading and running normal installers in silent mode. For example, if you do winget show "Adobe Acrobat Reader (64-bit)" you can see what it will grab. It's a far cry from how most Linux package managers operate.
mjevans · 6h ago
Windows 2020 - Welcome to Linux 1999 where the distro has a package manager that has just about everything most users will ever need as options to install from the web.
maccard · 6h ago
I can say the same thing about Linux - it’s 2025 and multi-monitor, Bluetooth and WiFi support still doesn’t work.
yjftsjthsd-h · 4h ago
Er, yes they do? I guess things could be spotty if you don't have drivers (which... is true of any OS), but IME that's rare. But I have to ask because I keep hearing variations of this: What exactly is wrong with */Linux handling of multi-monitor? The worst I think I've ever had with it is having to go to the relevant settings screen and tell it how my monitors are laid out and hitting apply.
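(And for what it's worth, the CLI equivalent of that settings screen is one xrandr line - output names vary per machine:)

    xrandr --output HDMI-1 --mode 2560x1440 --right-of eDP-1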
maccard · 3h ago
>I guess things could be spotty if you don’t have drivers
Sure, and this unfortunately isn’t uncommon.
> What exactly is wrong with */Linux handling of multi-monitor?
X11’s support for multiple monitors that have mismatched resolutions/refresh rates is… wrong. Wayland improves upon this but doesn’t support G-Sync with Nvidia cards (even in the proprietary drivers). You might say that’s not important to you, and that’s fine, but it’s a deal breaker to me.
account-5 · 5h ago
The only things you can say in the context of the few pieces of bleeding-edge hardware that aren't supported by Linux are that:
1. The hardware vendors are still not providing support the way they do for Windows.
2. The Linux devs haven't managed to adapt to this new hardware.
mjevans · 5h ago
FUD (Fear Uncertainty Doubt).
Every OS has its quirks - things you might not recall as friction points because they're expected.
I haven't found any notable issues with quality hardware, possibly with some need to verify support in the case of radio transmitter devices. You'd probably have the same issue for, e.g., Mac OS X.
As consumers we'd have an easier time if:
1) The main chipset and 'device type ID' had to be printed on the box.
2) Model numbers had to change in a visible way for material changes to the Bill of Materials (any components with other specifications, including different primary chipset control methods).
3) Manufacturers at least tried one flavor of Linux, without non-GPL modules (common firmware blobs are OK), and gave a pass/fail on that.
maccard · 3h ago
I don’t think I am spreading FUD. Hardware issues with Linux on non-well-trodden paths are a well known problem. X11 (still widely used on many distros) has a myriad of problems with multi-monitor setups - particularly when resolutions and refresh rates don’t match.
You’re right that the manufacturers could provide better support, but they don’t.
wmf · 6h ago
Unfortunately a lot of Windows devs are targeting 10 year old versions.
account-5 · 6h ago
I'm replying to myself in reply to everyone who replied to me.
Thanks all for the explanations, much appreciated - I thought I was missing something. I really should have known, though; I've been using portable apps for over 20 years on Windows and remember .NET apps not being considered portable way back when, which are now considered portable since the runtime is on all modern Windows.
dismalaf · 7h ago
"Portable" apps on Windows just don't write into the registry or save state in a system directory. They can still assume every Windows DLL since the beginning of time will be there.
Versus Linux where you have Gnome vs. KDE vs. Other and there's less emphasis on backwards compatibility and more on minimalism, so they need to package a lot more dependencies (potentially).
If you only install Gnome Flatpaks they end up smaller since they can share a bunch of components.
butz · 6h ago
If you are space-conscious, you should try to select Flatpak apps that are using the same runtime (Freedesktop, GNOME or KDE), and make sure all of them are using exactly the same version of that runtime. Correct me if I'm wrong, but only two versions of a Flatpak runtime are supported at a time - current and previous. So while a transition to a newer runtime is happening, some application upgrades don't land at once, and the user ends up with more than one (and sometimes more than two) runtimes.
In addition to higher disk space usage, one must account for the usual updates too. The more programs and runtimes you have, the more updates to download. Good thing at least the updates are partial.
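(Both checks are one-liners with the current flatpak CLI:)

    flatpak list --runtime --columns=application,branch   # spot duplicate branches
    flatpak uninstall --unused    # drop runtimes no installed app still needs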
wltr · 5h ago
That was so useless and the style was so bad, I’m pretty sure it was written with (if not by) LLMs. Not even sure if I’m disappointed finding this low effort content here, or rather not surprised at all. I wish the content here would be more interesting, but maybe I’d want to find some other community for that.
I mean, the comments are much more interesting than this piece of content, but the content itself is almost offending. At least the discussion is much more valuable than what I’ve just read by following that link.
haunter · 5h ago
What made Flatpaks more popular than Appimage? I thought the latter is "vastly" superior and really portable?
kalaksi · 2h ago
I don't claim to know the answer but flatpaks have easier distribution (flathub), package management, sandboxing, can share runtimes and are also portable in the sense that they work across linux distros. AppImage is a simple and even more portable format but not much else so I guess it's superior if you only want to maximize portability.
ReptileMan · 6h ago
Why does it seem that we try to both avoid and poorly reinvent the static linker with every new technology and generation? Windows has been fighting DLL hell for 30 years now. Linux seems unable to produce an alternative to DLL hell. Not sure how the macOS world fares.
gjsman-1000 · 6h ago
It feels, to me, like the Linux desktop has become an overly complicated behemoth, never getting anywhere due to its weight.
I still feel the pinnacle for modern OS design might be Horizon, by Nintendo of all people. A capability-based microkernel OS that updates in seconds, fits into under 400 MB (WebKit and NVIDIA drivers included), is fast enough for games, and hasn’t had an unsigned code exploit in half a decade. (The OS is extremely secure, but NVIDIA’s boot code wasn’t.)
Why can’t we build something like that?
wk_end · 6h ago
We can't build something quite like that because we demand a whole lot more from our general-purpose computing devices than we demand from our Switches.
For instance, the Switch - and I don't know where in the stack this limitation lies - won't even let you run multiple programs that use the network. You can't, say, download a new game while playing another one that happens to have online connectivity - even if you aren't using it!
On a computer, we want to be able to run dozens of programs at the same time, freely and seamlessly; we want them to be able to interoperate: share data, resources, libraries, you name it; we want support for a vast array of hardware and peripherals. And on and on.
A Switch, fundamentally, is a device that plays games. Simpler requirements leads to simpler software.
gjsman-1000 · 6h ago
This isn’t actually true, as you can use the Nintendo Switch Online app, or the eShop, while downloading games.
You just can’t play games while one is downloading. That’s a deliberate storage-speed and network-use optimization rather than a software limitation. You can also tell this by the notifications about online players from the system, even as you are playing an online game.
(Edit for posting too fast: The Switch does have a web browser, full WebKit even, which is used for the eShop and for logging in to captive portal Wi-Fi. Exploits are found occasionally, but the sandboxing has so far rendered these exploits mostly useless. Personally, I support this, as then Nintendo doesn’t have to worry about website parental controls.)
m4rtink · 5h ago
But AFAIK it still does not have a web browser, because they are scared of all the WebKit exploits people used to enable custom software on the PlayStation Vita. So rather than that, they released the Switch without a built-in web browser, even though it would be perfectly usable on the hardware and very useful in many cases.
yjftsjthsd-h · 4h ago
> fits into under 400 MB (WebKit and NVIDIA drivers included),
I don't think that's particularly hard if you only include support for one set of hardware and a single API/ABI for applications. Notably, no general-purpose OS does either of these things and people would probably not be pleased if one tried.
jeroenhd · 5h ago
Linux has supported online replacement for a while now, and can be compiled to dozens of megabytes in size. Whatever cruft Nvidia adds in their binary drivers will push the OS beyond 400MiB, but building a Linux equivalent isn't exactly impossible.
The problem with it is that it's a lot of work (just getting secure boot to work is a massive pain in itself) and there are a lot of drivers you need to manually disable or settings to manually toggle to get a Switch equivalent system. The Switch includes only code paths necessary for the Switch, so anything that looks like a USB webcam should be completely useless. Bluetooth/WiFi chipset drivers are necessary, but of course you only need the BLOBs for the specific hardware you're using.
Part of Nintendo's security strategy is the inability to get binary code onto the system that wasn't signed by Nintendo. You can replicate this relatively easily (basic GPG/etc. signature checks + marking externally accessible mount points as non-executable + only allowing execution of copies from those mounts after full signature verification). Also add some kind of fTPM-based encryption mechanism to make sure your device storage can't be altered. You then need to figure out some method of signing all the software any user of your OS could possibly need to execute, but if you're an OEM that shouldn't be impossible.
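(A sketch of the cheap version of that - hypothetical paths, and gpg standing in for whatever signing scheme an OEM would actually use:)

    # external media mounted non-executable
    mount -o noexec,nodev,nosuid /dev/sda1 /mnt/usb
    # copy off the media, verify a detached signature, then install
    cp /mnt/usb/app.bin /tmp/app.bin
    gpg --verify /mnt/usb/app.bin.sig /tmp/app.bin \
        && sudo install -m 0755 /tmp/app.bin /usr/local/bin/app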
Once you've locked down the system enough, you can start enforcing whatever sandboxing you need on top of whatever UI you prefer so your games can't be hacked. Flatpak/Snap/Docker all provide APIs for this already.
The tooling is all there, but there's no incentive for anyone to actually make it work. Some hardware OEMs do a pretty good job (Samsung's Tizen, for instance) but anything with a freely accessible debug interface or development interface is often quickly hacked. Most of the Linux user base want to use the same OS on their laptop and desktop and have all of their components work, and would also like the ability to run their own programs. To accomplish that, you have to give up a lot of security layers.
I doubt Nintendo's kernel is that secure, but without access to the source code and without a way to attack it, exploiting it is much harder. Add to that the tendency of Nintendo to sue, harass, and intimidate people trying to get code execution on their devices, and they end up with hardware that looks pretty secure from the outside.
Android and ChromeOS are also pretty solid operating systems in terms of general security, but their dependence on supporting a range of (vendor) drivers makes them vulnerable. Still, escalating from webkit to root on Android is quite the challenge, you'll need a few extra exploits for that, and those will probably only work on specific phones running specific software.
For what it's worth, you can get a pretty solid system by installing an immutable OS (Silverblue style) without root privileges. That still has some security mechanisms disabled for usability purposes, but it's a solid basis for an easily updateable, low-security-risk OS when installed correctly.
gjsman-1000 · 23m ago
For what it’s worth, Nintendo’s kernel has been reimplemented entirely as open-source, under the mesosphere/ folder of the Atmosphere project. Modded Switches use it over Nintendo’s original binary.
The creator of it (SciresM) has publicly stated that after reverse engineering and re-implementing both it and the 3DS kernel, he firmly believes it has zero security flaws remaining. The last 6 or so years of not one software exploit in Horizon capable of enabling piracy also bears this out.
He also gives the Switch 2, assuming it inherits the NVIDIA boot bug fix and the new anti-glitching capabilities, possibly over a decade of staying crack-free at this rate. Even then, the crack will almost certainly be a hardware mod.
anthk · 6h ago
Alpine Linux?
gjsman-1000 · 6h ago
Close; but the security still isn’t anywhere close.
On Alpine, if there’s a zero day in WebKit, you’d better check how your security is set up, and hope there’s not an escalation chain.
On Horizon, dozens of bugs in WebKit, the Broadcom Bluetooth stack, and the games have been found; they are still found regularly. They are also boring and completely useless, because the sandboxing is so tight.
You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.
yjftsjthsd-h · 4h ago
> Close; but the security still isn’t anywhere close. [...]
I think a lot of the security comes down to what compromises you're willing to make. Horizon doesn't have to support the same breadth of hardware or software as we expect out of normal OSs, so they can afford to reinvent the world on a secure microkernel. If we want to maintain backwards-compatibility (and we do, because otherwise it's dead on arrival) then we have to take smaller steps. Of course, we can take those steps; if you care about security then you should run your browser in a sandbox (firejail, bubblewrap, docker/podman) at which point a zero-day in the browser is lower impact (not zero risk, true, but again I don't see any way to fix that without throwing out performance or compatibility).
> You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.
I rather assumed that the Switch doesn't actually install OS updates in 5s either? The obvious way to do what they're doing is A/B updates in the background, after which you "apply" by rebooting, which Linux can do in 5s.
gjsman-1000 · 25m ago
You should pay attention to when a Switch updates. There is no A/B system - the installation literally is 5 seconds after the download is finished, followed by a reboot. It makes other operating systems look downright shameful.
As for backwards compatibility, Flatpak solves it, mostly. The underlying system doesn’t necessarily need to be Linux if it provides the right environment to run Flatpaks, maybe in a container.
pdimitar · 7h ago
[flagged]
dang · 4h ago
Can you please follow the site guidelines when posting to HN? You broke them badly in this thread, and we've had to ask you this many times before.
Apparently I did. Seems I underestimated the impact of what I perceived as a small rant.
[no longer replying non-constructively to anyone in this sub-thread]
yjftsjthsd-h · 4h ago
Okay, fair enough. Which part are you working on and how far have you gotten?
pdimitar · 3h ago
Elixir -> Rust -> SQLite library (FFI bridge). The FFI library is completed (without some of SQLite's advanced features that I don't deem important for a v1) and I am just adding more tests now, though the integration layer with Elixir's de-facto data mapper library (Ecto) has not been started yet. Which means that an announcement would be met with crickets, hence I'll work on that integration layer VerySoon™. Otherwise the whole thing wouldn't ever help anyone.
I do feel strongly about it as I believe most apps don't need a full-blown database server. I've seen what SQLite can do and to me it's still a hidden gem (or a blind spot) to many programmers.
So I am putting my sweat where my mouth is and will provide that to the Elixir community, for 100% free, no one-time payments and no subscription software.
And yes, I do get annoyed by privileged people casually working on completely irrelevant stuff that's never going to move anything forward. Obviously everyone has the right to do whatever they like in their free time, but announcements on HN I can't combine with that and they do annoy me greatly. "Oh look, it's just a hobby project but I want you all to look at it!" -- don't know, it does not make any sense to me. Seems pretentious and ego-serving but I could be looking at it wrong. Triggers tend to remove nuance after all.
renewiltord · 6h ago
But I don’t want to solve actual problems. I want to write the 3689th lisp interpreter in the world.
pdimitar · 6h ago
Your right and prerogative, obviously.
But out there, a stranger you care nothing about, will think less of you.
Wish I had that free time and freedom though... The things I would do.
renewiltord · 5h ago
You can have that free time. Stop posting on HN and write some code. I can do both but if I couldn’t I’d pick the latter.
pdimitar · 5h ago
[flagged]
teddyh · 2h ago
OK, say, for the sake of argument, that DwarFS solves the disk space problem. What about the RAM problem?
pdimitar · 2h ago
Addressing this requires me being interested in all the details (which would get me half way there on the road to being a contributor, which I'm not aiming at). I was responding to the central point of the article + ranted a bit.
I'm simply getting annoyed by the low effort that is put in such prominent open-source software.
And here I am, working in my corner on a soon-to-be-released open-source library that will likely see 100 users in a year at most... agonizing over increasing test coverage, which actually paid off: it uncovered bugs and I fixed them. And I enlisted LLMs and professional acquaintances to minimize or eliminate memory copying in the FFI part of the code...
...and the maintainers of a very prominent piece of open-source software have not even bothered to start looking at the lowest-hanging fruit: reducing disk usage.
These package formats (I’m looking at you snap as well) are disrespectful of users’ computers to the point of creating a problem where due to size, things take so long and bog the computer down so much, that the resource being used is no longer storage, but time (system and human time). And THAT is not cheap at all.
Don’t believe me, install a few dozen snaps, turn the computer off for a week, and watch in amazement as you turn it back on and see it brought to its knees as your computer and network are taxed to the max downloading and applying updates.
A common example is the ones with the gnome- prefix and the ones that end with -themes suffix.
What system are you talking about?
If hard links from the get go were copy on write, then I suspect content defined storage would've become the standard because it would be easy.
Instead we have this construct which makes it hard and dangerous (hard links hide data dependencies) on most Linux filesystems and no good answers (even ZFS can't easily handle a cp --reflink operation, and the problem is it's not the default anyway).
Oh wait...
Oh wait...
You’d need some sort of system that stores files and lets you name those files and also makes it possible for software to look them up by name.
Oh wait…
We learned that most app developers hate it; to the point they don’t even bother supporting the platform unless they are FOSS diehards.
Those that do screech about not using the packaged version on almost all of their developer forums, most often because they are out of date and users blame them for bugs that were already fixed.
This actually is infuriating - imagine fixing a bug, but 2 years later, the distribution isn’t shipping your update, and users are blaming you and still opening bug reports. The distribution also will not be persuaded, because it’s the “stable” branch for the next 3 years.
Basically, Linux sucks terribly, either way, with app distribution. Linux distributions have nobody to blame but themselves for being ineffectual here.
This grossly misstates the concept of a stable distribution (e.g., Debian stable, with which I'm most familiar).
Debian stable isn't "stable" in that packages don't change, to the point that updates aren't applied at all, it's stable in that functionality and interfaces are stable. The user experience (modulo bugs and security fixes) does not change.
Stable does receive updates that address bugs and security issues. What Stable does not do is radically revise programs, applications, and libraries.
Though it's more nuanced than that even: stable provides several options for tracking rapidly-evolving software, the most notorious and significant of which are Web browsers with the major contenders updating quite frequently (quarterly or monthly, for example, for Google Chrome "stable" and "dev" respectively). That's expanded further with Flatpack, k8s, and other options, in recent years.
The catch is that updates require package maintainers to work on integrating and backporting fixes to code. More prominent and widely-used packages do this. The issue of old bugs being reported to upstream ... is a breakage of the system in several ways: distro's bug-tracking systems (BTSes) should catch (and be used by) their users, upstream BTSes arguably should reject tickets opened on older (and backported) versions. The solutions are neither purely technical nor social, which makes solutions challenging. But in reality we should admit that:
- Upstream developers don't like dealing with the noise of stale bugs.
- Users are going to rant to upstream regardless of distro-level alternatives.
- Upstreams' BTSes should anticipate this and automate redirection of bugs to the appropriate channel with as little dev intervention as possible. Preferably none.
- Distros should increase awareness and availability of their own BTS systems to address bugs specific to the context of that distro.
- Distro maintainers should be dilligent about being aware of and backporting fixes and only fixes.
- Distros should increase awareness and availability of alternatives for running newer versions of software which aren't in the distro's own stable repos.
Widespread distance technological education is a tough nut regardless, there will be failings. The key is that to the extent possible those shouldn't fall on upstream devs. Though part of that responsibilty, and awareness of the overall problem, does* fall on those upstream devs.
This is exactly what users want, though. Eg. if they want to receive updates more frequently on Ubuntu then they can use the six monthly releases, but most Ubuntu users deliberately choose the LTS over that option because they don't want everything updated.
On SaaS/mobile apps you have often daily new versions of software coming out. That's what users/developers want. They do not want 3 year+ stale versions of their software being 'supported' by a third party distro. I put supported in comments as it only really applies to security and what not; not terrible bugs in the software that are fixed in later versions.
Even on servers where it arguably makes more sense it has been entirely supplanted by Docker which ships the _entire OS_ more or less as the 'app'. And even more damingly, most/nearly all people will use a 3rd party Docker repo to manage the docker 'core' software updates itself.
And the reason noone uses the six monthly releases is because the upgrade process is too painful and regresses too much. But - even if it was 100% bulletproof, noone wants to be running 6-12 month out of date software on that either. Chrom(ium) is updated monthly and has a lot of important new features in it. You don't really want to be running 6-9 months out of date on that.
Do you have any evidence to back that statement up?
When that happens, guess what you do? You trademark your software’s name and use the law to force distributions to not package unless concessions are granted. We’re beginning to see this with OBS, but Firefox also did this for a while.
As Fedora quickly found, when trademark law gets involved, any hope of forcing developers to allow packaging through a policy or opinion vote becomes hilariously, comically ineffectual.
The other alternative is to just not support Linux. Almost all major software has been happily taking that path, and the whole packaging mess gives no incentive to change.
It isn't clear if this behaviour is legally enforceable. Distributions typically try to avoid the conflict. But they could argue that "we modified Firefox to meet our standards and here is the result" is a legally permitted use of that trademark. To my knowledge, this has never been tested.
Ban the user that did not read go to the distro’s maintainers first.
https://lwn.net/Articles/1011511/
The problem is the applications have to use the exact same version of a library to get the benefits. With traditional package managers they usually only have 1 version available. With Flatpak you can choose your own version which results in many versions and as such they do not share dependencies. If distros had multiple versions of libraries you would end up with the exact same problem.
A touch overly dramatic...
In the Python world, something wants numpy>=2.5.13, another wants numpy<=2.5.12, yet Python has still not come up with a way to just do "import numpy==2.5.13" and have it pluck exactly that version and import it.
In the C++ world, I've seen code that spits out syntax errors if you use a newer version of gcc, others that spit out syntax errors if you use an older version of gcc, apt-get overwrites the shared library you depended on with a newer version, lots of other issues. Install CUDA 11.2, it tries to uninstall CUDA 11.1, never mind that you had something linked to it, and that everything else in that ecosystem disobeys semantics and doesn't work with later minor revisions.
It's such a shitshow that it fully makes sense to bundle all your dependencies if you want to ship something that "just works".
For your customer, storage is cheaper than employee time wasted getting something to work.
But as a rule, `<=` dependencies mean there's either a disastrous fault with the library, or else the caller is blatantly passing all the "do not enter" signs. `!=` dependencies by contrast are meaningful just to avoid a particular bug.
“Fuck it, just distribute software in the form of tarballs of the entire OS.”
The people making those "stupid distros" are (most likely by number) volunteers working hard to give us an integrated experience, and they deserve better than to be called "stupid".
On this work laptop I have three flatpaks, Signal, Chromium and Firefox. They all take 1.6GiB in total.
On my gaming PC I have Signal, Flatseal, Firefox, PrismLauncher, Fedora MediaWriter and Steam, and obviously they take over 700G because of the games in Steam, but if I count just the other flatpaks they're 2.2GiB.
So yeah, not great, but on the other hand I don't care because I love the packaging of cgroups based software and I don't need many of them. I mean my container images take up a lot more space than my flatpaks.
[Bug]: /var/lib/flatpak/repo/objects/ taking up 295GB of space: https://github.com/flatpak/flatpak/issues/5904
Why flatpak apps are so huge in size: https://forums.linuxmint.com/viewtopic.php?t=275123
Flatpak using much more storage space than installed packages: https://discussion.fedoraproject.org/t/flatpak-using-much-mo...
Conclusion: Thank you for the links
Which is hilarious: an IKEA flat pack takes up less space than the finished product. Linux flatpack is the exact opposite.
Does anyone know what those features are or have more details?
Linux generally draws a thin line between server and desktop, having “desktop only” dependencies is unusual less it’s something like needing the KDE or Gnome GUI libraries?
(This has also made an otherwise nice music player[3] unusable to me other than by dragging and dropping individual files from the file manager, as all of my music lives in git-annex, and accesses through git-annex symlinks are indistinguishable from sandbox escape attempts. On one hand, understandable; on the other, again, the software is effectively useless because of this.)
[1] https://wiki.archlinux.org/title/XDG_Desktop_Portal
[2] https://wiki.c2.com/?PowerBox
[3] https://apps.gnome.org/Amberol
Just in case you didn't already know, you can use Flatseal[1] to add the symlinked paths outside of those in the default whitelisted paths.
I think it's a good thing Flatpak have followed a security permissions system similar to Android, as I think it's great for security, but I definitely think they need to make this process more integrated and user friendly.
[1] https://flathub.org/apps/com.github.tchx84.Flatseal
Flatpak will even helpfully link binaries you install to a directory you can add to your $PATH to make command line invocation easy.
Am I right to think that? Has someone measured that difference on their workstation?
Either way disk space is cheap and abundant now. If I need thenlastest version of something I will use flatpaks.
And of course there are also tons of huge Windows software that come with all sorts of their own dependencies.
Edit: I think I somewhat misread your comment and progval is more spot on. On Linux you usually install software with a package manager that resolves dependencies and only installs the unsatisfied dependencies resulting in small install size for many cases while on Windows that is not really a thing and installers just package all the dependencies they cannot expect to be present and the portable version just does the same.
Personally i vastly prefer AppImages to flatpaks (in fact i do not use flatpaks at all, i'd rather build the program from source - or not use it if the build process is too convoluted - instead).
[0] https://appimage.org/
- the entirety of Win32 API
- all the Windows Runtime APIs
- .NET Framework 4.7+
This is a lot of functionality. For example, the list above includes four different widget toolkits alone (Win32, WinForms, WPF, WinRT XAML), several libraries to handle networking (including HTTP), USB, 2D and 3D graphics including text rendering, HTML renderer etc.
And all of this has a highly stable ABI, so long as you do everything by the book. COM/WinRT and .NET provide a stable ABI for high-level object-oriented APIs above and beyond what the basic C ABI can offer.
They can't depend on many apis existing or at the right version. Linux distros are made from a collection of various third party projects and distros just integrate those. Each of these third party projects has it's own development speed and ABI and API stability policies.
Each distro also has its own development speed and release policy, which means its packages may be either too new or too old. Most distros also try to avoid packaging multiple versions of the same project when they can, to ease maintenance.
Heck, you can't even guarantee that you have the exact same libc. Most distros use glibc, but there are plenty of systems that use musl.
A typical Linux package manager provides applications and libraries. It is very common for a single package install with yum/dnf, apt, pacman, etc. to pull in dozens of dependencies, many of which are shared with other applications. Whereas, a single package install on Windows through winget almost never pulls in any other packages. This is because Windows applications are almost always distributed in self-contained format; the aforementioned MSVCRT is a notable exception, though it's typically bundled as part of the installer.
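You can watch this difference directly; the package names below are just examples:

    # Linux: simulate an install and list the shared deps it would pull in
    apt-get install --simulate vlc
    # Windows: one package, one self-contained installer (run in PowerShell/cmd)
    winget install --id VideoLAN.VLC

The apt dry run typically prints a long list of shared libraries; the winget install fetches a single bundled installer.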
So yes, Windows has a package manager, and it's great for what it does, but it's very different from a Linux package manager in practice. The distinction doesn't really matter to end users, but it does to developers, and it has a direct effect on package sizes. I don't think this situation is going to change much even as winget matures. Linux distributions carefully manage their packages, while Microsoft doesn't (and probably shouldn't).
There are plenty of packages that require you to add extra sources to your package manager that are not maintained by the distro; Docker[0] has official instructions for installing via their own package source (sketched below). WinGet allows third-party sources, so there's no reason you can't do the same there, and it natively supports dependencies too. It's true that applications are rarely packaged in a way that takes advantage of this on WinGet, but again, I was responding to the claim that Windows doesn't have a package manager.
[0] https://docs.docker.com/engine/install/fedora/#install-using...
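From the linked page, the Fedora flow is roughly the following (check the page itself for the current package list, since it changes over time):

    sudo dnf -y install dnf-plugins-core
    sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
    sudo dnf install docker-ce docker-ce-cli containerd.io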
Sure, and this unfortunately isn’t uncommon.
> What exactly is wrong with */Linux handling of multi-monitor?
X11’s support for multiple monitors with mismatched resolutions/refresh rates is… wrong. Wayland improves on this but doesn’t support G-Sync with NVIDIA cards (even in the proprietary drivers). You might say that’s not important to you, and that’s fine, but it’s a deal breaker to me.
1. The hardware vendors are still not providing support the way they do for windows.
2. The Linux devs haven't managed to adapt to this new hardware.
Every OS has its quirks, things you might not recall as friction points because they're expected.
I haven't found any notable issues with quality hardware, beyond possibly needing to verify support in the case of radio transmitter devices. You'd probably have the same issue on, e.g., Mac OS X.
As consumers we'd have an easier time if: 1) The main chipset and 'device type ID' had to be printed on the box. 2) Model numbers had to change in a visible way for material changes to the Bill of Materials (any components with other specifications, including different primary chipset control methods). 3) Manufacturers at least tried one flavor of Linux, without non-GPL modules (common firmware blobs are OK) and gave a pass / fail on that.
You’re right that the manufacturers could provide better support, but they don’t.
Thanks all for the explanations, much appreciated; I thought I was missing something. I really should have known, though: I've been using portable apps for over 20 years on Windows, and I remember .NET apps not being considered portable way back when, which are now considered portable since the runtime ships with all modern Windows.
Versus Linux where you have Gnome vs. KDE vs. Other and there's less emphasis on backwards compatibility and more on minimalism, so they need to package a lot more dependencies (potentially).
If you only install Gnome Flatpaks they end up smaller since they can share a bunch of components.
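You can see the sharing for yourself: runtimes are listed separately from apps, and every app on the same runtime branch reuses the one copy (org.gnome.TextEditor below is just an example app; substitute one you have installed):

    # apps vs. the runtimes they share
    flatpak list --app
    flatpak list --runtime
    # which runtime a given app uses (look for the Runtime: line)
    flatpak info org.gnome.TextEditor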
I mean, the comments are much more interesting than this piece of content, and the content itself is almost offensive. At least the discussion is much more valuable than what I've just read by following that link.
I still feel the pinnacle for modern OS design might be Horizon, by Nintendo of all people. A capability-based microkernel OS that updates in seconds, fits into under 400 MB (WebKit and NVIDIA drivers included), is fast enough for games, and hasn’t had an unsigned code exploit in half a decade. (The OS is extremely secure, but NVIDIA’s boot code wasn’t.)
Why can’t we build something like that?
For instance, the Switch - and I don't know where in the stack this limitation lies - won't even let you run multiple programs that use the network. You can't, say, download a new game while playing another one that happens to have online connectivity - even if you aren't using it!
On a computer, we want to be able to run dozens of programs at the same time, freely and seamlessly; we want them to be able to interoperate: share data, resources, libraries, you name it; we want support for a vast array of hardware and peripherals. And on and on.
A Switch, fundamentally, is a device that plays games. Simpler requirements leads to simpler software.
You just can’t play a game at the same time one is downloading. That’s a deliberate optimization for storage speed and network use rather than a software limitation. You can also tell this from the notifications about online players that the system shows even while you are playing an online game.
(Edit for posting too fast: The Switch does have a web browser, full WebKit even, which is used for the eShop and for logging in to captive portal Wi-Fi. Exploits are found occasionally, but the sandboxing has so far rendered these exploits mostly useless. Personally, I support this, as then Nintendo doesn’t have to worry about website parental controls.)
I don't think that's particularly hard if you only include support for one set of hardware and a single API/ABI for applications. Notably, no general-purpose OS does either of these things and people would probably not be pleased if one tried.
The problem with it is that it's a lot of work (just getting secure boot to work is a massive pain in itself) and there are a lot of drivers you need to manually disable or settings to manually toggle to get a Switch equivalent system. The Switch includes only code paths necessary for the Switch, so anything that looks like a USB webcam should be completely useless. Bluetooth/WiFi chipset drivers are necessary, but of course you only need the BLOBs for the specific hardware you're using.
Part of Nintendo's security strategy is the inability to get binary code onto the system that wasn't signed by Nintendo. You can replicate this relatively easily (basic GPG/etc. signature checks + marking externally accessible mount points as non-executable + only allowing execution of those mounts/copies from those mounts after full signature verification). Also add some kind of fTPM-based encryption mechanism to make sure your device storage can't be altered. You then need to figure out some method of signing all the software any user of your OS could possibly need to execute, but if you're an OEM that shouldn't be impossible.
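A minimal sketch of the "non-executable until verified" part, assuming a dedicated download partition and GPG-signed payloads (everything here is illustrative, not a hardened design; the device, paths, and filenames are made up):

    # /etc/fstab: anything that lands from outside starts out inert
    /dev/sdb1  /downloads  ext4  noexec,nosuid,nodev  0 2

    # only copy into an executable location after the signature checks out
    gpg --verify /downloads/app.tar.gz.sig /downloads/app.tar.gz \
      && tar -xzf /downloads/app.tar.gz -C /opt/verified-apps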
Once you've locked down the system enough, you can start enforcing whatever sandboxing you need on top of whatever UI you prefer so your games can't be hacked. Flatpak/Snap/Docker all provide APIs for this already.
The tooling is all there, but there's no incentive for anyone to actually make it work. Some hardware OEMs do a pretty good job (Samsung's Tizen, for instance) but anything with a freely accessible debug interface or development interface is often quickly hacked. Most of the Linux user base want to use the same OS on their laptop and desktop and have all of their components work, and would also like the ability to run their own programs. To accomplish that, you have to give up a lot of security layers.
I doubt Nintendo's kernel is that secure, but without access to the source code and without a way to attack it, exploiting it is much harder. Add to that the tendency of Nintendo to sue, harass, and intimidate people trying to get code execution on their devices, and they end up with hardware that looks pretty secure from the outside.
Android and ChromeOS are also pretty solid operating systems in terms of general security, but their dependence on supporting a range of (vendor) drivers makes them vulnerable. Still, escalating from WebKit to root on Android is quite the challenge; you'll need a few extra exploits for that, and those will probably only work on specific phones running specific software.
For what it's worth, you can get a pretty solid system by installing an immutable OS (Silverblue style) without root privileges. That still has some security mechanisms disabled for usability purposes, but it's a solid basis for an easily updateable, low-security-risk OS when installed correctly.
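For reference, the Silverblue-style flow is already close in spirit to the Switch's apply-on-reboot updates: upgrades stage into a new OSTree deployment in the background and only take effect on the next boot. These are standard rpm-ostree commands:

    # stage the next deployment while the system keeps running
    rpm-ostree upgrade
    # see the booted vs. pending deployments
    rpm-ostree status
    # roll back if the new one misbehaves
    rpm-ostree rollback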
The creator of it (SciresM) has publicly stated that, after reverse engineering and re-implementing both it and the 3DS kernel, he firmly believes it has zero security flaws remaining. The last six or so years without a single software exploit in Horizon capable of enabling piracy also bear this out.
https://x.com/SciresM/status/1327631019583836160?ref_src=tws...
He also gives the Switch 2, assuming it inherits the NVIDIA boot bug fix and the new anti-glitching capabilities, possibly over a decade of staying crack-free at this rate. Even then, the eventual crack will almost certainly be a hardware mod.
On Alpine, if there’s a zero day in WebKit, you’d better check how your security is set up, and hope there’s not an escalation chain.
On Horizon, dozens of bugs in WebKit, the Broadcom Bluetooth stack, and the games have been found; they are still found regularly. They are also boring and completely useless, because the sandboxing is so tight.
You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.
I think a lot of the security comes down to what compromises you're willing to make. Horizon doesn't have to support the same breadth of hardware or software as we expect out of normal OSs, so they can afford to reinvent the world on a secure microkernel. If we want to maintain backwards-compatibility (and we do, because otherwise it's dead on arrival) then we have to take smaller steps. Of course, we can take those steps; if you care about security then you should run your browser in a sandbox (firejail, bubblewrap, docker/podman) at which point a zero-day in the browser is lower impact (not zero risk, true, but again I don't see any way to fix that without throwing out performance or compatibility).
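As a concrete example of the "sandbox your browser" advice, either of these is close to a one-liner; treat them as starting points rather than a complete policy, since a real browser needs extra binds (display socket, a writable profile directory) beyond this skeleton:

    # firejail ships default profiles for common browsers
    firejail firefox
    # bubblewrap building block: read-only root, fresh /proc and /dev,
    # empty /home, own PID namespace; bash stands in for the real app here
    bwrap --ro-bind / / --proc /proc --dev /dev --tmpfs /home \
          --unshare-pid bash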
> You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.
I rather assumed that the Switch doesn't actually install OS updates in 5s either? The obvious way to do what they're doing is A/B updates in the background, after which you "apply" by rebooting, which Linux can do in 5s.
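A toy version of that A/B scheme with two root partitions and a GRUB environment flag, entirely illustrative (the partition label is made up, and active_slot is an invented variable that your grub.cfg would have to read; real implementations like Android's or ChromeOS's are far more involved):

    # write the new image to the inactive slot while the system keeps running
    dd if=new-rootfs.img of=/dev/disk/by-partlabel/root_b bs=4M status=progress
    # flip the slot the bootloader reads; the "5 seconds" is then just the reboot
    grub-editenv /boot/grub/grubenv set active_slot=b
    reboot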
As for backwards compatibility, Flatpak solves it, mostly. The underlying system doesn’t necessarily need to be Linux if it provides the right environment to run Flatpaks, maybe in a container.
https://news.ycombinator.com/newsguidelines.html
[no longer replying non-constructively to anyone in this sub-thread]
I do feel strongly about it as I believe most apps don't need a full-blown database server. I've seen what SQLite can do and to me it's still a hidden gem (or a blind spot) to many programmers.
So I am putting my sweat where my mouth is and will provide that to the Elixir community, for 100% free, no one-time payments and no subscription software.
And yes, I do get annoyed by privileged people casually working on completely irrelevant stuff that's never going to move anything forward. Obviously everyone has the right to do whatever they like in their free time, but I can't square that with announcements on HN, and those do annoy me greatly. "Oh look, it's just a hobby project but I want you all to look at it!" -- I don't know, it makes no sense to me. It seems pretentious and ego-serving, but I could be looking at it wrong. Triggers tend to remove nuance, after all.
But out there, a stranger you care nothing about will think less of you.
Wish I had that free time and freedom though... The things I would do.
I'm simply getting annoyed by the low effort that is put in such prominent open-source software.
And here I am, working in my corner on a soon-to-be-released open-source library that will likely see 100 users in a year at most... agonizing over increasing test coverage, which actually paid off: it uncovered bugs, and I fixed them. And I enlisted LLMs and professional acquaintances to minimize or eliminate memory copying in the FFI part of the code...
...and the maintainers of a very prominent piece of open-source software have not even bothered to start looking at the lowest-hanging fruit: reducing disk usage.
That frustrated me and I expressed it.