I don't use Debian for servers nor personal computers anymore, but the fact that they themselves host a page explaining potential privacy issues with Debian makes me trust them a lot more, and feel safer recommending it to others when it fits.
pabs3 · 1h ago
That's just a wiki page, written by myself and a bunch of other Debian members/contributors. Don't read too much into it :)
keysdev · 5h ago
What are you using instead now? NixOS?
diggan · 5h ago
Yeah, NixOS for all servers (homelab + dedicated remote ones) and Arch on desktop.
spookie · 5h ago
Arch is a minefield in this regard, tbh
diggan · 4h ago
To be even more honest, it is what you make of it ¯\_(ツ)_/¯
moffkalast · 2h ago
Windows is also what you make of it with enough registry hacks; I'm not recommending it to anyone though.
lukan · 2h ago
Well, but Windows comes with spyware by default and actively tries to keep it that way. A registry hack might stop working at any time.
Windows is actively hostile to anything privacy-related.
Arch comes with a default of do-it-yourself. Lots of footguns, but no hostile OS behavior. A great difference to me.
diggan · 2h ago
Not really: sometimes it forces me to apply updates on shutdown/restart, even though I don't want to, and none of the registry hacks seems to be able to disable this behavior. I've heard some people talk about a special distribution/version of Windows where you can disable it, but I don't really feel like re-installing the entire OS just so that when I boot into/away from Windows I don't get forced to wait for the slow update twice (once now, and again the next time I boot Windows).
All because Ableton cannot be bothered to support Linux :/ I understand that though, just sucks...
s_ting765 · 1h ago
Arch has been bliss for me. I'm heavy on Flatpaks and primarily use Arch as a base operating system with very minimal config changes.
sshine · 7h ago
This policy is missing from nixpkgs, although there is a similar policy for the build process for technical reasons.
So I can add spotify or signal-desktop to NixOS via nixpkgs, and they won’t succeed at updating themselves. But they might try, which would be a violation of Debian’s guidelines.
It’s a tough line — I like modern, commercial software that depends on some service architecture. And I can be sure it will be sort of broken in 10-15 years because the company went bust or changed the terms of using their service. So I appreciate the principles upheld by less easily excited people who care about the long term health of the package system.
mort96 · 6h ago
In the process of trying to update, Spotify on NixOS will likely display some big error message about how it's unable to install updates, which results in a pretty bad user experience when everything is actually working as intended. It seems fair to patch software to remove such error messages.
jchw · 2h ago
To be fair, we (Nixpkgs maintainers) do sometimes remove or disable features that phone home, even though it's not policy. That said, it would be nice if it were. It was definitely discussed before (most recently after the devbox thing, I guess).
pabs3 · 6h ago
I'm glad that opensnitch is available in Debian trixie too, to mitigate the issues that Debian has not found yet.
binaryturtle · 1h ago
Why can't I get GNOME to stop calling home? (on a Debian installation) Each time I fire up my Debian VM with GNOME here on my OSX host system, Little Snitch pops up because of some weird connection to a GNOME web endpoint. A major pet peeve of mine.
28304283409234 · 30m ago
Please send patches.
vaporary · 5h ago
I was extremely disappointed to recently learn that visidata(1) phones home, and that this functionality has not been disabled in the Debian package, despite many people requesting its removal:
The maintainer’s responses in that thread are really frustrating. They just keep describing the bug as though the package’s behavior is acceptable.
I wonder what debian’s process is for dealing with such maintainers.
I hope they make “no phone home” actual policy soon.
ryandrake · 59m ago
Infuriating. The developer is just making excuses and refusing to address the users' actual concern. And why are they phoning home in the first place? What is this critical use case that requires this intrusion?
"This daily count of users is what keeps us working on the project, because otherwise we feel like we are coding into a void."
So, they wrote code to phone home (by default), and are now digging in and defending it... just for their feelings? You've got to be kidding me!
JohnFen · 1h ago
This is one of my favorite things about Debian.
guappa · 7h ago
It's not guaranteed that they manage to catch all the software that does this though :D
phoe-krk · 7h ago
Any such leftover behavior is going to be a reportable and fixable bug then.
guappa · 7h ago
I'm not sure it's explicitly in the policy or if any team can decide what to do…
It's not guaranteed that policies enforce every possible case though.
pjmlp · 4h ago
So they have their own Go fork?
Just one possible example, among many others that have telemetry code in them.
deng · 4h ago
No they don't. The formulation in TFA is a bit too generic - Debian will usually not remove any code that "calls home". There are perfectly valid reasons for software to "phone home", and yes, that includes telemetry. In fact, Debian has its own "telemetry" system:
Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
RetroTechie · 3h ago
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data
Telemetry contains personal data by definition. It just varies in how sensitive it is and how it's used. Also, it's been shown repeatedly that 'anonymized' is shaky ground.
In that popcon example, I'd expect some Debian-run server to collect a minimum of data, aggregate, and Debian maintainers using it to decide where to focus effort w/ respect to integrating packages, keeping up with security updates, etc. Usually ok.
For commercial software, I'd expect telemetry to slurp whatever is legally allowed / stays under users' radar (take your pick ;), vendor keeping datapoints tied to unique IDs, and sell data on "groups of interest" to the highest bidder. Not ok.
Personal preference: eg. a crash report: "report" or "skip" (default = skip), with a checkbox for "don't ask again". That way it's no effort to provide vendor with helpful info, and just as easy to have it get out of users' way.
It's annoying the degree to which vendors keep ignoring the above (even for paying customers), given how simple it is.
dsr_ · 1h ago
The ongoing problem with popcon is that it's known not to be accurate, but since it's the data that's available, people make decisions based on it.
popcon is least likely to be turned on by:
- organizations with any kind of sensible privacy policy (which includes almost everyone running more than a handful of machines)
- individuals concerned about privacy
popcon is most likely to be turned on by Debian developers, and people new to Debian who have just installed it for the first time.
deng · 28m ago
Yeah, isn't that a shame? Wouldn't it be nice if, instead of catastrophizing that telemetry data is only ever there to spy on us, we assumed that there are actually trustworthy projects out there? Especially for FOSS projects, which usually cannot afford extensive in-house user testing, telemetry provides extremely valuable data on how their software is used and where it can be improved, especially in the UX department, where much FOSS is severely lacking. This thread is a perfect example of the black/white thinking that telemetry must be ripped out of software no matter what, usually based on the fundamental viewpoint that anonymity is impossible anyway, so why bother even trying. This is not helping. I usually turn on telemetry for FOSS that offers it, because I hope they will use it to actually improve the software.
diggan · 3h ago
> Telemetry contains personal data by definition
Why does it have to include PII by definition? I'd say DNF Counting (https://github.com/fedora-infra/mirrors-countme) should be considered "telemetry", yet it doesn't seem to collect any personal data, at least by my understanding of what telemetry and personal data mean.
I'm guessing that you'd either have to be able to argue that DNF Counting isn't telemetry, or that it contains PII, but I don't see how you could do either.
kevin_thibedeau · 1h ago
IPs are PII. You hit the server, and your anonymity is breached.
deng · 1h ago
Yes, so the vendor must not store it. Something along those lines is usually said in the privacy policy. If you don't trust the vendor to do that, then do not opt-in to sending data, or even better, do not use the vendor's software at all.
ryandrake · 1h ago
Sometimes we have to, or simply want to, run software from developers we don't know or entirely trust. This just means that the software developer needs to be treated as an attacker in your threat model, and mitigated against accordingly.
I would argue that users can't inherently trust the average developer anymore. Ideas about telemetry, phoning home, conducting A/B tests and other experiments on users, and fundamentally, making the software do what the developer wants instead of what the user wants, have been thoroughly baked in to many, many developers over the last 20 or so years. This is why actually taking privacy seriously has become a selling point: It stands out because most developers don't.
deng · 3h ago
> Telemetry contains personal data by definition.
No. Please look up the definition of "telemetry" and "personal data". The latter always refers to an identifiable person.
hedora · 2h ago
Virtually all anonymization schemes are reversible, so “identifiable” isn’t carrying any weight in your definition.
“Person” isn’t either, unless the software knows for sure it’s not being used by a person.
deng · 1h ago
By your definition, all data is PII.
Bender · 33m ago
Many corporate privacy policies per their customer contracts agree with this. Even a single packet regardless of contents is sending the IP address and that is considered by many companies to be PII. Not my opinion, it's in thousands of contracts. Many companies want to know every third party involved in tracking their employees. Deviating from this is a compliance violation and can lead to an audit failure and monetary credits. These policies are strictly followed on servers and less so on workstations but I suspect with time that will change.
deng · 19m ago
I can only repeat myself from above: it's about what data you store and analyze. By your definition, all internet traffic would fall under PII regulations because it contains IP addresses, which would be ludicrous; at least in the EU, there are already very strict regulations on how such data must be handled.
If you have a nginx log and store IP addresses, then yes: that contains PII. So the solution is: don't store the IP addresses, and the problem is solved. Same goes for telemetry data: write a privacy policy saying you won't store any metadata regarding the transmission, and say what data you will transmit (even better: show exactly what you will transmit). Telemetry can be done in a secure, anonymous way. I wonder how people who dispute this even get any work done at all. By your definitions regarding PII, I don't see how you could transmit any data at all.
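As a minimal sketch of that "don't store the IP addresses" approach: scrub the addresses before the log is persisted. The log line below is made up, and zeroing the last octet is just one crude scrubbing choice among many (truncation, hashing with a rotating salt, or dropping the field entirely are alternatives):

```shell
# Zero the last octet of every IPv4 address so the stored line
# no longer identifies an individual host.
printf '%s\n' '203.0.113.42 - - "GET / HTTP/1.1" 200' |
  sed -E 's/([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\.[0-9]{1,3}/\1.0/g'
# prints: 203.0.113.0 - - "GET / HTTP/1.1" 200
```

In practice you would run something like this in the pipeline between the web server and the log store, so the raw address never touches disk.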
Bender · 17m ago
> By your definitions regarding PII, I don't see how you could transmit any data at all.
On the server side you would not. Your application would just do the work it was intended to do and would not dial out for anything. All resources would be hosted within the data-center.
On the workstation it is up to the corporate policy and if there is a known data-leak it would be blocked by the VPN/Firewalls and also on the corporate managed workstations by IT by setting application policies. Provided that telemetry is not coded in a way to be a blocking dependency this should not be a problem.
Oh and this is not my definition. This is the definition within literally thousands of B2B contracts in the financial sector. Things are still loosely enforced on workstations meaning that it is up to IT departments to lock things down. Some companies take this very seriously and some do not care.
enfuse · 2h ago
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
This changed somewhat recently. Telemetry is enabled by default (I think as of Golang 1.23?)
If Golang doesn't fully address this, I guess Debian really should at least change the default (if they haven't already).
deng · 2h ago
It creates telemetry data, but actually transmitting it is opt-in.
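For reference, and to the best of my understanding of recent Go toolchains (worth verifying against the official docs): the mode is inspected and changed with the `go telemetry` subcommand, where `local` collects counters on disk without uploading, and only `on` opts in to transmission:

```shell
go telemetry          # print the current mode ("local" by default)
go telemetry off      # collect nothing at all, not even locally
go telemetry on       # opt in: locally collected counters may be uploaded
```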
enfuse · 1h ago
The attempts to contact external telemetry servers under the default configuration are the issue. That not all of the needlessly locally aggregated data would actually be transmitted is a separate matter.
layer8 · 4h ago
“Will remove” means that it’s one of the typical/accepted reasons why patches are applied by Debian maintainers, as in meaning 4 here [0], not that there is a guarantee of all telemetry being removed.
One of the many reasons I switched from Ubuntu to Debian 2 years ago. Another reason was snap.
beerandt · 2h ago
Between snap and the completely different network implementations of the "desktop" and "server" versions, I really fell back down the nix learning curve.
Especially since I was a novice at best before the systemd thing, and my Ubuntu dive involved trying to navigate all three of these pretty drastic changes at once (oh yeah, and throw containers on top of that).
I went into it with the expectation that it was going to piss me off, and boy did it easily exceed that threshold.
master_crab · 5h ago
Yup. Snap is emblematic of all the complexity Canonical bakes into Ubuntu.
hedora · 2h ago
That and the whole systemd stack. Canonical employees had enough votes to force it upstream into Debian.
I switched to devuan. It’s great, but it sucks that the community split over something so needlessly destructive.
mystified5016 · 2h ago
God, I wish someone would do this to discord already. I'm so sick of updating it through my package manager every other day only for discord to then download its own updates anyway.
Yes, I've disabled the update check. No, it doesn't solve the problem.
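For what it's worth, a commonly shared workaround for the upstream tarball/.deb desktop client (not a Debian package) is an undocumented flag in Discord's settings file. This is an assumption based on user reports, not anything Discord documents, so it may stop working at any time:

```shell
# SKIP_HOST_UPDATE is an undocumented, user-reported flag said to suppress
# Discord's self-updater. Note: this overwrites any existing settings.json,
# so back the file up first if you have other settings in it.
mkdir -p ~/.config/discord
cat > ~/.config/discord/settings.json <<'EOF'
{
  "SKIP_HOST_UPDATE": true
}
EOF
```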
BarbaryCoast · 2h ago
This is no longer true.
Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
And there's the inclusion of non-Free software in the base install, which is completely against the Debian Social Contract.
The Debian Project drastically changed when they decided to allow Ubuntu to dictate their release schedule.
What used to be a distro by sysadmins for sysadmins, and which prized stability over timeliness has been overtaken by Ubuntu and the Freedesktop.Org people. I've been running Debian since version 3, and I used to go _years_ between crashes. These days, the only way to avoid that is to 1) rip out all the Freedesktop.Org code (pulseaudio, udisks2, etc.), and 2) stick with Debian 9 or lower.
Bender · 30m ago
Firefox only updates on its own if installed outside of the package manager. This applies to Debian and its forks. If I click on Help -> About it says, "Updates disabled by your organization". I personally would like to see distributions suggest installing Betterfox [1] or Arkenfox [2] to tighten up Firefox a bit.
> Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
No, it's not. Stable ships ESR, which has its update mechanism disabled. Same for Testing/Unstable: it follows standard releases, but auto-update is disabled.
Even the official Firefox package for Debian from Mozilla has auto-updates disabled, and you get updates from the repository.
Only auto-updating version is the .tar.gz version which you extract to your home folder.
This is plain FUD.
Moreover:
Debian doesn't ship PulseAudio anymore; it's been PipeWire for ages. Many people didn't notice, it was that smooth. Ubuntu's changes are not allowed to permeate without proper rigor (I follow debian-devel), and Debian is still released when it's ready. Ubuntu follows Debian Unstable, which is a rolling release, so they can snapshot it and start working on it whenever they want.
I've been using Debian since version 3 too, and I still reboot or tend to my system only at kernel changes. It's way snappier than Ubuntu with the same configuration for the same tasks, and it's the Debian we all know and like (maybe sans systemd; I'll not open that can of worms).
officeplant · 2h ago
Long-time Debian fan, current Devuan user. I'm sure it still has its problems, but it feels nice and stable, especially on older hardware that is struggling with the times. (Thinkpad R61i w/ Core 2 Duo T8100 swapped in and Middleton BIOS)
michaelmrose · 1h ago
>Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
It seems likely that you personally chose to install a flatpak or tar.gz version probably because you are running an older no longer supported version of Debian.
>These days, the only way to avoid that (crashes) is...
Running older unsupported versions with known, never-to-be-fixed security holes isn't good advice, nor is ripping out the plumbing. It's almost never a great idea to start ripping out the floorboards to get at the pipes.
Pipewire seems pretty stable and if you really desire something more minimal it's better to start with something minimal than stripping something down.
Void is nice on this front for instance.
rmccue · 2h ago
As someone who maintained a (PHP) library that Debian distributed, it fucking sucked that they made source modifications. There were a number of times where they broke the library in subtle ways, and there was little to no indication to users of the library that they were running a forked version. I also never had any contact from them about the supposed "bugs" they were patching.
diggan · 2h ago
> it fucking sucked that they made source modifications
As a maintainer, I can certainly understand how it feels like that; I probably wouldn't feel great about it either. As a user, I'm curious what kind of modifications they felt were needed: what exactly did they change in your library?
rmccue · 2h ago
The library I was maintaining (SimplePie) was an RSS feed parser which supported every flavour of the RSS/Atom specs. Because of the history of those particular formats, there were a huge number of compatibility hacks necessary to parse real-world data, and cases where the "spec" (actually just a vague page on a website) was inaccurate compared to actual usage.
This was a while ago (10+ years), but my recollection is that someone presumably had reported that parts of the library didn't conform to the spec, and Debian patched those. This broke parsing actual feeds, and caused weeks of debugging issues that couldn't be replicated. Had they reported upstream in the first instance, I could have investigated, but there was no opportunity to do so.
dgb23 · 2h ago
That sounds very careless. Not only does this break an obviously deliberate feature, it also violates the robustness principle. Whether one likes it or not, that's a guiding principle for the web. Most importantly, this „fix“ was bad for its users.
Good intentions, but unfortunately a bad outcome.
There was a somewhat recent discussion on here about how OSS projects on GitHub are pestered by reports as well. Some authors commented that it even took away their motivation to publish code.
It's always the same mechanism, isn't it: the „why we can't have nice things“ issue. Everything gets at least slightly worse, because there are people who exploit a system or a trust-based relationship.
bityard · 20m ago
First I want to say that I love Debian. They have a great distro that is simple and quite frankly a joy to use, and manage to keep it all going on basically nothing but volunteer effort.
However, I do believe that in certain areas, they give too much freedom to package maintainers. The bar for being a package maintainer in Debian is relatively low, but once a package _has_ a maintainer--and barring any actual Debian policy violations--that person seems to have the final say in all decisions related to the package. Sometimes those decisions end up being controversial.
Your case is one example. Package maintainers ideally _should_ work with upstream authors, but are not required to because a LOT of upstream authors either cannot be reached, or actively refuse to be bothered by any downstream user. (The source tarball is linked on their home page and that's where their support ends!) I don't know what the solution is here, but there are probably improvements that could and should be made that don't require all upstream authors to subscribe to Debian development mailing lists.
diggan · 2h ago
Ah, that sounds like a terrible move on Debian's side, and very unexpected. Sure, patching security/privacy issues kind of makes sense (from a user's POV), but changing functionality like that makes much less sense, especially if they didn't even try to get the changes upstream.
lifthrasiir · 7h ago
The counterpoint would be the Debian-specific loss of private key entropy [1] back in 2008. While this is now a very ancient bug, the obvious follow-up question would be: how does Debian prevent or mitigate such incidents today? Was there any later (non-security, of course) incident of similar nature?
The upstream GnuPG project (and the standards faction they belong to) specifically opposes the use of keys without user IDs as it is a potential security issue. It is also specifically disallowed by the RFC4880 OpenPGP standard. By working through the Debian process, the proponents of such keys are bypassing the position of the upstream project and the standard.
RetroTechie · 2h ago
> There is a lot of political stuff in there related to standards.
To be fair, in Debian's case politics come with the territory. Debian is a vision of what an OS should be like. With policies, standards & guidelines aimed at that, treating the OS as a whole.
That goes well beyond "gather packages, glue together & upload".
Same goes for other distributions I suppose (some more, some less).
guappa · 7h ago
Do you have any statistics that show that Debian patches introduce more CVE worthy bugs than the software already contains? OpenSSL doesn't really have a pristine history.
Let's not forget that the patch had been posted on the OpenSSL mailing list and had received a go ahead comment before that.
Having said that, if you're asking whether there's a penetration-test team that reviews all the patches: no, there isn't. But neither is there for 99.999999999% of all software that exists.
lmm · 4h ago
The patch was posted on the wrong OpenSSL mailing list, and frankly that particular Debian bug was worse than anything else we've seen even from OpenSSL.
Last I knew Debian didn't do dedicated security review of patches to security-critical software, which is normal practice for other distributions.
kragen · 3h ago
It was plausibly the worst computer security bug in human history, but by the same token, it's hard to see it as indicating a systemic problem with either Debian or OpenSSL. When we're dealing with a once-in-history event like that, where it happens is pretty random. It's the problem of inference from a small sample.
ak217 · 57m ago
I think it's important to learn from incidents. It's clear there were design issues on both projects' sides that allowed that bug to happen, and in fact several of them were fixed in subsequent years (though not quickly, and not until major corporate sponsors got concerned about OpenSSL's maintenance).
SAI_Peregrinus · 2h ago
On the other hand it exposed that OpenSSL was depending on Undefined Behavior always working predictably. Something as simple as a GCC update could have had the same effect across far more systems than just Debian, with no patch to OpenSSL itself.
lmm · 1h ago
> On the other hand it exposed that OpenSSL was depending on Undefined Behavior always working predictably. Something as simple as a GCC update could have had the same effect across far more systems than just Debian, with no patch to OpenSSL itself.
No it wasn't. It was reading (and XORing into the randomness that would become the key being generated) uninitialised char values from an array whose address was taken; that results in unspecified values, not undefined behaviour.
SAI_Peregrinus · 13m ago
I see you're correct, I misremembered. That isn't really much better, since there's no requirement that unspecified values ever actually change. Compiler developers are free to always return `0x00` when reading any unspecified `char` value, which wouldn't provide any entropy. XORing it in guaranteed that it couldn't subtract entropy, but if there were no other entropy sources they failed to return an error. OpenSSL being able to generate 0 entropy and not return an error in its RNG was still an important bug to fix.
nullc · 1h ago
The crazy thing is that after this incident they restored the uninitialized usage and retained it there for the next half decade. It wasn't as mild as being a risk of future compilers destroying the universe: it made valgrind much less useful on essentially all users of OpenSSL, exactly what you want for security critical software.
(meanwhile, long before this incident fedora just compiled openssl with -DPURIFY which disabled the bad behavior in a safe and correct way).
lifthrasiir · 7h ago
That was the kind of answer I wanted to hear, thanks. (Of course I don't think Debian should be blamed for incidents.) Does Debian send other patches upstream as well? For example, I didn't know that Debian often creates man pages of its own.
pabs3 · 6h ago
Debian definitely aims to contribute upstream, but that doesn't always happen, due to upstream CLAs, because most Debian packagers are volunteers, many Debian contributors are busy, many upstreams are inactive and other reasons.
Ah, I meant more about policies and guidelines. I'm not well-versed in Debian processes so I can for example imagine that only some patches get sent to the upstream only at the maintainers' discretion. It seems that Debian at least has a policy to maintain patches separate from the upstream source though.
bayindirh · 6h ago
Debian uses the Quilt system for per-package patch maintenance. While packaging a piece of software you take the original source (i.e. orig.tar.gz), add patches on top of it with Quilt, and build it that way.
Then you run the tests, and if they pass, you package and upload it.
This allows a patch (or patch set) to be sent upstream as a bundle, saying "we did this; if you want to include it, it applies cleanly to version x.y.z, and any feedback is welcome".
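The workflow described above can be sketched roughly as follows; the patch and file names here are hypothetical, and this assumes you are inside an unpacked Debian source tree whose `debian/patches/` is managed by quilt:

```shell
quilt new 0001-disable-phone-home.patch   # start a new patch on top of the stack
quilt add src/updater.c                   # snapshot the pristine file before editing
$EDITOR src/updater.c                     # make the actual change
quilt refresh                             # record the diff under debian/patches/
quilt pop -a                              # unapply everything, back to pristine source
```

The recorded diff in `debian/patches/` is exactly what can then be mailed or submitted upstream.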
guappa · 7h ago
In theory you want all patches sent upstream, but if they exist for some Debian-specific reason then you cannot send them.
Patches are maintained separately because Debian doesn't normally repack the .tar.gz (or whatever) that projects publish, so as not to invalidate signatures and to let people check that the file is in fact the same. An exception is made when the project publishes an archive containing files that cannot legally be redistributed.
aragilar · 7h ago
https://research.swtch.com/openssl provides more context: openssl was asked about the change, and seemingly approved it (whether everyone understood what was being approved is a different question). It's not clear why openssl never adopted the patch (was everyone else just lucky?), but I wonder what the reaction would have been if the patch had been applied (or the lines hidden away by a build switch).
nullc · 1h ago
> It's not clear why openssl never adopted the patch
OpenSSL already had an option to safely disable the bad behavior, -DPURIFY.
t312227 · 2h ago
hello,
as always: imho (!)
i remember this incident - if my memory doesn't trick me:
it was openssl which accessed memory it didn't allocated to collect randomness / entropy for key-generation.
and valgrind complained about a possible memory-leak - its a profiling-tool with the focus on detecting memory-mgmt problems.
instead of taking a closer look / trying to understand what exactly went on there / causes the problem, the maintainer simply commented out / disabled those accesses...
mistakes happen, but the debian-community handled this problem very well - as in my impression they always do and did.
idk ... i prefere the open and community-driven approach from debian anytime over distributions which are associated to companies.
last but not least, the have a social contract.
long story short: at least for me this was an argument for the debian gnu/linux distribution, not against :))
just my 0.02€
hedora · 1h ago
But why patch it in debian, and not file an upstream bug?
It’s doubly important to upstream issues for security libraries: There are numerous examples of bad actors intentionally sabotaging crypto implementations. They always make it look like an honest mistake.
For all we know, prior or future debian maintainers of that package are working for some three letter agency. Such changes should be against debian policy.
master_crab · 5h ago
If it involves OpenSSL, I will give the benefit of the doubt to everyone else first over OpenSSL.
Why? Heartbleed.
jbverschoor · 3h ago
What would that have to do with phoning home?
frainfreeze · 25m ago
> Debian avoids including anything in the main part of its package archive it can’t legally distribute...
I respect the hell out of Debian and am grateful for everything they do for the larger ecosystem, but this is why I use Arch. It's so much easier just to refer to the official documentation for the software and know it will be correct. Also, I've never really encountered a situation where interop between software is broken by just sticking to vanilla upstream. Seems like modifying upstream is just a ton of work with so many potential breakages and downsides it's not really worth it.
bityard · 14m ago
You seem to be implying that Debian makes large significant changes to upstream software for the sake of integration with the rest of the OS and that Arch makes none at all. Neither of these is true.
jeroenhd · 6h ago
All of these reasons are good, but they're not comprehensive. Unless someone can tell me what category Debian's alterations to xscreensaver fall under, maybe. As far as I can tell, that was just done for aesthetic reasons and packagers disagreeing with upstream.
pabs3 · 6h ago
The patches and their explanations are listed here:
Edit: can't find any that are for aesthetic reasons.
mnau · 6h ago
91_remove_version_upgrade_warnings.patch is the one for aesthetic reasons.
Debian keeps ancient versions that have many since-fixed bugs. The upstream maintainer has to deal with the fallout of bug reports against obsolete versions. To mitigate his workload, he added an obsolete-version warning. Debian removed it.
quietbritishjim · 5h ago
I'll admit that I haven't inspected the patch, but how could that warning possibly work without checking version information somewhere on the internet? That was listed in the OP.
sneak · 5h ago
IIRC it just hardcodes the release date and complains if the current date is more than 2 or 3 years after it.
It’s somewhat reasonable. I agree Debian should patch out phone-home and autoupdate (aka developer RCE). They should have left the xscreensaver local-only warning in, though. It is not a privacy or system integrity issue.
jwz however is also off the rails with entitlement.
They’re both wrong.
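The local-only check described above (hardcoded release date, no network access) could look roughly like this sketch; the date and the two-year threshold here are hypothetical, not xscreensaver's actual values:

```shell
# Warn if this build's (hypothetical) hardcoded release date is over two
# years in the past. Everything is computed from the local clock; nothing
# is fetched from the network.
RELEASE_DATE="2022-06-01"
AGE_DAYS=$(( ( $(date +%s) - $(date -d "$RELEASE_DATE" +%s) ) / 86400 ))
if [ "$AGE_DAYS" -gt 730 ]; then
  echo "Warning: this build is over two years old; please upgrade."
fi
```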
mnau · 32m ago
People unfamiliar with code base can easily screw it, here is SimplePie example:
I don't think that approach is reasonable. When you are effectively making a fork, don't freeload on the existing project's name and burden its maintainer with problems you cause.
mschuster91 · 4h ago
> jwz however is also off the rails with entitlement.
Always remember not to link to his site from HN, because you'll get an NSFW testicle image when you click through to his site from HN. dang used to have rel=noreferrer on outgoing links, but that led to even more drama with other people...
Some people in the FOSS scene just love to stir drama, and jwz is far from the only one. Another example with such issues IMHO is the systemd crowd, although in that case ... IMHO it's excusable to a degree, as they're trying to solve real problems that make life difficult for everyone.
esperent · 2h ago
> Always remember to not link to his site from HN because you'll get a testicle NSFW image
What's his reason for targeting HN users this way?
mschuster91 · 1h ago
The testicle speaks for itself [1]. He has held a serious political grudge against VCs since way over a decade back [2]; the earliest mention of the JWZ testicles appearing on HN that I could find is over 9 years old [3].
The best part is when they swap FFmpeg or other libraries, make things compile somehow, don't test the results, and then ship completely broken software.
dfedbeef · 2h ago
You run another distro that does things better?
yoyohello13 · 41m ago
Fedora? Arch Linux? I have massive respect for the Debian maintainers, but I've had way fewer problems on those Distros.
JdeBP · 7h ago
The point about manual pages has always seemed to me to be one of the points where the system fails us. There are a fair number of manual pages that the world at large would benefit from having in the original softwares, that are instead stuck buried in a patches subdirectory in a Debian git repository, and have been for years.
This is not to say that Debian is the sole example of this. The FreeBSD/NetBSD packages/ports systems have their share of globally useful stuff that is squirrelled away as a local patch. The point is not that Debian is a problem, but that it too systematizes the idea that (specifically) manual pages for external stuff go primarily into an operating system's own source control, instead of that being the last resort.
jmmv · 1h ago
I spent years packaging software (mostly Gnome 2.x) for NetBSD. I almost-always tried to upstream the local patches that were needed either for build fixes or improvements (like flexibility to adapt to non-Linux file system hierarchies or using different APIs).
It was exhausting though, and an uphill battle. Most patches were ignored for months or years, with common “is this still necessary?” or “please update the patch; it doesn’t apply anymore” responses. And it was generally a lot of effort. So patches staying in their distros is… “normal”.
pabs3 · 6h ago
Usually the Debian manual page author or package maintainer will send that upstream. Same goes for patches. Sometimes upstream doesn't want manual pages, or wants it in a different format, and the Debian person doesn't have time to rewrite it.
JdeBP · 6h ago
There's a belief that this is usual. But having watched the process for a couple of decades, it seems to me that that is just a belief, and actual practice doesn't work that way. A lot of times this stuff just gets stuck and never sent along.
I also think that the idea that original authors must not accept manual pages is a way of explaining how the belief does not match reality, without accepting that it is the belief itself that is wrong. Certainly, the number of times that things work out like the net-tools example elsethread, where clearly the original authors do want manual pages, because they eventually wrote some, and end up duplicating Debian's (and FreeBSD's/NetBSD's) efforts, is evidence that contradicts the belief that there's some widespread no-manual-pages culture amongst developers.
capitol_ · 5h ago
It's also easy for people to have the opinion that those who do the unpaid work of packaging software should do even more work for free.
I have sent about 50 or so patches upstream for the 300 packages I maintain, and while it reduces the amount of work long-term, it's also a surprising amount of work.
Typically the Debian patches are licensed under the same license as the original project. So there is nothing stopping anyone who feels that more patches should be sent upstream to send them.
arp242 · 1h ago
I didn't ask for you to second-guess my software. I didn't ask you to ship modified (potentially broken and/or substantially different in opinionated ways) versions of my software under the same name.
If you're going to do that, then you should actually let people know. Otherwise don't do it. It's not about "but the license allows it", it's about what the right thing to do is.
Debian has given me the most grief of any Linux distro by far. Actually, Debian is the only system I can recall giving me grief. Debian pushes a lot of work to the broader ecosystem to people who never asked for it.
I didn't choose to be associated with Debian, but I have no choice in the matter. You did choose to be associated with the packages you maintain.
So don't give me any of that "but my unpaid time!". Either do the job properly or don't do it at all. Both are fine; no maintainer asked you to package anything. They're just asking you to not indirectly push work on them by shipping random (potentially broken and/or highly opinionated) patches they're never even told about.
ckastner · 6h ago
This.
And often it's not an unhelpful upstream, just an upstream that sees little use for man pages in their releases, and doesn't want to spend time maintaining documentation in parallel to what their README.md or --help provides (with which the man page must be kept in sync).
arp242 · 1h ago
Another issue is that these manpages can become outdated (and/or are downright wrong).
Overall I feel it's one of those Debian policies stuck in 1995. There are other reasonable ways to get documentation these days, and while manpages can be useful for some types of programs, they're less useful for others.
guappa · 7h ago
That only happens if the project lacks a manual page or if it's really bad.
JdeBP · 6h ago
"only happens" is a lot more often that you think. In my experience, "only" is quite frequent.
A randomly picked case in point:
Debian has had a local manual page for the original software's undocumented (in the old Sourceforge version) iptunnel(8) command for 7 years:
Not the best name for the article. My first guess was version changes, or software being added/removed from repo. Turns out this is about source code modification.
alias_neo · 7h ago
As a native (British) English speaker, I was also unclear until reading the article.
Personally, I believe s/change/modify would make more sense, but that's just my opinion.
That aside, I'm a big fan of Debian, it has always "felt" quieter as a distro to me compared to others, which is something I care greatly about; and it's great to see that removing of calling home is a core principle.
All the more reason to have a more catchy/understandable title, because I believe the information in those short and sweet bullet points are quite impactful.
pabs3 · 6h ago
Patching out privacy issues isn't in Debian Policy; it's just part of the culture of Debian. There are still unfixed/unfound issues too, so it is best to run opensnitch to mitigate some of those problems.
> it is best to run opensnitch to mitigate some of those problems
Opensnitch is a nice recommendation for someone concerned about protecting their workstation(s); for me, I'm more concerned about the tens of VMs and containers running hundreds of pieces of software that are always-on in my Homelab, a privacy conscious OS is a good foundation, and there are many more layers that I won't go into unsolicited.
pabs3 · 1h ago
Homelabs are usually running software not from a distro, so there are potentially more privacy issues there too. Firewalling outgoing networking, along with a filtering SOCKS proxy like privoxy, might be a good start.
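As a sketch of the "firewall outgoing networking" half of that advice, a default-deny nftables egress ruleset could look something like this (the allowed services are assumptions; a local proxy such as privoxy would listen on loopback and be the only sanctioned path out):

```
# Hypothetical nftables sketch: default-deny outbound, allowing only
# loopback (where a local filtering proxy listens), DNS, and flows
# that are already established.
table inet egress {
    chain output {
        type filter hook output priority filter; policy drop;
        oif "lo" accept
        ct state established,related accept
        udp dport 53 accept
    }
}
```

Anything that tries to phone home directly, rather than through the proxy, is then dropped at the host.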
twic · 3h ago
I understood what it meant immediately, but i think only because i already knew that Debian are infamous for doing this.
mnw21cam · 4h ago
Me too. I was hoping for an explanation of why software I've got used to, which works very well and isn't broken, keeps being removed from Debian in the next version because it is "unmaintained".
Affric · 7h ago
But what does Debian see as the risks of patching the software they distribute and how do they mitigate them?
guappa · 7h ago
Debian isn't a single person. A lot of patches are backport fixes for CVEs.
Then there's stuff like: "this project only compiles with an obsolete version of gcc" so the alternatives are dropping it or fixing it. Closely related are bugs that only manifest on certain architectures, because the developers only use amd64 and never compile or run it on anything else, so they make incorrect assumptions.
Then there's python that drops standard library modules every release, breaking stuff. So they get packaged separately outside of python.
There's also cherry picking of bugfixes from projects that haven't released yet.
Is there any reason you think debian developers are genetically more prone to making mistakes than anyone else? Considering that debian has an intensive test setup that most projects don't even have.
Affric · 46m ago
I have phrased it as the author has phrased it.
What gives you the idea I think Debian are any more prone to mistakes than anyone else? It’s one of the two distros I use at home. I admire the devs a great deal.
aragilar · 7h ago
I mean, it would depend on what the patch is? If you're adding a missing manpage, I'm not sure what can go wrong? Is changing the build options (e.g. enabling or disabling features) a patch, or an expected change (and if such a config option is bad, what blame should be put on upstream for providing it)? What about default config files (which could both make the software more secure or less, such as what cyphers to use with TLS or SSH)?
anthk · 6h ago
OpenBSD too, but for security and proper POSIX functions vs Linuxisms, such as wordexp.
hsbauauvhabzb · 7h ago
Do distro maintainers share patches, man pages, call home metrics and other data with other distros’ maintainers (and them back)?
Further, do they publish any change information publicly?
pabs3 · 6h ago
They usually send everything upstream, and everything is public in their source control. Some maintainers look at repology.org to find package stuff from other distros.
pjc50 · 6h ago
There should be a source package for every binary package, and patches are usually in a subdirectory of the package.
steeleduncan · 6h ago
> ... do they publish any change information publicly?
This is utter FUD, of course they do, it is an open source distribution. Everything can be found from packages.debian.org
sgc · 5h ago
They even have a portal that publishes this information specifically, with statistics, and many notes as to why a specific change has been made: https://udd.debian.org/patches
gspr · 5h ago
> Do distro maintainers share patches, man pages, call home metrics and other data with other distros’ maintainers (and them back)?
Yes, at a minimum the patches are in the Debian source packages. Moreover, maintainers are highly encouraged to send patches upstream, both for the social good and to ease the distribution's maintenance burden. An automated tool to look for such patches is the "patches not yet forwarded upstream" field on https://udd.debian.org/patches.cgi
commandersaki · 6h ago
Yeah no thanks, just look at the abominations like pure-ftpd, apache, nginx, etc. I don't need some weird opinionated configuration framework to go with the software I use.
liveoneggs · 3h ago
MySQL? Nah you get mariadb
sgarland · 2h ago
Tbh I’d rather have MariaDB. It’s wire-compatible, but has way more features, like a RETURNING clause. Why MySQL has never had that is a mystery (just kidding, it’s because Oracle).
Barrin92 · 5h ago
I second that. There are more than a few cases of package maintainers breaking software, and beyond that, it's effectively nothing but the "app store" model: an activist distributor inserting themselves between the user and the software.
It's why I'm really glad flatpaks/snaps/appimages and containerization are where they are at now, because it's greatly dis-intermediated software distribution.
gspr · 4h ago
Since this is the FOSS world, you are of course free to eschew distributions. But:
> it's effectively nothing but the "app store" model, having an activist distributor insert themselves between the user and software.
is just factually wrong. Distributions like Debian try to make a coherent operating system from tens of thousands of pieces of independently developed software. It's fine not to like that. It's fine to instead want to use and manage those tens of thousands of pieces of independent software yourself. But a distribution is neither an "app store", nor does it "insert itself" between the user and the software. The latter is impossible in the FOSS world. Many users choose to place distros between them and software. You can choose otherwise.
vbezhenar · 4h ago
I'm using Arch and, AFAIK, it tries to use upstream code as much as possible. That's much better model IMO.
gspr · 3h ago
I'm not trying to argue which distribution model is best, or whether one should avoid distributions altogether. That's messy, complicated, and full of personal variables for each individual.
I'm just trying to correct the notion that somehow a distro is an "app store" that "inserts itself" between the software and its users. A distribution is an attempt to make lots of disparate pieces of software "work together", at varying degrees. Varying degrees of modification may or may not factor into that. On one extreme is perhaps just a collection of entirely disjoint software, without anything attaching those pieces of software together. On the other extreme is perhaps something like the BSDs. Arch and Debian sit somewhere in between, at either side.
Thoughtful people can certainly disagree about what the correct degree of "work together" or modification is.
guappa · 4h ago
It's a better model until you need to fix a bug but upstream is unresponsive.
vbezhenar · 4h ago
Don't fix bugs, leave it to developers.
sgarland · 2h ago
Do you also leave trash on the ground when you come across it in public? Try to leave things better than you found them.
skydhash · 2h ago
> Don't fix bugs, leave it to developers
Said the developer.
Meanwhile the user is stuck with broken software.
Hackbraten · 4h ago
Why do you assume Debian packagers don’t do the same?
vbezhenar · 4h ago
Because it's well known that debian packagers alter software they package with unnecessary patches.
INTPenis · 7h ago
This is one of the reasons I switched to RHEL 10+ years ago.
I actually prefer the RHEL policy of leaving packages the way upstream packaged them, it means upstream docs are more accurate, I don't have to learn how my OS moves things around.
One example that sticks out in memory is postgres, RHEL made no attempt to link its binaries into PATH, I can do that myself with ansible.
Another annoying example that sticks out in Debian was how they create admin accounts in mysql, or how apt replaces one server software with another just because they both use the same port.
I want more control over what happens, Debian takes away control and attempts to think for me which is not appreciated in server context.
It swings both ways too, right now Fedora is annoying me with its nano-default-editor package. Meaning I have to first uninstall this meta package and then install vim, or it'll be a package conflict. Don't try and think for me what editor I want to use.
steeleduncan · 6h ago
> I actually prefer the RHEL policy of leaving packages the way upstream packaged them
I don't think RHEL is the right choice if this is your criteria. Arch is probably what you are looking for
ahoka · 7h ago
"I actually prefer the RHEL policy of leaving packages the way upstream packaged them"
Are you kidding now? Red Hat was always notorious for patching their packages heavily; just download an SRPM and have a look.
mrweasel · 6h ago
I don't think that's true for Red Hat, but it is true for Slackware.
If you want packages that works just like the upstream documentation, run Slackware.
Debian does add some really nice features in many of their packages, like an easy way to configure multiple uWSGI applications using a file per application in a .d directory. It's a feature of uWSGI, but Debian has just packaged it up really nicely.
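The setup being described is, roughly, one ini file per application dropped into a directory that the packaged uWSGI service scans. A hypothetical example (the exact paths and keys depend on the Debian uwsgi package and your app):

```ini
; /etc/uwsgi/apps-enabled/myapp.ini -- illustrative example
[uwsgi]
plugin    = python3
chdir     = /srv/myapp
module    = myapp.wsgi:application
socket    = /run/uwsgi/app/myapp/socket
processes = 4
```

Enabling or disabling an application is then just adding or removing one file, with no edits to a shared central config.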
yxhuvud · 4h ago
> I actually prefer the RHEL policy of leaving packages the way upstream packaged them
Unless something has changed in the 10 years since I last used anything RHEL-based, there is definitely no such policy.
cess11 · 6h ago
Pretty much everyone has had nano as the default for ages; at least that's how it seems to me, having had to figure out which vim package has script support and install it myself after every OS install for a long time.
And RedHat does a lot of fiddling in their distributions, you probably want something like Arch, which is more hands-off in that regard. Personally, I prefer Debian, it's the granite rock of Linux distributions.
Debian indeed does this. In the released Firefox, telemetry is disabled: https://wiki.debian.org/Firefox
For example, when closing firefox on OpenSUSE Leap 15.6, "pingsender" is launched to collect telemetry:
https://imgur.com/a/k3Nnbbj
It has been there for years. It is also on other distros.
https://wiki.debian.org/PrivacyIssues
So I can add spotify or signal-desktop to NixOS via nixpkgs, and they won’t succeed at updating themselves. But they might try, which would be a violation of Debian’s guidelines.
It’s a tough line — I like modern, commercial software that depends on some service architecture. And I can be sure it will be sort of broken in 10-15 years because the company went bust or changed the terms of using their service. So I appreciate the principles upheld by less easily excited people who care about the long term health of the package system.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1001647
https://github.com/saulpw/visidata/discussions/940
I wonder what debian’s process is for dealing with such maintainers.
I hope they make “no phone home” actual policy soon.
So, they wrote code to phone home (by default) and then dug in and defended it... just for their feelings? You've got to be kidding me!
https://wiki.debian.org/PrivacyIssues
Just one possible example, among many others that have telemetry code into them.
https://popcon.debian.org/
Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
Telemetry contains personal data by definition. It just varies how sensitive & how it's used. Also it's been shown repeatedly that 'anonymized' is shaky ground.
In that popcon example, I'd expect some Debian-run server to collect a minimum of data, aggregate, and Debian maintainers using it to decide where to focus effort w/ respect to integrating packages, keeping up with security updates, etc. Usually ok.
For commercial software, I'd expect telemetry to slurp whatever is legally allowed / stays under users' radar (take your pick ;), vendor keeping datapoints tied to unique IDs, and sell data on "groups of interest" to the highest bidder. Not ok.
Personal preference: eg. a crash report: "report" or "skip" (default = skip), with a checkbox for "don't ask again". That way it's no effort to provide vendor with helpful info, and just as easy to have it get out of users' way.
It's annoying the degree to which vendors keep ignoring the above (even for paying customers), given how simple it is.
popcon is least likely to be turned on by:
- organizations with any kind of sensible privacy policy (which includes almost everyone running more than a handful of machines)
- individuals concerned about privacy
popcon is most likely to be turned on by Debian developers, and people new to Debian who have just installed it for the first time.
Why it has to include PII by definition? I'd say DNF Counting (https://github.com/fedora-infra/mirrors-countme) should be considered "telemetry", yet it doesn't seem to collect any personal data, at least by what I understand telemetry and personal data to mean.
I'm guessing that you'd either have to be able to argue that DNF Counting isn't telemetry, or that it contains PII, but I don't see how you could do either.
I would argue that users can't inherently trust the average developer anymore. Ideas about telemetry, phoning home, conducting A/B tests and other experiments on users, and fundamentally, making the software do what the developer wants instead of what the user wants, have been thoroughly baked in to many, many developers over the last 20 or so years. This is why actually taking privacy seriously has become a selling point: It stands out because most developers don't.
No. Please look up the definition of "telemetry" and "personal data". The latter always refers to an identifiable person.
“Person” isn’t either, unless the software knows for sure it’s not being used by a person.
If you have a nginx log and store IP addresses, then yes: that contains PII. So the solution is: don't store the IP addresses, and the problem is solved. Same goes for telemetry data: write a privacy policy saying you won't store any metadata regarding the transmission, and say what data you will transmit (even better: show exactly what you will transmit). Telemetry can be done in a secure, anonymous way. I wonder how people who dispute this even get any work done at all. By your definitions regarding PII, I don't see how you could transmit any data at all.
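A minimal sketch of the "don't store the metadata" approach: rather than anonymizing identifiers after the fact, keep an allowlist of fields and drop everything else before anything is stored. (Field names here are made up for illustration.)

```python
import json

def scrub_report(raw: dict) -> dict:
    """Keep only explicitly allowlisted payload fields.

    The safer pattern is an allowlist: transport metadata (IP address,
    timestamps) and anything else not explicitly permitted is dropped,
    instead of trying to strip known-bad fields one by one.
    """
    allowed = {"app_version", "os", "crash_signature"}
    return {k: v for k, v in raw.items() if k in allowed}

raw = {
    "app_version": "1.4.2",
    "os": "linux",
    "crash_signature": "null deref in parse_feed",
    "ip": "203.0.113.7",   # transport metadata: never stored
    "username": "alice",   # PII: never collected
}
print(json.dumps(scrub_report(raw), sort_keys=True))
```

Publishing the allowlist (or showing the exact payload before sending) is what makes the privacy-policy claim verifiable.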
On the server side you would not. Your application would just do the work it was intended to do and would not dial out for anything. All resources would be hosted within the data-center.
On the workstation it is up to the corporate policy and if there is a known data-leak it would be blocked by the VPN/Firewalls and also on the corporate managed workstations by IT by setting application policies. Provided that telemetry is not coded in a way to be a blocking dependency this should not be a problem.
Oh and this is not my definition. This is the definition within literally thousands of B2B contracts in the financial sector. Things are still loosely enforced on workstations meaning that it is up to IT departments to lock things down. Some companies take this very seriously and some do not care.
This changed somewhat recently. Telemetry is enabled by default (I think as of Golang 1.23?)
I am only aware since I relatively recently ran into something similar to this on a fresh VM without internet egress: https://github.com/golang/go/issues/68976
https://github.com/golang/go/issues/68946
If golang doesn't fully address this I guess Debian really should at least change the default (if they haven't already).
[0] https://www.merriam-webster.com/dictionary/will
Especially since I was a novice at best before the systemd thing, and my Ubuntu dive involved trying to navigate all 3 of these pretty drastic changes at once (oh yeah, and throw containers on top of that).
I went into it with the expectation that it was going to piss me off, and boy did it easily exceed that threshold.
I switched to devuan. It’s great, but it sucks that the community split over something so needlessly destructive.
Yes, I've disabled the update check. No, it doesn't solve the problem.
Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
And there's the inclusion of non-Free software in the base install, which is completely against the Debian Social Contract.
The Debian Project drastically changed when they decided to allow Ubuntu to dictate their release schedule.
What used to be a distro by sysadmins for sysadmins, and which prized stability over timeliness has been overtaken by Ubuntu and the Freedesktop.Org people. I've been running Debian since version 3, and I used to go _years_ between crashes. These days, the only way to avoid that is to 1) rip out all the Freedesktop.Org code (pulseaudio, udisks2, etc.), and 2) stick with Debian 9 or lower.
[1] - https://github.com/yokoffing/Betterfox
[2] - https://github.com/arkenfox/user.js
No, it's not. Stable ships ESR, which has its update mechanism disabled. Same for Testing/Unstable: it follows standard releases, but autoupdate is disabled.
Even Official Firefox Package for Debian from Mozilla has its auto-updates disabled and you get updates from the repository.
Only auto-updating version is the .tar.gz version which you extract to your home folder.
This is plain FUD.
Moreover:
Debian doesn't ship pulseaudio anymore. It's pipewire since forever. Many people didn't notice this, it was that smooth. Ubuntu's changes are not allowed to permeate without proper rigor (I follow debian-devel), and it's still released when it's ready. Ubuntu follows Debian Unstable, and Unstable suite is a rolling release, and they can snapshot it and start working on it whenever they want.
I've been using Debian since version 3 too, and I still reboot or tend my system only at kernel changes. It's way snappier w.r.t. Ubuntu with the same configuration for the same tasks, and is the Debian we all know and like (maybe sans systemd. I'll not open that can of worms).
It seems likely that you personally chose to install a flatpak or tar.gz version probably because you are running an older no longer supported version of Debian.
>These days, the only way to avoid that (crashes) is...
Running older unsupported versions with known never-to-be-fixed security holes isn't good advice, nor is ripping out the plumbing. It's almost never a great idea to start ripping out the floorboards to get at the pipes.
Pipewire seems pretty stable and if you really desire something more minimal it's better to start with something minimal than stripping something down.
Void is nice on this front for instance.
As a maintainer, I can certainly understand how it feels like that; I probably wouldn't feel great about it either. As a user, I'm curious what kind of modifications they felt were needed. What exactly did they change in your library?
This was a while ago (10+ years), but my recollection is that someone presumably had reported that parts of the library didn't conform to the spec, and Debian patched those. This broke parsing actual feeds, and caused weeks of debugging issues that couldn't be replicated. Had they reported upstream in the first instance, I could have investigated, but there was no opportunity to do so.
Good intentions, but unfortunately bad outcome.
There was a somewhat recent discussion on here about how open-source projects on GitHub are pestered by reports as well. Some authors commented that it even took away their motivation to publish code.
It's always the same mechanism, isn't it: the "why we can't have nice things" issue. Making everything at least slightly worse, because there are people who exploit a system or trust-based relationship.
However, I do believe that in certain areas, they give too much freedom to package maintainers. The bar for being a package maintainer in Debian is relatively low, but once a package _has_ a maintainer--and barring any actual Debian policy violations--that person seems to have the final say in all decisions related to the package. Sometimes those decisions end up being controversial.
Your case is one example. Package maintainers ideally _should_ work with upstream authors, but are not required to because a LOT of upstream authors either cannot be reached, or actively refuse to be bothered by any downstream user. (The source tarball is linked on their home page and that's where their support ends!) I don't know what the solution is here, but there are probably improvements that could and should be made that don't require all upstream authors to subscribe to Debian development mailing lists.
[1] https://en.wikipedia.org/wiki/OpenSSL#Predictable_private_ke...
* https://udd.debian.org/patches.cgi?src=gnupg2&version=2.4.7-...
There is a lot of political stuff in there related to standards. For a specific example see:
* https://sources.debian.org/src/gnupg2/2.4.7-19/debian/patche...
The upstream GnuPG project (and the standards faction they belong to) specifically opposes the use of keys without user IDs as it is a potential security issue. It is also specifically disallowed by the RFC4880 OpenPGP standard. By working through the Debian process, the proponents of such keys are bypassing the position of the upstream project and the standard.
To be fair, in Debian's case politics come with the territory. Debian is a vision of what an OS should be like. With policies, standards & guidelines aimed at that, treating the OS as a whole.
That goes well beyond "gather packages, glue together & upload".
Same goes for other distributions I suppose (some more, some less).
Let's not forget that the patch had been posted on the OpenSSL mailing list and had received a go ahead comment before that.
Having said that, if you're asking if there's a penetration test team that reviews all the patches. No there isn't. Like there isn't any such thing on 99.999999999% of all software that exists.
Last I knew Debian didn't do dedicated security review of patches to security-critical software, which is normal practice for other distributions.
No it wasn't. It was reading (and xoring into the randomness that would become the key being generated) uninitialised char values from an array whose address was taken, which results in unspecified values, not undefined behaviour.
(meanwhile, long before this incident fedora just compiled openssl with -DPURIFY which disabled the bad behavior in a safe and correct way).
Then you run the tests, and if they pass, you package and upload it.
This allows a patch(set) to be sent to the upstream as a package, saying "we did this; if you want to include them, they apply cleanly to version x.y.z, and any feedback is welcome".
Patches are maintained separately because debian doesn't normally repack the .tar.gz (or whatever) that the projects publish, so as not to invalidate signatures, and to let people check that the file is in fact the same. An exception is made when the project publishes a file that contains files that cannot legally be redistributed.
OpenSSL already had an option to safely disable the bad behavior, -DPURIFY.
as always: imho (!)
i remember this incident - if my memory doesn't trick me:
it was openssl which accessed memory it hadn't allocated, to collect randomness / entropy for key-generation.
and valgrind complained about the use of uninitialised memory - it's a profiling tool with a focus on detecting memory-management problems.
* https://valgrind.org/
instead of taking a closer look / trying to understand what exactly was going on there / what caused the problem, the maintainer simply commented out / disabled those accesses...
mistakes happen, but the debian community handled this problem very well - as, in my impression, they always have.
idk ... i prefer the open and community-driven approach of debian anytime over distributions which are associated with companies.
last but not least, they have a social contract.
long story short: at least for me this was an argument for the debian gnu/linux distribution, not against :))
just my 0.02€
It’s doubly important to upstream issues for security libraries: There are numerous examples of bad actors intentionally sabotaging crypto implementations. They always make it look like an honest mistake.
For all we know, prior or future debian maintainers of that package are working for some three letter agency. Such changes should be against debian policy.
Why? Heartbleed.
Related: netdata to be removed from Debian https://github.com/coreinfrastructure/best-practices-badge/i...
https://udd.debian.org/patches.cgi?src=xscreensaver&version=...
Edit: can't find any that are for aesthetic reasons.
Debian keeps ancient versions with many already-fixed bugs. The upstream maintainer has to deal with the fallout of bug reports against an obsolete version. To mitigate his workload, he added an obsolete-version warning. Debian removed it.
It’s somewhat reasonable. I agree Debian should patch out phone-home and autoupdate (aka developer RCE). They should have left the xscreensaver local-only warning in, though. It is not a privacy or system integrity issue.
jwz however is also off the rails with entitlement.
They’re both wrong.
https://news.ycombinator.com/item?id=44061563
I don't think that approach is reasonable. When you are effectively making a fork, don't freeload on the existing project's name and burden him with problems you cause.
Always remember not to link to his site from HN, because you'll get an NSFW testicle image when you click a link to his site from HN. dang used to put rel=noreferrer on outgoing links, but that led to even more drama with other people...
Some people in the FOSS scene just love to stir drama, and jwz is far from the only one. Another group with such issues IMHO is the systemd crowd, although in their case ... IMHO it's excusable to a degree, as they're trying to solve real problems that make life difficult for everyone.
What's his reason for targeting HN users this way?
[1] NSFW https://imgur.com/32R3qLv
[2] (Redirects to NSFW, so open in incognito or you'll get the testicles) https://www.jwz.org/blog/2011/11/watch-a-vc-use-my-name-to-s...
[3] https://news.ycombinator.com/item?id=10804953
This is not to say that Debian is the sole example of this. The FreeBSD/NetBSD packages/ports systems have their share of globally useful stuff that is squirrelled away as a local patch. The point is not that Debian is a problem, but that it too systematizes the idea that (specifically) manual pages for external stuff go primarily into an operating system's own source control, instead of that being the last resort.
It was exhausting though, and an uphill battle. Most patches were ignored for months or years, with common “is this still necessary?” or “please update the patch; it doesn’t apply anymore” responses. And it was generally a lot of effort. So patches staying in their distros is… “normal”.
I also think that the idea that original authors must not accept manual pages is a way of explaining how the belief does not match reality, without accepting that it is the belief itself that is wrong. Certainly, the number of times that things work out like the net-tools example elsethread, where clearly the original authors do want manual pages, because they eventually wrote some, and end up duplicating Debian's (and FreeBSD's/NetBSD's) efforts, is evidence that contradicts the belief that there's some widespread no-manual-pages culture amongst developers.
I have sent about 50 or so patches upstream for the 300 packages I maintain, and while it reduces the amount of work long-term, it's also a surprising amount of work.
Typically the Debian patches are licensed under the same license as the original project. So there is nothing stopping anyone who feels that more patches should be sent upstream to send them.
If you're going to do that, then you should actually let people know. Otherwise don't do it. It's not about "but the license allows it", it's about what the right thing to do is.
Debian has given me the most grief of any Linux distro by far. Actually, Debian is the only system I can recall giving me grief. Debian pushes a lot of work to the broader ecosystem to people who never asked for it.
I didn't choose to be associated with Debian, but I have no choice in the matter. You did choose to be associated with the packages you maintain.
So don't give me any of that "but my unpaid time!". Either do the job properly or don't do it at all. Both are fine; no maintainer asked you to package anything. They're just asking you to not indirectly push work on them by shipping random (potentially broken and/or highly opinionated) patches they're never even told about.
And often it's not an unhelpful upstream, just an upstream that sees little use for man pages in their releases, and doesn't want to spend time maintaining documentation in parallel to what their README.md or --help provides (with which the man page must be kept in sync).
Overall I feel it's one of those Debian policies stuck in 1995. There are other reasonable ways to get documentation these days, and while manpages can be useful for some types of programs, they're less useful for others.
A randomly picked case in point:
Debian has had a local manual page for the original software's undocumented (in the old Sourceforge version) iptunnel(8) command for 7 years:
https://salsa.debian.org/debian/net-tools/-/blob/debian/sid/...
Independently, the original came up with its own, quite different, manual page 3 years later:
https://github.com/ecki/net-tools/blob/master/man/en_US/iptu...
Then Debian imported that!
https://salsa.debian.org/debian/net-tools/-/blob/debian/sid/...
This sort of thing isn't a rare occurrence.
Personally, I believe s/change/modify would make more sense, but that's just my opinion.
That aside, I'm a big fan of Debian; it has always "felt" quieter as a distro to me compared to others, which is something I care greatly about, and it's great to see that removing calls home is a core principle.
All the more reason to have a more catchy/understandable title, because I believe the information in those short and sweet bullet points is quite impactful.
https://wiki.debian.org/PrivacyIssues
> it is best to run opensnitch to mitigate some of those problems
Opensnitch is a nice recommendation for someone concerned about protecting their workstation(s). For me, I'm more concerned about the tens of VMs and containers running hundreds of pieces of software that are always on in my homelab; a privacy-conscious OS is a good foundation, and there are many more layers that I won't go into unsolicited.
Then there's stuff like: "this project only compiles with an obsolete version of gcc" so the alternatives are dropping it or fixing it. Closely related are bugs that only manifest on certain architectures, because the developers only use amd64 and never compile or run it on anything else, so they make incorrect assumptions.
Then there's Python, which drops standard library modules every release, breaking stuff. So they get packaged separately, outside of Python.
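For illustration, a small sketch of checking which of those removed modules a given interpreter still ships (the module names below come from PEP 594's removal list, which landed in Python 3.13; whether they import depends on your interpreter and on whether a distro packages a backport):

```python
import importlib.util
import sys

# Modules removed from the standard library by PEP 594 (Python 3.13).
# On older interpreters these still import; on newer ones they have to
# come from a separately packaged backport instead.
REMOVED_IN_3_13 = ["telnetlib", "cgi", "pipes", "crypt"]

def missing_stdlib_modules(names):
    """Return the subset of `names` this interpreter cannot import."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    gone = missing_stdlib_modules(REMOVED_IN_3_13)
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
          f"{len(gone)} of {len(REMOVED_IN_3_13)} modules unavailable")
```

A distro that packages these modules separately effectively keeps the first list empty for its users, at the cost of carrying them itself.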
There's also cherry picking of bugfixes from projects that haven't released yet.
Is there any reason you think debian developers are inherently more prone to making mistakes than anyone else? Considering that debian has an intensive test setup that most projects don't even have.
What gives you the idea I think Debian is any more prone to mistakes than anyone else? It's one of the two distros I use at home. I admire the devs a great deal.
Further, do they publish any change information publicly?
This is utter FUD, of course they do, it is an open source distribution. Everything can be found from packages.debian.org
Yes, at a minimum the patches are in the Debian source packages. Moreover, maintainers are highly encouraged to send patches upstream, both for the social good and to ease the distribution's maintenance burden. An automated tool to look for such patches is the "patches not yet forwarded upstream" field on https://udd.debian.org/patches.cgi
It's why I'm really glad flatpaks/snaps/appimages and containerization are where they are now, because they've greatly disintermediated software distribution.
> it's effectively nothing but the "app store" model, having an activist distributor insert themselves between the user and software.
is just factually wrong. Distributions like Debian try to make a coherent operating system from tens of thousands of pieces of independently developed software. It's fine not to like that. It's fine to instead want to use and manage those tens of thousands of pieces of independent software yourself. But a distribution is neither an "app store", nor does it "insert itself" between the user and the software. The latter is impossible in the FOSS world. Many users choose to place distros between them and software. You can choose otherwise.
I'm just trying to correct the notion that somehow a distro is an "app store" that "inserts itself" between the software and its users. A distribution is an attempt to make lots of disparate pieces of software "work together", at varying degrees. Varying degrees of modification may or may not factor into that. On one extreme is perhaps just a collection of entirely disjoint software, without anything attaching those pieces of software together. On the other extreme is perhaps something like the BSDs. Arch and Debian sit somewhere in between, at either side.
Thoughtful people can certainly disagree about what the correct degree of "work together" or modification is.
Said the developer.
Meanwhile the user is stuck with a broken software.
I actually prefer the RHEL policy of leaving packages the way upstream shipped them: it means upstream docs are more accurate, and I don't have to learn how my OS moves things around.
One example that sticks out in memory is postgres, RHEL made no attempt to link its binaries into PATH, I can do that myself with ansible.
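That linking step is a one-task sketch in Ansible (the /usr/pgsql-16 prefix is the PGDG-style versioned layout and is an assumption here; adjust to wherever the package actually installs):

```yaml
# Hypothetical sketch: symlink versioned PostgreSQL binaries onto PATH
# ourselves instead of having the package decide. Paths are assumptions.
- name: Link PostgreSQL client binaries into /usr/local/bin
  ansible.builtin.file:
    src: "/usr/pgsql-16/bin/{{ item }}"
    dest: "/usr/local/bin/{{ item }}"
    state: link
  loop:
    - psql
    - pg_dump
    - pg_restore
```

Doing it in your own config means the choice is explicit and versioned with the rest of your infrastructure, rather than buried in a package's post-install script.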
Another annoying example that sticks out in Debian was how they create admin accounts in mysql, or how apt replaces one server software with another just because they both use the same port.
I want more control over what happens, Debian takes away control and attempts to think for me which is not appreciated in server context.
It swings both ways too, right now Fedora is annoying me with its nano-default-editor package. Meaning I have to first uninstall this meta package and then install vim, or it'll be a package conflict. Don't try and think for me what editor I want to use.
I don't think RHEL is the right choice if this is your criteria. Arch is probably what you are looking for
Are you kidding now? Red Hat was always notorious for patching their packages heavily; just download an SRPM and have a look.
If you want packages that work just like the upstream documentation, run Slackware.
Debian does add some really nice features in many of their packages, like an easy way to configure multiple uWSGI applications using a file per application in a .d directory. It's a feature of uWSGI, but Debian has packaged it up really nicely.
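As a sketch of what that looks like (file and directory names from memory of Debian's uwsgi packaging, so treat the exact paths as assumptions): each app gets its own ini file under apps-available/, activated by a symlink in apps-enabled/, much like Apache's sites-enabled convention.

```ini
; /etc/uwsgi/apps-available/myapp.ini  (hypothetical example app)
; activate with: ln -s ../apps-available/myapp.ini /etc/uwsgi/apps-enabled/
[uwsgi]
plugin    = python3
chdir     = /srv/myapp
module    = myapp.wsgi:application
processes = 2
```

The init scripts then spin up one uWSGI instance per enabled file, so adding or removing an app never means editing a shared monolithic config.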
Unless something has changed in the 10 years that have passed since I last used anything RHEL-based, there is definitely no such policy.
And Red Hat does a lot of fiddling in their distribution; you probably want something like Arch, which is more hands-off in that regard. Personally, I prefer Debian; it's the granite rock of Linux distributions.