I am beginning to think that the terrible situation with dependency management in traditional C and C++ is a good thing.
Now, with systems like npm, maven or cargo, all you need to do to get a package is to add a line in a configuration file, and it fetches all the dependencies you need automatically from a central repository. Very convenient. However, you can quickly find yourself with 100+ packages from who knows where and hundreds of megabytes of code.
In C, traditionally, every library you include requires some consideration. There is no auto-download, and the library the user has may be a different version from the one you worked with, and you have to accommodate it, and so does the library publisher. Or you may have to ship it with your own code. Anyway, it is so messy that the simplest solution is often not to use a library at all and write the thing yourself, or even better, realize that you don't need the feature you would have used that library for.
It's a bad reason, and reinventing the wheel comes with its own set of problems, but at least the resulting code is of a manageable size.
otikik · 5h ago
I thought about this several years ago and I think I hit the right balance with these 2 rules of thumb:
* The closer something is to your core business, the less you externalize.
* You always externalize security (unless security is your exclusive core business)
Say you are building a tax calculation web app. You use dependencies for things like the css generation or database access. You do not rely on an external library for tax calculation. You maintain your own code. You might use an external library for handling currencies properly, because it's a tricky math problem. But you may want to use your own fork instead, as it is close to your core business.
On the security side, unless that's your speciality, there are people out there smarter than you and/or who have dedicated more time and resources than you to figure that stuff out. If you are programming a tax calculation web app you shouldn't be implementing your own authentication algorithm, even if keeping your tax information secure is one of your core needs. The exception is when your core business is literally implementing authentication and nothing else.
cogman10 · 51m ago
I think this helps, but I also think the default for any dev (particularly library authors) should be to minimize dependencies as much as possible. Dependencies have both a maintenance and a security cost. Bad libraries have deep and sprawling trees.
I've seen devs pull in frameworks just to get access to a single, simple-to-write function.
pphysch · 17m ago
There have been major F-ups in recent history with Okta, CrowdStrike, and so on. Keycloak had some major long-standing vulnerabilities. I've had PRs accepted in popular open-source IAM libraries a bit too easily.
Yeah, we shouldn't roll our own cryptography, but security isn't as clear-cut as this comment implies. It also frequently bleeds into your business logic.
Don't confuse externalizing security with externalizing liability.
the__alchemist · 2h ago
I would like to dig into point 2 a bit. Do you think this is a matter of degree, or of kind? Does security, in this, imply a network connection, or some other way that exposes your application to vulnerabilities, or is it something else? Are there any other categories that you would treat in a similar way as security, but to a lesser degree, or that almost meet that threshold for a special category, but don't?
SkiFire13 · 8h ago
How many vulnerabilities were due to badly reinventing the wheel in C/C++ though?
Also, people often complain about "bloat", but don't realize that C/C++ are often the most bloated ones precisely because importing libraries is a pain, so they try to include everything in a single library, even though you only need to use less than 10% of it. Look for example at Qt, it is supposed to be a UI framework but it ends up implementing vectors, strings, json parser and who knows how much more stuff. But it's just 1 dependency so it's fine, right?
phkahler · 1h ago
>> Look for example at Qt, it is supposed to be a UI framework but it ends up implementing vectors, strings, json parser and who knows how much more stuff. But it's just 1 dependency so it's fine, right?
Qt is an application development framework, not a GUI toolkit. This is one reason I prefer GTK (there are things I dislike about it too).
reaperducer · 16m ago
> How many vulnerabilities were due to badly reinventing the wheel in C/C++ though?
I don't know. Suppose you tell us.
ChrisSD · 12h ago
In my experience every developer, company, team, sub-team, etc has their own "library" of random functions, utilities, classes, etc that just end up being included into new projects sooner or later (and everyone and their dog has their own bespoke string handling libraries). Copy/pasting large chunks of code from elsewhere is also rampant.
I'm not so sure C/C++ solves the actual problem. It only sweeps it under the carpet, where it's much less visible.
ryandrake · 30m ago
> In my experience every developer, company, team, sub-team, etc has their own "library" of random functions, utilities, classes, etc that just end up being included into new projects sooner or later
Same here. And a lot of those homegrown functions, utilities and classes are actually already available, and better implemented, in the C++ Standard Library. Every C++ place I've worked had its own homegrown String class, and it was always, ALWAYS worse in all ways than std::string. Maddening. And you could never make a good business case to switch over to sanity. The homegrown functions had tendrils everywhere and many homegrown classes relied on each other, so your refactor would end up touching every file in the source tree. Nobody is going to approve that risky project. Once you start down the path of rolling your own standard library stuff, the cancer spreads through your whole codebase and becomes permanent.
achierius · 10h ago
It definitely does solve one problem. Like it or not, you can't be hit by supply chain attacks if you don't have a supply chain.
dgfitz · 7h ago
I mirror all deps locally and only build from the mirror. It isn’t an issue. C/C++ is my dayjob
josephg · 1h ago
This runs the risk of shipping C/C++ libraries with known vulnerabilities. How do you keep track of that? At least with npm / cargo / etc, updating dependencies is a single command away.
dgfitz · 1m ago
Pull, update, build?
procaryote · 5h ago
at some point you could mirror a supply chain attack... xz was a pretty long game and only found by accident for example
dgfitz · 4h ago
I’m sure I will.
Frieren · 6h ago
> every developer, company, team, sub-team, etc has their own "library" of random functions, utilities, classes, etc
You are right. But my conclusion is different.
If the team is stable and people have been there for a while, then developers know that code as well as the rest. So, when something fails they know how to fix it.
Bringing in generic libraries may create long call stacks of very generic code (usually templates) that is very difficult to debug, while adding a lot of functionality that is never used.
Bringing a new library into the code base needs to be a carefully thought-out decision.
grg0 · 12h ago
This is something that I think about constantly, and I have come to the same conclusion. While the idea of being able to trivially share code worldwide is appealing, so far it seems to encourage shittier software more than anything else, and the benefit of trivial sharing seems to be defeated by the downsides that bloat and bad software bring with them. Adding friction to code re-use (by means of having to manually download shit from a website and compile it yourself like it's 1995) seems to be a good thing for now, until a better package management system is figured out. The friction forces you to think seriously about whether you actually need that shit, or whether you can write the subset of the functionality you need yourself. To be clear, I also think C++ projects suffer a lot from re-inventing the wheel, particularly in the gamedev world, but that seems less bad than, e.g., initializing some nodejs framework project and starting with 100+ dependencies when you haven't even started to write shit.
rglullis · 5h ago
Cathedrals vs Bazaars.
Cathedrals are conservative. Reactionary, even. You can measure the rate of change by generations.
Bazaars are accessible and universal. The whole system is chaotic. Changes happen every day. No single agent is in control.
We need both to make meaningful progress, and it's the job of engineers to take any given problem and see where to look for the solution.
pixl97 · 12h ago
When doing SBOM/SCA we see apps with 1000+ deps. It's insane. We so often see large packages pulled in because a single function/behavior is needed, massively increasing the risk profile.
1over137 · 4h ago
Holy cow. What domain is this? Web-based probably?
whstl · 59m ago
Could be a Hello World React app using the legacy creator-tool :/
Of course, this is the whole environment except for Node.js itself. And Vite has improved it.
But there are definitely some tools that are worse than others.
pixl97 · 1h ago
Npm/node_modules is typically one of the worst offenders, but programmers can do this with any import/library based system.
staunton · 8h ago
> While the idea of being able to trivially share code worldwide is appealing, so far it seems to encourage shittier software more than anything else, and the benefit of sharing trivially seems to be defeated by the downsides that bloat and bad software bring with it.
A lot of projects would simply not exist without it. Linux comes to mind. I guess one might take the position that "Windows is fine", but would there ever have been any competition for Windows?
Another example, everyone would be rolling their own crypto without openssl, and that would mean software that's yet a lot more insecure than what we have. Writing software with any cryptography functionality in mind would be the privilege of giant companies only (and still suck a lot more than what we have).
There are a lot more examples. The internet and software in general would be set back ~20 years. Even with all the nostalgia I can muster, that seems like a much worse situation than today.
rgavuliak · 5h ago
I agree fully, most users care about making their lives easier, not about development purity. If you can't do both, the purist approach loses.
HPsquared · 9h ago
The phrase "cheap and nasty" comes to mind. Over time, some markets tend towards the cheap and nasty.
TeMPOraL · 5h ago
Some? Almost all. That's the default end state if there's actual competition on the market.
crabbone · 48m ago
This is all heuristic (read "guessing") and not a real solution to the problem.
The ground truth is that software bloat isn't bad enough of a problem for software developers to try and fight it. We already know how to prevent this, if we really want to. And if the problem were really hurting so much, we'd have automated ways of slimming down the executables / libraries.
In my role creating CI for Python libraries, I did more hands-on dependency management. My approach was to first install libraries with pip, see what was installed, research why particular dependencies had been pulled in, then, if necessary, modify the packages in such a way that unnecessary dependencies were removed, and "vendor" the third-party code (i.e. store it in my repository, at the version I need). This, obviously, works better for programs, where you typically end up distributing the program with its dependencies anyways. Less so for libraries, but in the context of CI this saved some long minutes of reinstalling dependencies afresh for every CI run.
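As an illustration, a minimal sketch of that vendoring pattern (the package name and the vendor/ layout are hypothetical, not from the original comment):

    # One-time step, from the repository root: install into a local
    # directory instead of site-packages, then commit the result.
    #
    #   pip install --target=vendor/ somepackage==1.2.3
    #
    # At runtime, resolve imports from the vendored copy first:
    import pathlib
    import sys

    sys.path.insert(0, str(pathlib.Path(__file__).resolve().parent / "vendor"))

    import somepackage  # loaded from vendor/, pinned at the audited version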
In the end, it was a much better experience than what you usually get with CI targeting Python. But nobody really cared: if CI took less than a minute to complete instead of twenty, very little was actually gained. The project didn't have enough CI traffic for this to have any real effect. So, it was a nice proof of concept, but it ended up being not all that useful.
ryandrake · 40m ago
The reason bloat doesn't get fixed is that it's a problem that doesn't really harm software developers. It is a negative externality whose pain is spread uniformly across users. Every little dependency developers add to make their work more convenient might increase the download size over the user's network by 100MB, or use another 0.5% of the user's CPU, or another 50MB of the user's RAM. The user gets hit, ever so slightly, but the developer sees only upside.
BrouteMinou · 2h ago
When you "Reinvent the wheel", you implement only what you need in an optimized way.
This gives a couple of advantages: you own your code; no bloat; usually simpler, due to not having all the bells and whistles; less abstraction, so faster (there is no free lunch); and a minimized attack surface for supply chain attacks...
For fun, the next time you are tempted to install a BlaZiNg FaSt MaDe in RuSt software: get the source, install cargo audit and run the cargo audit on that project.
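Concretely, that check is just two commands, assuming a Rust toolchain is already installed:

    cargo install cargo-audit
    cargo audit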
See how many vulnerabilities there are. So far, in my experience, every piece of software I have checked comes with its own list of vulnerabilities from transitive dependencies.
I don't know about npm; I only know it by reputation, and that's enough for me to avoid it.
nebula8804 · 2h ago
That wheel is only as good as your skill in making it. For many people (the majority, I'd guess) someone else making that wheel will have a better end result.
doublerabbit · 1h ago
The skill is produced by carving the wheel. You've got to start somewhere. Whether a mess or not, the resulting product is your own. By relying on dependencies you're forever reaching for a goal you'll never achieve.
ozim · 9h ago
Writing everything from scratch by hand is an insane take. It is not just reinventing the wheel; there are whole frameworks one should use, because writing that thing on your own will take you a lifetime.
Yes, you should not just pull in as a dependency something a kid wrote in his parents' basement for fun, or to get "OSS maintainer" on his CV.
But there are tons of legitimate libraries and frameworks from people who are better than you at that specific domain.
barrkel · 5h ago
That's not how it works.
Here's a scenario. You pull in some library - maybe it resizes images or something. It in turn pulls in image decoders and encoders that you may or may not need. They in turn pull in metadata readers, and those pull in XML libraries to parse metadata, and before you know it a fairly simple resize is costing you 10s of MB.
Worse, you pull in different libraries and they all pull in different versions of their own dependencies, with lots of duplication of similar but slightly different code. Node_modules usually ends up like this.
The point is not writing the resize code yourself. It's the cultural effect of friction. If pulling in the resize library means you need to chase down the dependencies yourself, first, you're more aware of the cost, and second, the library author will probably give you knobs to eliminate dependencies. Perhaps you only pull in a JPEG decoder because that's all you need, and you exclude the metadata functionality.
It's an example, but can you see how adding friction to pulling in every extra transitive dependency would have the effect of library authors giving engineers options to prune the dependency tree? The easier a library is to use, the more popular it will be, and a library that has you chasing dependencies won't be easy to use.
lmm · 3h ago
> You pull in some library - maybe it resizes images or something. It in turn pulls in image decoders and encoders that you may or may not need. They in turn pull in metadata readers, and those pull in XML libraries to parse metadata, and before you know it a fairly simple resize is costing you 10s of MB.
This is more likely to happen in C++, where any library that isn't header-only is forced to be an all encompassing framework, precisely because of all that packaging friction. In an ecosystem with decent package management your image resizing library will have a core library and then extensions for each image format, and you can pull in only the ones you actually need, because it didn't cost them anything to split up their library into 30 tiny pieces.
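As a sketch of what that looks like in Cargo terms, a consumer can opt into just one piece via feature flags (the crate and feature names below are invented, not a real library):

    [dependencies]
    imagekit = { version = "1", default-features = false, features = ["jpeg"] }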
nolist_policy · 2h ago
Do you have an example?
MonkeyClub · 3h ago
> The easier a library is to use, the more popular it will be
You're thinking correctly in principle, but I think this is also the cause of the issue: it's too easy to pull in a Node dependency, even thoughtlessly, so it's become popular.
It would require adding friction to move back from that and render it less easy, which would probably give rise to a new, easy and frictionless solution that ends up in the same place.
procaryote · 5h ago
There's a difference between "I need to connect to the database and I need to parse json, so I need two commonly used libs for those two things" and whatever npm is doing, and to some extent cargo or popular java frameworks are doing.
Building everything from scratch is insane, but so's uncritically growing a dependency jungle
actionfromafar · 8h ago
I feel you are arguing a bit of a strawman. The take is much more nuanced than write everything from scratch.
ozim · 6h ago
... simplest solution is often not to use a library at all and write the thing yourself, or even better, realize that you don't need the feature you would have used that library for ... the resulting code is of a manageable size..
I don't see the nuance there; that is my take on the comment. Those are pretty much its strongest statements, and its points in favor of using libraries are minimal.
That is why I added mine, strongly pointing out that real-world systems are not going to be of "manageable size" unless they are really small or a single person is working on them.
actionfromafar · 5h ago
For me "realize that you don't need the feature" is strong and also hits home. I sometimes prototype in C because it makes me think really hard about "what does this thing really have to do? What can I omit for now?"
While in for instance C# I tend to think "this would be simple to implement with whatever-fancy-thing-is-just-a-package-away".
Neither way can be judged as good or bad on its own.
A real world system is almost always part of a larger system or system of systems. Making one thing simple can make another complex. The world is messy.
socalgal2 · 12h ago
100, ha! The official rust docs, built in rust, use ~750 dependencies - cue the apologists
nradov · 11h ago
There are no absolute good or bad reasons here, it depends on the problem domain and usage environment. If you're writing code where safety or security matters then of course you need to carefully manage the software supply chain. On the other hand, if you're writing an internal utility for limited use with no exposure then who cares, pull in all the dependencies you need and git 'er done.
reaperducer · 17m ago
> Now, with systems like npm, maven or cargo, all you need to do to get a package is to add a line in a configuration file
They can't hack what doesn't exist.
Reducing surface area is sometimes the easiest security measure one can take.
account-5 · 9h ago
I'm not a professional dev, but I thought this is what tree-shaking is about? Certainly this happens in Flutter, whatever you feel about Flutter/Dart.
Or is this a sticking plaster? Genuinely don't know as I only develop personal projects.
victorNicollet · 8h ago
Tree-shaking is able to remove code that will never be called. And it's not necessarily good at it: we can detect some situations where a function is never called, and remove that function, but it's mostly the obvious situations such as "this function is never referenced".
It cannot detect a case such as: if the string argument to this function contains a substring shaped like XYZ, then replace that substring with a value from the environment variables (the Log4j vulnerability), or from the file system (the XML External Entity vulnerability). From the point of view of tree-shaking, this is legitimate code that could be called. This is the kind of vulnerable bloat that comes with importing large libraries (large in the sense of "has many complex features", rather than of megabytes).
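A tiny Python sketch of the same idea (the ${env:...} lookup feature is hypothetical, modeled on the Log4j pattern): any call site makes the whole feature reachable, so tree-shaking must keep it, vulnerability and all.

    import os

    def log(message):
        # "Feature": expand ${env:VAR} placeholders before printing.
        # The function is referenced, so dead-code elimination keeps
        # this branch - along with the data leak it enables.
        if "${env:" in message:
            start = message.index("${env:")
            end = message.index("}", start)
            var = message[start + 6:end]
            message = message[:start] + os.environ.get(var, "") + message[end + 1:]
        print(message)

    # Attacker-controlled input reaches the live feature:
    log("User-Agent: ${env:SECRET_TOKEN}")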
account-5 · 6h ago
Thanks for the explanations, much appreciated.
I suppose the options are then:
1. Write everything yourself: time consuming and hard, but less likely to lead to these types of vulnerabilities.
2. Import others' code: easy and takes no time, but can lead to vulnerabilities.
3. Use others' code, but only what you actually need. Maybe less time consuming than 1 but more than 2; adds a different sort of complexity; done correctly, less likely to lead to these vulnerabilities.
Not sure if there's any other options here?
klysm · 13h ago
Unfortunately that comes with the baggage of terrible memory safety. I do agree with the sentiment though, that deps should be taken with more consideration.
privong · 12h ago
> Unfortunately that comes with the baggage of terrible memory safety.
Isn't this unrelated to the parent post's thoughts about the benefits of the C/C++ ecosystem (or lack thereof) for dependency management? I.e., a Rust-like language could still exist with a dependency management system similar to what C/C++ have now -- that isn't predicated on how the language handles memory.
codr7 · 13h ago
Given how much critical software is written in C, and the number of problems we run into, I don't see a reason to keep repeating that line outside of the Rust marketing department.
Some people will always prefer C to Rust, might as well learn to live with that fact.
udev4096 · 10h ago
Remember how cloudflare (in 2017) leaked pretty much everyone's secret tokens in search engine cache due to a simple buffer overflow? Yeah, that wouldn't have happened with Rust
guappa · 8h ago
I've seen segmentation faults in java, go, python. All you need is a bug in a hidden library :)
MrJohz · 4h ago
A segfault won't leak sensitive data, though.
lelanthran · 9h ago
Remember that the most expensive exploit the world has ever seen was in a memory safe GC language?
My argument is that you are missing the point: the point is that a larger attack surface enables more exploits regardless of language.
When using a language that has tremendous friction in expanding the attack surface you tend to have a small attack surface as a result.
There's obviously a crossover point where you'd be safer with a memory safe language and a larger attack surface than with a memory unsafe language and a minuscule attack surface.
lmm · 3h ago
> Remember that the most expensive exploit the world has ever seen was in a memory safe GC language?
No I don't, which exploit are you talking about? The most expensive exploit I can think of was caused by heartbleed which was in a memory unsafe language. The "most expensive software bug" (not an exploit) caused by turning off the safe overflow handler in the language being used can hardly be considered an indictment of language level safety either. So what exploit are you talking about?
throw1111221 · 1h ago
Not the person you replied to, but they're probably talking about Log4j. It's a Java logging library that had a helpful feature where logging a special format string would pull code from a remote URL and execute it. So anywhere you can get a Java server to log something, you can run arbitrary code (e.g. by setting a malicious User-Agent). Estimates say 93% of enterprise cloud environments were affected.
I suppose Stuxnet could also count, where the initial infection depends on the human curiosity of plugging an unknown usb drive into an air gapped system.
codr7 · 9h ago
Yeah I know, if only we could rewrite the entire world in Rust everything would be rainbows and unicorns. But it's not going to happen, deal with it.
klysm · 10h ago
I never mentioned rust. I’m just saying C and C++ have terrible memory safety.
codr7 · 9h ago
And what's the alternative then, from your perspective?
What did you have in mind when you wrote the comment?
klysm · 2h ago
I had no alternative in mind. The topic at hand is security and bloat. C/C++ apps might be leaner in practice, but they are generally going to have memory safety bugs, which is a security problem.
matheusmoreira · 1h ago
> There is no auto-download
There is. Linux distributions have package managers whose entire purpose is to distribute and manage applications and their dependencies.
The key difference between Linux distribution package managers and programming language package managers is the presence of maintainers. Any random person can push packages to the likes of npm or PyPI. To push packages to Debian or Arch Linux, you must be known and trusted.
Programming language package managers are made for developers who love the convenience of pushing their projects to the world whenever they want. Linux distribution package managers are made for users who prefer to trust the maintainers not to let malware into the repositories.
Some measured amount of elitism can be a force for good.
jajko · 2h ago
Yeah everybody should reimplement their own security for example, that's a really smart fool-proof approach especially down the line, no real cases for any contrarian opinions.
I do get what you mean, but it works only on some very specific types of projects, when you & potentially comparably (very) good & skilled peers are maintaining and evolving it long term. This was never the case in my 20 years of dev career.
This progression from shared, well-tested libraries to gradual dependency hell exists in some form across all similar languages, since it's a pretty basic part of software development as an engineering discipline. I haven't seen a good silver bullet so far, and e.g. the past 14 years of my work wouldn't have been possible with the approach you describe.
atoav · 10h ago
Bloat might be correlated with the ease of bloating software, and it is indeed easier to do precisely that if you don't have to write everything yourself.
Bloat is uncontrolled complexity and making it harder to manage complexity reduces bloat. But it also makes it harder to write software that has to be complex for legitimate reasons. Not everybody should write their own library handling SSL, SQL or regex for example. But those libraries are rarely the problem, things like leftpad are.
Or: you can use package systems for good and for evil. The only real way to fight bloat is to be disciplined and vet your dependencies. It must cost you something to pull them in. If you had to read and understand everything you pull in, pulling in everybody and their dog would suddenly become less desirable.
Also, I think this is much more an issue of the quality of dependencies than of using dependencies per se (it would be stupid to write 1000 implementations of HTTP for a language; one that works really well is better).
udev4096 · 10h ago
Go has the most lean and simple dependency management. It's far better than npm or pypi dumpster fire
watermelon0 · 10h ago
It's also worth mentioning the extensive standard library and golang.org/x/, which means that you generally don't even need that many 3rd party packages.
udev4096 · 10h ago
Also the extensive measures to combat supply chain attacks on packages [0]
edit: lol at the downvotes. Go developers showing how insecure they are once again.
staunton · 8h ago
Since you're apparently interested in downvotes (why?), I'm pretty sure it's not due to criticism of Go but rather the fact that your criticism is entirely non-specific and therefore doesn't add anything to the discussion...
guappa · 7h ago
Because the comment I replied to was so specific?
There's plenty of perfectly good libraries on npm and pypi, and there's awful ones. Likewise for go which pulls from "the internet".
Must I really demonstrate that bad code exists in go? You want examples? There's plenty of bad libraries in go, and pinning to a commit is a terrible practice in any language. Encourages unstable APIs and unfixable bugs.
lmm · 3h ago
It added just as much to the discussion as the comment it was in reply to, so downvoting one but not the other seems somewhat unfair.
dvh · 8h ago
People often think "speed" when they read "bloat". But bloat often means layers upon layers of indirection. You want to change the color of the button in one dialog. You find the dialog code, change the color and nothing. You dig deeper and find that some modules use different colors for common button, so you find the module setting, change the color and nothing. You dig deeper and find that global themes can change colors. You find the global theme, change the color and nothing. You start searching entire codebase and find that over 17 files change the color of that particular button and one of those files does it in a timer loop because your predecessor couldn't find out why the button color changed 16 times on startup so he just constantly change it to brown once a second. That is bloat. Trivial change will take you half a day. And PM is breathing on your neck asking why changing button color takes so long.
alganet · 8h ago
No. What you described is known as technical debt.
Bloat affects the end user, and it's a loose definition. Anything that was planned, went wrong, and affects user experience could be defined as bloat (many toolbars like Office had, many purposes like iTunes had, etc).
Bloat and technical debt are related, but not the same. There is a lot of software that has a very clean codebase and bloated experience, and vice-versa.
Speed is an ambiguous term. It is often better to think in terms of real performance and user-perceived performance.
For example, many Apple UX choices prioritize user perceived performance instead of real performance. Smooth animations to cover up loading times, things such as that. Their own users don't even know why, they often cannot explain why it feels smooth, even experienced tech people.
Things that are not performant but appear to be fast are good examples of good user-perceived performance.
Things that are performant but appear to be slow exist as well (fast backend lacking proper cache layer, fast responses but throttled by concurrent requests, etc).
FirmwareBurner · 8h ago
>many Apple UX choices prioritize user perceived performance instead of real performance.
Then why does Apple still ship 60Hz displays in 2025? The perceived performance on scrolling a web page on 60Hz is jarring no matter how performant your SoC is.
jsheard · 3h ago
Apple backed themselves into a corner with desktop monitors by setting the bar for Retina pixel density so high, display manufacturers still aren't able to provide panels which are that large and very dense and very fast. Nobody makes 5K 27" 120hz+ monitors because the panels just don't exist, not to mention that DisplayPort couldn't carry that much data losslessly until quite recently.
There's no excuse for 60hz iPhones though, that's just to upsell you to more expensive models.
os2warpman · 5h ago
> Then why does Apple still ship 60Hz displays in 2025?
To push people who want faster displays to their more expensive offerings.
60Hz: $1000
120Hz: $1600
That's one reason, among many, why Apple has a $3 trillion market cap.
For a site with so many people slavishly obsessed with startups and venture capital, there seems to be a profound lack of understanding of what the function of a business is. (mr_krabs_saying_the_word_money.avi)
alganet · 7h ago
I don't know why.
I said many choices are focused on user-perceived performance, not all of them.
Refresh rate only really makes a case for performance in games. In everyday tasks, like scrolling, it's more about aesthetics and comfort.
Also, their scrolling on 60Hz looks better than scrolling on Android at 60Hz. They know this. Why they didn't prioritize using 120Hz screens is out of my knowledge.
Also, you lack attention. These were merely examples to expand on the idea of bloat versus technical debt.
I am answering out of kindness and in the spirit of sharing my perspective to point the thread in a more positive discussion.
FirmwareBurner · 7h ago
>Refresh rate only really makes a case for performance in games
Refresh rate really matters for everything in motion, not just games, that's why I said scrolling.
> In everyday tasks, like scrolling, it's more about aesthetics and comfort.
Smooth scrolling IS everyday comfort. Try going from 120Hz to 60Hz and see how you feel.
>their scrolling on 60Hz looks better than scrolling on Android at 60Hz.
Apple beat physics?
insomagent · 6h ago
Battery life? Temperature? Price-to-performance ratio? These are not decisions that are solved as simply as decreeing "every device must have at least 3000Hz refresh rate."
nicce · 3h ago
I have heard that battery life is the primary reason. After all, it is the screen and the modem that consume most of it.
You lack attention. It matters for comfort in everything. It matters for performance in games much more. Most users don't even know about refresh rate; they just know their iPhones feel good.
They don't let you scroll as fast as Android does, which makes the flickering, disorienting sensation of speed scrolling at a low refresh rate less prominent. It optimizes for comfort given the hardware they opted to use.
Android lets you scroll faster, and it does not adjust the scrolling dynamics according to the refresh rate setting. It's optimized for the high end models with 120Hz or more, so it sucks on low end settings or phones.
Some people take years to understand those things. It requires attention.
bob1029 · 5h ago
When it comes to building software for money, I prefer to put all of my eggs into one really big basket.
The fewer 3rd parties you involve in your product, the more likely you are to get a comprehensive resolution to whatever vulnerability arises as soon as a response is mounted. If it takes 40+ vendors to get pixels to your customers' eyeballs, the chances of a comprehensive resolution rocket toward zero.
If every component is essential, does it matter that we have diversified the vendor base? Break one thing and nothing works. There is no gradient or portfolio of options. It is crystalline in every instance I've ever encountered.
BobbyTables2 · 12h ago
At the library level, I dislike how coarse-grained most things are. Sadly, it becomes easier to reimplement things to avoid huge dependency chains.
Want a simple web server ? Well, you’re going to get something with a JSON parser, PAM authentication, SSL, QUIC, websockets, an async framework, database for https auth, etc.
Ever look at "curl"? The number of protocols is dizzying — one could easily think that HTTP is only a minor feature.
At the distro level, it is ridiculous that so long after Alpine Linux, the chasm between them and Debian/RHEL remains. A minimal Linux install shouldn’t be 1GB…
We used to boot Linux from a 1.44MB floppy disk. A modern Grub installation would require a sizable stack of floppies! (Grub and Windows 3.0 are similar in size!)
_fat_santa · 1h ago
> At the distro level, it is ridiculous that so long after Alpine Linux, the chasm between them and Debian/RHEL remains. A minimal Linux install shouldn’t be 1GB…
I would say this is a feature and not a bug. Alpine Linux is largely designed to be run in containerized environments so you can have an extremely small footprint cause you don't have to ship stuff like a desktop or really anything beyond the very very basics.
Compare that to Ubuntu which for the 5GB download is the "Desktop" variant that comes with much more software
procaryote · 5h ago
> Want a simple web server ? Well, you’re going to get something with a JSON parser, PAM authentication, SSL, QUIC, websockets, an async framework, database for https auth, etc.
Simple means different things for different people it seems. For a simple web server you need a tcp socket.
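A minimal sketch of that point in Python (loopback only, ignores the request entirely, not remotely production-grade):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        conn.recv(4096)  # read and discard the request
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()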
If you want a full featured high performance web server, it's not gonna be simple.
udev4096 · 10h ago
Alpine's biggest hurdle is musl: most software still relies on glibc. You should look into unikernels [0]; they are the most slimmed-down version of Linux that you can ship. I am not sure how different a unikernel is from a distroless image tho
I think we lost something with static linking when going from C to Dotnet. (And I guess Java.) Many C (and C++, especially "header only") libraries when statically linked are pretty good at filtering out unused code.
Bundling stuff in Dotnet is done much more at "runtime", often both by design of the library (it uses introspection¹) and by the tools².
1: Simplified argument - one can use introspection and not expect all of the library to be there, but it's trickier.
2: Even when generating a self contained EXE, the standard toolchain performs no end-linking of the program, it just bundles everything up in one file.
michaelmrose · 9h ago
>A minimal Linux install shouldn’t be 1GB
Why not? This seems pretty arbitrary. Seemingly developer time or functionality would suffer to achieve this goal. To what end?
Who cares how many floppies grub would require when it's actually running on a 2TB SSD? The actually simpler thing, instead of duplicating effort, is to boot into Linux and use Linux to show the boot menu, then kexec into the actual kernel or set it to boot next. See zfsbootmenu and "no more boot loader". This is simpler and less bloated, but it doesn't use less space.
spacerzasp · 5h ago
There is more to size than storage space. Larger applications take more memory and more CPU cache; things spill over to main memory, latencies grow, and everything runs much slower.
ronbenton · 13h ago
>Even companies with near-infinite resources (like Apple and Google) made trivial “worst practice” security mistakes that put their customers in danger. Yet we continue to rely on all these products.
I am at a big tech company and have seen some wildly insecure code make it into the codebase. I will forever maintain that we should consider checking if candidates actually understand software engineering rather than spending 4 or 5 hours seeing if they can solve brainteasers.
spooky_action · 11h ago
How do you propose we do this?
udev4096 · 10h ago
Look at their code, from projects or any open source contributions. Ask how they intend to write secure code, rather than asking a bunch of useless algorithmic problems
shakna · 9h ago
When tech reports a library as insecure, but it takes a year to approve removal, much of the difficulty doesn't lie at the coder level of the corporation's infrastructure.
boznz · 14h ago
Yet if you deliver a system without a modern bloated framework or a massive cloud stack, you are "old fashioned" and "out of touch" - been there, done that, got the tee-shirt.
al_borland · 13h ago
Being mandated to throw away simple and stable code in favor of the “new platform” that changes every 18 months has been one of the most frustrating experiences of my working life and turned me into a bit of a nihilist (in a work context).
To me, the root cause of this problem is the externalizing of knowledge. The number of tools used in building software has exploded. Each such tool, while purporting to make the job of the developer easy, hides what it really takes to make software. In turn, the developer unwittingly grows reliant on the tools, thereby externalizing the essential knowledge of what it really takes to build software, or what the real cost of adding a dependency is. Everything turns into, "pff, I'll just click that button on my IDE--job done!".
Every software component follows the same pattern. Software, thus made from these components, ends up being intractably complex. Nobody knows what a thing is, nor how things work.
This is where we are right now, before we add AI. Add AI and "vibe coding" to the mix, and we're in for a treat. But don't worry - there'll be another tool that'll make this problem, too, easy!
kristianp · 6h ago
They talk about the iMessage vulnerability (1), but is it really an example of bloat to accidentally allow PDFs to be parsed with an extension of .gif? I guess it's an example of unnecessary functionality, but Apple would sell a lot fewer iPhones if they didn't add all these UI gimmicks.
While I agree with the overall post, I think the iMessage-preview is a bad example.
If they instead had filtered/disabled previews the security problems would still exist - and potentially have less visibility.
gitroom · 13h ago
This hits hard for me because I've run into way too much extra code getting piled on for no real reason. Stuff just gets harder to handle over time and gets in the way. Kinda makes me ask myself- you think folks are just chasing easy installs or is it more about looking busy than keeping things actually simple?
voxelghost · 13h ago
Of the three great programmer virtues of Larry Wall, only laziness remains.
antfarm · 9h ago
For those who, like me, only knew Larry Wall's quote "Laziness is a virtue", here are all three: Laziness, Impatience, and Hubris.
Interesting! I'm not an expert but an aging amateur and *nix/foss enthusiast. I see some parallels to what I've thought before that may, or may not be erroneous. First, it seems to point toward the original *nix philosophy of do one thing.
From a user/fanboy/paranoid point of view, I don't like systemd. I've heard good development arguments for its improved handling of USB device drivers. Still, when I have to reboot because my system is frozen, it's more complex to use than, say, runit. Lastly, I'm nervous that if a company took it over, it's the one piece that might help destroy most distros. Please no hate, this is only my personal point of view as an amateur; there are people on both sides with a much better understanding of this.
Seems to favor the microkernel? I've been hoping we one day get a daily-driver microkernel distro. I asked about this but didn't get a lot of answers, except for those that mentioned projects that aren't there yet, e.g. I would love to try Redox, but from my understanding, after 10 years it's still not there yet.
It also brings me to a point that has confused me for years: as an amateur, how do I decide what level of virtualization is better for what, from program images like AppImage/Flatpak, to containers, to VMs? So far, I've hated snaps/flatpaks because they make a mess of other basic admin commands, and because there seems to be missing functionality and/or configuration. It may be better now; I haven't tried in a while. Personally, I've enjoyed portage systems in the past, and they are so fast now (to compile). A lot of forums forget that there are home enthusiasts and basically talk about it from an enterprise perspective. Is there a good article or book that might explain when to choose what? Much of what I've read is just "how to" or "how it works". I guess I would prefer someone who acknowledges we need something for the hardware to run on and when it makes more sense to use a regular install vs an image (AppImage/Flatpak/Snap).
Anyway, thanks so much for the article. I do believe you are right, a lot of companies just put out fires because none want to invest in the future. I mean even the CEO usually only is there a few years, historically comparatively; so why would they care? Also, I think H1-B is a security risk in and of itself because, at least in TX, most IT is Indian H1-B. I mean they want a better life, and don't have as many family ties here. If they were to "fall into" a large sum...they could live like Kings in India, or elsewhere.
al_borland · 12h ago
A big issue is the speed at which teams are expected to deliver. If every sprint is expected to deliver value to the user, there isn't enough slack in the system to go back and prune the code to remove cruft. People end up cutting corners to meet deadlines set by management. The corners that get cut are the things that are invisible in the demo: security, documentation, and all the chewing gum holding it all together.
JackSlateur · 4h ago
This is why cruft removal is linked to the value delivered to the user
You do not say: "there are two tasks: add some feature, takes 1 day; and delete some cruft, takes 1 day".
You say: "Yes, that feature. That's one task. It will take 2 days."
BLKNSLVR · 12h ago
And once a level of "story points" is achieved within a Sprint you can't go backwards and you can't deliver less value to the Customer. There is no room for re-evaluation. Forwards, moar!
As per Tame Impala's Elephant:
He pulled the mirrors off his Cadillac
Because he doesn't like it looking like he looks back
Looking back gives the impression of missteps or regret. We have no such thing!
chading · 12h ago
Scrum points are about engineering controllability, rather than performance. But that's a complexity most don't get.
JackSlateur · 4h ago
Exactly
And because it is based on nothing, you can just lie about it
kreetx · 8h ago
The article makes using dependencies look bad, while the actual issue is "quality controlling" the code in dependencies, since dead code elimination (or "tree shaking") removes the bloat from the final artifact. Dependency as a concept is good, because if you go the opposite way and reinvent the wheel, you get an even worse kind of bloat - bloat you have to maintain yourself.
whstl · 8h ago
Nah, I disagree. Dependency as a concept is 100% neutral and contextual, and treating it as 100% positive is cause for several issues in software bloat, security and compatibility.
It's like drugs: if a doctor prescribes them, it's probably ok. If you have an addiction, then you're in for a lifetime of trouble.
kreetx · 8h ago
If the dependency solves a thing you need and isn't part of your core business, it pretty much is 100% positive. E.g., do you really want to implement your own JSON parser? When will you then ship your actual product?
rjsw · 3h ago
The current usage pattern means that you end up with multiple different JSON parsers linked into your product, multiple XML parsers, etc ...
whstl · 7h ago
I'm answering to the claim that "dependency as a concept itself is good". They're not universally good, and have side effects even when they solve the problem at hand.
The answer to your questions is already in my reply.
the__alchemist · 2h ago
The part you are missing: If something in the dependency isn't working as you'd like at some point later, making your code work as you desire may be dramatically more difficult than if you hadn't brought it in.
k__ · 7h ago
I mean, deps aren't free.
You're buying them with the risk that they could become a threat in the future. At some point it's not worth it anymore.
kreetx · 4h ago
Sure, nothing is, but where do you draw the line? And why would you implement something again when you are unlikely to do it better, or even have time for it?
And of course, if you're doing just recreational coding to learn something, or if what you need differs from what is available, or the available thing seems sketchy somehow, then you'd write it yourself (if it's feasible). But for most things where what you need is clear and unambiguous, I don't see why you'd invent it yourself. For an established library it's unlikely that you'd do any better anyway.
(And again, if it's recreational what you are doing, you want to learn and have a hobby, of course, do it yourself. But in that case, you aren't actually looking for dependencies anyway - your goal is elsewhere.)
whstl · 3h ago
One should draw a different line depending on the situation. That's where the engineering comes in. There is no silver bullet. We should still be suspicious and judicious about every single dependency.
the__alchemist · 7m ago
Reply to the child
> So with infinite resources it would be best to write everything from scratch?
Re-read the parent and the other replies: A critical point you are missing is your interlocutor's practical mindset in contrast to your idealistic one. This is about making engineering-mindset tradeoffs; they vary depending on the specific scenario. The answer to your Reductio ad absurdum is yes, but I believe that side tracks rather than elucidates.
kreetx · 20m ago
So with infinite resources it would be best to write everything from scratch?
whstl · 10m ago
This is not a black-or-white situation. There's no need to only go to one extreme or the other.
antfarm · 9h ago
FWIW, I started learning Elixir and OTP to overcome architectural bloat in future projects.
The Erlang ecosystem has many useful abstractions at just about the right level that allow you to build the simplest possible custom solution instead of reaching for a 3rd party solution for a component of your (distributed) system.
Building just the right wheel for the task at hand does not mean you have to reinvent it first.
hilbert42 · 10h ago
This IEEE Spectrum article on software bloat and security provides a good summary of the problems plaguing much software these days but I see no indication that we will find solutions anytime soon.
There's just too much invested in the building of software to dismantle current arrangements or change methodologies quickly, it would take years to do so. Commercial interests depend on bloat for income, so do programmers and support industries.
For example, take Microsoft Windows. These days it's so huge it will not even fit onto a DVD, which is pretty outrageous really. I recall Windows expert Mark Russinovich saying that the core/essential components of Windows only take up about 50MB.
But there's no incentive for Microsoft to make Windows smaller and thus present a smaller footprint for hackers to attack. Why? Because that bloatware makes Microsoft money!
Rather than dispense with all that bloatware, Microsoft has built a huge security edifice around it: there are never-ending security updates, secure Windows boot/UEFI, and it's even had to resort to a hardware security processor—Pluton. And much of this infrastructure is nothing but a damn nuisance and inconvenience for end users/consumers.
Microsoft doesn't just stop there; it then makes matters worse by unnecessarily changing the Windows GUI with every new version. Moreover, it's not alone: every Linux distribution is different. What this means is that there's less time to perfect code as features keep changing.
Now take the huge number of programming languages out there. There are so many that programmers have to learn multiple languages and thus cannot become truly proficient in all of them. That lack of expertise alone is problematic. Surely it would be better to concentrate on fewer languages and make those more adaptable. But we know that's not going to happen, for all the usual reasons.
Same goes for Web browsers and Web bloat. Every time I complain on HN about browser bloat, the abuse of JS by websites and the never-ending number of Web protocols that keep appearing, I'm voted down. That's understandable of course because programmers and others have a financial vested interest in them. Also, programmers have taken much time to learn all this tech and don't want to see their efforts wasted by its obsolescence.
And I've not yet mentioned the huge and unnecessary proliferation of video and sound codecs, image and audio formats, not to mention the many document formats. Programs that use all these formats are thus bigger, more bloated, and more prone to bugs and security vulnerabilities. In a more organized world only a fraction of that number would be necessary. Again, we know it's not just technological improvements that have brought such numbers into existence but also commercial and vested interests. Simply, there's money in introducing this tech even if it's only slightly different from the existing stuff.
I've hardly touched this subject and said almost nothing about the economic structure of the industry, but even at first glance it's obvious we can't fix any of this in the near future, except perhaps by tiny incremental steps which will hardly make much impact.
pona-a · 1h ago
> Every Linux distribution is different
A distribution is just a collection of software to handle common needs. Most are quite similar: systemd, coreutils, glibc, dbus, polkit, pipewire/pulseaudio, and a DE, typically GNOME or KDE. You'll expect to see them on Debian, Ubuntu, Fedora, Nix, Arch, or anywhere else except Void, Alpine, and Gentoo. The only meaningful difference is typically the package manager. We have more standardization in the Linux ecosystem than ever, and equally as much bloat, both thanks to systemd.
> Surely it would be better to concentrate on fewer languages and make those more adaptable.
Programming languages are a combination of tools and notation. Different domains have different needs and preferences. We don't lament quantum physicists using bra-ket notation instead of standard linear algebra notation. Unlike notation, though, there are material reasons to pick one language beyond clarity. Some languages support deeper static analysis, some prove complete theorems about your specification, some are small enough to embed, some are easier to extend, and some exist only within a narrow domain like constraint satisfaction. We can add macros or introspection to a language, but in doing so it will fall outside a domain that might value predictability or performance.
> Now take the huge numbers of programming languages out there
~> open langs.csv | filter {|x| $x.SOPercent > 0.25} | get Year | math median
1994
I took data from the 2024 Stack Overflow survey filtered for professional developers. The median release year for languages above 25% market share is 1994. The youngest serious language on the list is Swift, dated 2014. I don't think this is evidence of a growing number of programming languages.
See converted data below. The release year was augmented by o4-mini.
In my last job, just to run the software on my local machine, I had to launch 6 different microservices running in a containerized, Linux virtualized environment on Windows and had to launch them in a particular order and had to keep each one in a separate console for debugging purposes. It took about 20 minutes to launch the software to be able to test it locally. The launch couldn't be automated easily because each service was using a mix of containers and plain Node.js servers with different versions and it was Windows so I would probably have to write some unfamiliar code for Windows to automate opening all the necessary git bash tabs...
The services usually persisted except for automatic updates so I only had to restart all the services a few times per week so it didn't make sense to invest time to automate.
n_ary · 13h ago
At the risk of sounding very naïve and making huge guesses, what you describe seems to be what docker-compose solves: a special order of services, launching several containers at once. However, I have seen my fair share of oddities in the trenches where containers are treated as an evolution of virtual machines (Vagrant): everything that used to run in one VM is split out into containers without adapting to how containers work, because the new tech lead thought VMs were uncool and everything must be Docker now.
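For instance, the ordering problem from the parent is exactly what depends_on encodes; a hedged sketch of a compose file (service names are invented):

    services:
      db:
        image: postgres:16
      auth:
        build: ./auth
        depends_on: [db]
      api:
        build: ./api
        depends_on: [auth]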
jongjong · 11h ago
We do use docker compose (thank god) but I also need to run a server from source for most of the microservices in order to modify and debug the code. There are around 20 something containers in practice, 6 pods/services. All interdependent and necessary to run the product (it's a legacy codebase 10+ years old, I joined less than 1 year ago and had nothing to do with architecture decisions). Most features touch on at least 3 to 4 repos/microservices all impossible to decouple. The problem is really opening and launching code across 6 bash consoles some of which require an additional manual authentication step with various cloud providers. I need the ability to restart some independently after making code changes. It's just a very complicated system.
I'm sure the launch can be fully automated but it's kind of at the edge of not worth automating because of how relatively infrequently I need to restart everything... Also the CEO doesn't like to make time for work which doesn't yield visible features for end users.
I actually handed in my resignation a month ago, without another job lined up. It became too much, haha. Good practice though. Very stressful/annoying.
branko_d · 10h ago
I remember, at the turn of the century (was it 2001?), when Microsoft was touting "weak coupling" achievable through "web services" and demoing the support for SOAP in Visual Studio.
To me, that was the strangest idea - how could you "decouple" one service from another if it needs to know what to call, and what information to pass and in what format? Distributing the computing - for performance, or redundancy or security or organizational reasons - that I can understand - but "weak coupling" just never made sense to me.
codr7 · 9h ago
Yep, one of the minor details the microservice fan club doesn't talk about much.
Firing up the whole mess and debugging one or two of them locally is always a major pain, and god help you if you have no idea which services to stub and which to debug.
auszeph · 9h ago
Something I've felt is missing is a developer orchestration layer that makes it really easy to define the set of services like a docker-compose but just as easy to switch implementations between container, source, or remote.
Sometimes you need them all from source to debug across the stack, when you don't you might need a local container to avoid pollution from a test env, sometimes it is just fine to port-forward to a test env and save yourself the local resources.
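Compose profiles get part of the way there today: the same logical service can be declared in several flavours and selected per run. A sketch (service names and paths hypothetical):

    # docker-compose.yml - pick an implementation per run
    services:
      api-container:
        image: registry.example.com/api:latest
        profiles: ["container"]
      api-source:
        build: ./api  # run from local source for cross-stack debugging
        profiles: ["source"]

    # usage: docker compose --profile source up

The remote/port-forward case still has to live outside Compose, which is arguably the missing piece.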
vjvjvjvjghv · 14h ago
I had a discussion with team members and we agreed that we will make our next systems fully deployable with one script or installer. It requires a little more thought and discipline but will result in much cleaner architecture and will also document itself this way.
jongjong · 10h ago
Completely worth it IMO. My philosophy nowadays (on my side projects) is to make every software feel like a complete product that you can run out of the box, batteries included... I also try to support older engine versions to avoid setup issues.
If you take care of the developer, the project looks after itself.
bee_rider · 14h ago
I like that they are containerized microservices, but you have to launch them in a particular order. Hahaha. What a nightmare. Congrats on it being a former job. Move on to better things? Well, unemployment would be preferable.
liendolucas · 11h ago
Try CUDA in a Docker environment. Yesterday it took all day long to download an Ubuntu image (5.27GB) and its Python dependencies (another few GB) to install PyTorch. I've probably wasted 10GB of bandwidth just to have the environment up and running. Fortunately, in the meantime I wrote 90% of what I needed to do. Oh, and I forgot that I still need to download a couple of Hugging Face models. Nice.
zelphirkalt · 13h ago
Was Windows a requirement or your own choice? Asking because I have seen people unwilling to switch to a GNU/Linux VM or boot into GNU/Linux and then forever struggling with their setup, while other people on the team used GNU/Linux or MacOS and didn't have nearly as many problems.
jongjong · 10h ago
Requirement. Had to use Azure too. I use Linux at home.
geodel · 13h ago
Considering the disdain for software that does not have a thousand external dependencies in the form of libraries and frameworks, is it any surprise?
macrocyclo · 13h ago
Is there an OS that embodies this sentiment?
pbohun · 13h ago
9front, the modern fork of Plan 9.
grg0 · 13h ago
Unfortunately, this is not something that programmers, let alone security ops, can fix. In many companies, the management is too brain-dead to even conceive of the possibility of doing something that does not immediately translate into (short-term) profit. Companies that treat their software as a quality artifact are rare. At best, you have to go out of your way as a programmer to fix shit and maintain even a baseline of quality before shit hits the fan. The only way to get the bulk of companies aligned with this goal is to make it so that failure costs them money, AKA fining them for security breaches, accidents, etc.
chilldsgn · 8h ago
Yup. I've been advocating for leaner software to make maintainability easier, which can help prevent developer burnout (I've been there twice in 365 days). My overall health suffered because of dealing with bloated software.
Having burned-out employees is a cost for any business. I don't have concrete data to back this up, but from personal experience I can attest to it: I had to take sick leave and lose days of productivity due to illness caused by burnout from dealing with bloated software and the deadlines associated with it. Business makes promises to clients without realising how difficult and time-consuming adding features and keeping software operational and secure can be when that software is bloated and hard to understand.
jffhn · 37m ago
>burnout from having to deal with bloated software and the deadlines associated with that
I did not have the deadlines, but to bear having to deal with bloated software, my solution was vodka: since it has no color, I filled mineral water bottles with it and everyone thought I was drinking water.
guappa · 8h ago
Most developers I've met are completely ok with pulling whatever dependency, even is_odd kind of stuff.
dmos62 · 3h ago
In other news, bad software is bad. Heh, excuse the sarcasm. Tangential: I've come to think of the major software problems (like bloat, or closed-source, or lack of interop) as an effect of our generally chaotic and self-contradictory culture: "money is the root of all evil", "how much do you make", "it has to be good", "it has to be done fast", "do what you think is best", "do what's expected", "be conservative", "be innovative". It's hard to navigate through all that, and if you do, and you have something to show for it, you have my applause, irrespective of how good it is, for whatever problematic definition of good I happen to use today.
jmclnx · 14h ago
No argument from me, I also believe bloat is a very large problem.
A get off my lawn section :)
I remember when GUIs started becoming a thing; I dreaded the move from text to GUIs due to the complexity. I also remember that most programs I wrote when I started on minis were 64k code and 64k data. They were rather powerful even by today's standards: they did one thing, and people had to learn which one to use to perform a task.
Now we have all in one where in some cases you need to page through endless menus or buttons to find an obscure function. In some cases you just give up looking and move on. Progress I guess.
rjsw · 3h ago
My first GUI applications used GEM, they were compiled to 8086 small model so the same 64k code and 64k data, didn't get close to running out of address space.
zelphirkalt · 13h ago
There is still a fundamental difference between the move from text interfaces to GUIs on the one hand, and the bloat so many people add these days on the other. A GUI is an entirely different paradigm of usage, while today's bloat can often be replaced with a little code while retaining the same functionality.
Also, people often complain about "bloat", but don't realize that C/C++ libraries are often the most bloated ones, precisely because importing libraries is a pain: authors try to include everything in a single library, even though you only need less than 10% of it. Look for example at Qt: it is supposed to be a UI framework, but it ends up implementing vectors, strings, a JSON parser and who knows how much more. But it's just 1 dependency, so it's fine, right?
Qt is an application development framework, not a GUI toolkit. This is one reason I prefer GTK (there are things I dislike about it too).
I don't know. Suppose you tell us.
I'm not so sure C/C++ solves the actual problem. Only sweeps it under a carpet so it's much less visible.
Same here. And a lot of those homegrown functions, utilities and classes are actually already available, and better implemented, in the C++ Standard Library. Every C++ place I've worked had its own homegrown String class, and it was always, ALWAYS worse in all ways than std::string. Maddening. And you could never make a good business case to switch over to sanity. The homegrown functions had tendrils everywhere and many homegrown classes relied on each other, so your refactor would end up touching every file in the source tree. Nobody is going to approve that risky project. Once you start down the path of rolling your own standard library stuff, the cancer spreads through your whole codebase and becomes permanent.
You are right. But my conclusion is different.
If it is a stable codebase and people have been there for a while, then the developers know that code as well as the rest. So, when something fails, they know how to fix it.
Bringing in generic libraries may create long call stacks of very generic code (usually templates) that is very difficult to debug, while adding a lot of functionality that is never used.
Bringing a new library into the code base needs to be a carefully considered decision.
Cathedrals are conservative. Reactionary, even. You can measure the rate of change by generations.
Bazaars are accessible and universal. The whole system is chaotic. Changes happen every day. No single agent is in control.
We need both to make meaningful progress, and it's the job of engineers to take any given problem and see where to look for the solution.
Check this out: https://news.ycombinator.com/item?id=39019001
Of course, this is the whole environment except for Node.js itself. And Vite has improved it.
But there are definitely some tools that are worse than others.
A lot of projects would simply not exist without it. Linux comes to mind. I guess one might take the position that "Windows is fine", but would there ever even have been competition for Windows?
Another example, everyone would be rolling their own crypto without openssl, and that would mean software that's yet a lot more insecure than what we have. Writing software with any cryptography functionality in mind would be the privilege of giant companies only (and still suck a lot more than what we have).
There are a lot more things. The internet and software in general would be set back ~20 years. Even with all the nostalgia I can muster, that seems like a much worse situation than today.
The ground truth is that software bloat isn't a bad enough problem for software developers to try and fight it. We already know how to prevent it, if we really want to. And if the problem were really hurting so much, we'd have automated ways of slimming down executables and libraries.
In my role creating CI for Python libraries, I did more hands-on dependency management. My approach was to first install libraries with pip, see what was installed, research why particular dependencies had been pulled in, then, if necessary, modify the packages so that unnecessary dependencies were removed, and "vendor" the third-party code (i.e. store it in my repository, at the version I need). This obviously works better for programs, where you typically end up distributing the program with its dependencies anyway. Less so for libraries, but in the context of CI this saved some long minutes of reinstalling dependencies afresh for every CI run.
In the end, it was a much better experience than what you usually get with CI targeting Python. But nobody really cared. If CI took less than a minute to complete instead of twenty minutes, very little was actually gained. The project didn't have enough CI traffic for this to have any real effect. So it was a nice proof of concept, but ended up being not all that useful.
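The core of the workflow was roughly this (pip's standard flags; paths illustrative):

    pip download -r requirements.txt -d vendor/   # fetch pinned deps once; commit vendor/
    pip install --no-index --find-links=vendor/ -r requirements.txt   # CI installs offline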
This gives a couple of advantages: you own your code; there's no bloat; it's usually simpler because it skips the bells and whistles; there's less abstraction, so it's faster (there is no free lunch); and it minimizes the attack surface for supply chain attacks...
For fun, the next time you are tempted to install some BlaZiNg FaSt MaDe in RuSt software: get the source, install cargo-audit and run it on that project.
See how many vulnerabilities there are. So far, in my experience, all the software I have checked comes with its own list of vulnerabilities from transitive dependencies.
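To reproduce (cargo-audit checks Cargo.lock against the RustSec advisory database):

    cargo install cargo-audit   # one-time install of the audit subcommand
    cargo audit                 # run inside the project's source tree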
I don't know about npm; I only know it by reputation, and that's enough for me to avoid it.
Yes, you should not just pull in as a dependency something that a kid wrote in his parents' basement for fun, or to get "OSS maintainer" on his CV.
But there are tons of legitimate libraries and frameworks from people who are better than you at that specific domain.
Here's a scenario. You pull in some library - maybe it resizes images or something. It in turn pulls in image decoders and encoders that you may or may not need. They in turn pull in metadata readers, and those pull in XML libraries to parse metadata, and before you know it a fairly simple resize is costing you 10s of MB.
Worse, you pull in different libraries and they all pull in different versions of their own dependencies, with lots of duplication of similar but slightly different code. Node_modules usually ends up like this.
The point is not writing the resize code yourself. It's the cultural effect of friction. If pulling in the resize library means you need to chase down the dependencies yourself, first, you're more aware of the cost, and second, the library author will probably give you knobs to eliminate dependencies. Perhaps you only pull in a JPEG decoder because that's all you need, and you exclude the metadata functionality.
It's an example, but can you see how adding friction to pulling in every extra transitive dependency would have the effect of library authors giving engineers options to prune the dependency tree? The easier a library is to use, the more popular it will be, and a library that has you chasing dependencies won't be easy to use.
This is more likely to happen in C++, where any library that isn't header-only is forced to be an all encompassing framework, precisely because of all that packaging friction. In an ecosystem with decent package management your image resizing library will have a core library and then extensions for each image format, and you can pull in only the ones you actually need, because it didn't cost them anything to split up their library into 30 tiny pieces.
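Concretely, the consumer side might look like this, assuming a hypothetical crate that exposes per-format features:

    # Cargo.toml - pull in only the JPEG piece of a (hypothetical) image library
    [dependencies]
    imagekit = { version = "1", default-features = false, features = ["jpeg"] }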
You're thinking correctly on principle, but I think this is also the cause of the issue: it's too easy to pull in a Node dependency even thoughtlessly, so it's become popular.
It would require adding friction to move back from that and render it less easy, which would probably give rise to a new, easy and frictionless solution that ends up in the same place.
Building everything from scratch is insane, but so is uncritically growing a dependency jungle.
I don't see the nuance there; that is my take on the comment. Those are pretty much the strongest possible statements, and the points in favor of using libraries are minimal.
That is why I added mine, strongly pointing out that real-world systems are not going to be of "manageable size" unless they are really small or a single person is working on them.
While in for instance C# I tend to think "this would be simple to implement with whatever-fancy-thing-is-just-a-package-away".
Neither way can be judged good or bad on its own.
A real world system is almost always part of a larger system or system of systems. Making one thing simple can make another complex. The world is messy.
They can't hack what doesn't exist.
Reducing surface area is sometimes the easiest security measure one can take.
Or is this a sticking plaster? Genuinely don't know as I only develop personal projects.
It cannot detect a case such as: if the string argument to this function contains a substring shaped like XYZ, then replace that substring with a value from the environment variables (the Log4j vulnerability), or from the file system (the XML External Entity vulnerability). From the point of view of tree-shaking, this is legitimate code that could be called. This is the kind of vulnerable bloat that comes with importing large libraries (large in the sense of "has many complex features", rather than of megabytes).
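A toy sketch of the problem in Python (the ${env:...} lookup syntax here is made up, loosely modeled on Log4j's): the dangerous branch is an ordinary, live feature, and whether it fires depends only on runtime string contents.

    import os, re

    def render(template: str) -> str:
        # expand ${env:NAME} lookups - a documented "feature", not dead code,
        # so no tree-shaker will ever remove it
        return re.sub(
            r"\$\{env:(\w+)\}",
            lambda m: os.environ.get(m.group(1), ""),
            template,
        )

    print(render("hello"))                   # harmless
    print(render("${env:AWS_SECRET_KEY}"))   # attacker-shaped input leaks secrets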
I suppose the options are then:
1. Write everything yourself: time-consuming and hard, but less likely to lead to these types of vulnerabilities.
2. Import others' code: easy and takes no time, but can lead to vulnerabilities.
3. Use others' code, but only what you actually need: maybe less time-consuming than 1 but more than 2; adds a different sort of complexity; done correctly, less likely to lead to these vulnerabilities.
Not sure if there are any other options here?
Isn't this unrelated to the parent post's thoughts about the benefits of the C/C++ ecosystem (or lack thereof) for dependency management? I.e., a Rust-like language could still exist with a dependency management system similar to what C/C++ have now -- that isn't predicated on how the language handles memory.
Some people will always prefer C to Rust, might as well learn to live with that fact.
My argument is that you are missing the point: the point is that a larger attack surface enables more exploits regardless of language.
When using a language that has tremendous friction in expanding the attack surface you tend to have a small attack surface as a result.
There's obviously a crossover point where you'd be safer with a memory-safe language and a larger attack surface than with a memory-unsafe language and a minuscule attack surface.
No I don't; which exploit are you talking about? The most expensive exploit I can think of was Heartbleed, which was in a memory-unsafe language. The "most expensive software bug" (not an exploit), caused by turning off the safe overflow handler in the language being used, can hardly be considered an indictment of language-level safety either. So what exploit are you talking about?
I suppose Stuxnet could also count, where the initial infection depends on the human curiosity of plugging an unknown usb drive into an air gapped system.
There is. Linux distributions have package managers whose entire purpose is to distribute and manage applications and their dependencies.
The key difference between Linux distribution package managers and programming language package managers is the presence of maintainers. Any random person can push packages to the likes of npm or PyPI. To push packages to Debian or Arch Linux, you must be known and trusted.
Programming language package managers are made for developers who love the convenience of pushing their projects to the world whenever they want. Linux distribution package managers are made for users who prefer to trust the maintainers not to let malware into the repositories.
Some measured amount of elitism can be a force for good.
I do get what you mean, but it works only on some very specific types of projects, where you, and potentially comparably (very) good and skilled peers, are maintaining and evolving it long term. That was never the case in my 20 years of dev career.
This slide from shared, well-tested libraries into gradual dependency hell exists in some form across all similar languages, since it's a pretty basic use case of software development as an engineering discipline. I haven't seen a good silver bullet so far, and e.g. the past 14 years of my work wouldn't have been possible with the approach you describe.
Bloat is uncontrolled complexity and making it harder to manage complexity reduces bloat. But it also makes it harder to write software that has to be complex for legitimate reasons. Not everybody should write their own library handling SSL, SQL or regex for example. But those libraries are rarely the problem, things like leftpad are.
Or: you can use package systems for good and for evil. The only real way to fight bloat is to be disciplined and vet your dependencies. It must cost you something to pull them in. If you have to read and understand everything you pull in, pulling in everybody and their dog suddenly becomes less desirable.
Also, I think this is much more an issue of the quality of dependencies than of using dependencies at all (it would be stupid to write 1000 implementations of HTTP for a language; one that works really well is better).
[0] - https://go.dev/blog/supply-chain
edit: lol at the downvotes. Go developers showing how insecure they are once again.
There's plenty of perfectly good libraries on npm and pypi, and there's awful ones. Likewise for go which pulls from "the internet".
Must I really demonstrate that bad code exists in go? You want examples? There's plenty of bad libraries in go, and pinning to a commit is a terrible practice in any language. Encourages unstable APIs and unfixable bugs.
Bloat affects the end user, and it's a loose definition. Anything that was planned, went wrong, and affects the user experience could be defined as bloat (the many toolbars Office had, the many purposes iTunes served, etc.).
Bloat and technical debt are related, but not the same. There is a lot of software that has a very clean codebase and bloated experience, and vice-versa.
Speed is an ambiguous term. It is often better to think in terms of real performance and user-perceived performance.
For example, many Apple UX choices prioritize user-perceived performance instead of real performance: smooth animations to cover up loading times, things such as that. Their own users often cannot explain why it feels smooth, even experienced tech people.
Things that are not performant but appear to be fast are good examples of good user-perceived performance.
Things that are performant but appear to be slow exist as well (fast backend lacking proper cache layer, fast responses but throttled by concurrent requests, etc).
Then why does Apple still ship 60Hz displays in 2025? Scrolling a web page at 60Hz feels jarring no matter how performant your SoC is.
There's no excuse for 60hz iPhones though, that's just to upsell you to more expensive models.
To push people who want faster displays to their more expensive offerings.
60Hz: $1000
120Hz: $1600
That's one reason, among many, why Apple has a $3 trillion market cap.
For a site with so many people slavishly obsessed with startups and venture capital, there seems to be a profound lack of understanding of what the function of a business is. (mr_krabs_saying_the_word_money.avi)
I said many choices are focused on user-perceived performance, not all of them.
Refresh rate only really makes a case for performance in games. In everyday tasks, like scrolling, it's more about aesthetics and comfort.
Also, their scrolling at 60Hz looks better than scrolling on Android at 60Hz. They know this. Why they didn't prioritize 120Hz screens is beyond my knowledge.
Also, you lack attention: these were merely examples to expand on the idea of bloat versus technical debt.
I am answering out of kindness and in the spirit of sharing my perspective to point the thread in a more positive discussion.
Refresh rate really matters for everything in motion, not just games; that's why I said scrolling.
> In everyday tasks, like scrolling, it's more about aesthetics and comfort.
Smooth scrolling IS everyday comfort. Try going from 120Hz to 60Hz and see how you feel.
> their scrolling on 60Hz looks better than scrolling on Android at 60Hz.
Apple beat physics?
Could be about 20% worse battery life.
https://www.phonearena.com/news/120Hz-vs-60hz-battery-life-c...
They don't let you scroll as fast as Android does, which makes the flickering, disorienting sensation of fast scrolling at a low refresh rate less prominent. It optimizes for comfort given the hardware they opted to use.
Android lets you scroll faster, and it does not adjust the scrolling dynamics according to the refresh rate setting. It's optimized for the high end models with 120Hz or more, so it sucks on low end settings or phones.
Some people take years to understand those things. It requires attention.
The fewer 3rd parties you involve in your product, the more likely you are to get a comprehensive resolution to whatever vulnerability arises once a response is mounted. If it takes 40+ vendors to get pixels to your customers' eyeballs, the chances of a comprehensive resolution rocket toward zero.
If every component is essential, does it matter that we have diversified the vendor base? Break one thing and nothing works. There is no gradient or portfolio of options. It is crystalline in every instance I've ever encountered.
Want a simple web server? Well, you're going to get something with a JSON parser, PAM authentication, SSL, QUIC, websockets, an async framework, a database for HTTP auth, etc.
Ever look at "curl"? The number of protocols is dizzying — one could easily think that HTTP is only a minor feature.
At the distro level, it is ridiculous that so long after Alpine Linux, the chasm between them and Debian/RHEL remains. A minimal Linux install shouldn’t be 1GB…
We used to boot Linux from a 1.44mb floppy disk. A modern Grub installation would require a sizable stack of floppies! (Grub and Windows 3.0 are similar in size!)
I would say this is a feature and not a bug. Alpine Linux is largely designed to be run in containerized environments, so you can have an extremely small footprint because you don't have to ship stuff like a desktop, or really anything beyond the very basics.
Compare that to Ubuntu, where the 5GB download is the "Desktop" variant that comes with much more software.
Simple means different things to different people, it seems. For a simple web server you need a TCP socket.
If you want a full featured high performance web server, it's not gonna be simple.
[0] - https://unikraft.org/
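To illustrate the "TCP socket" end of that spectrum, a bare-bones sketch using only Python's standard library:

    import socket

    # minimal HTTP responder: one TCP socket, no framework
    srv = socket.create_server(("127.0.0.1", 8080))
    while True:
        conn, _ = srv.accept()
        with conn:
            conn.recv(4096)  # read (and ignore) the request
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

Everything past this - TLS, QUIC, auth, async - is where the bloat conversation starts.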
Bundling stuff in .NET is done much more at runtime, both by design of the library (it uses introspection¹) and by the tools².
1: Simplified argument - one can use introspection and not expect all of the library to be there, but it's trickier.
2: Even when generating a self-contained EXE, the standard toolchain performs no end-linking of the program; it just bundles everything up in one file.
Why not? This seems pretty arbitrary. Seemingly, developer time or functionality would suffer to achieve this goal. To what end?
Who cares how many floppies Grub would require when it's actually running on a 2TB SSD? The actually simpler thing: instead of duplicating effort, boot into Linux and use Linux to show the boot menu, then kexec into the actual kernel or set it to boot next. See zfsbootmenu and "no more boot loader" - this is simpler and less bloated, but it doesn't use less space.
I am at a big tech company and have seen some wildly insecure code make it into the codebase. I will forever maintain that we should consider checking if candidates actually understand software engineering rather than spending 4 or 5 hours seeing if they can solve brainteasers.
A 2024 plea for lean software - https://news.ycombinator.com/item?id=39315585 - Feb 2024 (240 comments)
Every software component follows the same pattern. Software, thus made from these components, ends up being intractably complex. Nobody knows what a thing is, nor how things work.
This is where we are right now, before we add AI. Add AI and "vibe coding" to the mix, and we're in for a treat. But don't worry - there'll be another tool that'll make this problem, too, easy!
(1) https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
If they had instead filtered/disabled previews, the security problems would still exist - and potentially have less visibility.
https://thethreevirtues.com/
From a user/fanboy/paranoid point of view, I don't like systemd. I've seen good development arguments for its improved handling of USB device drivers. Still, it's more complex to use than, say, runit when I have to reboot because my system is frozen. Lastly, I'm nervous that if a company took it over, it's the one piece that might help destroy most distros. Please no hate - this is only my personal point of view as an amateur; there are people on both sides with a much better understanding of this.
Seems to favor the microkernel? I've been hoping we one day get a daily-driver microkernel distro. I asked about this but didn't get a lot of answers, except for those that mentioned projects that aren't there yet, e.g. I would love to try Redox, but from my understanding, after 10 years it's still not there.
It also brings me to a point that has confused me for years: as an amateur, how do I decide what level of virtualization is appropriate, from program images like AppImage/Flatpak, to containers, to VMs? So far, I've hated snaps/flatpaks because they make a mess of other basic admin commands and because there seems to be missing functionality and/or configuration. It may be better now; I haven't tried in a while. Personally, I've enjoyed portage systems in the past, and they are so fast now (to compile). A lot of forums forget that there are home enthusiasts and basically talk about it from an enterprise perspective. Is there a good article or book that might explain when to choose what? Much of what I've read is just "how to" or "how it works". I guess I would prefer someone who acknowledges we need something for the hardware to run on and when it makes more sense to use a regular install vs an image (AppImage/Flatpak/snap).
Anyway, thanks so much for the article. I do believe you are right: a lot of companies just put out fires because none want to invest in the future. I mean, even the CEO is usually only there for a few years, historically speaking, so why would they care? Also, I think H1-B is a security risk in and of itself because, at least in TX, most IT is Indian H1-B. I mean, they want a better life and don't have as many family ties here. If they were to "fall into" a large sum... they could live like kings in India, or elsewhere.
You do not say: "there are two tasks: add some feature, takes 1 day; and delete some cruft, takes 1 day".
You say: "Yes, that feature. That's one task. It will take 2 days."
As per Tame Impala's Elephant:
He pulled the mirrors off his Cadillac
Because he doesn't like it looking like he looks back
Looking back gives the impression of missteps or regret. We have no such thing!
And because it is based on nothing, you can just lie about it
It’s like drugs: if a doctor prescribes, it’s probably ok. If you have an addiction, then you’re in for a lifetime of trouble.
The answer to your questions is already in my reply.
You're buying them with the risk that they could become a threat in the future. At some point it's not worth it anymore.
And of course, if you're doing just recreational coding to learn something, or if what you need differs from what is available, or the available thing seems sketchy somehow, then you'd write it yourself (if it's feasible). But for most things where what you need is clear and unambiguous, I don't see why you'd invent it yourself. For an established library it's unlikely that you'd do any better anyway.
(And again, if it's recreational what you are doing, you want to learn and have a hobby, of course, do it yourself. But in that case, you aren't actually looking for dependencies anyway - your goal is elsewhere.)
> So with infinite resources it would be best to write everything from scratch?
Re-read the parent and the other replies: A critical point you are missing is your interlocutor's practical mindset in contrast to your idealistic one. This is about making engineering-mindset tradeoffs; they vary depending on the specific scenario. The answer to your Reductio ad absurdum is yes, but I believe that side tracks rather than elucidates.
The Erlang ecosystem has many useful abstractions at just about the right level that allow you to build the simplest possible custom solution instead of reaching for a 3rd party solution for a component of your (distributed) system.
Building just the right wheel for the task at hand does not mean you have to reinvent it first.
There's just too much invested in the building of software to dismantle current arrangements or change methodologies quickly; it would take years to do so. Commercial interests depend on bloat for income, and so do programmers and support industries.
For example, take Microsoft Windows. These days it's so huge it will not even fit onto a DVD, which is pretty outrageous really. I recall Windows expert Mark Russinovich saying that the core/essential components of Windows only take up about 50MB.
But there's no incentive for Microsoft to make Windows smaller and thus present a smaller footprint for attackers. Why? Because that bloatware makes Microsoft money!
Rather than dispense with all that bloatware, Microsoft has built a huge security edifice around it: there are never-ending security updates, secure Windows boot/UEFI, and it has even had to resort to a hardware security processor, Pluton. And much of this infrastructure is nothing but a damn nuisance and inconvenience for end users/consumers.
Microsoft doesn't just stop there; it then makes matters worse by unnecessarily changing the Windows GUI with every new version. Moreover, it's not alone: every Linux distribution is different. What this means is that there's less time to perfect code, as its features keep changing.
Now take the huge numbers of programming languages out there. There are so many that programmers have to learn multiple languages and thus cannot become truly proficient in all of them. That lack of expertise alone is problematic. Surely it would be better to concentrate on fewer languages and make those more adaptable. But we know that's not going to happen, for all the usual reasons.
Same goes for Web browsers and Web bloat. Every time I complain on HN about browser bloat, the abuse of JS by websites and the never-ending number of Web protocols that keep appearing, I'm voted down. That's understandable of course because programmers and others have a financial vested interest in them. Also, programmers have taken much time to learn all this tech and don't want to see their efforts wasted by its obsolescence.
And I've not yet mentioned the huge and unnecessary proliferation of video and sound codecs, image and audio formats, not to mention the many document formats. Programs that use all these formats are thus bigger, more bloated, and more prone to bugs and security vulnerabilities. In a more organized world, only a fraction of that number would be necessary. Again, we know it's not just technological improvement that has brought such numbers into existence, but also commercial and vested interests. Simply, there's money in introducing this tech even if it's only slightly different from the existing stuff.
I've hardly touched this subject and said almost nothing about the economic structure of the industry, but even at first glance it's obvious we can't fix any of this in the near future, except perhaps by tiny incremental steps which will hardly make much impact.
A distribution is just a collection of software to handle common needs. Most are quite similar: systemd, coreutils, glibc, dbus, polkit, pipewire/pulseaudio, and a DE, typically GNOME or KDE. You'll expect to see them on Debian, Ubuntu, Fedora, Nix, Arch, or anywhere else except Void, Alpine, and Gentoo. The only meaningful difference is typically the package manager. We have more standardization in the Linux ecosystem than ever, and equally as much bloat, both thanks to systemd.
> Surely it would be better to concentrate on fewer languages and make those more adaptable.
Programming languages are a combination of tools and notation. Different domains have different needs and preferences. We don't lament quantum physicists using bra-ket standard linear algebra notation. Unlike notation, there are material reasons to use one beyond clarity. Some languages support deeper static analysis, some prove complete theorems about your specification, some are small enough to embed, some are easier to extend, and some exist only within a narrow domain like constraint satisfaction. We can add macros or introspection to a language, but in doing so it will fall outside a domain that might value predictability or performance.