People talk about SQLite's reliability but they should also mention its stability and longevity. It's first-class in both. This is what serious engineering looks like.
azemetre · 7h ago
Does SQLite talk about how they plan to exist beyond 2050 across multiple lifetimes?
Not trying to chide, but it seems like with such a young industry we need better social tools to make sure this effort is preserved for future devs.
Churn has been endemic in our industry and it has probably held us back a good 20 years.
jerf · 5h ago
As the time horizon increases, planning for the future is necessary, then prudent, then sensible, then optimistic, then aspirational, then foolish, then sheer arrogance. Claiming 25 years of support for something like SQLite is already on the farther end of the good set of those adjectives as it is. And I don't mean that as disrespect for that project; that's actually a statement of respect because for the vast majority of projects out there I'd put 25 years of support as already being at "sheer arrogance", so putting them down somewhere around "optimistic" is already high praise. Claiming they've got a 50 or 100 year plan might sound good but it wouldn't mean anything real.
What they can do is renew the promise going forward; if in 2030 they again commit to 25 years of support, that would mean something to me. Claiming they can promise to be supporting it in 2075 or something right now is just not a sensible thing to do.
azemetre · 4h ago
Having a plan for several hundred years is possible, and we've seen such things happen in other facets of life. We as humans are clearly capable of building robust, durable social organizations; religion and civic institutions are both testaments to that.
I'm curious how such plans would look and work in the context of software development. That was more what my question was about (SQLite also being the only project I'm familiar with that takes this seriously).
We've seen what lawyers can accomplish with their bar associations, and those were created over 200 years ago in the US! Lawyers also work with one of the clunkiest DSLs ever (legalese).
Imagine what they could accomplish if they used an actual language. :D
clickety_clack · 3h ago
I’d be interested to know what you would classify as having been planned to last hundreds of years. Most of the long term institutions I can think of are the results of inertia and evolution, having been set up initially as an expediency in their time, rather than conforming to a plan set out hundreds of years ago.
azemetre · 16m ago
The Philadelphia Bar Association was established in ~1800. I doubt the profession of law is going to disappear anytime soon, and lawyers have done a good job building their profession, all things considered. Imagine if the only way you could legally sell software was through partnerships with other developers?
Do you think such a thing would have helped or hurt our industry?
I honestly think help.
8n4vidtmkvmk · 1h ago
The easiest way would be to just write a spec for the data format, which I think they already have?
If any tooling fails in 25 years, you can at least write a new program to get your data back out. Then you can import it into the next hot thing.
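Even today that escape hatch is tiny. A rough sketch using Python's stdlib sqlite3 module (nothing here assumes a particular schema; table names come from the database itself):
```python
import csv
import sqlite3
import sys

# Dump every table in a SQLite file to CSV, so the data outlives whatever
# tooling originally wrote it.
def dump(db_path: str) -> None:
    con = sqlite3.connect(db_path)
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        cur = con.execute(f'SELECT * FROM "{table}"')
        with open(f"{table}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])
            writer.writerows(cur)
    con.close()

if __name__ == "__main__":
    dump(sys.argv[1])
```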
lenkite · 4h ago
They should write a book on "Design and Implementation of SQLite", and make a course as well. That would interest a lot of people and ensure future generations can pick up where the current maintainers leave off when they decide to retire.
azemetre · 3h ago
I do think this is a good approach for many open source projects.
I've always thought Neovim should do something like this: how to recreate the base of Neovim, or how to recreate popular plugins with minimal Lua code.
Got to wonder how much more sustainable that would be versus relying on donations.
ectospheno · 5h ago
I’m trying and failing to think of another free software product I honestly expect to still work on my current data past 2050. And this isn’t good enough?
azemetre · 4h ago
It's good, but this also assumes that the people taking care of this product in the future (who may not even be born yet) will hold the same attitudes.
How do we plan to make sure the lessons we've learned during development now will still be taught 300 years from now?
I'm not putting the onus on sqlite to solve this but they are also the only organization I know of that is taking the idea seriously.
Just thinking more in the open and seeing how other people are trying to solve similar problems (ensuring teachings continue past their lives) outside the context of universities.
SoftTalker · 3h ago
Thinking like this is how we ended up with a panic about Y2K. Programmers in the 1970s and 80s could not conceive that their code would still be running in 2000.
0cf8612b2e1e · 3h ago
The computing industry was in a huge amount of flux in the 1970s. How many bits are in a byte? Not a settled question. Code was being rewritten for new platforms all the time. Imagining that some design decisions would last for decades probably seemed laughable.
staticshock · 5h ago
Some churn is fads, but some is legitimate (e.g. "we know how to do this better now".) Every living system is bound to churn, and that's a good thing, because it means we're learning how to do things better. I'm happy to have rust and typescript, for instance, even though they represent some amount of churn for c and javascript.
SoftTalker · 3h ago
That’s only another 25 years. Granted it’s already been around for a good while.
begueradj · 7h ago
Maybe we could say something a bit similar about Express.js and other "boring technologies".
andrewmcwatters · 4h ago
I think it starts with us collectively not using boring tech as a term anymore. If boring helps me be productive, that's exciting, not boring.
Some people on the React team deciding in 2027 to change how everyone uses React again is NOT exciting, it's an exercise in tolerating senior amateurs and I hate it because it affects all of us down to the experience of speaking to under-qualified people in interview processes "um-ackchyually"-ing you when you forget to wrap some stupid function in some other stupid function.
Could you imagine how absurd it would be if SQLite's C API changed every 2 years? But it doesn't. Because it was apparently designed by real professionals.
kragen · 4h ago
Hey, didn't you write kjbuckets and Gadfly? Or was that Aaron Watters? I was thinking about that the other day: that was one of the coolest pieces of software for Python 2 (though I think it predated Python 2): an embedded SQL database without needing SQLite's C API. I suppose it's succumbed to "software rot" now.
I think "boring software" is a useful term.
Exciting things are unpredictable. Predictable things aren't exciting. They're boring.
Stock car racing is interesting because it's unpredictable, and, as I understand it, it's okay for a race car to behave unpredictably, as long as it isn't the welds in the roll bars that are behaving unpredictably. But if your excitement is coming from some other source—a beautiful person has invited you to go on a ski trip with them, or your wife needs to get to the hospital within the next half hour—it's better to use a boring car that you know won't overheat halfway there.
Similarly, if a piece of software is a means to some other end, and that end is what's exciting you, it's better to use boring software to reach that end instead of exciting software.
8n4vidtmkvmk · 1h ago
I thought you were about to say go on a ski trip with your mistress while your wife is 9 months pregnant. That'd be exciting too, but in a bad/awful way.
kragen · 24m ago
It's probably better to have your wife with you and your mistress in that case.
andrewmcwatters · 4h ago
The name is close, but no cigar. :) I'm known in different circles than Python's.
kragen · 4h ago
Doh, sorry!
righthand · 3h ago
I agree, it’s “understood” tech, not “boring” tech. It’s only boring because its simplicity and usefulness are obvious. It’s only boring because there are few to zero applications of the tech left to discover. The tech isn’t boring; the person is boring.
noduerme · 14h ago
I wish I could write all the business logic I write on an NES and never have to worry about requirements going bad. I guess the thing is, if you're writing anything on top of a network layer of any kind, eventually it's going to require patches unless you literally own all the wires and all the nodes in the network, like a secure power plant or some money clearing system in a bank that's been running the same COBOL since the 1960s. And since you're probably not writing code that directly interfaces with the network layer, you're going to be reliant on all the libraries that do, which in turn will be subject to change at the whims of breaking changes in language specs and stuff like that, which in turn are subject to security patches, etc.
In other words, if you need your software to live in the dirty world we live in, and not just in a pristine bubble, things are gonna rot.
Picking tools and libraries and languages that will rot less quickly, however, seems like a good idea. Which to me means not chaining myself to anything that hasn't been around for at least a decade.
I got royally screwed because 50-60% of my lifetime code output before 2018, and pretty much all the large libraries I had written, were in AS3. In a way, having so much code I would have maintained become forced abandonware was sort of liberating. But now, no more closed source and no more reliance on any libs I don't roll or branch and heavily modify myself.
jeberle · 6h ago
Despite it being everyone's favorite shame language, COBOL's DATA DIVISION makes it uniquely well-suited for long-term stability. Common practice is to avoid binary fields. This means what you see in your program is what you see on the wire or in a file.
kragen · 4h ago
> In other words, if you need your software to live in the dirty world we live in, and not just in a pristine bubble, things are gonna rot.
I share your painful experience of losing my work to proprietary platforms.
gr4vityWall · 11h ago
> 50-60% of my lifetime code output before 2018, and pretty much all the large libraries I had written, were in AS3
Out of curiosity, what kind of work did you do? Regarding our old AS3, did you have any luck with Haxe? I assume it would be a straightforward port.
immibis · 10h ago
Building on bedrock is one extreme; the other is to become extremely fluid - build on quicksand but do it efficiently every time the quicksand shifts. AS3 may stop being useful, but if you can use some tool to recompile AS3 code to web JavaScript, you suffer no loss.
tracker1 · 6h ago
That's kind of my approach... KISS to the extreme. Something that's easy to replace tends to become surprisingly long-lived and/or easily ported. I wrote a test launcher and client API for SCORM courseware in the early 00's. That code was later turned into a couple of LMS products, was ported through a few different languages on the server, and still effectively exists today. The DB schema and query code is almost exactly the same as the baseline, except for one feature I didn't implement originally; a decade in, a course came along that used it, and I helped implement it.
Still friends with the company owner of that code. So I've had a bit more insight into follow-up on code over 2 decades old that isn't so typical for anything else I've done.
ozim · 7h ago
Another example is people from EU making fun of US buildings.
Lasting centuries may or may not be preferable.
There are places where you want to cheaply rebuild from scratch. Your castle, after a tornado and flooding, will be irreparably bad. Most castles suck badly by not taking advantage of new materials, and I myself would not like to live in a 100-year-old building.
Same for software: there are pieces that should be built to last, but there are applications that should be replaceable in a short time.
As much as I am not a fan of vibe coding, I don't believe all software should be built to last for decades.
roda73 · 8h ago
This is one of the reasons I absolutely hate Linux based development and operating systems built with it.
We all know it now as dependency hell, but in fact it's just a lazy shortcut in current development that will bite you down the road. Corporate software is not the problem, because corporate users don't care as long as it works now; in the future they will still rely on paid solutions that will keep working for them.
For me, I run a local mirror of Arch Linux, because I don't want to connect to the internet every time I need to download a library or some piece of software. I like having it all here, but since I haven't updated in a while, I might hit a destructive update if I chose to update now. That should never happen. Another thing that should never happen is an old version of some software failing to compile. Time and time again I find a useful piece of software on GitHub and naturally try compiling it. It's never easy: I have to hunt down the dependencies it requires, then try compiling old versions of various libraries. It's just stupid; I wish it were easier and built smarter. Yes, sometimes I want to run old software that has no reason not to work.
When you look at Windows, it all works magically. Well, it's not magic; it's just done smart. On GNU+Linux, smart thinking like this is not welcome, and it never has been. Instead they rely on the huge number of people who develop this software to perpetually update their programs for no reason but to satisfy a meaningless version number on a dependency.
skydhash · 8h ago
It’s all volunteer work, not some corporation with trillions lying around. If you want something easy, use Debian (or Ubuntu). They pretty much have everything under the sun.
What you want (download software from the net and run it) is what most distros have been trying to avoid. Instead, they vet the code, build it, and add it to a reputable repo. Because no one wants to download Postgres from some random site.
8n4vidtmkvmk · 1h ago
Ubuntu is not perpetually stable. I mean.. I have some old instances running since 2018 or so and they continue to work fine, but I've been blocked from running/updating certain apps. So my choices now are upgrade the OS and risk breaking everything (almost certain) or... Just keep using the old stuff.
gjsman-1000 · 7h ago
Last time I checked, Ubuntu/Canonical is a multimillion-dollar company, Red Hat is a multibillion-dollar company, SuSE sold for $2.5B, and the Linux Foundation has over $250M in revenue yet spends only 3% of it on development of Linux specifically.
Enough of the BS of "we're just volunteers" - it's fundamentally broken and the powers that be don't care. If multiple multibillion-dollar entities who already contribute don't see the Linux desktop as having a future, and if Linus Torvalds himself doesn't care enough to push the Foundation on it, honestly, you probably shouldn't care either. From their perspective, it's a toy that's only maintained to make it easier to develop the good stuff.
skydhash · 7h ago
Those companies sell server OS support, not consumer desktop. And Linux is rock solid for that purpose.
Desktop Linux is OK. And I think it’s all volunteer work.
8n4vidtmkvmk · 1h ago
OK at best. Barely functional. Incredibly unstable.
codeguro · 5h ago
Get over yourself. Linus himself said Linux is just a hobby. It just happened to be the best because of the lack of red tape dragging development down. It got as big as it did BECAUSE it was a volunteer project with the right choice of license and remains the best DESPITE big corps pouring money all around it. https://www.reddit.com/r/linux/comments/mmmlh3/linux_has_a_i...
ndriscoll · 6h ago
Use nixos. I install updates maybe every few months and it's fine. My desktop experience has been completely solid for almost a decade.
My work computer with Windows on the other hand requires restarts every day or two after WSL inevitably gets into a state where vscode can't connect to it for some reason (when it gets into such a state it also usually pegs the CPU at 100% so even unlocking takes a minute or more, and usually I just do a hard power off).
stereolambda · 5h ago
I sympathize with what you're saying. In theory Docker and Snaps and such are supposed to more explicitly package Linux programs along with their dependencies. Though Docker especially depends heavily on being networked and servers being up.
I'm not a fan of bundling everything under the sun personally. But it could work if people had more discipline about adding a minimal number of dependencies that are themselves lightweight, OR dependencies that are big, common, and backwards-compatible so they can be deduplicated. So sort of the opposite of the culture of putting everything through HTTP APIs, deprecating stuff left and right every month, Electron (which puts browser complexity into anything), and pulling whole trees of dependencies in dynamic languages.
This is probably one of the biggest pitfalls of Linux, saying this as someone to whom it's the sanest available OS despite this. But the root of the problem is wider: we tend to dump the savings in development costs onto all users as increased resource usage. Unless some big corp cares to make stuff more economical, or the project is right for some mad hobbyist. As someone else said, corps don't really care about the Linux desktop.
ZiiS · 4h ago
Whilst this does apply to all distros to some extent, it is Arch's main distinguishing feature: it is a 'rolling release' with all parts being constantly updated. RHEL, for instance, gives you a 13-year cycle where you will definitely not get a destructive update.
SoftTalker · 3h ago
> We all know it now as dependency hell
Too young to remember Windows 3.1 and “DLL hell?” That was worse.
pca006132 · 4h ago
C/C++ dependency management is easy on windows? Seriously? What software did you build from source there?
8n4vidtmkvmk · 1h ago
Once it's compiled, it keeps running. I can still run win32 programs and what not. Is that true of Linux programs? Can I run one compilation on any distro for years to come? I honestly don't know.
codeflo · 15h ago
We as an industry need to seriously tackle the social and market dynamics that lead to this situation. When and why has "stable" become synonymous with "unmaintained"? Why is it that practically every attempt to build a stable abstraction layer has turned out to be significantly less stable than the layer it abstracts over?
dgoldstein0 · 14h ago
So one effect I've seen over the last decade of working: if it never needs to change, and no one is adding features, then no one works on it. If no one works on it, and people quit / change teams / etc, eventually the team tasked with maintaining it doesn't know how it works. At which point they may not be suited to maintaining it anymore.
This effect gets accelerated when teams or individuals make their code more magical or even just more different than other code at the company, which makes it harder for new maintainers to step in. Add to this that not all code has all the test coverage and monitoring it should... It shouldn't be too surprising there's always some incentives to kill, change, or otherwise stop supporting what we shipped 5 years ago.
codeflo · 14h ago
That's probably true, but you're describing incentives and social dynamics, not a technological problem. I notice that every other kind of infrastructure in my life that I depend upon is maintained by qualified teams, sometimes for decades, who aren't incentivized to rebuild the thing every six months.
BobaFloutist · 6h ago
If you're asking why software has more frequent rebuild cycles than, say, buildings, or roads, or plumbing, it's because it's way cheaper and easier, can be distributed at scale for ~free (compared to road designs which necessarily have to be backward compatible since you can't very well replace every intersection in a city simultaneously), and for all the computerification of the modern world, is largely less essential and less painful to get wrong than your average bridge or bus.
tracker1 · 6h ago
It's like the difference between building a birdhouse, a doghouse, a people house, a mansion, a large building, and a skyscraper... there are different levels of planning, preparation, and logistics involved. A lot of software can (or at least should) be done at the birdhouse or doghouse level... pretty easy to replace.
vdupras · 10h ago
Maybe it's a documentation problem? It seems to me that for a piece of software to be considered good, one has to be able to grok how it works internally without having written it.
croemer · 6h ago
Not according to Naur. In his seminal "Programming as Theory Building" he claims that documentation doesn't replace the mental model the original authors had developed: https://pages.cs.wisc.edu/~remzi/Naur.pdf
bloppe · 14h ago
At any given moment, there are 6 LTS versions of Ubuntu. Are you proposing that there should be more than that? The tradeoffs are pretty obvious. If you're maintaining a platform, and you want to innovate, you either have to deprecate old functionality or indefinitely increase your scope of responsibilities. On the other hand, if you refuse to innovate, you slide into obscurity as everyone eventually migrates to more innovative platforms. I don't want to change anything about these market and social dynamics. I like innovation.
tracker1 · 6h ago
They aren't still supporting 14.04. How do you get 6? There's one every other year, and they retire one shortly after a new one comes out. They're also pretty quick to shutter non-LTS release support each LTS generation.
Ygg2 · 14h ago
> When and why has "stable" become synonymous with "unmaintained"?
Because the software ecosystem is not static.
People want your software to have more features, be more secure, and be more performant. So you and every one of your competitors are on an update treadmill. If you ARE standing still (aka being stable) on the treadmill, you'll fall off.
If you are on the treadmill, you are accumulating code, features, and bug fixes, until you either get too big to maintain or a faster competitor emerges and people flock to it.
Solving this is just as easy as proving all your code is exactly what people wanted AND making sure people don't want anything more, ever.
codeflo · 14h ago
> People want your software to have more features, have fewer bugs, and not be exploited. So you and every one of your competitors are on an update treadmill. If you ARE stable, you'll probably fall off. If you are on the treadmill you are accumulating code, features, bug fixes, until you either get off or a faster competitor emerges.
Runners on treadmills don't actually move forward.
Ygg2 · 14h ago
Kinda the point of the treadmill metaphor. If you are standing still on a treadmill, you will fall right off. It requires great effort just to stay in one spot.
BobaFloutist · 6h ago
But adding features isn't staying at the same spot.
8n4vidtmkvmk · 1h ago
Could be, if you're attracting some users who wanted the new stuff and pushing out other customers who just wanted fast and stable.
Ygg2 · 1h ago
Honestly, I think it is. All software exists in a kind of evolutionary race to attract consumers/developers.
If I assume your point is true, wouldn't everyone then just switch to Paint for all 2D picture editing? I mean, it's the fastest - it opens instantly on my machine vs 3-4 sec for Krita/GIMP/Photoshop. But it's also bare bones. So why isn't Paint universally used by everyone?
My assumption: what people want is to not waste their time. If a program is 3 seconds slower to start/edit, but saves you 45 minutes of fucking around in a less featureful editor, it's obvious which one is more expedient in the long run.
myaccountonhn · 8h ago
> People want your software to have more features, be more secure, and be more performant
I think it's worth noting that one reason hardware rots is because software seems to become slower and slower while still doing the same thing it did 15 years ago.
Ygg2 · 1h ago
> still doing the same thing
The core issue, in my humble opinion, is that it's not doing the same thing. But from a thousand miles away it looks like that, because everyone uses 20% of functionality, but everyone in aggregate uses 100% of functionality.
> I think it's worth noting that one reason hardware rots is because software seems to become slower and slower while still doing the same thing it did 15 years ago.
I'd like to see some actual proof of (/thoughts on) this. Like, if you didn't patch any security issues/bugs, or add any features, or fix any dependencies, how is the code getting slower?
Like, I understand some people care about performance, but I've seen many non-performant solutions (Unity, Photoshop, and Rider being preferred over a custom C# engine, Paint, and Notepad++) used nearly universally, which leads me to believe there is more than one value in play.
gjvc · 12h ago
Post a link to a stable repository on GitHub on this site. Watch as several people pipe up and say "last commit 2020; must be dead".
Source code is ASCII text, and ASCII text is not alive. It doesn't need to breathe (modulo dependencies, yes). But this attitude of "not active, must be dead, and therefore: avoid" leads people to believe the opposite: that unproven and buggy new stuff is always better.
Silly counter-example: Vim from 10 years ago is just as usable for the 90% case as the latest one.
tracker1 · 5h ago
I don't assume it's dead based on the last commit. I will look a little further and see where it stands. If there hasn't been a commit for a few years AND there are multiple pull requests that have been sitting unmerged for years and dozens/hundreds of really old issues... then, I'll assume it's dead, and often open another issue asking if it's dead, if there isn't one already.
forgotmypw17 · 18h ago
This and the Lindy Effect factor a lot into my choices for what to use for my projects. My choices for a project I want to be as maintenance-free as possible are special subsets of ASCII/txt, SQLite, Perl, Bash, PHP, HTML, JS, CSS. The subsets I choose are the parts of these languages which have persisted the longest.
Using the Lindy Effect for guidance, I've built a stack/framework that works across 20 years of different versions of these languages, which increases the chances of it continuing to work without breaking changes for another 20 years.
eviks · 17h ago
This dogmatic approach means you lose out on ergonomics by using poorly designed tools like bash and perl, so you incur those costs all the time for little potential benefit far away in the future (after all, that effect is just a broad hypothesis)
zeta0134 · 16h ago
Very helpfully, Python has stuck around for just as long and is almost always a better choice than these two specific tools for anything complicated. It's not perfect, but I'm much more likely to open a random Python script I wrote 6 years ago and at least recognize what the basic syntax is supposed to be doing. Bash beyond a certain complexity threshold is... hard to parse.
Python's standard library is just fine for most tasks, I think. It's got loads of battle-tested parsers for common formats. I use it for asset conversion pipelines in my game engines, and it has so far remained portable between Windows, Linux, and Mac systems with no maintenance on my part. The only unusual package I depend on is Pillow, which is also decently well maintained.
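Roughly, the Pillow part of such a step can be just a few lines; a sketch with made-up directory names:
```python
from pathlib import Path
from PIL import Image  # Pillow

# Sketch of one pipeline step (directory names are hypothetical): normalize
# every source image to RGBA PNG so the engine only ever loads one format.
def convert_images(src_dir: str = "assets/src", out_dir: str = "assets/build") -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in Path(src_dir).iterdir():
        if img_path.suffix.lower() in {".png", ".jpg", ".jpeg", ".bmp", ".tga"}:
            with Image.open(img_path) as im:
                im.convert("RGBA").save(out / (img_path.stem + ".png"))

if __name__ == "__main__":
    convert_images()
```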
It becomes significantly less ideal the more pip packages you add to your requirements.txt, but I think that applies to almost anything really. Dependencies suffer their own software rot and thus vastly increase the "attack surface" for this sort of thing.
sebtron · 15h ago
Python is a very bad example because of the incompatibility between Python 2 and Python 3. All my pre-2012 Python code is now legacy because of this, and since most of it is not worth updating I will only be able to run it as long as there are Python 2 interpreters around.
I like Python as a language, but I would not use it for something that I want to be around 20+ years from now, unless I am ok doing the necessary maintenance work.
mezyt · 12h ago
There's a script to update from Python 2 to Python 3, it's now the most used language in the world, and they learned their lessons from the Python 2 to Python 3 migration. A Python 3 script is literally the most likely candidate to still be working/maintainable by someone else in 20 years.
skydhash · 8h ago
They are actively removing modules from the base install.
gjvc · 12h ago
does 2to3 not suffice / work for your code?
integralid · 9h ago
It never worked for any of my nontrivial Python files; even for simple projects it often failed. It was a good start, but it was not a fully automatic magic migration script. Otherwise the ecosystem migration wouldn't have taken ages, as it did.
forgotmypw17 · 16h ago
My main problem with python is that a script I wrote 6 years ago (or even 1 year ago) is not likely to run without requiring modifications.
If it's me running it, that's fine. But if it's someone else that's trying to use installed software, that's not OK.
Falkon1313 · 14h ago
It depends largely on what you're doing with it. True, I would never want to have to talk a customer through setting up and running a python system. I know there are ways to package them (like 37 different ways), but even that is confusing.
However, a decade ago, a coworker and I were tasked with creating some scripts to process data in the background, on a server that customers had access to. We were free to pick any tech we wanted, so long as it added zero attack surface and zero maintenance burden (aside from routine server OS updates). Which meant decidedly not the tech we work with all day every day which needs constant maintenance. We picked python because it was already on the server (even though my coworker hates it).
A decade later and those python scripts (some of which we had all but forgotten about) are still chugging along just fine. Now in a completely different environment, different server on a completely different hosting setup. To my knowledge we had to make one update about 8 years ago to add handling for a new field, and that was that.
Everything else we work with had to be substantially modified just to move to the new hosting. Never mind the routine maintenance every single sprint just to keep all the dependencies and junk up to date and deal with all the security updates. But those python scripts? Still plugging away exactly as they did in 2015. Just doing their job.
forgotmypw17 · 7h ago
It's not just 2 to 3, either. Both 3.12 and 3.13 introduced breaking changes; that's once per year that you at minimum need to audit all your Python code to ensure it doesn't break.
esseph · 15h ago
This is one of the problems that containers help solve - no OS, just the dependencies required to run your code.
zeta0134 · 15h ago
Python even has venv and other tooling for this sort of thing. Though, admittedly I seem to have dodged most of this by not seriously writing lots of python until after Python3 had already happened. With any luck the maintainers of the language have factored that negative backlash into future language plans, but we'll see.
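The venv side really is tiny; here's a stdlib-only sketch (the directory name is arbitrary):
```python
import venv

# Create an isolated environment with pip available, so a script's
# dependencies stay pinned apart from whatever the system Python ships.
venv.create(".venv", with_pip=True)
# Afterwards: .venv/bin/pip install -r requirements.txt
# (on Windows the binaries land in .venv\Scripts\ instead of .venv/bin/)
```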
Mostly I recoiled in horror at bash specifically, which in addition to the bash version, also ends up invisibly depending on a whole bunch of external environment stuff that is also updating constantly. That's sort of bash's job, so it's still arguably the right tool to write that sort of interface, but it ends up incredibly fragile as a result. Porting a complex bash script to a different distro is a giant pain.
gbin · 14h ago
How many times has the virtualenv/pipenv/pyenv/... tooling changed, though? The package management too, between wheels and setup.py and all the breakages.
Even for somebody who did not aim to keep Python programs running for 20 years, Python is definitely not a good example of a "PDF for programs".
timw4mail · 4h ago
I dislike Python for that reason. I don't love the offside-rule syntax, but compared to how often I have an issue with software written in Python due to some old/deprecated/broken packaging issue...
I've lately been pretty deep into 3d printing, and basically all the software has Python...and breaks quite easily. Whether because of a new version of Pip with some new packaging rule, forced venvs...I really don't like dealing with Python software.
integralid · 9h ago
I used virtualenv for the past 15 years and I don't recall it changing significantly. I don't get why people use new fancy tools like pipenv/pyenv/poetry/uv and then complain that there are too many tools to learn. There is nothing wrong with just using virtualenv. It has its warts but it always worked for me and it's stable.
I think if you had chased every single latest hotness then you would have hit lots of breakages, but depending on what you are doing and where you are running (and what dependencies you are using), I think you could easily have something from 10-15 years ago work today. Part of the trick would have been being aware enough to pick the boring long-term options (but at some level that applies to every language and ecosystem); the other part is understanding what the tools are actually doing and how they are maintained.
esseph · 14h ago
uv means you have to download something and put it together.
A container has that already done, including all supporting libraries.
Edit: then ship the bash script in a container with a bash binary ;)
cjfd · 15h ago
It is also quite possible for old containers to no longer build.
esseph · 14h ago
That's why your build pipeline alerts you when tests no longer work, and then you have a release of the previous build still available for download at any time. This is how containers are released!
cjfd · 14h ago
Sure. It still is burdensome, though. Now there are lots of nightly builds from old projects that break at random times and require developer attention.
esseph · 12h ago
You just described all software, for the decades it often runs.
Entropy sucks.
antepodius · 10h ago
The point of the article is that this does not have to be the case, and is not the case for all software.
esseph · 6h ago
There's a lot of software that ends up lasting for decades, through multiple OS platform refreshes. Normally there's a small platform/OS team that gets to slog through gardening that mess while everyone else is long gone.
saurik · 14h ago
But now I have frozen an old language runtime and a bunch of old libraries into my environment, all of which are not just security hazards but interoperability landmines (a common one being lack of support for a new TLS standard).
esseph · 14h ago
Write a wrapper, don't expose the container.
These are different problems from the distribution/bundling piece, they won't be solved the same way.
saurik · 7h ago
I don't see how that solves either problem. If the thing in the container makes a web request out, that code both might become obsolete and offers an attack surface to get back in, and wrapping the outside of it doesn't change anything.
ndriscoll · 3h ago
As far as TLS is concerned it does: if you are running a server, run it through a TLS terminating reverse proxy. If you are running a client, run it through a TLS terminating forward proxy. As long as your application logic isn't exposed to a security issue, you're fine.
argomo · 16h ago
It has to be weighed against all the time spent learning, evaluating, and struggling with new tools. Personally, I've probably wasted a lot of time learning /new/ that I should have spent learning /well/.
eviks · 16h ago
Right, nothing is free, but switching costs are a different argument.
forgotmypw17 · 16h ago
Is it still dogmatic if I consider Perl to be well-designed and have already evaluated more popular tools?
eviks · 16h ago
If "This and Lindy Effect" do not "factors a lot", but instead the major factor is you believe perl is better designed, then no, dogmatism of vague future risk is replaced with pragmatism of the immediate usefulness
jwrallie · 15h ago
On the point being discussed, which is not breaking backward compatibility, it is indeed arguably better than more popular tools, and I believe Perl has other advantages too.
blueflow · 13h ago
It's not "far away in the future". Every other IT job right now is supporting, maintaining, and fixing legacy software. These are the software choices of the past, and you pay for them in manpower.
calvinmorrison · 8h ago
Calling perl poorly designed is absurd
oguz-ismail · 14h ago
> poorly designed tools like bash and perl
Skill issue, plus what's the alternative? Python was close until the 3.x fiasco
eviks · 14h ago
Indeed, double skill issue: one with the designers and the other one with the users having poor tool evaluation skills
oguz-ismail · 14h ago
> poor tool evaluation skills
Both tools in question are installed everywhere and get the job done. There isn't much to evaluate, and nothing to compare against
eviks · 14h ago
> There isn't much to evaluate, and nothing to compare against
These are exactly the skill issues I meant! Git gud in evaluating and you'll be able to come up with many more sophisticated evaluation criteria than the primitive "it's installed everywhere"
legends2k · 13h ago
While there are other parameters I would consider, like maintainability, ergonomics, mind share, ease of deployment, etc., the ubiquitous-availability point trumps most others. Installation of a new toolchain is usually a hassle when the same task can be done with existing tools. Also, when I present it in a company setting, installing new software and broadening the security attack surface is the first pushback I get.
eviks · 11h ago
Do you advocate the use of Notepad on Windows to edit text because it already exists? What about the increase in the security attack surface from using languages that make it easy to make mistakes in something basic like quoting/escaping? Does it get in the top 10 of pushbacks?
8n4vidtmkvmk · 1h ago
I'd advocate for 'nano' on Linux because it's widely installed and easy for newcomers. A seasoned professional will know they can substitute vim or what have you, I don't need to explain that to them. So yes... If I was trying to explain to a noob how to open a text file on windows and I don't know what they have installed, I'd absolutely tell them to use notepad.
Would I advocate writing my core business software in bash or perl? No, I'd hire and train for what was chosen. For small scripts I might need to share with coworkers? 100%
donatj · 8h ago
I love PHP; however, since around 7.4 they have become pretty happy to make breaking changes to the language, including recently in ways where you cannot satisfy older and newer versions of the runtime simultaneously.
I often end up spending a couple of weeks of my life, on and off, fixing things after every major release.
hu3 · 6h ago
Was it related to magic quotes or global variables created from user input? If so, the writing has been on the wall for over a decade for these.
akkartik · 17h ago
For similar considerations + considerations of code size and hackability, I lately almost always prefer the Lua eco-system: https://akkartik.name/freewheeling
testthetest · 12h ago
It’s getting harder to make perfect choices as projects grow more complex. Even simple products often mean building for iOS, Android, web, backend, etc. You can only lean on html/js for parts of that, but in practice, the mobile apps will probably get rewritten every few years anyway.
From my side I think it's more useful to focus on surfacing issues early. We want to know about bugs, slowdowns, and regressions before they hit users, so everything we write is written using TDD. But because unit tests are coupled with the environment, they "rot" together. So we usually set up monitoring, integration, and black-box tests super early on and keep them running as long as the project is online.
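A black-box check can be as small as this sketch (the URL is hypothetical; the real ones run on a schedule against the live deployment):
```python
import sys
import urllib.request

# Minimal black-box check: it only exercises the deployed service through its
# public interface, so it stays useful even as the internals and the unit
# tests rot alongside the codebase.
def check(url: str, timeout: int = 5) -> None:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read()
        assert resp.status == 200, f"unexpected status {resp.status}"
        assert body, "empty response body"

if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "https://example.com/health")
    print("ok")
```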
icameron · 16h ago
Nobody has a better ecosystem of “industrial marine grade code rot resistance” than Microsoft. That I can run the same .NET web app code compiled 20 years ago on a new Server 2025 is an easy experience unequaled by others. Or the same 30 year old VBA macros still doing their thing in Excel 365. There’s a company that knows how to do backwards compatibility.
Cthulhu_ · 8h ago
I suspect this is true for a lot of software; a modern-day JVM can still run Java from 20 years ago as long as you didn't do anything weird, Linux hasn't significantly changed since then, and the web is one of the most widely supported and standardized platforms out there, with the 1996 Space Jam website still working and rendering today as it did back then (your screen just got bigger).
Is software rot real? I'm sure, but it's not in the runtime. It's likely in the availability and compatibility of dependencies, and mainly the Node ecosystem.
veggieroll · 7h ago
I don't have experience with .NET. So that's nice to hear you've got a reliable setup. But, this has generally not been my experience with Microsoft.
There are tons of old programs from the Windows 95-XP era that I haven't been able to get running. Just last week, I was trying to install and run point-and-click games from 2002, and the general advice online is to just install XP. There was a way (with some effort) to get them working on Windows 7. But there's no way to get them to work that I've seen on 10/11.
Faark · 6h ago
HellCopter in my case. A few years ago I still somehow managed to get some edition running; no luck this time. Rot is a thing, and the retro gaming & archival communities are a blessing.
js8 · 15h ago
As somebody who works on IBM mainframes, I disagree. IBM is probably the best at forward application compatibility.
People will laugh, but they should really look.
mkesper · 4h ago
Yes, honestly, when you're talking about planning for 40 years and more of backwards compatibility probably nothing beats IBM systems, and not only the big mainframes but also the smaller systems like AS/400 (nowadays System i). https://en.wikipedia.org/wiki/IBM_i
mrkeen · 10h ago
I had the opposite experience. Tried out XNA one year for the Global Game Jam. Was somewhat pleased with it. It was gone the next year.
1vuio0pswjnm7 · 1h ago
"HTTP vs HTTPS
It is possible to consult this wiki on port 80, that is to say using http:// instead of https://."
"If you do not have access to git on your operating system, you can download a zip file that contains both the markdown source files and the generated HTML files, with the paths fixed. The zip file is generated once a week."
Would it be appropriate to include a digital signature, as is commonly found on mirrors?
Thought experiment: if it were standard practice to offer a compressed archive, would websites still be hammered by unwanted crawlers?
If the answer is yes, then what if we removed/denied access to the online pages and only allowed access to the compressed archive?
phkahler · 8h ago
I'm looking at GTK here. Don't get me wrong, I like GTK and think it should be the preferred choice of GUI toolkit for many reasons. However, I have the same complaints a lot of people do about constant change and API compatibility issues. In some cases things need to change, but why, going from 3 to 4, have menus been removed, requiring the use of other constructs? Could you at least provide a wrapper? Don't use event struct members directly, OK, use accessor functions... But then you change the names and other details of the functions. It's not a "window" any more, it's a "surface", just because what? Because Wayland calls them that? API stability is an important feature, but these guys are talking about regular (say, every 5 years) major version bumps that break things.
8n4vidtmkvmk · 1h ago
I never understood those kinds of changes. If you really, really, really want to rename something, deprecate the old thing and make it an alias for the new thing. Don't delete it. Just add compatibility shims. Then after 5 or 10 years, take a pulse of the community. If they've mostly migrated off, go ahead and delete. If it's still actively used... Keep it, or even undeprecate it because clearly it's wanted.
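In Python terms the shim is nearly free (the names below are made up, just to show the shape):
```python
import warnings

def create_surface(width: int, height: int) -> dict:
    """The renamed API everyone is supposed to use going forward."""
    return {"width": width, "height": height}

def create_window(width: int, height: int) -> dict:
    """Deprecated alias kept as a compatibility shim for existing callers."""
    warnings.warn(
        "create_window() is deprecated; use create_surface() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return create_surface(width, height)
```
Old callers keep working, new callers get nudged, and you only delete the alias once the warnings stop showing up in the wild.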
Daub · 16h ago
As a software user and teacher, I think about software rot a lot. My concern is that it has a tendency to grow by addition rather than replacement. New features are added whilst the fundamental limits of the architecture are left unattended to.
The reason that Blender grew from being an inside joke to a real contender is the painful re-factoring it underwent between 2009 and 2011.
In contrast, I can feel the fact that the code in After Effects is now over 30 years old. Its native tracker is slow and ancient and not viable for anything but the most simple of tasks. Tracking was 'improved' by sub-contracting the task to a sub-licensed version of Mocha via a truly inelegant integration hack.
There is so much to be said for throwing everything away and starting again, like Apple successfully did with OS X (and Steve Jobs did to his own career when he left Apple to start NeXT). However, I also remember how BlackBerry tried something similar and in the process lost most of their voodoo.
foxrider · 15h ago
Python 2 situation opened my eyes to this. To this day I see a lot of py2 stuff floating around, especially around work environments. So much so, in fact, that I had to make scripts that automatically pull the sources of 2.7.18 and build them in the minimal configuration to run stuff.
Ygg2 · 14h ago
Python 2 is a warning about making backwards-incompatible changes too late. As soon as you have a few massive libraries, your backward compatibility risks grow exponentially.
C# did just as big of a change by going from type-erased to reified generics. It broke the ecosystem in two (pre- and post- reified generics). No one talks about it, because the ecosystem was so, so tiny, no one encountered it.
saurik · 14h ago
It certainly didn't help that they were annoying about it; like, they actively dropped some of the forward compatibility they had added (a key one being if you had already carefully used u and b prefixes on strings) in Python 3.0, and only added it back years later after they struggled to get adoption. If they had started their war with Python 3.5 instead of 3.0 it would be a lot less infuriating.
flomo · 14h ago
Not being a python dev, there must have been some huge superficial 'ick'. Back when, I was talking to a python guy and mentioned that Python 3 was coming out. He said something like "we're just going to ignore that until they sober-up and fix it." Which it seems like a lot of people actually did. (or they really sobered-up and rewrote in Go or something.)
zahlman · 3h ago
> He said something like "we're just going to ignore that until they sober-up and fix it." Which it seems like a lot of people actually did.
"It" was fixed long before the 2.7 official sunset date. Even before the original planned date before it got extended, frankly.
vrighter · 14h ago
When did c# have type erasure?
LittleCloud · 11h ago
C# 1.0 did not have generics, period. So the standard dictionary (Hashtable†) type took keys and values typed as "System.Object". As seen in the linked documentation this class still exists in the latest .NET to this day.
Occasionally one would still encounter non-generic classes like this when working with older frameworks/libraries, which cause a fair bit of impedance mismatch with modern styles of coding. (Also one of the causes of some people's complaints that C# has too many ways of doing things; old ways have to be retained for backwards compatibility, of course.)
I do recall some paper mentioning it. But now I'm not sure if Google is gaslighting me or it never existed. But it seems you are right.
perrygeo · 8h ago
I don't like the term "rot" - your software isn't rotting, it's exactly the same as when you last edited it. The rest of the software ecosystem didn't "rot" either, it evolved. And your old code didn't. "Extinction" seems a much better fit.
8n4vidtmkvmk · 1h ago
If the apple remains pristine, but the tree trunk dies, it doesn't much matter that the apple hasn't changed. It dies with the tree.
munchler · 8h ago
If we're just going by word meanings, "obsolescence" seems even more apt, but "rot" has an ironic charm that makes it memorable.
pvtmert · 12h ago
It is interesting that one of the most solid software components, relatively resistant to rot, is shell/Bash scripts (including Makefiles).
Python, Ruby, etc. constantly become obsolete over time; packages get removed from the central repositories and cease to work.
Obviously shell scripts have _many_ external dependencies, but overall the semantics are well defined, even if the actual definitions are only loosely coupled to the shell script itself.
P.S.: I have written a couple of Bash-script projects that are still running (mostly deployment automation scripts). Meanwhile, some others, where I was being "smart" and wrote them in Python 2.7, unfortunately ceased to function and required upgrades...
8n4vidtmkvmk · 55m ago
I recently discovered how many of my bash scripts run perfectly fine in Git Bash on Windows. I was pleased. I believe Bun (yes, the JS runtime) can run .sh scripts too. And of course they run under WSL as well. So that's three different ways I can run them on Windows. File paths can get a little sketchy, but with a couple of helpers it's not hard to convert back and forth.
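Here's roughly what those helpers look like, sketched with Python's pathlib rather than shell (assuming the /c/... path style Git Bash uses):
```python
from pathlib import PurePosixPath, PureWindowsPath

def win_to_gitbash(path: str) -> str:
    """C:\\Users\\me\\x.txt -> /c/Users/me/x.txt (the style Git Bash expects)."""
    p = PureWindowsPath(path)
    if p.drive:  # e.g. "C:"
        return "/" + "/".join([p.drive.rstrip(":").lower(), *p.parts[1:]])
    return p.as_posix()

def gitbash_to_win(path: str) -> str:
    """/c/Users/me/x.txt -> C:\\Users\\me\\x.txt."""
    parts = PurePosixPath(path).parts  # ("/", "c", "Users", ...)
    if len(parts) >= 2 and parts[0] == "/" and len(parts[1]) == 1:
        return str(PureWindowsPath(parts[1].upper() + ":\\", *parts[2:]))
    return str(PureWindowsPath(path))
```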
benchly · 11h ago
What about C? Is there something to be said for the proximity of the language to the machine language?
Maybe that sounds like a dumb question because I am a beginner at coding, so I apologize, but it's something I've recently gotten into via an interest in microcontrollers. A low(er)-level language like C seems to be pretty universally sound, running on just about anything, and seems to have not changed much in a long, long time. But when I was dabbling in Python, Ruby (on Rails), and the .NET Framework, I noticed what you mean, as I scooped up some old projects on GitHub thinking I'd add to them and realized that it would be a chore to get them updated.
prmoustache · 11h ago
Frankly, Perl 5 seems like one that is very resistant to rot. Tcl and Tk as well.
Software does not rot; the environment around the software, the very foundation whose existence serves the singular task of enabling that software, is what rots.
Look at any environment snapshot in time: the software keeps working like it always did. Start updating the environment, and the software stops working; or rather, the software works fine, but the environment no longer works.
I'm not saying never to update software, but only do it if it increases speed, decreases memory usage, and broadens compatibility.
I like things better the way they were.
I like things better now than how they will be tomorrow.
I can't remember the last time I saw a software update that didn't make it worse.
prinny_ · 11h ago
I don't get the comparison to building a house. Houses require a ton of maintenance. You can't build a house on steady ground and leave it unattended for 20 years either. And sometimes what you need to do is not even construction-type maintenance; it's bills, legal paperwork, replacing old furniture just because you grew tired of that 15-year-old sofa, etc.
9rx · 5h ago
Houses do require a ton of maintenance, but typically within a stable environment. Yeah, laws might change every once in a blue moon, and over hundreds of years climate change may alter the patterns of wear and tear, but for all intents and purposes what you know today is likely to still hold for tomorrow.
But if you build on a bog, then you've unnecessarily introduced a whole lot of new variables to contend with.
rgmerk · 14h ago
You can't build permanent software in a world where a) everything is connected to everything else, and b) hackers will exploit anything and everything they can get their hands on.
8n4vidtmkvmk · 58m ago
Isn't that half the problem right there? Stop connecting everything to everything. Make your app work offline. And for anything that must connect, at least use a well-defined API that can be swapped for some other service. E.g. the S3 API is now the de facto standard for object storage.
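For example, with boto3 the swap is mostly one parameter (the endpoint and credentials below are placeholders):
```python
import boto3

# Sketch only: the same client code talks to AWS S3 or to any S3-compatible
# store (MinIO, Ceph, etc.) just by swapping endpoint_url and credentials,
# which is the whole point of relying on a de facto API.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9000",
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
)
s3.upload_file("backup.sqlite", "backups", "backup.sqlite")
```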
collinmcnulty · 10h ago
In my experience, the most common type of rot is that the real world the software describes has changed. For instance, I wrote software that modeled electrical power contracts, and when those real world contracts’ structure changes, no amount of “bedrock platform” is going to prevent that rot.
8n4vidtmkvmk · 48m ago
I don't know anything about electrical power, but this reminds me of a story about making assumptions about human relationships. Like, you can legally be your own grandfather. Does your program account for that?
Software needs to be flexible to account for new, weird scenarios. Sometimes we just can't predict everything, but we can try.
userbinator · 16h ago
> those written for e.g. Linux will likely cease working in a decade or two
Have we already passed the era of DON'T BREAK USERSPACE when Linus would famously loudly berate anyone who did?
I suspect Win32 is still a good target for stability; I have various tiny utilities written decades ago that still work on Win11. With the continued degradation of Microsoft, at least there is WINE.
SkiFire13 · 15h ago
The kernel doesn't break userspace, but userspace breaks itself quite often.
whizzter · 15h ago
While the early core win32 parts are still fine, much COM based stuff will probably be a pain in the future.
It's not direct breakage per se (APIs were generated from definition files, and there was encouragement to build new API versions when breaking APIs); the issue will be that many third-party things had to be manually installed from more or less obscure sources.
Your Office install probably introduced a bunch of COM objects. Third party software that depended on those objects might not handle them being missing.
I think I took some DOS-like shortcuts with some of my early DirectDraw (DirectX 3 level?) code, afaik it doesn't work in fullscreen past Windows Vista but _luckily_ I provided a "slow" windowed GDI fallback so the software still kinda runs at least.
atq2119 · 15h ago
No, the issue in Linux is that userspace has traditionally had a tendency to break itself.
cookiengineer · 15h ago
You can account for that by using Go, with CGO_ENABLED=0. Then you have a self contained binary that relies solely on the POSIX syscalls.
dpassens · 12h ago
Linux syscalls. There's no such thing as POSIX syscalls because POSIX defines a C interface.
cookiengineer · 10h ago
You're right that POSIX doesn't specify syscall tables.
What I wanted to point out is that Go also supports BSDs and other kernels out of the box that implement the POSIX standard, though with slightly different syscall tables and binary formats on those target platforms and architectures.
I was referring to POSIX as a standard specification, because it also includes not only Linux's syscall table, but also various other things that you can typically find in the binutils or coreutils package, which Go's stdlib relies on. See [2.1.3] and following pages.
I guess what I wanted to say: If I would bet on long term maintenance, I would bet on POSIX compatibility, and not on one specific implementation of it.
I am considering learning Win32 for that purpose, not that I plan to do anything complicated with it, but for small single purpose tools it should be useful, at least when some kind of UI is needed.
rerdavies · 14h ago
Don't do it! Seriously! Terrible awful stuff to use. Even worse to program for!
wolvesechoes · 16h ago
I would love ReactOS to succeed.
Imustaskforhelp · 14h ago
If I remember correctly, ReactOS just uses Wine under the hood, which can be used on Linux and even Mac or BSD too.
Basically, "compile for Windows" seems like a good enough tradeoff to run anywhere, right?
But I prefer AppImage or Flatpak because of the overhead that Wine might introduce, I suppose.
wolvesechoes · 14h ago
> If I remember correctly, ReactOS just uses wine under the hood
Nah. I think they share some effort and the ReactOS team adds patches to the Wine codebase, but it is a separate thing.
jgb1984 · 8h ago
Today we vibecode our software, so the rot is built in from day one!
ManBeardPc · 11h ago
Software rot is a big problem in many business tools. Everything not recently built from the tip of main is probably no longer working. APIs change, URLs change, processes change, and newer versions of dependencies no longer work because of version conflicts or deprecation without replacement. The amount of work just to fix the rot is constantly rising. No care is spent on keeping things stable. Stable interfaces are really something I have learnt to appreciate more and more, even if they are sometimes crusty and verbose. If something is reliably working, don't replace it without a very good reason. I attribute these problems mostly to agile as it is practiced in many companies: only thinking about providing "value" per sprint, with little to no planning ahead, and prioritizing ease of change over everything else.
Copenjin · 16h ago
> Software rot is a big issue for cultures that constantly produce new programs
Cough cough vibing cough cough
alexshendi · 16h ago
Common Object File Format (COFF)?
Copenjin · 15h ago
onomatopoeia, fixed spelling :)
b_e_n_t_o_n · 16h ago
Is it possible that software is not like anything else, that it is meant to be discarded: that the whole point is to always see it as a soap bubble?
mmackh · 16h ago
Not if software is tied to infrastructure, buildings, etc.
tanduv · 15h ago
but even buildings need maintenance
spauldo · 14h ago
Automation costs a lot. The projects I work on are almost always in the millions of dollars, and they're far from being considered "big" projects. The hardware manufacturers will sell you equipment that runs for thirty years. Companies are reluctant to replace working systems.
I replaced a PLC a couple years ago. The software to program it wouldn't run on my laptop because it used the win16 API. It used LL-984 ladder logic, and most people who were experts in that have retired. It's got new shiny IEC-compliant code now, and next they're looking at replacing the Windows 2000 machines they control it with. Once that's done, it'll run with little to no change until probably 2050.
b_e_n_t_o_n · 14h ago
Perhaps software should be designed in such a way that even when it is tied to infrastructure, it can be swapped out and discarded.
8n4vidtmkvmk · 47m ago
I don't want to reformat my fridge.
treyd · 9h ago
If it works and isn't broke then why swap or discard it?
Vegenoid · 5h ago
This is why I put so much effort into working with POSIX shell code, despite its painful syntax. I can be pretty damn sure my knowledge will still be relevant in 30 years.
LLMs have also made reading and writing shell code much easier.
Falkon1313 · 14h ago
Over the course of my learning and my career, I've kind of gone back and forth on this a bit.
On the one hand, software is like a living thing. Once you bring it into this world, you need to nurture it and care for it, because its needs, and the environment around it, and the people who use it, are constantly changing and evolving. This is a beautiful sentiment.
On the other hand, it's really nice to just be done with something. To have it completed, finished, move on to something else. And still be able to use the thing you built two or three decades later and have it work just fine.
The sheer drudgery of maintenance and porting and constant updates and incompatibilities sucks my will to live. I could be creating something new, building something else, improving something, instead, I'm stuck here doing CPR on everything that I have to keep alive.
I'm leaning more and more toward things that will stand on their own in the long-term. Stable. Done. Boring. Lasting. You can always come back and add or fix something if you want. But you don't have to lose sleep just keeping it alive. You can relax and go do other things.
I feel like we've put ourselves in a weird predicament with that.
I can't help but think of Super Star Trek, originally written in the 1970s on a mainframe, based on a late 1960s program (the original mainframe Star Trek), I think. It was ported to DOS in the 1990s and still runs fine today. There's not a new release every two weeks. Doesn't need to be. Just a typo or bugfix every few years. And they're not that big a deal. -- https://almy.us/sst.html
I think that's more what we should be striving for. If someone reports a rare bug after 50 years, sure, fix it and make a new release. The rest of your time, you can be doing other stuff.
bravesoul2 · 17h ago
JS is hated, but if you compile to browser JS, that code will run in 2100. If you mainly deal with files/blobs rather than databases, you will have those things in 2100 too. I think a lot of apps can be JS plus Dropbox integration to sync files. Dropbox may rot, but make that a plugin (a separate .js file) and offer local read/write too, and I think you'd be pretty future-proof.
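A minimal sketch of that plugin idea, in TypeScript that compiles to plain browser JS. The names here (FileStore, LocalStore, dropbox-store.js) are made up for illustration, and the local fallback just uses localStorage:

// The app only ever talks to this tiny interface.
interface FileStore {
  read(path: string): Promise<string | null>;
  write(path: string, data: string): Promise<void>;
}

// Local fallback: should keep working for as long as browsers keep localStorage.
class LocalStore implements FileStore {
  async read(path: string) { return localStorage.getItem(path); }
  async write(path: string, data: string) { localStorage.setItem(path, data); }
}

// The sync service lives in its own file; if Dropbox rots, only this module changes.
async function pickStore(): Promise<FileStore> {
  try {
    const plugin = await import("./dropbox-store.js"); // hypothetical plugin file
    return new plugin.DropboxStore() as FileStore;
  } catch {
    return new LocalStore(); // degrade gracefully to local-only
  }
}

The app would call pickStore() once at startup and never mention Dropbox again, which is what makes the sync layer swappable or removable later.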
jillesvangurp · 16h ago
Except of course software rot and javascript code bases go hand in hand.
You seem to assume browsers have stopped changing and will be more or less the same 75 years from now.
I think you are right that that code might run. But probably in some kind of emulator. In the same way we deal with IBM mainframes right now. Hardware and OS have long since gone the way of the dodo. But you can get stuff running on generic linux machines via emulation.
I think we'll start seeing a lot of AI driven code rot management pretty soon. As all the original software developers die off (they've long been retired); that might be the only way to keep these code bases alive. And it's also a potential path to migrating and modernizing code bases.
Maybe that will salvage a few still relevant but rotten to the core Javascript code bases.
8n4vidtmkvmk · 37m ago
If they want to introduce html6, they'll add a new doctype. If they want more modern JS, they'll add another "stricter" opt-in or file type or what have you. The fact that it will continue to run with zero changes is good enough.
mook · 15h ago
Things like E4X, sharp variables, and array comprehensions have already been removed; it's just that the mass of newer developers means the average one doesn't know about them. Unfortunately it's not like they never remove things.
8n4vidtmkvmk · 39m ago
I didn't know about any of those, and I've been using JS for 20-ish years. I could have used those sharp variables in a previous project! Fortunately it's not terribly hard to do the same thing with an IIFE. Or a get() method.
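For anyone who never saw sharp variables: they let you write a self-referential (cyclic) object inline. A rough sketch of the two replacements mentioned above, in plain TypeScript/JS with made-up names:

// IIFE version: build the object, close the cycle, hand it back.
const node = (() => {
  const o: { label: string; self: object | null } = { label: "root", self: null };
  o.self = o; // the cycle that sharp variables used to spell inline
  return o;
})();

// Getter version: the self-reference is resolved lazily, on first access.
const lazy: { label: string; self: unknown } = {
  label: "root",
  get self() { return lazy; },
};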
bakkoting · 2h ago
As far as I'm aware none of those were ever supported in more than one engine (specifically Firefox) and so cannot reasonably be considered to have been part of JavaScript. JS really does make a point of not removing things.
There are some _very_ rare exceptions, but they're things like "support for subclassing TypedArrays", and even then this is only considered after careful analysis to ensure it's not breaking anyone.
flomo · 14h ago
You are right that your javascript bundle will probably run forever as-is. However, three years later your toolchain will be totally broken and now you are in NPM Hell trying to fix it. Ten years later, good luck.
(S3 better example than Dropbox. That will mostly be around forever.)
8n4vidtmkvmk · 42m ago
I've been debating how to go about installing JS packages directly into my project. Like, when I update a dependency, show me the diff for that whole project, not just my package.json bump. If it's good, I'll just accept the merge as-is. If it's bad, it can stay pinned to that version forever. No downloads. I guess this doesn't solve the "peer dependency" issue if two different things depend on one thing in a way such that I can't have two copies of the lib (e.g. React 18 and 19 is a no-no, but two lodashes is fine). Hmm... a man can dream.
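A rough sketch of that "vendor it and review the diff" workflow (Node/TypeScript; the vendor/ layout and the lodash default are made up for illustration):

import { cpSync, rmSync } from "node:fs";

// After updating a dependency, copy its installed source into the repo so the
// next `git diff` shows exactly what changed inside the package itself.
const pkg = process.argv[2] ?? "lodash"; // package to vendor, illustrative default
rmSync(`vendor/${pkg}`, { recursive: true, force: true });
cpSync(`node_modules/${pkg}`, `vendor/${pkg}`, { recursive: true });
console.log(`vendored ${pkg}; review with: git diff -- vendor/${pkg}`);

It doesn't solve the peer-dependency case, but it does make each upgrade a reviewable change rather than an opaque lockfile bump.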
tekno45 · 17h ago
postgres will be around in 2500 lol
Kinrany · 15h ago
Postgres will get disassembled into independent composable parts and some other "distribution" of it will be used for a more narrow set of use cases that actually require running the database as a standalone process
joegibbs · 15h ago
I think you shouldn't worry too much about trying to avoid it. Think 5-10 years ahead max, rather than 20.
In 20-30 years there’s a good chance that what you’ve written will be obsolete regardless - even if programs from 1995 ran perfectly on modern systems they’d have very few users because of changing tastes. A word processor wouldn’t have networked collaborative editing (fine for GRRM though), an image editor wouldn’t have PNG support, and they wouldn’t be optimised for modern hardware (who would foresee 4K screens and GPUs back then - who knows how we’ll use computers in 2055).
There are also always containers if the system needs those old versions.
whizzter · 15h ago
2005 is already 25 years ago, and what the author is hinting at is that the difference in difficulty between keeping 1980s software running vs keeping 2005 software running is momentous.
1980s NES software is "easy", as in emulating a CPU and the associated hardware (naturally there are corner cases in emulation timing that make it a lot harder, but it's still a limited system).
I used to make demos as mentioned in the article; the ones I did for DOS probably all work under DOSBox. My early Windows demos, on the other hand, relied on a "bad" way of doing things with early DirectDraw versions that mimicked how we did things under DOS (i.e., writing to the framebuffer ourselves). For whatever reason the changes in Vista to the display driver model have made all of them impossible to run in fullscreen (luckily I wrote a GDI variant for windowed mode that still makes it possible to run them).
Even worse is some stuff we handle at an enterprise customer. Crystal Reports was once endorsed by Microsoft and AFAIK included in Visual Studio installs. Nowadays it is abandoned by MS and almost by its owner (SAP), and we've tried to maintain a customized printing application for a customer, relying on obscure DLLs (and, even worse, the SAP installer builder for some early-2000s install technology that hardly works with modern Visual Studio).
Both these examples depend on libraries being installed in a full system. Sure, one could containerize the needed ones, but looking at the problem with an archivist's eyes, building custom Windows containers for thousands of pieces of software isn't going to be pretty (or even feasible in a legal sense, both with copyright and activation systems).
Now you could complain about closed-source software, but much of the more obscure *nix software tends to exhibit a strong "works on my machine" mentality; configure scripts and Docker weren't invented in a vacuum.
saurik · 14h ago
> 2005 is already 25 years ago
o_O
whizzter · 5h ago
Brainfart. The DDraw-based demos I wrote about were made from 1999 to 2002, so around 25 years ago, but I then wrote 2005 because IIRC the enterprise stuff I also mentioned was probably made around then. Regardless, it's all of Win2k-XP-era vintage, from before Macs and phones started gaining market share, when developing with deep Windows integrations was a totally rational choice.
pvtmert · 12h ago
a confirmed time-traveller :)
biscuits1 · 7h ago
". . . whose specifications are static and solid."
Well, that's the problem with software. There isn't agreement on such specifications. We aren't working with wood and nails, nor forming a sill footing on bedrock.
kragen · 4h ago
Konrad Hinsen calls this "software collapse": when the platform has been eroded out from underneath your software and it collapses. https://hal.science/hal-02117588/document
There's no reason such a "bedrock platform"♢ needs to be a shitty pain in the ass like the IBM PC or NES (the examples on https://permacomputing.net/bedrock_platform/). Those platforms were pragmatic tradeoffs for the existing hardware and fabrication technology in the market at the time, based on then-current knowledge. We know how to do much better tradeoffs now. The 8088 in the IBM PC was 29000 transistors, but the ARM 2 was only 27000 transistors†. Both could run at 8 MHz (the 8088 in its later 8088-2 and 80C88 incarnations), but the ARM 2 was a 32-bit processor that delivered about 4 VAX MIPS at that speed (assuming about ½DMIPS/MHz like the ARM3‡) while the 8088 would only deliver about 0.3 VAX MIPS (it was 0.04DMIPS/MHz). And programming for the 8088's segmented memory model was a huge pain in the ass, and it was crippled by only having 20 address lines. 8088 assembly is full of special-purpose registers that certain instructions have to use; ARM assembly is orthogonal and almost as high-level as C.
Same transistor count, same clock speed, dramatically better performance, dramatically better programming experience.
Similarly, Unix and Smalltalk came out about the same time as Nova RDOS and RT-11, and for literally the same machines, but the power Unix and Smalltalk put in the hands of users far outstripped that of those worse-designed systems.
So, let's put together a bedrock platform that we could actually use for our daily computing in practice. Unlike the NES.
______
♢ Stanislav Datskovskiy's term: http://www.loper-os.org/?p=55
† https://en.wikipedia.org/wiki/Transistor_count
‡ https://netlib.org/performance/html/dhrystone.data.col0.html but note that the ARM3 had a cache, so this depends on having RAM that can keep up with 8MHz. Both the ARM2 and ARM3 were mostly-1-instruction-per-clock pipelined RISCs with almost exactly the same instruction set. https://en.wikipedia.org/wiki/Acorn_Archimedes, https://www.onirom.fr/wiki/blog/21-04-2022_Acorn-Archimedes/, and its twin https://wardrome.com/acorn-archimedes-the-worlds-first-risc-... confirm the 8MHz and 4MIPS.
> ARM assembly is orthogonal and almost as high-level as C.
The AArch64 is wacky in its own, different, way. For example, loading a constant into a register, dealing with an offset to an index, etc. It also has special purpose registers, like the zero register.
The PDP-11 architecture remains the best gem of an orthogonal instruction set ever invented.
kragen · 4h ago
Yeah, ARM64 is a little weird, and I'm not sure it's a good design, though it does seem to be workable. But I'm talking about the original ARM instruction set implemented on the ARM2, as evidence that architectural design quality matters—the same 29000 transistors can give you 12 times the performance and a much better programming model.
The PDP-11 seems pleasant and orthogonal, but I've never written a program for it, just helped to disassemble the original Tetris, written for a Soviet PDP-11 clone. The instruction set doesn't feel nearly as pleasant as the ARM: no conditional execution, no bit-shifted index registers, no bit-shifted addends, only 8 registers instead of 16, and you need multiple instructions for procedure prologues and epilogues if you have to save multiple registers. They share the pleasant attribute of keeping the stack pointer and program counter in general-purpose registers, and having postincrement and predecrement addressing modes, and even the same condition-code flags. (ARM has postdecrement and preincrement, too, including by variable distances determined by a third register.)
The PDP-11 also wasn't a speed demon the way the ARM was. I believe that speed trades off against everything, and I think you're on board with that from your language designs. According to the page I linked above, a PDP-11/34 was about the same speed as an IBM PC/XT.
Loading a constant into a register is still a problem on the ARM2, but it's a problem that the assembler mostly solves for you with constant pools. And ARM doesn't have indirect addressing (via a pointer in memory), but most of the time you don't need it because of the much larger register set.
The ARM2 and ARM3 kept the condition code in the high bits of the program counter, which meant that subroutine calls automatically preserved it. I thought that was a cool feature, but later ARMs removed it in order to support being able to execute code out of more than just the low 16 mebibytes of memory.
Here's an operating system I wrote in 32-bit ARM assembler (http://canonical.org/~kragen/sw/dev3/monokokko.S). r10 is reserved for the current task pointer, which doesn't conform to the ARM procedure call standard. (I probably should have used r9.) It's five instructions:
.syntax unified
.thumb
.fpu fpv4-sp-d16
.cpu cortex-m4
.thumb_func
yield: push {r4-r9, r11, lr} @ save all callee-saved regs except r10
str sp, [r10], #4 @ save stack pointer in current task
ldr r10, [r10] @ load pointer to next task
ldr sp, [r10] @ switch to next task's stack
pop {r4-r9, r11, pc} @ return into yielded context there
While Emacs itself is not entirely immune to software rot (external dependencies and all), it's truly amazing how little rot is experienced by Elisp software (packages). If you find an Emacs package written 15 years ago, the chances of it running successfully out of the box are incredibly high.
zkry · 13h ago
I was thinking the exact same thing. As long as you're not depending on any external packages, things are very stable. Like, if your package depends on adding advice to some other package's random internal function, then yeah, it could easily break.
It's a great feeling knowing any tool I write in Elisp will likely work for the rest of my life as is.
deafpolygon · 14h ago
That... has not been my experience.
account42 · 13h ago
The OS/libraries changing is one example of software rot but another one is requirements changing and you can't completely eliminate that.
alexshendi · 16h ago
I think once you get rid of dynamic libraries and GUIs your software rot will be greatly reduced.
conartist6 · 11h ago
If you're listening, VSCode forkers: ya built on bog
8n4vidtmkvmk · 33m ago
I'm sitting on the sidelines, waiting for them to all sink. I guess I don't stand to gain anything, but I'll be happy to keep sipping my beer.
BirAdam · 7h ago
The most reliable targets I’ve seen recently are Win32 and CGI. They just work. Linux, Windows, macOS. Decade after decade.
superkuh · 16h ago
Unless explicitly addressed, rot rate is proportional to popularity.
Unpopular targets, platforms, languages, etc don't get changed and provide a much needed refuge. There are some interpreted languages like perl where a program written today could run on a perl from 2001 and a program from 2001 would run on perl today. And I'm not talking about in a container or with some special version. I'm talking about the system perl.
Some popular languages these days can lose forwards compatibility (gain features, etc.) within just a few months, and every dev will be using those features within a few more months. In these cultures software rot is really fast.
wolvesechoes · 16h ago
> Unpopular targets, platforms, languages, etc don't get changed
Ah yes, Windows, some niche OS for hipsters and basement-dwellers.
superkuh · 8h ago
Windows was an amazing exception to this rule for 2 decades. It's incredibly impressive that win32 still works on modern Windows installs. But that's changing fast now.
fuzzfactor · 17h ago
There were companies not quite worth a $billion who would have never made it that far if they couldn't convince masses of people that platform rot was good for them.
jongjong · 10h ago
This is a good discussion to have. I spend a lot of effort on evaluating dependencies. I look for a number of things like how popular/widely used it is, who the author is (if I recognize them) and I also look at code quality and number of sub-dependencies.
If I see a library which is solving a simple problem but it uses a lot of dependencies, I usually don't use that library. Every dependency and sub-dependency is a major risk... If a library author doesn't understand this, I simply cannot trust them. I want the authors of my dependencies to demonstrate some kind of wisdom and care in the way they wrote and packaged their library.
I have several open source projects which have been going for over a decade and I rarely need to update them. I was careful about dependencies and also I was careful about what language features I used. Also, every time some dependency gave me too much trouble I replaced it... Now all my dependencies are highly stable and reliable.
My open source projects became a kind of Darwinian selection environment for the best libraries. I think that's why I started recognizing the names of good library authors. They're not always super popular, but good devs tend to produce consistent quality that usually gets better with time. So if I see a new library and I recognize the author's name, it's a strong positive signal.
It feels nice seeing familiar niche names come up when I'm searching for new libraries to use. It's a small secret club and we're in it.
8n4vidtmkvmk · 30m ago
Picking packages based on their author has become a big thing for me. Some authors are awful about backwards compatibility, some are fantabulous. There's a couple folks that have bitten me that I rather despise now and avoid like the plague.
alexjurkiewicz · 15h ago
It's hard to take this article seriously. We should write software for DOS because we won't need to maintain it post-release?
Maybe software written in the age of DOS was relatively trivial compared to modern tools. Maybe there's a benefit to writing code in Rust rather than C89.
nektro · 15h ago
lovely article aside from this bit:
> while those written for e.g. Linux will likely cease working in a decade or two
there's nothing to support this claim in practice. linux is incredibly stable
ajuc · 15h ago
I also noticed this part of the article, but for the opposite reason (I think 10-20 years is overly optimistic). I've written a small 2D game for Linux back in the 00s, using C++, SDL and a few other libraries (for example the now-abandoned libparagui for GUI).
Any time I tried to run it afterwards - I had to recompile it, and a few times I had to basically port it (because libparagui got abandoned and some stuff in libc changed so I couldn't just compile the old libparagui version with the new libc).
It's surprisingly hard to make linux binaries that will work even 5 years from now, never mind 10-20 years from now.
For comparison, I still have games I wrote for DOS and Windows in the 90s and the binaries still work (OK, I had to apply a patch for the Turbo Pascal 7 200 MHz bug).
The assumption around Linux software is that it will be maintained by an infinite supply of free programmers, so you can change anything and people will sort it out.
count-delight · 7h ago
Something I've been pondering recently: it is sometimes said that Apple is notorious for deprecating APIs and breaking backwards compatibility at a rapid pace, but is that not fundamentally the same thing that happens with Linux? Old developers lamenting that they developed something useful in 2015 that doesn't work any more in 2025. And if Linux distributions do it, and it can be thought of as a form of opinionated design, can Apple be blamed for it? Unix/FreeBSD legacy and all.
Of course with Apple ecosystem you need relatively recent hardware and have to pay the annual developer program fee, but the expectation is still the same: if you release something, you should keep it up-to-date and not just "fire and forget" like in Windows and expect it to work. Maybe Windows is the anomaly here.
ajuc · 7h ago
Dunno I just never used Apple. It's not very popular over here.
nektro · 3h ago
static binaries will never stop working
i was pointing out that you and OP are conflating a particular dev environment with the stability of linux itself. gui/games in particular on linux is not an area that shares this same stability yet
ajuc · 2h ago
> static binaries will never stop working
Yeah, if you compile all the dependencies and dependencies of dependencies and so on statically. Which is a fun experience I've tried several times (have fun trying to statically compile gfx drivers or crypto libraries for example), wasted a few evenings, and then had to repeat anyway because the as-static-as-possible binaries do stop working sometimes.
There's a reason linux distributions invented package managers.
> i was pointing out that you and OP are conflating a particular dev environment with the stability of linux itself.
We're talking software rot, not uptime? It's exactly about software stopping working on newer systems.
> For example, a program written a decade ago may no longer work with new versions of the libraries it depends on because some of them have changed without retaining backwards compatibility.
The rot is real, but we have a way to run Linux (and really any) software in 40 years. For example:
FROM alpine:3.14
...
Just as I can run thousands of games from the 80s on my vintage CRT arcade cab using a Pi and MAME (with a Pi2JAMMA adapter), I'll be able to run any OCI container in 30 years.
The issue of running old software is solved: we've got emulators, VMs, containerization, etc.
Sure they may lack security upgrades but we'll always be able to run them in isolation.
The rot is not so much that the platform won't exist in the future or that old libs would mysteriously no longer be available: the problem is that many programs aren't islands. It's all the stuff many of them need to connect to that's often going to be the issue.
For games that'd be, say, a game server not available anymore. Or some server validating a program to make sure its license has been paid. Or some centralized protocol that's been "upgraded" or fell into irrelevancy. Or some new file format that's now (legitimately or not) all the shit.
Take VLC: there's not a world in which in 40 years I cannot run a VLC version from today. There's always going to be a VM or, heck, even just an old Linux version I can run on bare metal. But VLC ain't an island either: by then we'll have coolmovie-48bits.z277 where z277 is the latest shiny video encoding format.
At least that's where I see the problem when running old software.
nektro · 2h ago
posts like this need to mention capitalism if they want to be taken seriously
sbrkYourMmap · 7h ago
This is a side effect of open source. And it's not the open source philosophy that is at fault, but rather developers themselves. We heavily rely upon code written by strangers and maintained by an individual or a community without any obligation to provide guarantees, including compatibility and maintainability. At work, when we are working with vendors that provide proprietary software, we have contractual obligations that, among other things, require stable interfaces for long periods of time, and the vendor can be held liable for damages. That is something the modern model of open source can't offer.
1970-01-01 · 4h ago
IMHO, you need to address how everyone is dating its superficially attractive cousin - free open source software - before you tackle the issue of it rotting away. When your entire castle rests upon the following language, perhaps it's somewhat your fault when it stops working tomorrow.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
https://www.sqlite.org/lts.html
Got to wonder how much more sustainable that would be versus relying on donations.
How do we plan to make sure the lessons we've learned during development now will still be taught 300 years from now?
I'm not putting the onus on sqlite to solve this but they are also the only organization I know of that is taking the idea seriously.
Just more thinking in the open and seeing what other people are trying to solve similar problems (ensure teachings continue past their lives) outside the context of universities.
Some people on the React team deciding in 2027 to change how everyone uses React again is NOT exciting; it's an exercise in tolerating senior amateurs, and I hate it because it affects all of us, down to the experience of speaking to under-qualified people in interview processes "um-ackchyually"-ing you when you forget to wrap some stupid function in some other stupid function.
Could you imagine how ridiculous it would be if SQLite's C API changed every 2 years? But it doesn't. Because it was apparently designed by real professionals.
I think "boring software" is a useful term.
Exciting things are unpredictable. Predictable things aren't exciting. They're boring.
Stock car racing is interesting because it's unpredictable, and, as I understand it, it's okay for a race car to behave unpredictably, as long as it isn't the welds in the roll bars that are behaving unpredictably. But if your excitement is coming from some other source—a beautiful person has invited you to go on a ski trip with them, or your wife needs to get to the hospital within the next half hour—it's better to use a boring car that you know won't overheat halfway there.
Similarly, if a piece of software is a means to some other end, and that end is what's exciting you, it's better to use boring software to reach that end instead of exciting software.
In other words, if you need your software to live in the dirty world we live in, and not just in a pristine bubble, things are gonna rot.
Picking tools and libraries and languages that will rot less quickly however seems like a good idea. Which to me means not chaining myself to anything that hasn't been around for a decade at least.
I got royally screwed because 50-60% of my lifetime code output before 2018, and pretty much all the large libraries I had written, were in AS3. In a way, having so much code I would have maintained become forced abandonware was sort of liberating. But now, no more closed source and no more reliance on any libs I don't roll or branch and heavily modify myself.
Well, that's why I called BubbleOS BubbleOS: https://gitlab.com/kragen/bubbleos/
Not there yet, though...
I share your painful experience of losing my work to proprietary platforms.
Out of curiosity, what kind of work did you do? Regarding our old AS3, did you have any luck with Haxe? I assume it would be a straightforward port.
Still friends with the company owner of that code. So I've had a bit more insight into follow-up on code over 2 decades old that isn't so typical for anything else I've done.
Lasting centuries may or may not be preferable.
There are places where you want to cheaply rebuild from scratch. Your castle after a tornado and flooding will be irreparably bad. Most castles suck badly by not taking advantage of new materials, and I myself would not like to live in a 100-year-old building.
The same goes for software: there are pieces that should be built to last, but there are also applications that should be replaceable in a short time.
As much as I am not a fan of vibe coding, I don't believe all software should be built to last for decades.
We all know it now as dependency hell, but what it is in fact is a lazy shortcut taken during development that will bite you down the path. Corporate software is not a problem, because corporate users don't care as long as it works now; in the future they will still rely on paid solutions that keep working for them.
For me, I run a local mirror of Arch Linux, because I don't want to connect to the internet every time I need to download a library or some piece of software. I like it all here, but since I haven't updated in a while I might see a destructive update if I were to update now. That should never happen. Another thing that should never happen is the struggle of compiling an old version of some software. Time and time again I will find a useful piece of software on GitHub and naturally try compiling it; it's never easy, and I end up hunting down the dependencies it requires and then trying to compile old versions of various libraries. It's just stupid, and I wish it were easier and built smarter. Yes, sometimes I want to run old software that has no reason not to work.
When you look at Windows, it all works magically; well, it's not magic, it's just done smart. On GNU+Linux, smart thinking like this is not welcome, and it never has been. Instead they rely on the huge number of people who develop this software to perpetually update their programs for no reason but to satisfy a meaningless version number of a dependency.
What you want (download software from the net and run it) is what most distros have been trying to avoid. Instead, they vet your code, build it, and add it to a reputable repo. Because no one wants to download postgres from some random site.
Enough of the BS of "we're just volunteers" - it's fundamentally broken and the powers that be don't care. If multiple multibillion-dollar entities who already contribute don't see the Linux desktop as having a future, and if Linus Torvalds himself doesn't care enough to push the Foundation on it, honestly, you probably shouldn't care either. From their perspective, it's a toy that's only maintained to make it easier to develop the good stuff.
Desktop Linux is OK. And I think it’s all volunteer work.
My work computer with Windows on the other hand requires restarts every day or two after WSL inevitably gets into a state where vscode can't connect to it for some reason (when it gets into such a state it also usually pegs the CPU at 100% so even unlocking takes a minute or more, and usually I just do a hard power off).
I'm not a fan of bundling everything under the sun personally. But it could work if people had more discipline about adding a minimal number of dependencies that are themselves lightweight, OR that are big, common, and maintain backwards compatibility so they can be deduplicated. So sort of the opposite of the culture of putting everything through HTTP APIs, deprecating stuff left and right every month, Electron (which puts browser complexity into anything), and pulling in whole trees of dependencies in dynamic languages.
This is probably one of the biggest pitfalls of Linux, and I say this as someone for whom it's the sanest available OS despite this. But the root of the problem is wider: we tend to dump the savings in development costs onto all users in the form of higher resource usage. Unless some big corp cares to make the stuff more economical, or the project is just right for some mad hobbyist. As someone else said, corps don't really care about the Linux desktop.
Too young to remember Windows 3.1 and “DLL hell?” That was worse.
This effect gets accelerated when teams or individuals make their code more magical, or even just more different from other code at the company, which makes it harder for new maintainers to step in. Add to this that not all code has the test coverage and monitoring it should... It shouldn't be too surprising that there's always some incentive to kill, change, or otherwise stop supporting what we shipped 5 years ago.
Because the software ecosystem is not static.
People want your software to have more features, be more secure, and be more performant. So you and every one of your competitors are on an update treadmill. If you ARE standing (aka being stable) on the treadmill, you'll fall off.
If you are on the treadmill you are accumulating code, features, and bug fixes, until you either get too big to maintain or a faster competitor emerges, and people flock to it.
Solving this is just as easy as proving all your code is exactly as people wanted AND making sure people don't want anything more ever.
Runners on treadmills don't actually move forward.
If I assume your point is true, wouldn't everyone then just switch to Paint for all 2D picture editing? I mean, it's the fastest - it opens instantly on my machine vs 3-4 seconds for Krita/Gimp/Photoshop. But it's also bare-bones. So why isn't Paint universally used by everyone?
My assumption: what people want is to not waste their time. If a program is 3 seconds slower to start/edit, but saves you 45 minutes of fucking around in a less featureful editor, it's obvious which one is more expedient in the long run.
I think it's worth noting that one reason hardware rots is because software seems to become slower and slower while still doing the same thing it did 15 years ago.
The core issue, in my humble opinion, is that it's not doing the same thing. But from a thousand miles away it looks like that, because everyone uses 20% of functionality, but everyone in aggregate uses 100% of functionality.
> I think it's worth noting that one reason hardware rots is because software seems to become slower and slower while still doing the same thing it did 15 years ago.
I'd like to see some actual proof of (or thoughts on) this. Like, if you didn't patch any security issues or bugs, add any features, or fix any dependencies, how is the code getting slower?
Like, I understand some people care about performance, but I've seen many non-performant solutions (Unity, Photoshop, Rider) being preferred nearly universally over a custom C# engine, Paint, or Notepad++, which leads me to believe there is more than one value in play.
source code is ascii text, and ascii text is not alive. it doesn't need to breathe, modulo dependencies, yes. but this attitude of "not active, must be dead, and therefore: avoid" leads people to believe the opposite: that unproven and buggy new stuff is always better.
silly counter-example: vim from 10 years ago is just as usable for the 90% case as the latest one
Using the Lindy Effect for guidance, I've built a stack/framework that works across 20 years of different versions of these languages, which increases the chances of it continuing to work without breaking changes for another 20 years.
Python's standard library is just fine for most tasks, I think. It's got loads of battle-tested parsers for common formats. I use it for asset conversion pipelines in my game engines, and it has so far remained portable between Windows, Linux and Mac systems with no maintenance on my part. The only unusual package I depend on is Pillow, which is also decently well maintained.
It becomes significantly less ideal the more pip packages you add to your requirements.txt, but I think that applies to almost anything really. Dependencies suffer their own software rot and thus vastly increase the "attack surface" for this sort of thing.
I like Python as a language, but I would not use it for something that I want to be around 20+ years from now, unless I am ok doing the necessary maintenance work.
If it's me running it, that's fine. But if it's someone else that's trying to use installed software, that's not OK.
However, a decade ago, a coworker and I were tasked with creating some scripts to process data in the background, on a server that customers had access to. We were free to pick any tech we wanted, so long as it added zero attack surface and zero maintenance burden (aside from routine server OS updates). Which meant decidedly not the tech we work with all day every day which needs constant maintenance. We picked python because it was already on the server (even though my coworker hates it).
A decade later and those python scripts (some of which we had all but forgotten about) are still chugging along just fine. Now in a completely different environment, different server on a completely different hosting setup. To my knowledge we had to make one update about 8 years ago to add handling for a new field, and that was that.
Everything else we work with had to be substantially modified just to move to the new hosting. Never mind the routine maintenance every single sprint just to keep all the dependencies and junk up to date and deal with all the security updates. But those python scripts? Still plugging away exactly as they did in 2015. Just doing their job.
Mostly I recoiled in horror at bash specifically, which in addition to the bash version also ends up invisibly depending on a whole bunch of external environment stuff that is itself updating constantly. That's sort of bash's job, so it's still arguably the right tool to write that sort of interface, but it ends up incredibly fragile as a result. Porting a complex bash script to a different distro is a giant pain.
Even for somebody who did not aim to keep Python programs running for 20 years, Python is definitely not a good example of a "PDF for programs".
I've lately been pretty deep into 3d printing, and basically all the software has Python...and breaks quite easily. Whether because of a new version of Pip with some new packaging rule, forced venvs...I really don't like dealing with Python software.
A container has that already done, including all supporting libraries.
Edit: then ship the bash script in a container with a bash binary ;)
Entropy sucks.
These are different problems from the distribution/bundling piece, they won't be solved the same way.
Skill issue, plus what's the alternative? Python was close until the 3.x fiasco
Both tools in question are installed everywhere and get the job done. There isn't much to evaluate, and nothing to compare against
These are exactly the skill issues I meant! Git gud in evaluating and you'll be able to come up with many more sophisticated evaluation criteria than the primitive "it's installed everywhere"
Would I advocate writing my core business software in bash or perl? No, I'd hire and train for what was chosen. For small scripts I might need to share with coworkers? 100%
I often end up spending a couple of weeks of my life, on and off, fixing things after every major release.
From my side, I think it's more useful to focus on surfacing issues early. We want to know about bugs, slowdowns, and regressions before they hit users, so everything we write is written using TDD. But because unit tests are coupled with the environment, they "rot" together. So we usually set up monitoring, integration and black-box tests super early on and keep them running as long as the project is online.
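As a concrete example of the black-box flavor, a minimal check of this kind might look like the sketch below (assuming Node 18+ for built-in fetch, run as an ES module; the /health endpoint and BASE_URL default are made up for illustration):

import assert from "node:assert/strict";

// Poke the running service from the outside, exactly like a client would.
const BASE_URL = process.env.BASE_URL ?? "http://localhost:8080"; // assumed default

const res = await fetch(`${BASE_URL}/health`);
assert.equal(res.status, 200, "service should answer /health with 200");

const body = (await res.json()) as { status?: string };
assert.equal(body.status, "ok", "health payload should report ok");

console.log("black-box check passed");

Because it only touches the public surface, a check like this keeps working across internal rewrites, which is exactly why it rots more slowly than unit tests tied to the code's internals.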
Is software rot real? I'm sure, but it's not in the runtime. It's likely in the availability and compatibility of dependencies, and mainly the Node ecosystem.
There's tons of old programs from the Windows 95-XP era that I haven't been able to get running. Just last week I was trying to install and run point-and-click games from 2002, and the general advice online is to just install XP. There was a way (with some effort) to get them working on Windows 7, but I haven't seen any way to get them to work on 10/11.
People will laugh, but they should really look.
"It is possible to consult this wiki on port 80, that is to say using http:// instead of https://."
https://permacomputing.net/about/
"If you do not have access to git on your operating system, you can download a zip file that contains both the markdown source files and the generated HTML files, with the paths fixed. The zip file is generated once a week."
https://permacomputing.net/cloning/
http://permacomputing.net/permacomputing.net.zip
Would it be appropriate to include a digital signature, as is commonly found on mirrors?
Thought experiment: if it were standard practice to offer a compressed archive, would websites still be hammered by unwanted crawlers?
If the answer is yes, then what if we removed or denied access to the online pages and only allowed access to the compressed archive?
The reason that Blender grew from being an inside joke to a real contender is the painful re-factoring it underwent between 2009 and 2011.
In contrast, I can feel the fact that the code in After Effects is now over 30 years old. Its native tracker is slow and ancient and not viable for anything but the simplest of tasks. Tracking was "improved" by sub-contracting the task to a sub-licensed version of Mocha via a truly inelegant integration hack.
There is so much to be said for throwing everything away and starting again, like Apple successfully did with OS X (and Steve Jobs did to his own career when he left Apple to start NeXT). However, I also remember how BlackBerry tried something similar and in the process lost most of their voodoo.
C# went through just as big a change when it introduced reified generics (rather than erasure). It broke the ecosystem in two (pre- and post-generics). No one talks about it, because the ecosystem was so, so tiny that almost no one encountered it.
"It" was fixed long before the 2.7 official sunset date. Even before the original planned date before it got extended, frankly.
Occasionally one still encounters non-generic classes like this when working with older frameworks/libraries, which causes a fair bit of impedance mismatch with modern styles of coding. (It is also one of the causes of some people's complaints that C# has too many ways of doing things; old ways have to be retained for backwards compatibility, of course.)
† https://learn.microsoft.com/en-us/dotnet/api/system.collecti...
The paper that the other commentator was referring to might be this: https://www.microsoft.com/en-us/research/wp-content/uploads/...
Python, Ruby, etc. constantly become obsolete over time; packages get removed from the central repositories and cease to work.
Obviously shell scripts contain _many_ external dependencies, but overall the semantics are well-defined even if the actual definitions are loose against the shell script itself.
P.S.: I have written a couple of Bash-script projects (mostly deployment automation scripts) that are still running; meanwhile, some others, where I was being "smart" and wrote them in Python 2.7, unfortunately ceased to function and required upgrades...
Maybe that sounds like a dumb question because I am a beginner at coding, so I apologize, but it's something I've recently gotten into via an interest in microcontrollers. A low(er)-level language like C seems to be pretty universally sound, running on just about anything, and seems to have not changed much in a long, long time. But when I was dabbling in Python, Ruby (on Rails), and the .NET framework, I noticed what you mean, as I scooped up some old projects on GitHub thinking I'd add to them and realized it would be a chore to get them updated.
Same site has this article about "bedrock platforms" which resonate deeply with me https://permacomputing.net/bedrock_platform/
Software does not rot; the environment around the software, the very foundation that owes its existence to the singular task of enabling that software, is what rots.
Look at any environment snapshot in time: the software keeps working like it always did. Start updating the environment and the software stops working. Or rather, the software works fine, but the environment no longer works.
I'm not saying never to update software, but only do it if it increases speed, decreases memory usage, or broadens compatibility.
I like things better the way they were.
I like things better now than how they will be tomorrow.
I can't remember the last time I saw a software update that didn't make it worse.
But if you build on a bog, then you've unnecessarily introduced a whole lot of new variables to contend with.
Have we already passed the era of DON'T BREAK USERSPACE when Linus would famously loudly berate anyone who did?
I suspect Win32 is still a good target for stability; I have various tiny utilities written decades ago that still work on Win11. With the continued degradation of Microsoft, at least there is WINE.
It's not direct breakage per-se (API's were generated from definition files and there was an encouragement to build new API versions when breaking API's), the issue will be that many third party things were to be manually installed from more or less obscure sources.
Your Office install probably introduced a bunch of COM objects. Third party software that depended on those objects might not handle them being missing.
I think I took some DOS-like shortcuts with some of my early DirectDraw (DirectX 3 level?) code, afaik it doesn't work in fullscreen past Windows Vista but _luckily_ I provided a "slow" windowed GDI fallback so the software still kinda runs at least.
What I wanted to point out is that Go also supports BSDs and other kernels out of the box that implement the POSIX standard, though with slightly different syscall tables and binary formats on those target platforms and architectures.
I was referring to POSIX as a standard specification, because it also includes not only Linux's syscall table, but also various other things that you can typically find in the binutils or coreutils package, which Go's stdlib relies on. See [2.1.3] and following pages.
I guess what I wanted to say: If I would bet on long term maintenance, I would bet on POSIX compatibility, and not on one specific implementation of it.
[2.1.3] https://pubs.opengroup.org/onlinepubs/9699919799.2018edition...
Basically (compile to windows?) seems like a good enough tradeoff to run anywhere, right?
But I prefer appimage or flatpak because of the overhead that wine might introduce I suppose
Nah. I think they share some effort and ReactOS team adds patches to WINE codebase, it is a separate thing.
Cough cough vibing cough cough
I replaced a PLC a couple years ago. The software to program it wouldn't run on my laptop because it used the win16 API. It used LL-984 ladder logic, and most people who were experts in that have retired. It's got new shiny IEC-compliant code now, and next they're looking at replacing the Windows 2000 machines they control it with. Once that's done, it'll run with little to no change until probably 2050.
LLMs have also made reading and writing shell code much easier.
On the one hand, software is like a living thing. Once you bring it into this world, you need to nurture it and care for it, because its needs, and the environment around it, and the people who use it, are constantly changing and evolving. This is a beautiful sentiment.
On the other hand, it's really nice to just be done with something. To have it completed, finished, move on to something else. And still be able to use the thing you built two or three decades later and have it work just fine.
The sheer drudgery of maintenance and porting and constant updates and incompatibilities sucks my will to live. I could be creating something new, building something else, improving something, instead, I'm stuck here doing CPR on everything that I have to keep alive.
I'm leaning more and more toward things that will stand on their own in the long-term. Stable. Done. Boring. Lasting. You can always come back and add or fix something if you want. But you don't have to lose sleep just keeping it alive. You can relax and go do other things.
I feel like we've put ourselves in a weird predicament with that.
I can't help but think of Super Star Trek, originally written in the 1970s on a mainframe, based on a late 1960s program (the original mainframe Star Trek), I think. It was ported to DOS in the 1990s and still runs fine today. There's not a new release every two weeks. Doesn't need to be. Just a typo or bugfix every few years. And they're not that big a deal. -- https://almy.us/sst.html
I think that's more what we should be striving for. If someone reports a rare bug after 50 years, sure, fix it and make a new release. The rest of your time, you can be doing other stuff.
You seem to assume, browsers have stopped changing and will be more or less the same 75 years from now.
I think you are right that that code might run. But probably in some kind of emulator. In the same way we deal with IBM mainframes right now. Hardware and OS have long since gone the way of the dodo. But you can get stuff running on generic linux machines via emulation.
I think we'll start seeing a lot of AI driven code rot management pretty soon. As all the original software developers die off (they've long been retired); that might be the only way to keep these code bases alive. And it's also a potential path to migrating and modernizing code bases.
Maybe that will salvage a few still relevant but rotten to the core Javascript code bases.
There are some _very_ rare exceptions, but they're things like "support for subclassing TypedArrays", and even then this is only considered after careful analysis to ensure it's not breaking anyone.
(S3 better example than Dropbox. That will mostly be around forever.)
In 20-30 years there’s a good chance that what you’ve written will be obsolete regardless - even if programs from 1995 ran perfectly on modern systems they’d have very few users because of changing tastes. A word processor wouldn’t have networked collaborative editing (fine for GRRM though), an image editor wouldn’t have PNG support, and they wouldn’t be optimised for modern hardware (who would foresee 4K screens and GPUs back then - who knows how we’ll use computers in 2055).
There are also always containers if the system needs those old versions.
1980s NES software is "easy" as in emulating a CPU and the associated hardware (naturally there are corner cases in emulation timing that makes it a lot harder, but it's still a limited system).
I used to make demos as mentioned in the article, the ones I did for DOS probably all work under DosBox. My early Windows demos on the other hand relied on a "bad" way of doing things with early DirectDraw versions that mimicked how we did things under DOS (ie, write to the framebuffer ourselves). For whatever reason the changes in Vista to the display driver model has made all of them impossible to run in fullscreen (luckily I wrote a GDI variant for windowed mode that still makes it possible to run).
Even worse is some stuff we handle at an enterprise customer, Crystal Reports was even endorsed by Microsoft and AFAIK included in Visual Studio installs. Nowadays abandoned by MS and almost by it's owner (SAP), we've tried to maintain an customized printing applications for a customer, relying on obscure DLL's (and even worse the SAP installer builder for some early 2000s install technology that hardly works with modern Visual Studio).
Both these examples depend on libraries being installed in a full system, sure one could containerize needed ones but looking at the problem with an archivist eyes, building custom Windows containers for thousands of pieces of software isn't going to be pretty (or even feasible in a legal sense both with copyright and activation systems).
Now you could complain about closed source software, but much of a tad more obscure *nix software has a tendency to exhibit a huge part of "works on my machine" mentality, configure scripts and Docker weren't invented in a vacuum.
o_O
There's no reason such a "bedrock platform"♢ needs to be a shitty pain in the ass like the IBM PC or NES (the examples on https://permacomputing.net/bedrock_platform/). Those platforms were pragmatic tradeoffs for the existing hardware and fabrication technology in the market at the time, based on then-current knowledge. We know how to do much better tradeoffs now. The 8088 in the IBM PC was 29000 transistors, but the ARM 2 was only 27000 transistors†. Both could run at 8 MHz (the 8088 in its later 8088-2 and 80C88 incarnations), but the ARM 2 was a 32-bit processor that delivered about 4 VAX MIPS at that speed (assuming about ½DMIPS/MHz like the ARM3‡) while the 8088 would only deliver about 0.3 VAX MIPS (it was 0.04DMIPS/MHz). And programming for the 8088's segmented memory model was a huge pain in the ass, and it was crippled by only having 20 address lines. 8088 assembly is full of special-purpose registers that certain instructions have to use; ARM assembly is orthogonal and almost as high-level as C.
Same transistor count, same clock speed, dramatically better performance, dramatically better programming experience.
Similarly, Unix and Smalltalk came out about the same time as Nova RDOS and RT-11, and for literally the same machines, but the power Unix and Smalltalk put in the hands of users far outstripped that of those worse-designed systems.
So, let's put together a bedrock platform that we could actually use for our daily computing in practice. Unlike the NES.
______
♢ Stanislav Datskovskiy's term: http://www.loper-os.org/?p=55
† https://en.wikipedia.org/wiki/Transistor_count
‡ https://netlib.org/performance/html/dhrystone.data.col0.html but note that the ARM3 had a cache, so this depends on having RAM that can keep up with 8MHz. Both the ARM2 and ARM3 were mostly-1-instruction-per-clock pipelined RISCs with almost exactly the same instruction set. https://en.wikipedia.org/wiki/Acorn_Archimedes, https://www.onirom.fr/wiki/blog/21-04-2022_Acorn-Archimedes/, and its twin https://wardrome.com/acorn-archimedes-the-worlds-first-risc-... confirm the 8MHz and 4MIPS.
AArch64 is wacky in its own, different way: loading a constant into a register, dealing with an offset to an index, and so on. It also has special-purpose registers, like the zero register.
The PDP-11 architecture remains the finest example of an orthogonal instruction set ever invented.
The PDP-11 seems pleasant and orthogonal, but I've never written a program for it, just helped to disassemble the original Tetris, written for a Soviet PDP-11 clone. The instruction set doesn't feel nearly as pleasant as the ARM: no conditional execution, no bit-shifted index registers, no bit-shifted addends, only 8 registers instead of 16, and you need multiple instructions for procedure prologues and epilogues if you have to save multiple registers. They share the pleasant attribute of keeping the stack pointer and program counter in general-purpose registers, and having postincrement and predecrement addressing modes, and even the same condition-code flags. (ARM has postdecrement and preincrement, too, including by variable distances determined by a third register.)
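To make the prologue/epilogue point concrete, here's a minimal sketch of my own in GNU assembler syntax for 32-bit ARM (not anything from the thread; the routine name and register choices are made up for illustration): one store-multiple instruction pushes the callee-saved register and the return address together, and one load-multiple pops them and returns, whereas the PDP-11 needs a separate MOV to the stack for each register it saves.

        .text
        .global sum4            @ hypothetical routine: int sum4(a, b, c, d), args in r0-r3
    sum4:
        stmfd   sp!, {r4, lr}   @ prologue: save r4 and the return address in one instruction
        add     r4, r0, r1      @ r4 = a + b
        add     r4, r4, r2      @ r4 = a + b + c
        add     r0, r4, r3      @ r0 = a + b + c + d (return value)
        ldmfd   sp!, {r4, pc}   @ epilogue: restore r4 and return, also one instruction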
The PDP-11 also wasn't a speed demon the way the ARM was. I believe that speed trades off against everything, and I think you're on board with that from your language designs. According to the page I linked above, a PDP-11/34 was about the same speed as an IBM PC/XT.
Loading a constant into a register is still a problem on the ARM2, but it's a problem that the assembler mostly solves for you with constant pools. And ARM doesn't have indirect addressing (via a pointer in memory), but most of the time you don't need it because of the much larger register set.
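For anyone who hasn't run into it, the constant-pool trick looks roughly like this in GNU assembler syntax (a hedged sketch of my own, not code from the linked OS; the label is made up): you write an "ldr" with an "=" expression, and the assembler parks the 32-bit value in a nearby literal pool and rewrites the instruction as a PC-relative load.

        .text
        .global get_magic
    get_magic:
        ldr     r0, =0x12345678   @ the assembler turns this into "ldr r0, [pc, #offset]"
        mov     pc, lr            @ ARM2-style return (it predates bx)
        .ltorg                    @ ask the assembler to emit the literal pool here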
The ARM2 and ARM3 kept the condition code in the high bits of the program counter, which meant that subroutine calls automatically preserved it. I thought that was a cool feature, but later ARMs removed it in order to support executing code from more than just the low 64 mebibytes of memory.
Here's an operating system I wrote in 32-bit ARM assembler. r10 is reserved for the current task pointer, which doesn't conform to the ARM procedure call standard. (I probably should have used r9.) It's five instructions:
http://canonical.org/~kragen/sw/dev3/monokokko.S
It's a great feeling knowing any tool I write in Elisp will likely work for the rest of my life as is.
Unpopular targets, platforms, languages, etc. don't get changed and provide a much-needed refuge. There are some interpreted languages, like perl, where a program written today could run on a perl from 2001 and a program from 2001 would run on perl today. And I'm not talking about in a container or with some special version. I'm talking about the system perl.
Some popular languages these days can lose forward compatibility within just a few months, gaining features that every dev will be using within a few more months. In these cultures, software rot is really fast.
Ah yes, Windows, some niche OS for hipsters and basement-dwellers.
If I see a library which is solving a simple problem but it uses a lot of dependencies, I usually don't use that library. Every dependency and sub-dependency is a major risk... If a library author doesn't understand this, I simply cannot trust them. I want the authors of my dependencies to demonstrate some kind of wisdom and care in the way they wrote and packaged their library.
I have several open source projects which have been going for over a decade and I rarely need to update them. I was careful about dependencies and also I was careful about what language features I used. Also, every time some dependency gave me too much trouble I replaced it... Now all my dependencies are highly stable and reliable.
My open source projects became a kind of Darwinian selection environment for the best libraries. I think that's why I started recognizing the names of good library authors. They're not always super popular, but good devs tend to produce consistent quality and usually get better with time. So if I see a new library and I recognize the author's name, it's a strong positive signal.
It feels nice seeing familiar niche names come up when I'm searching for new libraries to use. It's a small secret club and we're in it.
Maybe software written in the age of DOS was relatively trivial compared to modern tools. Maybe there's a benefit to writing code in Rust rather than C89.
> while those written for e.g. Linux will likely cease working in a decade or two
there's nothing to support this claim in practice. linux is incredibly stable
Any time I tried to run it afterwards, I had to recompile it, and a few times I had to basically port it (because libparagui got abandoned and some stuff in libc changed, so I couldn't just compile the old libparagui version with the new libc).
It's surprisingly hard to make linux binaries that will work even 5 years from now, never mind 10-20 years from now.
For comparison, I still have games I wrote for DOS and Windows in the 90s and the binaries still work (OK, I had to apply a patch for the Turbo Pascal 7 200 MHz bug).
The assumption around Linux software is that it will be maintained by an infinite number of free programmers, so you can change anything and people will sort it out.
Of course, with the Apple ecosystem you need relatively recent hardware and have to pay the annual developer program fee, but the expectation is still the same: if you release something, you should keep it up to date and not just "fire and forget" like on Windows and expect it to keep working. Maybe Windows is the anomaly here.
i was pointing out that you and OP are conflating a particular dev environment with the stability of linux itself. gui/games in particular on linux is not an area that shares this same stability yet
Yeah, if you compile all the dependencies, and the dependencies of dependencies, and so on, statically. Which is a fun exercise I've attempted several times (have fun trying to statically link gfx drivers or crypto libraries, for example), wasted a few evenings on, and then had to repeat anyway because the as-static-as-possible binaries still stop working sometimes.
There's a reason Linux distributions invented package managers.
> i was pointing out that you and OP are conflating a particular dev environment with the stability of linux itself.
We're talking software rot, not uptime? It's exactly about software stopping working on newer systems.
https://how.complexsystems.fail/
The rot is real, but we have ways to run Linux (and really any) software in 40 years. For example:
Just as I can run thousands of games from the 80s on my vintage CRT arcade cab using a Pi and MAME (with a Pi2JAMMA adapter), I'll be able to run any OCI container in 30 years. The issue of running old software is solved: we've got emulators, VMs, containerization, etc.
Sure they may lack security upgrades but we'll always be able to run them in isolation.
The rot is not so much that the platform won't exist in the future or that old libs will mysteriously become unavailable: the problem is that much software isn't an island. It's all the stuff programs need to connect to that's often going to be the issue.
For games that'd be, say, a game server not available anymore. Or some server validating a program to make sure its license has been paid. Or some centralized protocol that's been "upgraded" or fell into irrelevancy. Or some new file format that's now (legitimately or not) all the shit.
Take VLC: there's not a world in which in 40 years I cannot run a VLC version from today. There's always going to be a VM or, heck, even just an old Linux version I can run on bare metal. But VLC ain't an island either: by then we'll have coolmovie-48bits.z277 where z277 is the latest shiny video encoding format.
At least that's where I see the problem when running old software.