In all seriousness, I have a high respect for Unix and Unix-like systems, particularly FOSS implementations like Linux and FreeBSD. When I first started using Linux in 2004 as a high schooler who grew up on Windows and who used classic Macs in elementary school, the power of the Unix command line and the vast selection of professional-grade software available for free, legally and with source code, were mind-blowing. Not too long after that, I started learning about the history of Unix and about its design philosophy. I had a new dream: I wanted to become a computer scientist just like Ken Thompson or Dennis Ritchie working for a place like Bell Labs, or become a professor and lead research projects like the BSDs back when Berkeley had the CSRG. Downloading and using Linux in 11th grade pushed me away from my alternative thoughts of majoring in linguistics, mathematics, or city planning.
Sometime in graduate school I started paying more attention to the Xerox PARC legacy of the Mac, and I started realizing that perhaps Unix was not the pinnacle of operating systems design. I got bitten by the Lisp and Smalltalk bugs, and later I got bitten by the statically-typed functional programming bug (think Haskell and ML).
These days my conception of a dream operating system is basically an alternative 1990s universe where somehow the grander visions of some Apple researchers and engineers working on Lisp (e.g., Newton’s planned Lisp OS, Dylan, SK8) came to fruition, complete with Apple’s then-devotion to usability. The classic Macintosh interface and UI guidelines combined with a Lisp substrate that enabled composition and malleability would have knocked the socks off of NeXTstep, and I say this as a big fan of NeXT and Jobs-era Mac OS X. This is my dream OS for workstation-class personal computing.
With that said, I have immense respect for Unix, as well as its official successor, Plan 9.
anyfoo · 3h ago
> Sometime in graduate school I started paying more attention to the Xerox PARC legacy of the Mac, and I started realizing that perhaps Unix was not the pinnacle of operating systems design.
For me, that watershed moment came when I looked at IBM's AS/400, known today as "IBM i". Despite having used computers since the 80s, and Unix/Linux since about the mid-90s, it was only much later that AS/400 made me realize how extremely unixoid almost every OS I know is (well, every OS I knew after the 8-bit micro era or so). Just like you, it also made me realize that UNIX maybe isn't the answer to everything, and that it's maybe not such a good thing that we've pretty much settled on it.
I've heard that MULTICS, one of UNIX's major influences, gives people a similar impression when they revisit it nowadays and realize how advanced it was. I personally have not looked into it yet.
(The other OS that expanded my horizon on how different OSes can be from what I know is MVS on IBM mainframes, though unlike AS/400, not necessarily in a good way.)
dardeaup · 1h ago
If you could wave a magic wand and suddenly all Unix systems (and variants such as Linux) could be instantly changed to suit your wishes, what would you wish for?
For me, it would be:
- removal of the 'root' user, replaced with various predefined sysadmin groups and standardized audit logs for each group
- a batch processing system (similar in spirit to z/OS but without JCL). It should let you see a list of the jobs you've submitted, cancel individual jobs, re-order existing jobs that haven't started, and check a standard place for stdout/stderr logs.
- get something like AIX's smit/smitty for sysadmin tasks
- have either ZFS built-in or something with equivalent functionality
- sensible and easy-to-use containerized/jailing capabilities
- built-in support for append-only file permissions
- ability to quickly/easily adjust priorities for all processes belonging to a user (today's closest primitive is sketched after this list)
- ability to quickly/easily cap a user's processes cpu/ram resources (without having to use a container/jail)
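On those last two wishes: POSIX does expose one blunt per-user knob today, setpriority() with the PRIO_USER target, though it only renices processes that already exist, and capping RAM still means reaching for cgroups or jails, which is exactly the clunkiness being lamented. A minimal sketch, assuming Linux/POSIX, with the uid as a placeholder:

    /* Sketch: renice every existing process owned by one user in a
     * single call via PRIO_USER. Does not apply to processes started
     * later, and does nothing about memory caps. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    int main(void) {
        uid_t uid = 1000;   /* hypothetical target user */
        int niceness = 10;  /* deprioritize all of their processes */

        if (setpriority(PRIO_USER, uid, niceness) == -1) {
            fprintf(stderr, "setpriority: %s\n", strerror(errno));
            return 1;
        }
        printf("reniced all processes of uid %u to %d\n",
               (unsigned)uid, niceness);
        return 0;
    }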
kjellsbells · 1h ago
- Accessibility built in, eg the OS takes care of adapting to what the user can do ( vision, subtitles, adaptive keyboard input etc).
- trivially replaceable kernel, to encourage research and experimentation, real time use, etc.
- ruthless, consistent separation of user data, user configuration, and an unbreakable standard for where programs get installed. Just look at dotfiles, hier/LFS, the Windows Registry etc. to see what a mess this is (the closest existing convention is sketched after this list)
- a native grid compute protocol that allowed any two or more computers to merge into a single system image and automatically distribute workloads across them. We have clustering and the insane complexity of k8s today, but imagine something as easy as "Bonjour and Plan 9"!
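On the data/config separation point, the nearest thing Unix has today is the XDG Base Directory convention, which at least pins configuration under one predictable root instead of an ad-hoc dotfile per program. A minimal sketch, with "myapp" as a hypothetical application name:

    /* Sketch: resolve where an app's config should live per the XDG
     * Base Directory spec: $XDG_CONFIG_HOME if set, else ~/.config. */
    #include <stdio.h>
    #include <stdlib.h>

    static void xdg_config_dir(const char *app, char *out, size_t n) {
        const char *base = getenv("XDG_CONFIG_HOME");
        const char *home = getenv("HOME");
        if (base && *base)
            snprintf(out, n, "%s/%s", base, app);
        else
            snprintf(out, n, "%s/.config/%s", home ? home : ".", app);
    }

    int main(void) {
        char dir[4096];
        xdg_config_dir("myapp", dir, sizeof dir);
        printf("config lives in: %s\n", dir);
        return 0;
    }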
lenerdenator · 16m ago
For me, it was Plan 9. In some ways, it was mind-bending to not have root. I haven't played with it recently; maybe it has gotten easier to play with alternative OSes since I tried 10 years ago.
floren · 3h ago
> made me realize how extremely unixoid almost every OS I know is
It's a bit disappointing that every time somebody decides to write their own kernel, the first thing they do is implement some subset of the POSIX spec.
anyfoo · 3h ago
Indeed. But to be fair, in today's world it is not only somewhat hard to imagine an OS that does not rely on things such as a filesystem with opaque binary files, directories, and hierarchical paths connecting these; in order to properly communicate with the "outside world" you more or less have to have those things. Even AS/400 / IBM i added IFS, which is basically a bolted-on hierarchical filesystem, a long time ago by now.
Still sad that it has to be that way. I've long come off of thinking that "everything is a file", with a file just being a binary blob, is a good thing. (And that's not even talking about other concepts from Unix we take fully for granted.)
linguae · 2h ago
Indeed. This reminds me of Rob Pike’s famous 2000 polemic “Systems Software Research is Irrelevant,” where he laments the decline in innovative operating system designs of the era.
Additionally, we need to consider the career incentive structures of researchers, whether they are in industry, academia, or some other institution. Writing an operating system is difficult and laborious, and coming up with innovative interfaces and subsystems is even more difficult. When a researcher’s boss in industry is demanding research results that are productizable, or when a researcher’s tenure committee at a university is demanding “impact” measured by publications, grants, and awards, it’s riskier betting a career on experimental systems design that could take a few person-years to implement and may not pan out (as research is inherently risky) versus pursuing less-ambitious lines of research.
It’s hard for a company to turn its back on compatibility with standards, and it’s hard for academic researchers to pursue “out-there” ideas in operating systems in a “publish-or-perish” world, especially when implementing those ideas is labor-intensive.
The widespread availability of Unix, from a source-available proprietary system licensed to universities on generous terms back in the 1970s to FOSS clones and derivatives such as Linux and the BSDs, has made it much easier for CS researchers to avoid reinventing the OS wheel and to focus on narrower subsystems instead. But that comes at the cost of discouraging research that could very well lead to the discovery of whole new ways of carrying things, metaphorically speaking. Sometimes reinventing the wheel is a good thing.
I still dream, though, of writing my own operating system in my spare time. Thankfully as a community college professor I have no publication pressures, and I get three months off per year to do whatever I want, so….
anyfoo · 2h ago
For sure. Also, we simply live in a world where computers, and their operating systems, are giant, impossibly complex structures. No single person can fully know, in all detail, even what you would consider a relatively small part of any commercial computer or its operating system. In my job, I work with 15,000 (!) page specifications, and that's only one part among very many.
That pretty much guarantees that change can only be incremental.
bee_rider · 1h ago
Ah… but don’t you need some POSIX to get tools like grep and vim working? Who really wants to live in a universe without grep and vim?
IcyWindows · 10m ago
"Who wants to live without horses for transportation?"
Isn't the point that we don't even consider alternatives?
noone_youknow · 2h ago
> It's a bit disappointing that every time somebody decides to write their own kernel, the first thing they do is implement some subset of the POSIX spec.
Well, not quite _every_ time. For example, I’m deliberately not doing POSIX with my latest one[0], so I can be a bit more experimental.
[0] https://github.com/roscopeco/anos
Kudos for doing so! This is seriously a great endeavor. Regarding its relation to UNIX concepts, I do spot a classical hierarchical file system there, though. ;) Is it only an "add-on" (similar to IFS on IBM i), or is it fundamental?
sroerick · 2h ago
I've read that in AS/400, everything is an object, rather than a file. Could you expand on that at all?
kjellsbells · 1h ago
It's quite tricky to explain, but yes, everything in AS/400 is an object. There is also the notion of a context, called a library, that these files-as-objects exist in. AS/400 files very frequently act in a way that today we would describe as databases (like sqlite files, say), and the library context guides how information ('records') in them is returned, e.g. FIFO, LIFO, etc.
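A loose illustration of that files-as-records idea, leaning on the sqlite comparison above (this is not an AS/400 API, just the analogy in C): the "file" is addressed by key, and retrieval order is a property of the query, not of byte offsets in a stream. Build with -lsqlite3.

    /* Sketch: a "file" treated as keyed records rather than a byte
     * blob, using SQLite as the stand-in suggested above. */
    #include <sqlite3.h>
    #include <stdio.h>

    static int show(void *ctx, int n, char **vals, char **cols) {
        (void)ctx; (void)cols;
        for (int i = 0; i < n; i++)
            printf("%s ", vals[i] ? vals[i] : "NULL");
        printf("\n");
        return 0;
    }

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open("records.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS rec(k INTEGER PRIMARY KEY, v TEXT);"
            "INSERT INTO rec(v) VALUES('first'),('second');", 0, 0, 0);

        /* FIFO-style retrieval is just an ORDER BY, not a seek. */
        sqlite3_exec(db, "SELECT k, v FROM rec ORDER BY k;", show, 0, 0);

        sqlite3_close(db);
        return 0;
    }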
I think the best explanation is contained in this very old porting guide from IBM that explains how to move UNIX programs to AS/400. It's written in a remarkably un-IBM fashion, not at all straitlaced.
https://www.redbooks.ibm.com/redbooks/pdfs/sg244438.pdf
For any experts out there, please correct me, it's been 30 years...
linksnapzz · 37m ago
I just finished reading "Fortress Rochester" by Frank Soltis, the designer behind the IBM System/38, OS/400, and iSeries software.
It's a fascinating book, very approachable given the density of technical detail it contains, and it shows how very different choices were made w.r.t. how the OS/400 system software was designed, and how hardware was developed to support it.
As I understand from reading it, there are roughly three layers to the software:
-->Things you'd think of as applications on Unix, running in user space...this includes the DB/400 database engine, queue systems etc.
-->Machine Interface (MI) components, which include the management of the single-level store, I/O devices (OS/400 still supports dedicated mainframe-style I/O channels to an extent), and compilers/interpreters. Everything at this layer and above is considered "Technology Independent" software, where programs written in C/C++, RPG IV, COBOL, Java etc. are compiled into 128-bit object code that gets translated into PPC instructions on the fly.
-->the SLIC (System Licensed Internal Code), which covers some of the iSeries firmware, stuff that might be considered part of the kernel in Unix, and the PowerVM hypervisor.
The craziest thing (to me) about the single-level store is that there's no user-visible filesystem to administer; objects created in memory are automatically persisted onto permanent storage, unless you explicitly tell the system otherwise.
The OS/400 objects also have capabilities built in; e.g. things like executables can only be run, not edited, at runtime. These are flagged during memory access by setting bits in ECC memory, using a dedicated CPU instruction on Power CPUs that was added explicitly for OS/400's use.
For someone who is used to Unix, iSeries is a really fascinating thing to learn about. Pity that it's not more accessible to hobbyists, but as the Soltis book makes clear, the purpose of the design was to satisfy a specific set of customers who had well-defined line-of-business applications to run.
Crucially, it also describes the "single level store" that everything lives in. In short, the boundary between "main memory" and "hard disk" and other such things is abstracted away, and you just have pointers to "things" instead, whereas in UNIX accessing a piece of main memory is fundamentally different from, say, opening a file.
UNIX instead has a little bit of the opposite concept: trying, but in my mind failing, to represent as much as it can as a "file".
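To see that contrast in code: the closest Unix gets to a single-level store is explicitly wiring a file into the address space with mmap(2); the programmer, not the OS, manages the boundary that AS/400 abstracts away. A minimal sketch, assuming POSIX and a hypothetical state.bin file:

    /* Sketch: after mmap, writes through p look like plain memory
     * writes, but persistence only happens because we opened, sized,
     * mapped, and synced the file by hand. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("state.bin", O_RDWR | O_CREAT, 0644);
        if (fd == -1) { perror("open"); return 1; }
        if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "persisted without an explicit write()");
        msync(p, 4096, MS_SYNC);  /* durability is still our job */
        munmap(p, 4096);
        close(fd);
        return 0;
    }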
rjsw · 3h ago
Your dream did happen: Interface Builder on NeXT evolved from a Lisp application on the classic Macintosh [1]. I have a copy of a paper on SOS Interface.
[1] https://en.wikipedia.org/wiki/Jean-Marie_Hullot
Multics: everything is a memory segment, just get out your soldering iron and wirecutters to begin porting Multics to your hardware today!
IBM: everything is a record type, we have 320 different modules to help you deal with all 320 system-standard record types, and we can even convert some of them to others. And we have 50,000 unfixable bugs because other pieces of code depend upon the bug working the way it does ...
UNIX: everything is an ASCII byte. Done
I started writing code in the 1970s on TOPS-10, TWENEX, PLATO, and my undergrad thesis advisor helped to invent Multics. The benefits of UNIX are real, folks, the snake oil is the hardware vendors who HATE how it levels the playing field ...
> This would be accomplished by disseminating an operating system that is apparently inexpensive and easily portable, but also relatively unreliable and insecure (so as to require continuing upgrades from AT&T).
This exact version of it was first published in v. 4.0.0, on 24 Jul 1996: http://www.catb.org/jargon/oldversions/jarg400.txt
That was then published as The New Hacker's Dictionary, third edition, 1996: https://books.google.com.vc/books?id=g80P_4v4QbIC&printsec=f...
[1] https://en.wikipedia.org/wiki/Jargon_File
I remember watching a YouTube UNIX wars documentary that posits the exact opposite.
It argued that top brass at AT&T saw UNIX as a means to the telecommunications end, not a business in its own right. When it became more popular in the 1980s, it became obvious that they'd be bad businessmen if they didn't at least make a feeble attempt at making money off of it, so some exec at Ma Bell decided to charge thousands (if not tens of thousands; I can't find a reliable primary source online with a cursory search) per license to keep the computer business from getting to be too much of a distraction from AT&T's telecoms business.
That limited it to the only places that were doing serious OS research at the time: universities. Then some nerd at a Finnish university decided to make a kernel that fit into the GNU ecosystem, and the rest is history.
AnimalMuppet · 21s ago
AT&T was prohibited from entering the OS/software business by the consent decree of... 1956, I think? They legally could not make a business out of selling Unix. So they distributed the tapes for more or less the cost of the tapes, IIRC.
userbinator · 2h ago
> relatively unreliable and insecure (so as to require continuing upgrades)
That seems more applicable to Windows these days. If you graph CVEs vs. version, there is an interesting trend.
dinklenuts · 10m ago
I liked to joke that Netflix popularised chaos engineering so that their competitors would deliberately, repeatedly cripple their infrastructure.
pipeline_peak · 2h ago
I’ve read (probably on Wikipedia) that the popularity of Unix was blamed by a few for the stagnation in OS research.
To be fair, it’s not a stretch to suspect a company of wanting its competitors to be dependent on its product. The theory of planned security vulnerabilities sounds far-fetched, however.
anyfoo · 2h ago
As elaborated in another comment, I am not a fan of UNIX having won (anymore), but I find this "conspiracy" ridiculous. It's obvious to me that UNIX was seen as a good idea by many people, and that it did make a lot of sense compared to many alternatives, especially considering the constraints of computers and computing environments at the time.
Ericson2314 · 2h ago
Worse is better is viral!
(This conspiracy may not be factually true, but it is teleologically true)
1oooqooq · 2h ago
Amazing how that website manages to render white text on a light image and show empty text in Firefox's reader mode. I don't think I've seen someone able to break so many standards :)