Unix Conspiracy (1991)

68 gjsman-1000 50 9/4/2025, 11:12:00 PM catb.org ↗

Comments (50)

linguae · 1d ago
Reminds me of this classic I first read many years ago: https://www.gnu.org/fun/jokes/unix-hoax.html

In all seriousness, I have a high respect for Unix and Unix-like systems, particularly FOSS implementations like Linux and FreeBSD. When I first started using Linux in 2004, as a high schooler who grew up on Windows and who used classic Macs in elementary school, the power of the Unix command line and the vast selection of professional-grade software that was available for free, and legally with the source code, too, were mind-blowing. Not too long after that, I started learning about the history of Unix and about its design philosophy. I had a new dream: I wanted to become a computer scientist just like Ken Thompson or Dennis Ritchie, working for a place like Bell Labs, or to become a professor and lead research projects like the BSDs back when Berkeley had the CSRG. Downloading and using Linux in 11th grade pushed me away from my alternative thoughts of majoring in linguistics, mathematics, or city planning.

Sometime in graduate school I started paying more attention to the Xerox PARC legacy of the Mac, and I started realizing that perhaps Unix was not the pinnacle of operating systems design. I got bitten by the Lisp and Smalltalk bugs, and later I got bitten by the statically-typed functional programming bug (think Haskell and ML).

These days my conception of a dream operating system is basically an alternative 1990s universe where somehow the grander visions of some Apple researchers and engineers working on Lisp (e.g., Newton’s planned Lisp OS, Dylan, SK8) came to fruition, complete with Apple’s then-devotion to usability. The classic Macintosh interface and UI guidelines combined with a Lisp substrate that enabled composition and malleability would have knocked the socks off of NeXTstep, and I say this as a big fan of NeXT and Jobs-era Mac OS X. This is my dream OS for workstation-class personal computing.

With that said, I have immense respect for Unix, as well as its official successor, Plan 9.

pjmlp · 14h ago
I went down a similar path.

I was already tainted by starting on 8- and 16-bit home computers, and the magic of Amiga and OS/2, before getting into UNIX via Xenix.

The wonderful dungeon of the university library did the rest, showing me all the alternative universes.

It also helped that my university was keen on teaching the ways of Smalltalk, Lisp, ML, Prolog, and Oberon, in addition to classical UNIX lectures.

I was really lucky in that regard.

anyfoo · 23h ago
> Sometime in graduate school I started paying more attention to the Xerox PARC legacy of the Mac, and I started realizing that perhaps Unix was not the pinnacle of operating systems design.

For me, that watershed moment came when I looked at IBM's AS/400, known today as "IBM i". Despite having used computers since the 80s, and Unix/Linux since about the mid-90s, it was only much later that AS/400 made me realize how extremely unixoid almost every OS I know is (well, every OS I knew after the 8-bit micro era or so). Just like you, it also made me realize that UNIX maybe isn't the answer to everything, and that it's maybe not such a good thing that we've pretty much settled on it.

I've heard that MULTICS, one of UNIX's major influences, gives people a similar impression when they revisit it nowadays and realize how advanced it was. I personally have not looked into it yet.

(The other OS that expanded my horizons on how different OSes can be from what I know is MVS on IBM mainframes, but unlike AS/400, not necessarily in a good way.)

dardeaup · 21h ago
If you could wave a magic wand and suddenly all Unix systems (and variants such as Linux) could be instantly changed to suit your wishes, what would you wish for?

For me, it would be:

    - removal of 'root' user and replaced with various predefined sysadmin groups and standardized audit logs for each group
    - get a batch processing system (similar in spirit to z/OS but without JCL). It should let you see a list of the jobs you've submitted, cancel individual jobs, re-order jobs that haven't started, and have a standard place to see stdout/stderr logs.
    - get something like AIX's smit/smitty for sysadmin tasks
    - have either ZFS built-in or something with equivalent functionality
    - sensible and easy-to-use containerized/jailing capabilities
    - built-in support for append-only file permissions
    - ability to quickly/easily adjust priorities for all processes belonging to a user
    - ability to quickly/easily cap a user's processes cpu/ram resources (without having to use a container/jail)
nickdothutton · 6h ago
At home I had an 8-bit machine, so I played around with CP/M and BBS'ing, trying to gather shareware (software was too expensive for a kid, of course). My first serious experience of "big systems" was a VAX 6000 (1GB RAM in 1991!). I'm reminded that many of the useful concepts of that system, some of which you list, never made it into Unix. One of the reasons I stuck with Solaris for so long later is because it had a few of them.
icedchai · 6h ago
My first experience with a multiuser system was also on VAX/VMS. It felt more complete and cohesive (perhaps not exactly the right terms...) than early Unixes. I still experiment with it in emulators.
shrubble · 16h ago
Solaris from 10 onwards to OpenIndiana/illumos/SmartOS/OmniOS has some of this.

I don't know if you could remove the root user, but you can get a lot of control using pfexec. This might require you to design and assemble it yourself, however.

ZFS, of course, for the filesystem.

Solaris Containers are great: sensible and easy-to-use containerized/jailing capabilities

Solaris has "projects" and the FSS (Fair Share Scheduler), which let you cap resources in both absolute and relative terms (like a share of CPU time or RAM) on a per-user or per-project/group basis, even outside a container.

As well, you can create virtualized network interfaces called VNICs and manage bandwidth by VNIC or by port (e.g. port 443, port 25). So you could reserve, say, 10% of bandwidth for SSH traffic so you never get starved, etc.

kjellsbells · 21h ago
- Accessibility built in, e.g. the OS takes care of adapting to what the user can do (vision, subtitles, adaptive keyboard input, etc.).

- trivially replaceable kernel, to encourage research and experimentation, real time use, etc.

- ruthless, consistent separation of user data, user configuration, and an unbreakable standard for where programs get installed. Just look at dotfiles, hier/LFS, the Windows Registry, etc. to see what a mess this is.

- a native grid compute protocol that allowed any two or more computers to merge into a single system image and automatically distribute workloads across them. We have clustering and the insane complexity of k8s today, but imagine something as easy as "Bonjour and Plan 9"!

1718627440 · 14h ago
Sounds like systemd :-).
Elosha · 17h ago
Interesting. What you are wishing for is, in essence, (Open)VMS. It exists!
floren · 23h ago
> made me realize how extremely unixoid almost every OS I know is

It's a bit disappointing that every time somebody decides to write their own kernel, the first thing they do is implement some subset of the POSIX spec.

anyfoo · 22h ago
Indeed. But to be fair, in today's world it is not only hard to imagine an OS that does not rely on things such as a filesystem with opaque binary files, directories, and hierarchical paths connecting them; in order to properly communicate with the "outside world", you more or less have to have those things. Even AS/400 / IBM i added IFS, basically a bolted-on hierarchical filesystem, to their system a long time ago by now.

Still sad that it has to be that way. I've long come off of thinking that "everything is a file", with a file just being a binary blob, is a good thing. (And that's not even talking about other concepts from Unix that we take fully for granted.)

linguae · 22h ago
Indeed. This reminds me of Rob Pike’s famous 2000 polemic “Systems Software Research is Irrelevant,” where he laments the decline in innovative operating system designs of the era.

Additionally, we need to consider the career incentive structures of researchers, whether they are in industry, academia, or some other institution. Writing an operating system is difficult and laborious, and coming up with innovative interfaces and subsystems is even more difficult. When a researcher’s boss in industry is demanding research results that are productizable, or when a researcher’s tenure committee at a university is demanding “impact” measured by publications, grants, and awards, it’s riskier betting a career on experimental systems design that could take a few person-years to implement and may not pan out (as research is inherently risky) versus pursuing less-ambitious lines of research.

It’s hard for a company to turn its back on compatibility with standards, and it’s hard for academic researchers to pursue “out-there” ideas in operating systems in a “publish-or-perish” world, especially when implementing those ideas is labor-intensive.

The widespread availability of Unix, from a source-available proprietary system with generous licensing terms for universities back in the 1970s to the birth of FOSS clones and derivatives such as Linux and the BSDs, has made it much easier for CS researchers to avoid reinventing the OS wheel and to focus on narrower subsystems instead, but at the cost of discouraging research that could very well lead to the discovery of whole new ways of carrying things, metaphorically speaking. Sometimes reinventing the wheel is a good thing.

I still dream, though, of writing my own operating system in my spare time. Thankfully as a community college professor I have no publication pressures, and I get three months off per year to do whatever I want, so….

anyfoo · 21h ago
For sure. Also, we simply live in a world where computers, and their operating systems, are giant, impossibly complex structures. No single person can fully know, in all detail, even what you would consider a relatively small part of any commercial computer or its operating system. In my job, I work with 15000 (!) page specifications, and that's only concerning one part of very many.

That pretty much guarantees that change can only be incremental.

BobbyTables2 · 19h ago
Very true!

I think it’d be amazing if things like asynchronous I/O were the standard instead of an afterthought in both kernel and user space.

Also feels like Microsoft got something right with their handles… (Try doing select/poll to wait for a socket and a semaphore — without eventfd or something else bolted on.)

bee_rider · 20h ago
Ah… but don’t you need some POSIX to get tools like grep and vim working? Who really wants to live in a universe without grep and vim?
IcyWindows · 19h ago
"Who wants to live without horses for transportation?"

Isn't the point that we don't even consider alternatives?

noone_youknow · 22h ago
> It's a bit disappointing that every time somebody decides to write their own kernel, the first thing they do is implement some subset of the POSIX spec.

Well, not quite _every_ time. For example, I’m deliberately not doing POSIX with my latest one[0], so I can be a bit more experimental.

[0] https://github.com/roscopeco/anos

anyfoo · 21h ago
Kudos for doing so! This is seriously a great endeavor. Regarding its relation to UNIX concepts, I do spot a classical hierarchical file system there, though. ;) Is it only an "add-on" (similar to IFS on IBM i), or is it fundamental?
noone_youknow · 15h ago
Thank you!

At this early stage, the filesystem exists only to prove the disk drivers and the IPC interfaces connecting them. I chose FAT32 for this since there has to be a FAT partition anyway for UEFI.

The concept of the VFS may stick around as a useful thing, but it’s strictly for storage, there is no “everything is a file” ethos. It’s entirely a user space concept - the kernel knows nothing of virtual filesystems, files, or the underlying hardware like disks.

anyfoo · 13h ago
That makes sense.
pjmlp · 14h ago
Kudos for trying something else.
hulitu · 17h ago
> every time somebody decides to write their own kernel, the first thing they do is implement some subset of the POSIX spec.

yet most of them target virtual machines. Nobody is programming the bare hardware anymore.

Qemu is nice, but you have to read 5 internet pages to start it properly.

floren · 8h ago
That feels like an orthogonal problem to whether or not POSIX is implemented, though.
pjmlp · 14h ago
Already the use of PL/I, while we keep trying to add safety back into C.
sroerick · 22h ago
I've read that in AS/400, everything is an object, rather than a file. Could you expand on that at all?
kjellsbells · 21h ago
It's quite tricky to explain, but yes, everything in AS/400 is an object. There is also the notion of a context, called a library, that these files-as-objects exist in. AS/400 files very frequently act in a way that today we would describe as databases (like sqlite files, say) and the library context guides how information ('records') in them is returned, eg FIFO, LIFO, etc.

I think the best explanation is contained in this very old porting guide from IBM that explains how to move UNIX programs to AS/400. It's written in a remarkably un-IBM fashion, not at all straitlaced.

https://www.redbooks.ibm.com/redbooks/pdfs/sg244438.pdf

For any experts out there, please correct me, it's been 30 years...

linksnapzz · 20h ago
I just finished reading "Fortress Rochester" by Frank Soltis, the designer behind the IBM System/38, OS/400, and iSeries software.

It's a fascinating book, very approachable for the density of the technical detail contained, and shows how very different choices were made w.r.t. how OS/400 system software was designed, and how hardware was developed to support it.

As I understand from reading it, there's like three layers to the software-

-->Things you'd think of as applications on Unix, running in user space...this includes the DB/400 database engine, queue systems etc.

-->Machine Interface (MI) components, which include the management of the single-level store, I/O devices (OS/400 still supports dedicated mainframe-style I/O channels to an extent), and compilers/interpreters. All of this layer and above are considered "Technology Independent" software, where programs written in C/C++, RPG IV, COBOL, Java etc. are compiled into 128-bit object code that gets translated into PPC instructions on the fly.

-->the SLIC (System Licensed Internal Code), which refers both to some of the iSeries firmware and to stuff that might be considered part of the kernel in Unix, as well as the PowerVM hypervisor.

The craziest thing (to me) about the single-level store is that there's no user-visible filesystem to administer; objects created in memory are automatically persisted onto permanent storage, unless you explicitly tell the system otherwise.

The OS/400 objects also have capabilities built-in; i.e. things like executables can only be run, and not edited at runtime; these are flagged during memory access by setting bits in ECC memory using a dedicated CPU instruction on Power CPUs that was added explicitly for OS/400 use.

For someone who is used to Unix, iSeries is a really fascinating thing to learn about. Pity that it's not more accessible to hobbyists, but as the Soltis book makes clear, the purpose of the design was to satisfy a specific set of customers who had well-defined line-of-business applications to run.

sillywalk · 6h ago
> I just finished reading "Fortress Rochester" by Frank Soltis, the designer behind the IBM System38/OS400/iSeries sw.

There's a copy of the previous version, Inside the AS/400, also by Frank Soltis on archive.org

https://archive.org/details/insideas4000000solt/

bombcar · 21h ago
I’m not ready for 1995 to be 30 years ago …
pjmlp · 14h ago
Also, it is one of the successful bytecode OSes, where userspace is mostly bytecode-based and AOT compilation takes place at installation time, or after updates.
anyfoo · 21h ago
Honestly, it's a bit much for a single HN comment. But I've looked around, and I found this, which on first glance gives a good (and not too long) overview: http://www.varietysoftworks.com/jbaldwin/Education/single-le...

Crucially, it also describes the "single-level store" that everything lives in. In short, the boundary between "main memory" and "hard disk" and other things is abstracted away, and you just have pointers to "things" instead, whereas in UNIX accessing some piece of main memory is fundamentally different from, say, opening a file.

UNIX admittedly has a little of the opposite concept: trying -- but in my mind failing -- to represent as much as it can as a "file".

lenerdenator · 20h ago
For me, it was Plan 9. In some ways, it was mind-bending to not have root. I haven't played with it recently; maybe it has gotten easier to play with alternative OSes since I tried 10 years ago.
rjsw · 23h ago
Your dream did happen: Interface Builder on NeXT evolved from a Lisp application on the classic Macintosh [1]. I have a copy of a paper on SOS Interface.

[1] https://en.wikipedia.org/wiki/Jean-Marie_Hullot

williamDafoe · 20h ago
Multics : everything is a memory segment, just get out your soldering iron and wirecutters to begin porting Multics to your hardware today!

IBM: everything is a record type, we have 320 different modules to help you deal with all 320 system-standard record types, and we can even convert some of them to others. And we have 50,000 unfixable bugs because other pieces of code depend upon the bug working the way it does ...

UNIX: everything is an ASCII byte. Done

I started writing code in the 1970s on TOPS-10, TWENEX, PLATO, and my undergrad thesis advisor helped to invent Multics. The benefits of UNIX are real, folks, the snake oil is the hardware vendors who HATE how it levels the playing field ...

tomhow · 1d ago
The origin of this is the Jargon File [1] v. 2.3.1, published on 3 Jan 1991: http://www.catb.org/jargon/oldversions/jarg231.txt

This exact version of it was first published in v. 4.0.0, on 24 Jul 1996: http://www.catb.org/jargon/oldversions/jarg400.txt

That was then published as The New Hacker's Dictionary, third edition, 1996: https://books.google.com.vc/books?id=g80P_4v4QbIC&printsec=f...

[1] https://en.wikipedia.org/wiki/Jargon_File

userbinator · 22h ago
> relatively unreliable and insecure (so as to require continuing upgrades

That seems more applicable to Windows these days. If you graph CVEs vs. version, there is an interesting trend.

anonymousiam · 19h ago
Eric's server is probably melting. It took ten seconds to load 1.5k of text.
pipeline_peak · 22h ago
I’ve read (probably on Wikipedia) that the popularity of Unix was blamed by a few for the stagnation in OS research.

To be fair, it’s not a stretch to suspect a company wants its competitors to be dependent on its product. The theory of planned security vulnerabilities sounds far-fetched, however.

anyfoo · 21h ago
As elaborated in another comment, I am not a fan of UNIX having won (anymore), but I find this "conspiracy" ridiculous. It's obvious to me that UNIX was seen as a good idea by many people, and that it did make a lot of sense compared to many alternatives, especially considering the constraints of computers and computing environments at the time.
lenerdenator · 19h ago
> This would be accomplished by disseminating an operating system that is apparently inexpensive and easily portable, but also relatively unreliable and insecure (so as to require continuing upgrades from AT&T).

I remember watching a YouTube UNIX wars documentary that posits the exact opposite.

It argued that top brass at AT&T saw UNIX as a means to the telecommunications end, not a business in its own right. When it became more popular in the 1980s, it became obvious that they'd be bad businessmen if they didn't at least make a feeble attempt at making money off of it, so some exec at Ma Bell decided to charge thousands (if not tens of thousands; I can't find a reliable primary source online with a cursory search) per license to keep the computer business from getting to be too much of a distraction from AT&T's telecoms business.

That limited it to the only places that were doing serious OS research at the time: universities. Then some nerd at a Finnish university decided to make a kernel that fit into the GNU ecosystem, and the rest is history.

AnimalMuppet · 19h ago
AT&T was prohibited from entering the OS/software business by the consent decree of... 1956, I think? They legally could not make a business out of selling Unix. So they distributed the tapes for more or less the cost of the tapes, IIRC.
pjmlp · 14h ago
Yes, that is how UNIX and C won the OS wars.

As soon as they were allowed to profit from UNIX, the Lions book got forbidden, and the BSD lawsuit took place.

Ericson2314 · 22h ago
Worse is better is viral!

(This conspiracy may not be factually true, but it is teleologically true)

1oooqooq · 22h ago
Amazing how that website manages to render white text on a light image and show empty text in Firefox's reader feature. I don't think I've seen someone able to break so many standards :)
doctor_blood · 11h ago
Just tested it myself; FF reader works as expected for both desktop and mobile.

The server was under a heavier load than usual - it's possible the page hadn't finished loading for you, or was missing elements when you toggled reader mode.

hulitu · 17h ago
> amazing

Amazing? I thought this was the norm. /s

The days when OSs let you select text and background colors are long gone.

dinklenuts · 20h ago
I liked to joke that Netflix popularised chaos engineering so that their competitors would deliberately, repeatedly cripple their infrastructure.
dmazin · 16h ago
That’s a strange way to put it because it makes infrastructure more robust, the opposite of “crippling”.

A funnier version would be they’re in cahoots with AWS since robust infrastructure is more expensive.

lupusreal · 22h ago
This is bullshit, but I believe it.