I think we shouldn't[1] be making Operating Systems, per se, but something like Operating Environments.
An Operating Environment (OE) would be a new interface, maybe a shell and APIs to access file systems, devices, libraries and such -- possibly one that can be launched as an application in your host OS. That way you can reuse all the facilities provided by the host OS and present them in new, maybe more convenient ways. I guess Emacs is a sort of Operating Environment, as are browsers. 'Fantasy computers' are also Operating Environments, like PICO-8, Mini Micro[2], uxn, etc.
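To make the idea concrete, here's a toy sketch: an OE can be an ordinary process that re-presents host-OS facilities (here, the file system) through its own interface. The two-command "language" is invented purely for illustration.

```python
# Toy "Operating Environment": a normal process that wraps host-OS
# facilities in its own interface, the way Emacs or a fantasy console
# does. The command names and the whole mini-language are made up.
import os

COMMANDS = {
    "ls": lambda arg: "\n".join(sorted(os.listdir(arg or "."))),
    "cat": lambda arg: open(arg).read(),
}

def oe_eval(line):
    """Evaluate one command line in the environment's own tiny language."""
    cmd, _, arg = line.partition(" ")
    return COMMANDS[cmd](arg)
```

The point is that the host OS does all the hard work (drivers, file systems); the OE only decides how that functionality is presented.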
Of course, if you really have a great low-level reason to reinvent the way things are done (maybe to improve security, efficiency, DX, or all of that), then go ahead :)
The main reasons are the difficulty of developing robust low-level systems like file systems, the large number of processors you may want to support, and the need to create or port a huge number of device drivers. Linux, for example, supports a huge number of devices at this point (though you could use some sort of compatibility layer). Also, developing a new UX is very different from developing a new low-level architecture (and you can just plug the UX into existing OSes).
In most cases an OS shell (and an OE), from the user's point of view, is "just" a good way of finding and launching applications. Maybe also a way of finding and managing files, if you count the file manager in. It shouldn't get too much in the way or be the center of attention, I guess. (This contrasts with its low-level design, which has a large number of functions, APIs, etc.) But it should also probably be (in different contexts) cozy, comfortable, beautiful, etc. (because why not?). A nice advanced feature is the ability to automate things and run commands programmatically, which command shells tend to have by default but which graphical shells mostly lack. And I'm sure there is still a lot to explore in OS UX...
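The automation a command shell gives you for free is, at bottom, just a process API: launch a program, capture its output, feed it onward. A graphical shell rarely exposes an equivalent. A minimal sketch (the command run is arbitrary):

```python
# Shell-style programmatic automation from any language: run a program
# and capture its output, the primitive every shell script builds on.
import subprocess

def run(cmd):
    """Run a command and return its stdout, as a pipeline stage would."""
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

greeting = run(["echo", "automation"])  # any host program works here
```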
[1] I mean, unless you really have a reason with all caveats in mind of course.
[2] https://miniscript.org/MiniMicro/index.html#about
I think there's value in exploring operating systems and environments. And it's very useful to note that you don't need to do both at the same time. This strikes me as an unnecessary worry though:
> The main reasons are the difficulty of developing robust low-level systems like file systems, the large number of processors you may want to support, and the need to create or port a huge number of device drivers. Linux, for example, supports a huge number of devices at this point (though you could use some sort of compatibility layer).
As a start, you simply don't need to support all of this, and you don't even need to aspire to support it all. Support virtio versions of the relevant hardware class and try to run on reasonable hypervisors. Support one or two physical devices of the relevant hardware class to run on a real machine.
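Concretely, targeting virtio means writing drivers for a handful of paravirtual devices instead of hundreds of physical ones. A sketch of what booting such an OS under QEMU might look like; the kernel and disk image names are hypothetical, while the flags are standard QEMU options:

```python
# Sketch: a QEMU invocation exposing only virtio devices -- the small,
# stable device surface a new OS can target first. "myos.elf" and
# "disk.img" are hypothetical file names.
qemu_cmd = [
    "qemu-system-x86_64",
    "-kernel", "myos.elf",                  # hypothetical kernel image
    "-drive", "file=disk.img,if=virtio",    # one block driver: virtio-blk
    "-netdev", "user,id=n0",
    "-device", "virtio-net-pci,netdev=n0",  # one NIC driver: virtio-net
    "-serial", "stdio",
]
print(" ".join(qemu_cmd))
```

Three drivers (block, net, serial) get you a bootable, networked system on any hypervisor that speaks virtio.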
If you can plug in to an existing driver abstraction and pull in external drivers, great. NDIS is one option there.
If your OS is interesting, you can figure out the driver situation, and the CPU architecture situation. It would sure be neat to run on a Raspberry Pi, but only running on x86 isn't a big limitation.
rbanffy · 24m ago
Some of these are operating systems in the most literal sense. Some are environments that aren't concerned with starting processes or reading disk blocks.
johnsmth · 36m ago
It sounds to me like you are describing Linux desktops.
rbanffy · 23m ago
Or Windows up to Windows ME.
I believe confusing UI with OS is a mistake Windows users are still paying for to this day. Thanks to NeXT, Mac users haven't had this torment since Mac OS 9.
MomsAVoxell · 7h ago
I had the privilege to work as a junior operator in the 80’s, and got exposed to some strange systems .. Tandem and Wang and so on .. and I always wondered if those weird Wang Imaging System things were out there, in an emulator somewhere, to play with, as it seemed like a very functional system for archive digitalization.
As a retro-computing enthusiast/zealot, for me personally it is often quite rewarding to revisit the 'high concept execution environments' of different computing eras. I have a nice, moderately sized retro computing collection, 40 machines or so, and I recently got my SGI systems re-installed and set up for playing. Revisiting Irix after decades away from it is a real blast.
Keyframe · 4h ago
As a fellow dinosaur and a hobbyist, I concur. Especially the SGIs. For those who didn't know, MAME (of all things) can run IRIX to an extent: https://sgi.neocities.org/
rbanffy · 20m ago
The one I'd like to see working is the IBM 3193. Few people know IBM had graphics terminals and the 3270 protocol has provisions for high-res images going to/from the terminal.
https://ifdesign.com/en/winner-ranking/project/datensichtger...
It might not be super unique, but it is a truly from-scratch "common" operating system built in public, which for me at least makes it a reference: an OS whose entire codebase one person could fully understand, if they wanted to understand a complete-looking OS.
Rochus · 6h ago
> This list should include...
And a few dozen others as well.
alphazard · 3h ago
Notably missing from this list are seL4 and Helios, which is based on it.
https://ares-os.org/docs/helios/
The cost of not having proper sandboxing is hard to overstate. Think of all the effort that has gone into Linux containers, or VMs just to run another Linux kernel, all because sandboxing was an afterthought.
Then there's the stagnation in filesystems and networking, which can be at least partially attributed to the development frictions associated with a monolithic kernel. Organizational politics is interfering with including a filesystem in the Linux kernel right now.
MYEUHD · 3h ago
It's not based on it, but inspired by it.
Helios was written from scratch.
alphazard · 2h ago
I don't really understand or appreciate the distinction. The seL4 design was used as a starting point, and small changes were made, mostly as a matter of API convenience. I consider the design of an operating system to be by far the most difficult part, and the typing to be less impressive/important.
Helios hasn't done anything novel in terms of operating system design. It's taken an excellent design, reimplemented it in a more modern language, and built better tooling around it. I tend to point people towards the Helios project instead of seL4 because I think the tooling (especially around drivers) is so much better that it's not even a close comparison for productivity. It's where the open source OS community should be concentrating efforts.
forgotpwd16 · 43m ago
Usually "based on" means the original codebase is mirrored/extended. Arguably, if what you say is true (that Helios' design has only minor differences from seL4), then "based on" in reference to the design is indeed a better description than "inspired by", which makes it sound (imo) like there were significant changes.
rbanffy · 25m ago
MercuryOS reminds me of the Apple Lisa: the way the Lisa managed applications invisibly was a step in the direction of selecting tools based on intentions. The Lisa was a document-centric system, which MercuryOS isn't, but it was a step in the same direction.
For some time, Windows 95 (IIRC) had a Templates folder. You'd put documents in it and you could right-click a folder and select New->Invoice or something similar based on what you had in the Templates folder. It was similar to Lisa's Stationery metaphor.
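Mechanically the feature is tiny: "New -> Invoice" is just a file copy out of a well-known directory. A sketch of the idea (directory and file names are illustrative, not the actual Windows paths):

```python
# Sketch of the Templates-folder mechanism: "New -> <name>" simply
# copies a file from a well-known templates directory into the target
# folder. Names here are illustrative, not real Windows paths.
import shutil
from pathlib import Path

def new_from_template(templates_dir, name, dest_dir):
    """Create a fresh document in dest_dir from the named template."""
    src = Path(templates_dir) / name
    dst = Path(dest_dir) / name
    shutil.copy(src, dst)  # the new document starts as an exact copy
    return dst
```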
Lerc · 5h ago
Are there any operating systems designed from the ground up to support and fully utilize many-processor systems?
I'm thinking of systems designed on the assumption that there are tens, hundreds, or even thousands of processors, with design assumptions made at every level to leverage that availability.
toast0 · 2h ago
I think you're reaching towards the concept of a Single System Image [1] system. Such a system is a cluster of many computers, but you can interact with it as if it was a single computer.
But mainstream servers manage hundreds of processor cores these days. The Epyc 9965 has 192 cores, and you can put it in an off-the-shelf dual-socket board for 384 cores total (and two SMT threads per core, if you want to count that way). Thousands of cores would need exotic hardware; even a quad-socket Epyc wouldn't quite get you there, and AFAIK nobody makes those. An 8-socket Epyc would be madness.
[1] https://en.m.wikipedia.org/wiki/Single_system_image
You can build these without shared memory using standard distributed-database techniques for serializability and fault tolerance. I don't think it's a particularly good idea. There's nothing great about running 'ps' and getting half a million entries. Using the Unix user/group model isn't great for managing resources. It's not even that great to log in to start jobs. The only thing you're gaining is familiarity.
Building better abstractions (Kubernetes is an example, although I certainly hope we don't stay stuck there) is probably a better use of time.
fiberhood · 4h ago
The RoarVM [1] is a research project that showed how to run Squeak Smalltalk on thousands of cores (at one point it ran on 10,000 cores).
I'm re-implementing it as a metacircular adaptive compiler and VM for a production operating system. We are rewriting the STEPS research software and the Frank code [2] in a million-core environment [3]. On the M4 processor we try to use all types of cores: CPU, GPU, neural engine, video hardware, etc.
It's not a true OS--but it's a platform on top of an arbitrary number of nodes that act as one.
The cool thing is that from the program's perspective you don't have to worry about the distributed system running underneath--the program just thinks it's running on an arbitrarily large machine.
We just applied for YC funding.
[1] https://github.com/smarr/RoarVM
[2] https://www.youtube.com/watch?v=f1605Zmwek8
[3] https://www.youtube.com/watch?v=wDhnjEQyuDk
You are doing God's work. Thank you.
0x0203 · 4h ago
Yes, to a degree, but probably not quite like you're thinking. Supercomputers and HPC clusters are highly tuned for the hardware they use, which can have thousands of CPUs. But ultimately the "OS" that controls them takes on a bit of a different meaning in those contexts.
Ultimately, the OS has to be designed for the hardware/architecture it's actually going to run on, and not strictly just a concept like "lots of CPUs". How the hardware does interprocess communication, cache and memory coherency, interrupt routing, etc... is ultimately going to be the limiting factor, not the theoretical design of the OS. Most of the major OSs already do a really good job of utilizing the available hardware for most typical workloads, and can be tuned pretty well for custom workloads.
I added support for up to 254 CPUs in the kernel I work on, but we haven't taken advantage of NUMA yet; we don't really need to, because the performance hit for our workloads is negligible. But the Linuxes and BSDs do, and they can already get as much performance out of the system as the hardware will allow.
Modern OSs are already designed with parallelism and concurrency in mind, and with the move towards making as many of the subsystems as possible lockless, I'm not sure there's much to be gained by redesigning everything from the ground up. It would probably look a lot like it does now.
Findecanor · 4h ago
There have certainly been research operating systems for large cache-coherent multiprocessors. For example, IBM's K42 and ETH Zürich's Barrelfish. Both were designed to separate each core's kernel state from the others' by using message passing between cores instead of shared data structures.
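A toy illustration of that multikernel idea, with threads standing in for cores: each "core" owns its state privately and answers queries over a message channel, instead of taking locks on a shared structure. All names and the queried value are made up.

```python
# Toy version of the K42/Barrelfish approach: per-core private state
# plus message passing, rather than locks on shared data structures.
# Threads stand in for cores; names and values are illustrative.
import queue
import threading

def core_loop(state, inbox):
    """Service requests against this core's private state. No locks are
    needed, because no other thread touches the state directly."""
    while True:
        reply_q, key = inbox.get()
        if key is None:          # shutdown message
            break
        reply_q.put(state[key])

inbox = queue.Queue()
core0 = threading.Thread(target=core_loop,
                         args=({"runnable_threads": 3}, inbox))
core0.start()

reply = queue.Queue()
inbox.put((reply, "runnable_threads"))  # RPC-style query to "core 0"
answer = reply.get()
inbox.put((None, None))                 # tell the core to stop
core0.join()
```

The real systems do this between physical cores with carefully designed inter-core message channels, but the ownership discipline is the same.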
RetroTechie · 3h ago
Why the "novel" qualifier?
There exist many OSes (and UI designs) based on non-mainstream concepts. Many were abandoned or forgotten: at design time suitable hardware didn't exist, there was no software to take advantage of it, etc.
A 'simple' retry at such an alternate vision could be very successful today thanks to a changed environment, audience, or available hardware.
xattt · 6h ago
I can’t help but notice that each of these stubs represents a not-insignificant amount of effort put in by one or more humans.
mrbluecoat · 3h ago
Indeed. Could have been retitled "Labor of Love OSes"
diego_moita · 3h ago
As a kernel programmer I find it so lame that when people say "Operating Systems", what they're thinking of is just the superficial layer: GUI interfaces, desktop managers, and UX in general. As if the only things that could have an OS were desktop computers, laptops, tablets, and smartphones.
What about more specialized devices? E-readers, Wi-Fi routers, smartwatches (hey, hello open-sourced PebbleOS), all sorts of RTOS-based things, etc.? Isn't anything interesting happening there?
serhack_ · 8h ago
I would love to see some examples outside of the WIMP-based UI
WillAdams · 5h ago
Well, there were Momenta and PenPoint; the latter in particular focused on Notebooks, which felt quite different, and Apple's Newton was even more so.
Oberon looks/feels strikingly different (and is _tiny_) and can be easily tried out via quite low-level emulation (it just wants some drivers to be fully native on, say, a Raspberry Pi).
amelius · 7h ago
Maybe a catalog of kernels?
wazzaps · 6h ago
MercuryOS towards the bottom is pretty cool
MonkeyClub · 5h ago
MercuryOS [1, 2] appears to be simply a "speculative vision" with no proof of concept implementation, a manifesto rather than an actual system.
I read through its goals, and it seems to position itself against current ideas and metaphors, but without actually suggesting any alternatives.
Perhaps an OS for the AI era, where the user expresses an intent and the AI figures out its meaning and carries it out?
[1] https://www.mercuryos.com/
[2] https://news.ycombinator.com/item?id=35777804 (May 1, 2023, 161 comments)
The title should have been "Catalog of UI Demos". It has nothing to do with operating systems.
Desktop Neo was a sick demo, ten years ago. If there was ever a real project that implemented it, I'd be willing to give it a whirl.
rubitxxx3 · 6h ago
This list could be longer! I expected much more, given that CS students and hobbyists are doing this sort of thing often. Maybe the format is too verbose?
gitroom · 3h ago
Honestly love seeing people obsess over old or weird OS stuff - makes me want to poke around in my own cluttered laptop folders just to see what weird bits I still have tucked away.