"Everything worked in a single address space, programs could talk to each other in ways operating systems of today couldn’t dream of."
Those of us who used early versions of Windows without protected memory don't consider a "single address space" to be a feature.
kazinator · 32d ago
Yet, managed languages have somewhat brought that back, enabling very large and complex applications with many parts in one image.
Windows is programmed using machine-language executables written using an unsafe language, which makes all the difference. That's what necessitates hardware protection.
(One single address space still benefits from virtual memory, whose advantages go beyond protection.)
wduquette · 32d ago
I still wouldn't want everything on the platform running in a single address space, useful as it is for monolithic applications.
PaulHoule · 33d ago
The usual analysis is that Common Lisp killed the Lisp machine. That is, as much as some people will deny it, Common Lisp was designed with implementation in mind (anything implementable is designed with implementation in mind), and the intention was that it would get better-than-Lisp-machine performance on machines like the VAX, 68k, and 80386, which soon became the mainstream.
An alternate OS is an appealing idea in many ways today but runs into the problem of "where do you get your userspace?" Make it POSIX compatible and you can run all kinds of C code like the GNU tools and other things you find in a Linux distribution. Make something radical and new and you have to write everything from scratch and so do all your users.
amszmidt · 32d ago
Not sure where this "analysis" comes from, but the general agreement is that the demise of the Lisp Machine had more to do with the speed of technological advances than with Common Lisp -- by the time Common Lisp was standardized, for all intents and purposes, all Lisp Machine companies were already defunct.
Common Lisp was definitely not designed with the intention of better performance; the intention was literally to have a _common_ Lisp language that multiple implementations could use and where porting programs would be much easier. One needs to remember that when Common Lisp was first drafted, there were dozens of Lisp dialects -- all of them different.
The list of CPUs spans a very large range, some predating Common Lisp (CLtL1 is from 1984, CLtL2 from 1990, and ANSI Common Lisp from 1996) by several years (the VAX dates from 1977).
But other than that, the idea of a not-Unix system does fall into those two buckets ... make it Unix, or rewrite everything. One can see this in Oberon, Smalltalk-78, Mezzano, etc...
tocs3 · 33d ago
How is the architecture of a Lisp machine different from "normal" computers? I did the first half of the NAND to Tetris course and thought it was super interesting. Since then, I have been thinking about using the HDL from the course to play with other computing ideas (a Turing machine, cellular automata, one-instruction-set computers). I have never really found anything about how a Lisp machine would be different in terms of hardware, though.
jecel · 33d ago
Much of the memory in a Lisp program is in the form of CONS cells, so the MIT LISP machines had a compact way of encoding this. They also used tagged memory to be able to handle the different kinds of data at runtime. They inherited a very stack-oriented execution model from the PDP-10 implementation of LISP. And they implemented very complex instructions using microcode.
The Symbolics people refined this approach while the LMI people kept the original design until nearly the end, when they tried to do a RISC+tags design:
http://fare.tunes.org/tmp/emergent/kmachine.htm
While the Lisp Machine does use lists, the benefits of CDR coding were/are quite overblown. The Lisp Machine also used other data structures far more heavily than lists.
The Lisp Machine macroinstructions aren't that complicated; it is basically a stack-based machine -- the most complicated part was the handling of function arguments from Lisp (the FEF), which has complicated semantics when it comes to optional, keyword, and other kinds of arguments.
trinix912 · 33d ago
Aren't these points similar to the problems Plan 9 tried to solve and the ideas it presented (e.g. distributed filesystems)? The key point with Lisp Machines was the specialized hardware, which we don't seem to need anymore. But attempts have definitely been made; it's just that it usually ends when you don't have enough vendor support (be it hardware or software).
deterministic · 33d ago
Nope, we don't need Lisp Machines. If we did, somebody would get rich building and selling them.