I love, love, love flipping through old magazines. Look, an ad for a commercial Emacs! A C compiler for just $495! A port of vi to MS-DOS for $149! A sort command for just $135! A PC card with a 68000 coprocessor for heaven knows why!
The good old days were fun for their sense of everything-is-new adventure, but there's an awful lot I don't miss.
uxhacker · 4h ago
I've also been thinking recently about Jef Raskin's book The Humane Interface. It feels increasingly relevant now.
Raskin was deeply concerned with how humans think in vague, associative, creative ways, while computers demand precision and predictability.
His goal was to humanize the machine through thoughtful interface design—minimizing modes, reducing cognitive load, and anticipating user intent.
What's fascinating now is how AI changes the equation entirely. Instead of rigid systems requiring exact input, we now have tools that are themselves fuzzy and probabilistic.
I keep thinking that the gap Raskin was trying to bridge is closing—not just through interface, but through the architecture of the machine itself.
So AI makes Raskin’s vision more feasible than ever but also challenges his assumptions:
Does AI finally enable truly humane interfaces?
rasz · 1m ago
Dude is responsible for one-button mouse ...
ianbicking · 2h ago
"Does AI finally enable truly humane interfaces?"
I think it does; LLMs in particular. AI also enables a ton of other things, many of them inhumane, which can make it very hard to discuss these things as people fixate on the inhumane. (Which is fair... but if you are BUILDING something, I think it's best to fixate on the humane so that you conjure THAT into being.)
I think Jef Raskin's goal with a lot of what he proposed was to connect the computer interface more directly with the user's intent. An application-oriented model really focuses so much of the organization around the software company's intent and position, something that follows us fully into (most of) today's interfaces.
A magical aspect of LLMs is that they can actually fully vertically integrate with intent. It doesn't mean every LLM interface exposes this or takes advantage of it (quite the contrary!), but it's _possible_, and it simply wasn't possible in the past.
For instance: you can create an LLM-powered piece of software that collects (and allows revision of) some overriding intent. Literally take the user's stated intent and put it in a slot in all following prompts. This alone will have a substantial effect on the LLM's behavior! And importantly you can ask for their intent, not just their specific goal. Maybe I want to build a shed, and I'm looking up some materials... the underlying goal can inform all kinds of things, like whether I'm looking for used or new materials, aesthetic or functional, etc.
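The "intent slot" idea above can be sketched in a few lines. This is a minimal illustration, not any particular library's API; the `Session` class and `build_prompt` helper are made-up names, and a real system would send the assembled prompt to a model.

```python
# Sketch of holding a user's stated intent in one place and splicing it
# into every prompt. Session and build_prompt are illustrative names.
from dataclasses import dataclass


@dataclass
class Session:
    intent: str  # the user's overriding goal, revisable at any time

    def build_prompt(self, request: str) -> str:
        # Every request carries the standing intent alongside the
        # immediate question, so the model can bias its answer toward
        # the larger goal (used vs. new materials, budget, etc.).
        return (
            f"User's overriding intent: {self.intent}\n"
            f"Current request: {request}\n"
            "Answer in a way that serves the overriding intent."
        )


session = Session(intent="Build a cheap, functional garden shed")
prompt = session.build_prompt("Find suppliers for 2x4 lumber")
```

Revising the intent is then just reassigning `session.intent`; every later prompt picks up the change automatically.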
To accomplish something with a computer we often thread together many different tools. Each of them is generally defined by its function (photo album, email client, browser-that-contains-other-things, and so on). It's up to the human to figure out how to assemble these, and at each step it's easy to lose one's place, to become distracted or confused, to lose track of context. And again an LLM can engage with the larger task in a way that wasn't possible before.
PaulDavisThe1st · 2h ago
Tell me, how does doing any of the things you've suggested help with the huge range of computer-driven tasks that have nothing to do with language? Video editing, audio editing, music composition, architectural and mechanical design, the list is vast and nearly endless.
LLMs have no role to play in any of that, because their job is text generation. At best, they could generate excerpts from a half-imagined user manual ...
ianbicking · 29m ago
Everything has to do with language! Language is a way of stating intention, of expressing something before it exists, of talking about goals and criteria. Every example you give can be described in language. You are caught up in the mechanisms of these tools, not the underlying intention.
You can describe your intention in any of these tools. And it can be whatever you want... maybe your intention in an audio editor is "I need to finish this before the deadline in the morning but I have no idea what the client wants" and that's valid, that's something an LLM can actually work with.
HOW the LLM is involved is an open question, something that hasn't been done very well, and may not work well when applied to existing applications. But an LLM can make sense of events and images in addition to natural language text. You can give an LLM a timestamped list of UI events and it can actually infer quite a bit about what the user is actually doing. What does it do with that understanding? We're going to have to figure that out! These are exciting times!
uxhacker · 1h ago
Because some LLMs are now multimodal—they can process and generate not just text, but also sound and visuals. In other words, they’re beginning to handle a broader range of human inputs and outputs, much like we do.
PaulDavisThe1st · 1h ago
Those are not LLMs. They use the same foundational technology (pick what you like, but I'd say transformers) to accomplish tasks that require entirely different training data and architectures.
I was specifically asking about LLMs because the comment I replied to only talked about LLMs - Large Language Models.
jnwatson · 1h ago
Multimodal LLMs are absolutely LLMs, the language is just not human language.
Karrot_Kream · 1h ago
What if you could pilot your video editing tool through voice? Have a multimodal LLM convert your instructions into some structured data instruction that gets used by the editor to perform actions.
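One plausible shape for that pipeline: the model returns structured JSON for each spoken instruction, and the editor executes it. The command schema and `apply_command` function here are hypothetical, not a real editor's API.

```python
# Sketch: a model turns "trim the first clip to the first five seconds"
# into structured JSON; the editor applies it to its timeline. The
# schema (action/clip_index/start/end) is an invented example.
import json


def apply_command(timeline: list[dict], raw_json: str) -> list[dict]:
    cmd = json.loads(raw_json)  # e.g. an LLM's structured output
    if cmd["action"] == "trim":
        clip = timeline[cmd["clip_index"]]
        clip["start"], clip["end"] = cmd["start"], cmd["end"]
    return timeline


model_output = '{"action": "trim", "clip_index": 0, "start": 0.0, "end": 5.0}'
timeline = apply_command([{"start": 0.0, "end": 30.0}], model_output)
```

Keeping the model on the "intent to structured command" side of the boundary, with the editor doing the actual mutation, also keeps the tool's behavior deterministic and undoable.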
masfuerte · 1h ago
Compare pinch zoom to the tedious scene in Bladerunner where Deckard is asking the computer to zoom in to a picture.
PaulDavisThe1st · 1h ago
Training LLMs to generate some internal command structure for a tool is conceptually similar to what we've done with them already, but the training data for it is essentially non-existent, and would be hard to generate.
yubblegum · 56m ago
Deckard. Blade Runner.
growlNark · 3h ago
> Does AI finally enable truly humane interfaces?
Perhaps, but I don't think we're going to see evidence of this for quite a while. It would be really cool if the computer adapted to how you naturally want to use it, though, without forcing you through an interface where you talk/type to it.
Karrot_Kream · 1h ago
> Does AI finally enable truly humane interfaces?
This is something I keep tossing over in my head. Multimodal capabilities of frontier models right now are fantastic. Rather than locking into a desktop with peripherals or hunching over a tiny screen and tapping with thumbs we finally have an easy way to create apps that interact "natively" through audio. We can finally try to decipher a user's intent rather than forcing the user to interact through an interface designed to provide precise inputs to an algorithm. I'm excited to see what we build with these things.
rendx · 3h ago
Highly recommended, timeless read!
mistrial9 · 3h ago
"no" .. the intelligent appliance was the product that came out of Raskin's thinking..
I object to the framing of this question directly -- there is no definition of "AI". Secondly, the humane interface is a genre that Jef Raskin shaped and re-thought over years. A one-liner here definitely does not embody the works of Jef Raskin.
Off the top of my head, it appears that "AI" enables one-to-many broadcast, service interactions, and knowledge retrieval in a way that was not possible before. The thinking of Jef Raskin was very much along the lines of an ordinary person using computers for their own purposes. "AI" in the supply-side format coming down the road appears to be headed towards societal interactions that depersonalize and segregate individual people. It is possible to engage "AI", whatever that means, to enable individuals as an appliance. This is by no means certain at this time IMHO.
PaulDavisThe1st · 1h ago
> Let me make a typical error. I want to move the cursor to the word good, so I should press the left Leap key and type “good.” I'll press the right Leap key and type. It found it anyway. The system does one thing that all systems should have done from day 1: If you tell it to search one way for something and it doesn’t find it, it searches the other way in case you made a mistake. Most systems didn’t do this because if you did find it then you’ve lost your place. In this system if you want to go back, you just bang on the keyboard. (Raskin slams both hands on the keyboard, and the cursor returns to the point in the document at which his search began.)
We're supposed to idolize this as some sort of hyper-enlightened version of interface design? Hell no.
I get that this design worked for Raskin. It worked for him the same way that my hacked version of GNU Emacs' next-line function does for me when the cursor is at the end of the buffer, or the way I needed a version of its delete-horizontal-space that would work only after the cursor.
I get that Raskin's "oh, you probably made a mistake, let me do something else that I suspect is what you really meant" might even have worked for a bunch of other people too. But the idea that somehow Raskin had some sort of deep insight with stuff like this, rather than just a set of personal preferences that are like those of anyone else, is just completely wrong, I think.
jaysonelliot · 1h ago
You're making the error of judging Raskin's approach with the knowledge of user interfaces that a person in 2025 has. It's been 40 years since that interview.
Many people today weren't even born yet.
In that 40 years, many UI conventions have sprung up, and we've internalized them to the point that they're so familiar we actually say they're intuitive.
But if you go back to the state of computing in 1986, or even earlier, when Raskin was developing his UX principles for the Canon Cat and the SwyftCard, he was considering computer interfaces that were almost exclusively command-line interfaces.
You're not supposed to "idolize" any designer or engineer. But I would highly encourage you to read The Humane Interface, learn about the underlying principles of usability and interface design, and consider how you'd apply them to a UI today, 40 years later. The execution you'd come up with would be different. But the principles he started from are foundational and very useful.
mongol · 1h ago
Raskin was a fan of "zoomable interfaces", as I recall. I remember reading about a huge canvas which you navigate and can zoom in and out of.
Today we have Miro and it works like that. I hate it.
PaulDavisThe1st · 1h ago
Further, I have read several sections of The Humane Interface, and I think it does contain some real insight, some of which we have unfortunately lost.
But I do not think that Raskin was channelling some remarkable stream of insight into these matters. And yes, "idolize" was more poking fun at people who use superlatives to describe him, in my opinion without much justification.
PaulDavisThe1st · 1h ago
You're making the mistake of thinking I wasn't using computers in 1986 :)
I used GNU Emacs as an example for precisely this reason.
I read his book during my early years of software development, as a big fan of the (early) Mac and with a real passion to build better user experiences in industrial software.
I often wonder what Jef would think of the iPhone (or even the current Mac), if he were still with us. I suspect he'd be deeply disappointed.
throwaway6723 · 1h ago
Jef Raskin got cut out of the Macintosh project by Steve Jobs and he held a grudge about that. Unclear if he'd be deeply disappointed for personal or technical reasons.
janfoeh · 3h ago
By definition, an operating system is the program you have to fight with before you can fight with an application.
One for the quote file.
emorning3 · 3h ago
I suspect that Jef Raskin would not be down with "prompt engineering" at all.
I think that Mr Raskin's opinion would be that it should be obvious how to use a piece of software.
TMWNN · 2h ago
Prompt engineering, it seems to me, is about the most obvious way to use a computer: Tell it in plain English what you want.
mistrial9 · 2h ago
you are assuming clarity and good faith on the other side of the model providers