It's so cool to see more terminal(-adjacent) experiments! We're overdue in evolving this space.
Self-plug: last month I demoed [0] my own terminal. The goal is to dispense with traditional shells and see what happens. It generated quite a bit of hoopla, with a 50-50 split on how people felt about it.
I'll keep an eye on Fui; might come in handy for future Linux builds.
[0] https://terminal.click/posts/2025/04/the-wizard-and-his-shel...
There are a lot of parallels between what you're working on and my projects. In fact, nearly all of your features exist in my projects too.
It's great to see others working in this field. When I started out (nearly a decade ago now) there was a lot more resistance to these kinds of concepts than there is now.
f1shy · 4h ago
This is amazing... I've been asking for those features (which were available in Genera) all this time!
The hard distinction between terminal/text and GUI can be somewhat blurred.
I wanted to download it, but I see a contribution is needed. I'll see how I can convince you :)
abnercoimbre · 3h ago
Thank you! Yes, it's currently in closed beta (the project is TempleOS-level crazy, which means it's too easy to get it wrong).
If someone thinks they're a good fit to test experimental terminals, an email will suffice: abner at terminal dot click
shlomo_z · 5h ago
Your terminal looks cool!
pjmlp · 4h ago
Depends. I, for one, am quite happy no longer having to use a computer like it's 1986-1990.
f1shy · 4h ago
There is a big difference between "having to" and "being able to" (if you want to, like it, or find it convenient for some use case).
Anyway, unless you were a happy Genera user at that time, I'd like to know what terminal you used back then with color highlighting, dynamic feedback, autocompletion, transparency, and the other features...
pjmlp · 3h ago
I used what the Timex 2068, MS-DOS 3.3-5.0, and CP/M allowed me to do, until I was freed into Windows 3.x and an Amiga 500 in 1990.
Until 2005, I also had to put up with using Xenix, DG/UX, AIX, Solaris, HP-UX, and GNU/Linux via telnet as development servers.
Thankfully, by 1996, X Win32 and Hummingbird came to the rescue as my X servers of choice, when possible.
As for all those features, you could already do most of them in 4DOS (1989).
https://en.m.wikipedia.org/wiki/4DOS
It's like asking me about electronic typewriter features when the world has moved on to digital printing.
shakna · 2h ago
Some of the authors in my circles still use typewriters. Some don't. That's fine. But people do enjoy going back to older techniques, sometimes.
I've seen many debates about the correct carriage return. The world does not move on uniformly.
markisus · 10h ago
Can someone explain what “the framebuffer” is? I’m familiar with OpenGL programming where the OS can provide a framebuffer for an application but I’m confused about whether there is a global framebuffer for the entire desktop. Is this a Linux specific concept?
Bhulapi · 10h ago
As far as I know, a framebuffer can mean a lot of things depending on hardware and implementation, but it traditionally referred to actual memory holding the pixel values that would eventually be written to the screen. In Linux, this is abstracted by the framebuffer device, which is hardware independent (you can actually have several fb devices, which, if I'm not mistaken, usually end up referring to different monitors). What's convenient about the implementation is that these devices work like normal memory devices, meaning you can read/write them as you would any other memory. Some more info: https://www.kernel.org/doc/html/latest/fb/framebuffer.html
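To make that concrete, here's a minimal C sketch of my own (not from this project) that mmaps /dev/fb0 and fills the screen with a solid color. It assumes a 32-bits-per-pixel mode and permission to open the device (root, or membership in the video group):

    /* Fill /dev/fb0 with a solid color. Sketch; assumes 32 bpp. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
            perror("ioctl"); return 1;
        }
        if (vinfo.bits_per_pixel != 32) {
            fprintf(stderr, "expected 32 bpp, got %u\n", vinfo.bits_per_pixel);
            return 1;
        }

        /* line_length is the stride in bytes; it can exceed xres * 4 */
        size_t len = (size_t)finfo.line_length * vinfo.yres;
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        for (uint32_t y = 0; y < vinfo.yres; y++)
            for (uint32_t x = 0; x < vinfo.xres; x++) {
                uint32_t *px = (uint32_t *)(fb + y * finfo.line_length + x * 4);
                *px = 0x00336699; /* arbitrary blue-ish XRGB value */
            }

        munmap(fb, len);
        close(fd);
        return 0;
    }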
fc417fc802 · 8h ago
I'll preface this by saying that I may have some misconceptions. Other people much more knowledgeable than I am have posted summaries of how modern graphics hardware works on HN before.
My understanding is that modern hardware is significantly more complicated at the lowest levels and (at least generally) no longer has a dedicated framebuffer (at least in the same sense that old hardware did).
My understanding of the memory access provided by fbdev is that it's an extremely simple API; in other words, an outdated abstraction that's still useful, so it's kept around.
An example of this complexity is video streams utilizing hardware accelerated decoding. Those often won't show up in screenshots, or if they do they might be out of sync or otherwise not quite what you saw on screen because the driver is attempting to construct a single cohesive snapshot for you where one never actually existed in the first place.
If I got anything wrong, please let me know. My familiarity generally stops at the level of the various cross-platform APIs (OpenGL, Vulkan, OpenCL, etc.).
oasisaimlessly · 7h ago
> My understanding is that modern hardware is significantly more complicated at the lowest levels and (at least generally) no longer has a dedicated framebuffer (at least in the same sense that old hardware did).
Modern hardware still generally can be put into a default VGA-compatible[1] mode that does use a dedicated framebuffer. This mode is used by the BIOS and during boot until the model-specific GPU driver takes over.
[1]: https://en.wikipedia.org/wiki/Video_Graphics_Array#Use
Unless you're deep in a conversation with a graphics nerd, when "the framebuffer" is mentioned, the person normally means some area of memory, accessible programmatically, that directly represents the pixels displayed on the screen. No fancy windows, vectors, or coordinates; just raw memory holding the literal values the screen is showing.
In practice it's not literally that, but it behaves as if it were.
rjsw · 9h ago
On Linux and on other operating systems that have reused the Linux DRM drivers, you can run OpenGL applications from a virtual terminal text console. Examples are kmscube [1] and the glmark2 benchmark suite.
[1] https://gitlab.freedesktop.org/mesa/kmscube
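For a taste of the first step such programs take, here's a rough C sketch of my own (not from kmscube) that opens a DRM device and lists its connectors. It assumes libdrm is installed (build with gcc demo.c $(pkg-config --cflags --libs libdrm)) and that the device node is /dev/dri/card0, which may differ per system:

    /* List DRM connectors: the starting point for KMS-based rendering. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void) {
        int fd = open("/dev/dri/card0", O_RDWR); /* node path is an assumption */
        if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) { fprintf(stderr, "not a modesetting-capable device?\n"); return 1; }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn) continue;
            printf("connector %u: %s\n", conn->connector_id,
                   conn->connection == DRM_MODE_CONNECTED ? "connected" : "disconnected");
            drmModeFreeConnector(conn);
        }

        drmModeFreeResources(res);
        close(fd);
        return 0;
    }

A real kmscube-style program would go on to pick a connected connector, set a mode, and page-flip GPU-rendered buffers onto it.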
Awesome! Reminds me of the good old days of QuickBasic and SCREEN 13, when you could write very small programs with fullscreen graphics.
I still have not figured out how to do fullscreen graphics on my Mac.
krackers · 10h ago
>how to do fullscreen graphics on my Mac
You can't; you don't have direct access to the framebuffer. Unless by "fullscreen" you just mean spanning edge to edge, in which case you can create an OpenGL or Metal view and just set the fullscreen style mask.
jebarker · 9h ago
> You can't, you don't have direct access to the framebuffer.
Why is this the case? What would be the problem with allowing it?
krackers · 8h ago
It fits Apple's modus operandi of enforcing things UI/UX-wise. I assume in this case they don't want end apps to be able to bypass the compositor (and e.g. prevent alerts from showing on the screen or whatnot).
They used to allow it, but they removed the API after 10.6:
https://developer.apple.com/library/archive/documentation/Gr...
I guess on modern macOS, CGDisplayCapture() is the closest example that still works (although clearly there is still some compositing going on, since the orange-dot microphone indicator still appears, and you can get the Dock to appear over it if you activate Mission Control; I'm guessing it does the equivalent of a full-screen window but then tries to lock input somehow).
godelski · 7h ago
Apple Employees: Please push back against this behavior. It creates more problems than it solves[0]
The philosophy really creates a lot of problems that end up just frustrating users. It's a really lazy form of "security". Here's a simple example of annoyance: if I have Private Relay enabled, then turning on my VPN causes an alert (which will not dismiss automatically). The same thing happens when I turn it off! It's a fine default, but why can't I decide I want it to dismiss automatically? This is especially silly considering I can turn off Private Relay without any privileges and without any notifications! NOTHING IS BEING SOLVED except annoying users... There are TONS of stupid little bugs that normal people just get used to but that have no reason for existing (you can always identify an iPhone user because "i" is never capitalized...).
The philosophy only works if Apple is able to meet everybody's needs, but we're also talking about a company that took years to integrate a flashlight into their smartphone. It's okay that you can't create a perfect system. No one expects you to. It is also okay to set defaults and think about design; that's what people love you for (design). But none of this needs to come at a cost to "power users." I understand we're a small portion of users, but you need to understand that our efforts make your products better. Didn't see an issue with something? That's okay, you're not omniscient. But luckily there are millions of power users who will discover stuff and provide solutions. Even if these solutions are free, you still benefit! Because it makes people's experiences better! If it is a really good solution, you can directly integrate it into your product too! Everybody wins! Because designing computers is about designing an ecosystem. You design things people can build on top of! That's the whole fucking point of computers in the first place!
Get your head out of your ass and make awesome products!
[0] If any...
[another example]: Why is it that my AirPods can only connect to one device, constantly switching between my MacBook and iPhone? This often creates unnecessary notifications. I want magic with my headphones. There's no reason I shouldn't be able to be playing Spotify on my computer, walk away from it with my phone in my pocket, and have the source switch to my phone. I can pick up my phone and manually change the source. But in reality what happens is I pick up my phone and Spotify pauses because my headphones switched to my iPhone... If you open up access, then you bet these things will be solved. I know it means people might be able to then seamlessly switch from their MacBook to their Android, but come on, do you really think you're going to convert them by making their lives harder? Just wait till they try an iPhone! It isn't a good experience... I'm speaking from mine...
CharlesW · 6h ago
> What would be the problem with allowing it?
All software/hardware in the system (Core Animation, Retina scaling, HDR, Apple Silicon's GPUs) assumes an isolated/composited display model. A model where a modern rendering pipeline is optional would add major complexity, and would also prevent Apple from freely evolving its hardware, system software, and developer APIs.
Additionally, a mandatory compositor blocks several classes of potential security threats, such as malware that could suppress or imitate system UI or silently spy on the user and their applications (at least without user permission).
eschaton · 6h ago
For one thing, on most modern systems there isn't a framebuffer; there's only a simulation of one created by GPU compositing.
For another, security. Your application doesn't get to scrape other applications' windows or put stuff in them, and they don't get to do that to yours, unless properly entitled and signed. You can self-sign for development, but distribution signing ensures malware can be blocked by having its signature revoked.
Bhulapi · 10h ago
My first experience with programming was with QuickBasic. You just brought back some memories, wish I still had all of those old programs around.
antirez · 2h ago
Related, at a different layer of abstraction: the Kitty graphics protocol, also implemented by the Ghostty terminal emulator.
RetroTechie · 4h ago
This kind of thing begs to be run on bare metal (no Linux fbdev sitting on a modern 3D GPU with a complex driver stack under the hood), or at most on a small RTOS.
kristianp · 9h ago
What does "in a TTY" context mean here? It doesn't mean in a terminal window, right?
o11c · 4h ago
The terminology around this is really confusing, unfortunately.
Sometimes a distinction is made between "TTY" (teletype: traditionally separate hardware, now built into the kernel, accessed by ctrl-alt-f1, at /dev/tty1 etc.) and "PTY" (pseudo-teletype: terminal emulator programs running under X11 etc., at /dev/pts/0 etc. nowadays; confusingly, the traditional paths used /dev/ptyxx for masters and /dev/ttyxx for slaves, which is not the same as the tty-vs-pty distinction here). "VT" or "virtual console" are unambiguous and more commonly used terms than "TTY" in this sense. Serial, parallel, and USB terminals don't really fit into this distinction properly; even though they're close to the original definition of a teletype, they don't support the VT APIs.
There are many things the kernel provides in a real TTY (raw keyboard, beeper control, VCS[A] buffers, etc.). "The" framebuffer at /dev/fb0 etc. is usually swapped out when the TTY is (assuming proper keyboard/event handling, which is really icky), so it counts. It's actually more complicated than "framebuffer", since X11/Wayland use newer abstractions that the framebuffer is now built on top of (if I understand correctly; I've only dabbled here).
Note that there are 3 special devices the kernel provides:
/dev/tty0 is a sort-of alias for whichever /dev/tty1 etc. is currently active. This is not necessarily the one the program was started on; getting that is very hacky.
/dev/tty is a sort-of alias for the current program's controlling terminal (see credentials(7)), which might be a PTY or a TTY or none at all.
/dev/console is where the kernel logs stuff and single-user logins are done, which by default is /dev/tty0 but you can pass console= multiple times (the last will be used in contexts where a single device is needed).
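As a small illustration of poking at those devices, here's a C sketch of my own: it asks /dev/tty0 which virtual console is active via the VT_GETSTATE ioctl (opening /dev/tty0 typically requires root or tty-group membership):

    /* Print the currently active virtual console. Sketch only. */
    #include <fcntl.h>
    #include <linux/vt.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/tty0", O_RDONLY);
        if (fd < 0) { perror("open /dev/tty0"); return 1; }

        struct vt_stat vts;
        if (ioctl(fd, VT_GETSTATE, &vts) < 0) { perror("VT_GETSTATE"); return 1; }

        /* v_active is the number N of the active /dev/ttyN */
        printf("active VT: /dev/tty%d\n", vts.v_active);
        close(fd);
        return 0;
    }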
This is probably wrong but hopefully informative.
freeone3000 · 8h ago
It does. Terminals, including without X, are frequently graphical devices, allowing for full-color graphics without needing Xlib or Wayland. This allows you to more easily manipulate that capability.
geon · 3h ago
On all OSes?
nimish · 10h ago
Interesting, I guess you could port LVGL to this and get a full GUI?
Bhulapi · 10h ago
I think trying to use anything from LVGL in this project would reduce to essentially just using LVGL. It's more of a project to try and build most of the components from "scratch", i.e. use as few external libraries as possible.
cellis · 10h ago
Super cool! Looks small enough to still be grokkable!
anthk · 2h ago
Remember SVGAlib and libggi? Maybe FUI is like the latter.
actionfromafar · 10h ago
Any license on this?
Bhulapi · 10h ago
Added MIT license
mouse_ · 11h ago
Don't type commands from the Internet, especially as root, especially when dd is involved. That being said,
If you're ever bored, from a TTY, type

    sudo dd if=/dev/urandom of=/dev/fb0

This provides a nifty demonstration of how both the framebuffer and urandom work.
You can also take a sort of "screenshot" in a TTY by typing

    dd if=/dev/fb0 of=./shot.fb

and then view it again by flipping those filenames around, so that shot.fb is the input and /dev/fb0 is the output:

    dd if=./shot.fb of=/dev/fb0
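If you'd rather do the same trick from C instead of dd, here's a sketch of my own that streams /dev/fb0 to a file or back in 64 KiB chunks (the "save"/"restore" argument and the ./shot.fb path are just illustrative):

    /* Save or restore a raw framebuffer dump, like the dd commands above. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int copy_fd(int in, int out) {
        char buf[65536];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, (size_t)n) != n) return -1;
        return n < 0 ? -1 : 0;
    }

    int main(int argc, char **argv) {
        /* "save" copies fb -> file; anything else copies file -> fb */
        int save = argc > 1 && strcmp(argv[1], "save") == 0;
        const char *fb = "/dev/fb0", *shot = "./shot.fb";
        int in  = open(save ? fb : shot, O_RDONLY);
        int out = open(save ? shot : fb, O_WRONLY | O_CREAT, 0644);
        if (in < 0 || out < 0 || copy_fd(in, out) < 0) { perror("copy"); return 1; }
        close(in); close(out);
        return 0;
    }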
Bhulapi · 11h ago
Writing urandom to the framebuffer is a joy in and of itself. You actually reminded me to have users add themselves to the video and input groups (which usually requires root privileges), but this way they can then run the library without sudo.