Fui: C library for interacting with the framebuffer in a TTY context
164 points by Bhulapi 5/8/2025, 10:05:37 PM | 68 comments | github.com
Self-plug: last month I demoed [0] my own terminal. The goal is to dispense with traditional shells and see what happens. It generated quite a bit of hoopla, with a 50-50 split in how people felt about it.
I'll keep an eye on Fui; might come in handy for future Linux builds.
[0] https://terminal.click/posts/2025/04/the-wizard-and-his-shel...
I wanted to download it, but I see a contribution is needed. I will see how I can convince you :)
If someone thinks they're a good fit to test experimental terminals, an email will suffice: abner at terminal dot click
(Not interested in testing, sad to say. I've spent the last fifteen years translating "I wish this shell/terminal had X feature" into "time to find a tool for X that isn't as garbage as a shell and a terminal." But I have also used Emacs for about the same length of time, and that implements its own terminal emulators anyway.)
The rest is all about how smart the terminal is and how it lets you do things without your shell having to be involved. But... now I have to know _two_ major things: the shell and the terminal. Ugh. Why can't I just have a better shell do all of that?
How does this stuff work over ssh/mosh? Answer: it doesn't:
> Can’t Shells Do All This Stuff Too?
> Maybe. You could extend the TTY protocol or add new escape codes to mimic some of these features. But as Terminal Click evolves this argument gets sillier. [...]
But if you don't want to add escapes then now you have to define new protocols to make this work across the network. Or give up on the network. Or continue to insist on the terminal interpreting what it sees and being a layer completely detached from the shell, though now it can't see the PATH on the remote end and other such problems.
Don't you see that a plain old terminal emulator + tmux + shell is just an awesome stack?
Now maybe we should never need remote shells. In a zero-touch world there really would be very few places to ssh/mosh over to, though even then we'd need this terminal to be smart enough to understand that this shell is Windows PowerShell and now, whoops, we're in a WSL2 bash shell (which is a lot like ssh'ing over to a Linux box).
I just don't have high hopes that these things can make my life easier.
> Don't you see that a plain old terminal emulator + tmux + shell is just an awesome stack?
I think it's only awesome to those of us brought up on RTFM culture. If you read the commentary [0] from HN's little cousin, they center the battle around this very subject, which is what I personally care about the most. So much so that I'm willing to smash the stack (heh) and persuade others it's worth reinventing even if we ultimately fail. We will preserve what we can; I'm not going at it blindly.
You'll never meaningfully convince newer generations to leverage man pages for discovering new commands/functions: "Oh oops add a 3 to read about that one. Why? Let me explain man sections to you." Ditto for environment updates: "Try putenv, which should not be confused with setenv! No no, run env first to dump what you have!"
I ALWAYS get blank stares from (imo competent) Gen Z devs who were initially curious. They were willing to read and learn and discover, but not like this. They tiptoe away from me and switch back to IDEs. My examples were contrived, but the general observation holds across the entirety of workflows within classic terminals.
[0] https://lobste.rs/s/ndlwoh/wizard_his_shell
Although I should say that I work with colleagues in newer and older generations, and I find that the younger ones do end up learning how to RTFM.
- Automatic black box with autocomplete is a deal breaker for me. It'll drive me mad.
- I only use the mouse in the terminal if I have to (it disturbs and slows my flow!). So anything that helps me NOT use the mouse would be better. At the moment I think it's only text select/copy that I use the mouse for. And resizing the window. So a magic quick way to select and copy text from the keyboard would be very nice.
But I do realize I'm not the intended audience.
It’s great to see others working in this field. When I started out (nearly a decade ago now) there was a lot more resistance to these kinds of concepts than there is now.
Anyway, unless you were a happy Genera user at that time, I would like to know what terminal you used then with color highlighting, dynamic feedback, auto completion, transparency, and the other features...
Until 2005, I also had to put up with using Xenix, DG/UX, Aix, Solaris, HP-UX, GNU/Linux via telnet as development servers.
Thankfully, by 1996, X Win32 and Hummingbird came to the rescue as my X Windows servers of choice, when possible.
As for all those features, you could already do most of them in 4DOS (1989).
https://en.m.wikipedia.org/wiki/4DOS
It is like asking me about electronic typewriter features when the world has moved on to digital printing.
I've seen many debates about the correct carriage return. The world does not move on uniformly.
My understanding is that modern hardware is significantly more complicated at the lowest levels and (at least generally) no longer has a dedicated framebuffer (at least in the same sense that old hardware did).
My understanding of the memory access provided by fbdev is that it's an extremely simple API. In other words an outdated abstraction that's still useful so it's kept around.
An example of this complexity is video streams utilizing hardware accelerated decoding. Those often won't show up in screenshots, or if they do they might be out of sync or otherwise not quite what you saw on screen because the driver is attempting to construct a single cohesive snapshot for you where one never actually existed in the first place.
If I got anything wrong please let me know. My familiarity generally stops at the level of the various cross platform APIs (opengl, vulkan, opencl, etc).
Modern hardware still generally can be put into a default VGA-compatible[1] mode that does use a dedicated framebuffer. This mode is used by the BIOS and during boot until the model-specific GPU driver takes over.
[1]: https://en.wikipedia.org/wiki/Video_Graphics_Array#Use
Maybe some fbdev drivers are like that, but most of them are not. They use VGA/VESA interfaces to get real video memory and write into it. The text console also uses VGA video memory, writing character data directly into it.
I still wonder whether there are any ways to use VGA to its full potential. Like loading sprites into video memory that is invisible on screen and copying them into their right place on the screen. VGA allowed copying 8 four-bit pixels by copying one byte, for example. Were these things just dropped for a nicer abstraction, or are there some ioctls to switch modes for reads/writes into video memory? I don't know and was never interested enough to do the research.
> In other words an outdated abstraction that's still useful so it's kept around.
Yes, it is kind of like this, but the outdated abstraction is realized on the video card; the kernel just gives access to it.
In Linux, fbdev is more like a fallback device for when drivers for a specific video card are not available. fbdevs are used to make a text console with more than 80x25 characters. Video acceleration or OpenGL can work on fbdev only as a software implementation.
DRM/KMS with dumb buffers is the preferred API if you want to do software rendering and modesetting. You can find several examples of this online if you search for drm_mode_create_dumb.
[0] https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds...
the frame
> eventually be written to the screen
the buffer
In practice, it's not literally that, but in practice, it acts/works like that.
Even with modern graphics cards there's a global framebuffer for the entire desktop. It's updated every frame by the operating system sending commands to the graphics card to copy data from all over the place from different programs.
Linux's fbdev API (/dev/fb0) provides a framebuffer that looks like a file - you can read, write and mmap it. This is deprecated, not because it's obsolete functionality, but because the API is obsolete: they want to replace it with DRM dumb buffers (that's Direct Rendering Manager, not the evil thing).

You have to use the whole DRM system, which lets you select which graphics card to use, which video ports to use if your card has more than one, which resolution, etc. You can then allocate a frame buffer and tell it to display that buffer. You can also allocate more than one buffer and tell the card when to switch to a different one (i.e. double-buffering).
Is there though?
https://docs.kernel.org/gpu/amdgpu/display/mpo-overview.html
> Plane independent page flips - No need to be tied to global compositor page-flip present rate, reduced latency, independent timing.
There's at least a buffer for scanout, but that doesn't necessarily have to be larger than one line, and I don't believe it's end-user accessible.
As an example. Consider running a game at one refresh rate in windowed fullscreen (or whatever it's called) mode with your compositor at a different refresh rate. With your desktop spanning two monitors, one operating at 60 Hz the other at 50 Hz. And throw in a media player with a window that's split across those two monitors rendering a 29.97 fps (ie NTSC) video stream via hardware accelerated decoding.
[1] https://gitlab.freedesktop.org/mesa/kmscube
I still have not figured out how to do fullscreen graphics on my Mac.
That's a very inefficient approach nowadays. Modern hardware uses accelerated graphics throughout, including for simple 2D rendering the sort of which you would've written in QuickBasic back in the day. Even more complex 2D (where the 3D-render pipeline doesn't help as much) is generally best achieved by resorting to GPU-side compute, as seen e.g. in the Linebender Vello project. This is especially relevant at higher resolutions, color depths and frame rates, where the approach of pushing pixels via the CPU becomes even more clearly an unworkable one.
Computers with GPUs are now fast enough to blast through recomputing the entire screen's contents every frame, but perhaps for efficiency reasons we still shouldn't?
You can't, you don't have direct access to the framebuffer. Unless by "fullscreen" you just mean spanning from end-to-end in which case you can create an opengl or metal view and just set the fullscreen style mask.
Why is this the case? What would be the problem with allowing it?
They used to allow it, but they removed the API after 10.6
https://developer.apple.com/library/archive/documentation/Gr...
I guess on modern macOS CGDisplayCapture() is the closest example that still works (although clearly there is still some compositing going on since the orange-dot microphone indicator still appears, and you can get the dock to appear over it if you activate mission control. I'm guessing it does the equivalent of a full-screen window but then tries to lock input somehow).
The philosophy really creates a lot of problems that end up just frustrating users. It's a really lazy form of "security". Here's a simple example of annoyance: if I have Private Relay enabled, then turning on my VPN causes an alert (it will not dismiss automatically). The same thing happens when I turn it off! It is a fine default, but why can't I decide I want it to dismiss automatically? This is such a silly thing considering I can turn off Private Relay without any privileges, and without any notifications! NOTHING IS BEING SOLVED except annoying users... There are TONS of stupid little bugs that normal people just get used to but that have no reason for existing (you can always identify an iPhone user because "i" is never capitalized...)
The philosophy only works if Apple is able to meet everybody's needs, but we're also talking about a company who took years to integrate a flashlight into their smartphone. It's okay that you can't create a perfect system. No one expects you to. It is also okay to set defaults and think about design, that's what people love you for (design). But none of this needs to come at a cost to "power users." I understand we're a small portion of users, but you need to understand that our efforts make your products better. Didn't see an issue with something? That's okay, you're not omniscient. But luckily there's millions of power users who will discover stuff and provide solutions. Even if these solutions are free, you still benefit! Because it makes peoples' experiences better! If it is a really good solution, you can directly integrate it into your product too! Everybody wins! Because designing computers is about designing an ecosystem. You design things people can build on top of! That's the whole fucking point of computers in the first place!
Get your head out of your ass and make awesome products!
[0] If any...
[another example]: Why is it that my AirPods can only connect to one device, constantly switching between my MacBook and iPhone? This often creates unnecessary notifications. I want magic with my headphones. There's no reason I shouldn't be able to play Spotify on my computer, walk away from it with my phone in my pocket, and have the source switch to my phone. I can pick up my phone and manually change the source. But in reality what happens is I pick up my phone and Spotify pauses because my headphones switched to my iPhone... If you open up access then you bet these things will be solved. I know it means people might be able to then seamlessly switch from their MacBook to their Android, but come on, do you really think you're going to convert them by making their lives harder? Just wait till they try an iPhone! It isn't a good experience... I'm speaking from mine...
For another, security. Your application doesn’t get to scrape other applications’ windows and doesn’t get to put stuff in them, and they don’t get to do that to yours, unless properly entitled and signed. You can self-sign for development, but for distribution, signing ensures malware can be blocked by having its signature revoked.
All software/hardware in the system (Core Animation, Retina scaling, HDR, Apple Silicon's GPUs) assumes an isolated/composited display model. A model where a modern rendering pipeline is optional would add major complexity, and would also prevent Apple from freely evolving its hardware, system software, and developer APIs.
Additionally, a mandatory compositor blocks several classes of potential security threats, such as malware that could suppress or imitate system UI or silently spy on the user and their applications (at least without user permission).
Sometimes a distinction is made between "TTY" (teletype, traditionally separate hardware, now built into the kernel, accessed by ctrl-alt-f1, at /dev/tty1 etc.) and "PTY" (pseudo-teletype, terminal emulator programs running under X11 etc., at /dev/pts/0 etc. nowadays; confusingly, the traditional paths used /dev/ptyxx for masters and /dev/ttyxx for slaves, which is not the same as the tty-vs-pty distinction here). "VT" or "virtual console" are unambiguous and more commonly used terms than "TTY" in this sense. Serial, parallel, and USB terminals don't really fit into this distinction properly; even though they're close to the original definition of a teletype, they don't support VT APIs.
There are many things the kernel provides in a real TTY (raw keyboard, beeper control, VCS[A] buffers, etc.). "The" framebuffer at /dev/fb0 etc. is usually swapped out when the TTY is (assuming proper keyboard/event handling, which is really icky), so it counts. Actually it's more complicated than "framebuffer" since X11/Wayland actually use newer abstractions that the framebuffer is now built on top of (if I understand correctly; I've only dabbled here).
Note that there are 3 special devices the kernel provides: /dev/tty (the calling process's controlling terminal), /dev/tty0 (the currently active virtual console), and /dev/console (the kernel's console, where boot messages go).

This is probably wrong but hopefully informative.

https://www.linusakesson.net/programming/tty/
It’s a long read, but necessarily so: there is considerable complexity (and unfortunate overloading of terms) as a consequence of the transition from physical terminals as a separate hardware component to the emulated (warts and all) counterparts we have in modern OSes.
If you're ever bored, from a TTY, type
sudo dd if=/dev/urandom of=/dev/fb0
This provides a nifty demonstration of how both the framebuffer and urandom work.
you can also take a sort of "screenshot" in a tty by typing

dd if=/dev/fb0 of=./shot.fb

and then you can view it by flipping those filenames around, so that shot.fb is now the input and /dev/fb0 is now the output.
Thanks!
Then a driver (if needed at all!) would be trivial; you could take Linux out of the equation and collapse the software stack into a tiny amount of code.