Funny story: using kilo was the final straw [1] in getting me to give up on terminals. These days I try to do all my programming atop a simple canvas I can draw pixels on.
Here's the text editor I use all the time these days (and base lots of forks off of): https://git.sr.ht/~akkartik/text2.love. 1200 LoC, proportional font, word-wrap, scrolling, clipboard, unlimited undo. Can edit Moby Dick.
[1] https://git.sr.ht/~akkartik/teliva
https://arcan-fe.com/2025/01/27/sunsetting-cursed-terminal-e...
I really enjoyed the plan9 way of an application slurping up the terminal window (not a real terminal anyway) and then using it as a full-fledged GUI window. No weird terminal windows floating around in the background, and you could still return to it after quitting for any logs or output.
volemo · 1h ago
> These days I try to do all my programming atop a simple canvas I can draw pixels on.
Why?
akkartik · 1h ago
Terminals are full of hacks. For example, the Readme of my terminal project linked above says this:
> "Backspace is known to not work in some configurations. As a workaround, typing ctrl-h tends to work in those situations." (https://git.sr.ht/~akkartik/teliva#known-issues)
This is a problem with every TUI out there built using ncurses. "What escape code does your terminal emit for backspace?" is a completely artificial problem at this point.
There are good reasons to deal with the terminal: I need programs built for it, or I need to interface with programs built for it. Programs that deal with 1D streams of bytes for stdin and stdout are simpler in text mode. But for anything else, I try to avoid it.
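To make the backspace problem concrete: depending on configuration, a terminal sends either 0x7F (DEL) or 0x08 (ctrl-h) for the backspace key, so portable programs end up accepting both. A minimal raw-mode sketch of that workaround (illustrative, not taken from teliva):

```c
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

/* Read single keys in raw-ish mode and treat both bytes a terminal
   might send for the backspace key (0x7F or 0x08) the same way. */
int main(void) {
    struct termios orig, raw;
    tcgetattr(STDIN_FILENO, &orig);
    raw = orig;
    raw.c_lflag &= ~(ICANON | ECHO);       /* no line buffering, no echo */
    tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);

    unsigned char c;
    while (read(STDIN_FILENO, &c, 1) == 1 && c != 'q') {
        if (c == 0x7f || c == 0x08)        /* DEL or ctrl-h: both "backspace" */
            printf("backspace\n");
        else
            printf("key: %d\n", c);
    }
    tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig);  /* restore the terminal */
}
```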
ayrtondesozzla · 22m ago
Sorry for jumping off topic, but I came across mu recently - looks very interesting! Hope to try it out properly when I get a moment.
lor_louis · 4h ago
Kilo is a fun weekend project, but I learned the hard way that it's not a good base upon which to build your own text editor.
The core data structure (array of lines) just isn't that well suited to more complex operations.
Anyway, here's what I built: https://github.com/lorlouis/cedit
If I were to do it again I'd use a piece table [1]. The VS Code folks wrote a fantastic blog post about it some time ago [2].
[1] https://en.m.wikipedia.org/wiki/Piece_table
[2] https://code.visualstudio.com/blogs/2018/03/23/text-buffer-r...
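For the curious, here is a minimal piece-table sketch in C. It is illustrative only: the names are invented, and real implementations (including whatever cedit or VS Code actually do) replace the linked list with a tree over the pieces for fast offset lookup.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A piece table keeps the file in two buffers: the original text
   (never modified) and an append-only "add" buffer. The document is
   a list of pieces, each pointing into one of the buffers. Edits
   only splice pieces; the text itself is never moved. */
typedef struct Piece {
    int from_add;        /* which buffer this piece points into */
    size_t off, len;     /* span within that buffer */
    struct Piece *next;
} Piece;

typedef struct {
    const char *orig;    /* original file contents, read-only */
    char add[1 << 16];   /* append-only buffer for inserted text */
    size_t add_len;
    Piece *head;
} PieceTable;

/* Insert text at document offset pos by splitting one piece in two
   and linking a new piece that points into the add buffer. */
static void pt_insert(PieceTable *pt, size_t pos, const char *s, size_t n) {
    size_t start = pt->add_len;
    memcpy(pt->add + pt->add_len, s, n);
    pt->add_len += n;

    Piece **link = &pt->head;
    while (*link && pos > (*link)->len) {   /* find the piece containing pos */
        pos -= (*link)->len;
        link = &(*link)->next;
    }
    Piece *cur = *link;
    Piece *ins = malloc(sizeof *ins);
    *ins = (Piece){1, start, n, NULL};
    if (cur && pos > 0) {                   /* split the existing piece */
        Piece *tail = malloc(sizeof *tail);
        *tail = (Piece){cur->from_add, cur->off + pos, cur->len - pos, cur->next};
        cur->len = pos;
        ins->next = tail;
        cur->next = ins;
    } else {                                /* insert before cur (or at end) */
        ins->next = cur;
        *link = ins;
    }
}

static void pt_print(const PieceTable *pt) {
    for (const Piece *p = pt->head; p; p = p->next)
        fwrite((p->from_add ? pt->add : pt->orig) + p->off, 1, p->len, stdout);
    putchar('\n');
}

int main(void) {
    PieceTable pt = {.orig = "hello world"};
    pt.head = malloc(sizeof(Piece));
    *pt.head = (Piece){0, 0, 11, NULL};
    pt_insert(&pt, 5, ",", 1);
    pt_print(&pt);                          /* prints "hello, world" */
}
```

The key property: an insert costs O(pieces), not O(bytes), and the original text never moves, which also makes undo cheap.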
> The core data structure (array of lines) just isn't that well suited to more complex operations.
Just how big (and how many lines) does your file have to be before it is a problem? And what are the complex operations that make it a problem?
(Not being argumentative - I'd really like to know!)
On my own text editor (to which I lost the sources way back in 2004) I used a single array of bytes, with syntax highlighting driven by single-byte start-stop codes embedded in the text, and a moving "window" into the array for rendering. I never saw a latency problem back then on a Pentium Pro, even with files as large as 20 MB.
I am skeptical that the piece table as used in VS Code is that much faster; right now on my 2011 desktop, VS Code with no extra plugins shows visible latency when scrolling by holding down the up/down arrow keys with a very high keyboard-repeat rate. On the same computer, with the same keyboard-repeat rate and the same file (about 10k lines), Vim in a standard xterm/uxterm scrolls visibly better and takes half as much time to get to the end of the file.
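A guess at what the in-band start-stop scheme described above might have looked like (a reconstruction from the description, not the lost 2004 code): reserved byte values in the buffer toggle highlighting as the renderer walks the visible window.

```c
#include <stdio.h>

/* In-band syntax highlighting: the edit buffer holds ordinary text
   plus reserved single-byte codes that switch the current color on
   and off. The renderer walks the visible window and emits ANSI
   color sequences; everything else is printed as-is. */
enum { HL_KEYWORD = 0x01, HL_STRING = 0x02, HL_OFF = 0x03 };  /* reserved bytes */

static void render_window(const unsigned char *buf, size_t start, size_t end) {
    for (size_t i = start; i < end; i++) {
        switch (buf[i]) {
        case HL_KEYWORD: fputs("\x1b[1;34m", stdout); break; /* bold blue */
        case HL_STRING:  fputs("\x1b[32m", stdout);   break; /* green */
        case HL_OFF:     fputs("\x1b[0m", stdout);    break; /* reset */
        default:         putchar(buf[i]);
        }
    }
    fputs("\x1b[0m", stdout);
}

int main(void) {
    const unsigned char line[] = { HL_KEYWORD, 'i','n','t', HL_OFF, ' ', 'x', ';', '\n' };
    render_window(line, 0, sizeof line);
}
```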
userbinator · 3h ago
> The core data structure (array of lines) just isn't that well suited to more complex operations.
Modern CPUs can read and write memory at dozens of gigabytes per second.
Even when CPUs were three orders of magnitude slower, text editors using a single array were widely used. Unless you introduce an accidentally-quadratic or worse algorithm in your operations, I don't think complex data structures are necessary in this application.
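To put a number on that: with a single contiguous array, an insert is one memmove, and even a worst-case edit at the start of a 20 MB buffer is on the order of a millisecond of memory traffic at today's bandwidths. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The whole "naive" editor buffer: one contiguous array. Inserting
   at position pos shifts everything after it right by n bytes.
   O(len) per edit, but len/bandwidth stays tiny: 20 MB at ~20 GB/s
   is ~1 ms even in the worst case, far below a 16 ms frame. */
typedef struct {
    char  *data;
    size_t len, cap;
} Buffer;

static void buf_insert(Buffer *b, size_t pos, const char *s, size_t n) {
    if (b->len + n > b->cap) {                 /* grow geometrically */
        b->cap = (b->len + n) * 2;
        b->data = realloc(b->data, b->cap);
    }
    memmove(b->data + pos + n, b->data + pos, b->len - pos);
    memcpy(b->data + pos, s, n);
    b->len += n;
}

int main(void) {
    Buffer b = {0};
    buf_insert(&b, 0, "world", 5);
    buf_insert(&b, 0, "hello ", 6);
    printf("%.*s\n", (int)b.len, b.data);      /* hello world */
}
```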
lifthrasiir · 2h ago
The actual latency budget is less than a single frame (about 16 ms at 60 Hz) for an edit to be completely unnoticeable, so you are in fact limited to moving well under 1 GB of memory per keystroke. And each character may carry additional metadata, like syntax-highlighting state, so 1 GB of movable memory doesn't translate to 1 GB of text either. You are still correct that a line-based array is enough for most cases today, but I don't think it's generally true.
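The arithmetic, with a nominal (assumed, not measured) bandwidth figure:

```c
#include <stdio.h>

int main(void) {
    /* Rough frame budget: how much memory can one keystroke afford
       to move before the next frame? The bandwidth is a placeholder
       for "dozens of GB/s". */
    double frame_s  = 1.0 / 60.0;   /* ~16.7 ms per frame            */
    double bw_bytes = 20e9;         /* assumed 20 GB/s of memcpy     */
    printf("budget per frame: ~%.0f MB\n", frame_s * bw_bytes / 1e6);
    /* ~333 MB, and real editors can spend only a fraction of the
       frame on moving the buffer. */
}
```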
Would highly recommend the tutorial as it is really well done.
ok_dad · 5h ago
Here’s a second recommendation for that tutorial. It’s the first coding tutorial I’ve finished because it’s really good and I enjoyed building the foundational software program that my craft relies on. I don’t use that editor but it was fun to create it.
90s_dev · 6h ago
Reading through this code is a veritable rite of passage. You learn how C works, how text editors work, how VT codes work, how syntax highlighting works, how find works, and how little code it really takes to make anything when you strip away almost all conveniences, edge cases, and error handling.
Although it does cheat a bit in an effort to better handle Unicode:
> unicode-width is used to determine the displayed width of Unicode characters. Unfortunately, there is no way around it: the unicode character width table is 230 lines long.
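For reference, the alternative to vendoring a width table is wcwidth(3) from the C library; it is locale-dependent and its tables often lag the current Unicode version, which is one reason projects ship their own. A quick demonstration:

```c
#define _XOPEN_SOURCE 700   /* for wcwidth */
#include <locale.h>
#include <stdio.h>
#include <wchar.h>

/* Display width of a few characters: ASCII is 1 column, CJK is 2,
   combining marks are 0. Results depend on the libc's own tables. */
int main(void) {
    setlocale(LC_ALL, "");                    /* needs a UTF-8 locale */
    wchar_t samples[] = { L'a', 0x67F3 /* CJK */, 0x0301 /* combining acute */ };
    for (size_t i = 0; i < sizeof samples / sizeof *samples; i++)
        printf("U+%04X -> width %d\n", (unsigned)samples[i], wcwidth(samples[i]));
}
```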
lifthrasiir · 4h ago
Personally, this is the reason I don't really buy the extreme size reduction; such projects generally have to sacrifice some essential features that genuinely demand a certain amount of code.
And these projects:
https://github.com/antirez/kilo/forks
go figure. ;)