Show HN: Nerdlog – Fast, multi-host TUI log viewer with timeline histogram
134 dimonomid 55 4/21/2025, 11:38:10 AM github.com ↗
For more background and technical details, I wrote this up as well: https://dmitryfrank.com/projects/nerdlog/article
TIL awk patterns can be more than just regexes and can be combined with boolean operators. I've written a bit of awk and never realized this.
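In case it helps anyone else: an awk pattern is just an expression, so you can combine regex matches, comparisons, etc. with &&, || and !. A quick illustrative example (the file and the patterns here are made up):

    # print lines matching "ERROR" but not "healthcheck",
    # plus any line whose 3rd field is exactly "panic"
    awk '(/ERROR/ && !/healthcheck/) || $3 == "panic"' /var/log/syslog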
Logdy [1] is more web-based and focuses on live tailing, structured log search, and quick filtering across multiple sources, also without requiring a centralized server. Not trying to compare directly, but if you're exploring this space, you might find it useful as a complementary approach or for different scenarios. Although we still need to do more work on adding the ability to query multiple hosts.
Anyway, kudos on nerdlog—always great to see lean tools in the logging space that don’t require spinning up half a dozen services.
[1] https://logdy.dev
I do not want to store plaintext logs and use ancient workarounds like logrotate. journald itself has the built-in ability to receive logs from remote hosts (journald remote & gateway) and search them using --merge.
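Roughly, the workflow I mean looks like this (from memory, so treat it as a sketch and double-check the man pages; the collector/sender split and the time range are illustrative):

    # on the collector host: accept journals pushed from other hosts
    systemctl enable --now systemd-journal-remote.socket
    # on each sender: point journal-upload.conf at the collector, then
    systemctl enable --now systemd-journal-upload.service
    # query everything, interleaving local and received journals
    journalctl --merge --since "1 hour ago"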
As mentioned in the article, my original use case was having a fleet of hosts, each printing a pretty sizeable amount of logs; e.g. a log file of more than 1-2 GB on every host in a single day was pretty common. My biggest problem with journalctl is that, during intensive spikes of logs, it might drop messages; we definitely observed this behavior: some messages were clearly missing from the journalctl output, but when we checked the plain log files, the messages were there. I don't remember the details now, but I've read about some kind of rate limiting / buffer overflow going on there (and somehow the part which writes to the plain files enjoys not having these limits, or at least has more permissive ones). So that's the primary reason; I definitely didn't want to deal with missing logs. Somehow, old-school technology like plain log files keeps being more reliable.
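For reference, if rate limiting is indeed the culprit, the knobs usually pointed at are in journald.conf; a hedged sketch, and I haven't verified that raising them fully fixes the drops:

    # /etc/systemd/journald.conf
    [Journal]
    RateLimitIntervalSec=30s
    RateLimitBurst=100000   # default is 10000 messages per RateLimitIntervalSec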
Second, at least back then, journalctl was noticeably slower than simply using tail+head hacks to "select" the requested time range.
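For the curious, the kind of tail+head hack I mean looks roughly like this (purely illustrative; the timestamp patterns are made up, and the real implementation is more involved):

    # find the line numbers bounding the requested range, then cut it out
    start=$(grep -n -m1 '^Apr 21 10:' /var/log/syslog | cut -d: -f1)
    end=$(grep -n '^Apr 21 11:' /var/log/syslog | tail -n 1 | cut -d: -f1)
    tail -n "+$start" /var/log/syslog | head -n "$((end - start + 1))"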
Third, a dependency like journalctl is just harder to test against than plain log files.
Lastly, I wanted to be able to use any log files, not necessarily controlled by journalctl.
I think adding support for journalctl should be possible, but I still have doubts on whether it's worth it. You mention that you don't want to store plaintext logs and use logrotate, but is it painful to simply install rsyslog? I think it takes care of all of this without us having to worry about it.
Example: I'm running an Arch-based Linux desktop. Installing rsyslog took several minutes to build and install. If I wasn't highly motivated to try out nerdlog, I would have canceled the install.
Also, can the SSH requirement for localhost be bypassed? Most users won't be running an SSH server on their desktop, and this would improve nerdlog's use-cases and make it easier for new users to give it a quick local test run.
Final suggestion: add `go get` support to your repo, so that I can install nerdlog from a single command and not have to clone the repo itself.
But yes the bypass for localhost can definitely be implemented.
I did `go install ...nerdlog/cmd/nerdlog-tui@latest` just fine.
Thanks for hacking in the open, and releasing early.
Not sure if that "Thanks" for releasing early is sarcastic, but regardless, I appreciate the feedback.
Just posting it in case you want to subscribe to it. Looks like it's a popular demand indeed, so I'll at least poke it and see what kind of performance we can get out of it.
The `go get` one should be easy to solve though, and my bad for not thinking of it before, thanks. I'll look into it.
Regardless, journalctl support is the single most requested feature, so yeah I'll at least try to make that happen; hopefully on the upcoming weekend if I'm lucky.
Thanks again for considering journald, and at the same time, don't forget that it's your project at the end of the day... you can always disregard feature requests if it's not a direction you want to head in. Though in this case, I do believe journald support would get your tool more traction with a larger audience in the long term.
FYI, support for journalctl was added to master, in case you want to try it out. I haven't yet added automated tests with a mocked journalctl, but my manual tests show that it's working fine.
If a system doesn't have either `/var/log/messages` or `/var/log/syslog`, nerdlog will now resort to `journalctl` by default.
It can also be selected explicitly by specifying `journalctl` as the file, e.g. `myserver.com:22:journalctl`.
Thanks for trying it out, and for all the suggestions, very helpful!
i use lnav in this way all the time: journalctl -f -u service | lnav
this is the ethos of unix tooling
In fact nerdlog doesn't even support anything like -f (realtime following) yet. The idea to implement it did cross my mind, but I never really needed it in practice, so I figured I'd spend my time on something else. Might do it some day if the demand is popular, but still, nerdlog in general is not about just reading a continuous stream of logs; it's rather about being able to query arbitrary time periods from remote logs, and being very fast at that.
However, before we go there, I want to double check that we're on the same page: this `log_files` field specifies only files _in the same logstream_; meaning, these files need to have consecutive logs. So for example, it can be ["/var/log/syslog", "/var/log/syslog.1"], or it can be ["/var/log/auth.log", "/var/log/auth.log.1"], but it can NOT be something like ["/var/log/syslog", "/var/log/auth.log"].
https://docs.ansible.com/ansible/11/collections/ansible/buil...
e.g.
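Something like this (a rough sketch; the group name, hosts, and the log_files variable are illustrative):

    all:
      children:
        webservers:
          vars:
            user: deploy
            log_files: ["/var/log/syslog", "/var/log/syslog.1"]
          hosts:
            web01.example.com:
            web02.example.com: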
That first "children" key is because in ansible's world one can have "vars" and "hosts" that exist at the very top, too; the top-level "vars" would propagate down to all hosts which one can view as "not necessary" in the GP's example, or "useful" if those files are always the same for every single host in the whole collection. Same-same for the "user:" but I wasn't trying to get bogged down in the DRY for this exerciseYeah it would be great, and I do want to support it, especially if the demand is popular. In fact, even if you ungzip them manually, as of today nerdlog doesn't support more than 2 files in a logstream, which needs to be fixed first.
Specifically about supporting gzipped logs though, the UX I'm thinking about is like this: if the requested time range goes beyond the earliest available ungzipped file, then warn the user that we'll have to ungzip the next file (that warning can be turned off in the options, but by default I don't want to just ungzip it silently, because it can consume a significant amount of disk space). So if the user agrees, nerdlog ungzips it and places it somewhere under /tmp. It will never delete it itself though, relying on the regular OS means of cleaning up /tmp, and will keep using it for as long as it's available.
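Roughly (hypothetical paths, just to illustrate the flow):

    # with the user's consent, materialize the rotated file under /tmp
    gunzip -c /var/log/syslog.2.gz > /tmp/nerdlog-syslog.2
    # then keep reusing that copy, relying on the OS to clean /tmp up eventually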
Does it make sense?
> In fact, even if you ungzip them manually, as of today nerdlog doesn't support more than 2 files in a logstream
Ah, interesting! I read the limitation as "we don't support zipped files," not "we only support two files!"
Best of luck, this is neat!
Does this work with runit (Void Linux)?
That's not hard to implement; however, making it persistent requires implementing some config / scriptability, which is a whole other thing and requires more thought.
Re: runit, I never tested it, but after looking around briefly, it sounds like there is no unified log file, and not even a unified log format? I mean, it's possible to make it work, treating every log file as a separate logstream, but I've no idea what these logs look like and whether supporting the formats would be easy.
It is a simple script: https://github.com/void-linux/socklog-void/blob/master/svlog...
I think everything is in /var/log/socklog/everything/current, so this could be considered unified.
Before you add the timefmt, it may be better to add a configuration file if one does not already exist, but it seems like it does? You already have ~/.config/nerdlog/logstreams.yaml, so might as well have config.yaml?
For more about logging on Void: https://docs.voidlinux.org/config/services/logging.html
2025-04-21 12:34:56 myhostname myservice: Something happened
If so, then yeah it's totally doable to make this format supported.
Re: config.yaml, yeah I thought of that, but in the long term I'd rather have it be nerdlogrc.lua: a Lua script which nerdlog executes on startup, similar to vim (or rather, more like neovim in this case, since it's Lua). Certainly having config.yaml is easier to implement, but in the longer term it may make things more confusing if we also introduce Lua scripting.
Good luck with whatever you're going through!
> If you sleep in a position that causes radial nerve compression, you may wake up experiencing numbness and tingling along the back of your arm, forearm, and hand. With more severe compression, you may also experience “wrist drop”. With wrist drop, your wrist becomes limp, and is unable to extend up.
This is what happened. It has happened to me twice before, but this time the compression lasted only about 30 minutes, so it is not THAT severe, unlike the previous two times.
The reason I expanded on it is that a mere 30 minutes of sleeping on your arm may damage the nerve, resulting in radial nerve palsy. I would never have imagined it would happen to me; heck, I had no idea it was a thing. I'd had limp arms before from compression, but that was related only to blood circulation, not nerve damage.
Radial nerve palsy is a nightmare for programmers. The first one was so severe that I lost my muscle memory, meaning I could not access some of my accounts.
Treatment is rest, selective electrical stimulation, and a B1-B6-B12 combination, ideally injected IM. The sooner, the better. The first incident took months to recover from; this one may only last 3 weeks.
I seriously hope I won't have to deal with it, but thanks for expanding on it and the treatment.
Wish you a speedy recovery!
I went spelunking around in the codebase trying to get the actual answer to your question, and it seems it's like many things: theoretically yes, with enough energy expended, but by default it SSHes into the target hosts and runs a pseudo-agent speaking its own protocol back over SSH. So, "no".