fd: A simple, fast and user-friendly alternative to 'find'
718 points by tosh on 3/19/2025, 11:44:17 AM | 294 comments | github.com
bat, fd, hexyl, hyperfine
I'm going to take this moment to remind all of you well-paid engineers that if we each spread $10 a month sponsoring talented software makers like sharkdp, the Internet would be a better place.
So many great little tools out there and we should try to support an ecosystem for them.
It's a dream team for Rust CLI tools over there.
[1] rows included linebreaks so your standard sed/head/tail/something-from-coreutils approach would not work.
No criticism to the author. He is way more productive than I will ever be, but xsv does appear to be on the back burner. Open source means the author can spend their time how they like and I am entitled to nothing.
The readme says this tool was originally a fork of BurntSushi's xsv, but has been nearly entirely rewritten at this point to fit SciencesPo's médialab use cases, rooted in web data collection and analysis geared toward the social sciences (you might think CSV is outdated by now, but read our love letter to the format before judging too quickly). xan therefore goes beyond typical data manipulation and exposes utilities related to lexicometry, graph theory and even scraping.
https://shark.fish/rustlab2019
we sponsored fd's development a while back, and we occasionally sponsor terminal tool authors at Terminal Trove, where we have more tools in the trove. (0)
we're currently sponsoring zellij which I encourage you to check out and sponsor! (1)
https://terminaltrove.com/ (0)
https://github.com/zellij-org/zellij (1)
I don't know if it can do everything that tmux or screen can do. I bet it probably can't. But it does all the things I want such a thing to do, and without needing any configuration on my part.
But yes it makes sense that this feature makes it superior to tmux for some users. A good case for having diversity in software.
I’m not a hardcore tmux user. I just like having persistent sessions on remote machines and bouncing between tabs, things like that. When I want to do something beyond that, with tmux I’d have to RTFM each time. In Zellij, the menu bar lets me discover it easily.
Bottom line: you’re right. I’m glad they both exist!
Do keep in mind how much trillionaire/billionaire companies sponsor the free software they use while doing so.
find -> fd, time (for runtime comparison) -> hyperfine, grep -> ripgrep, asciinema + converting to .gif -> t-rec [1], manually creating conventional commits -> koji [2], etc.
[1] https://terminaltrove.com/t-rec/
[2] https://terminaltrove.com/koji/
Since I believe the correlation of usage of these tools is high, I think they could benefit from having similarly named flags.
Not taking anything away from the tools that have been written. Just for me, the pain of learning a new tool is greater than the convenience I’d gain from using it.
That's because `-i`, while incredibly useful, is not POSIX. So when you say "POSIX tools," what you actually probably mean is, "superset of POSIX tools."
There is some agreement among the same tools as what the options in the superset actually mean, but not always, as is the case with `sed`.
Compare, for example, what `man grep` says on your system with the POSIX definition: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/g...
As for whether flags are consistent across different tools, well that's a total crapshoot. `find` uses `-L` for following symlinks while `grep` uses `-R`. Ironically, both `fd` and ripgrep use `-L`, so they are more consistent than the relevant superset-plus-POSIX tooling on at least that front.
To be clear, I mean this somewhat narrowly. Namely:
> I’m too old and too lazy
I totally get that. I have a bunch of things I kinda want to learn and do, but just not enough time in the day. But I do still try to make time for these things. I did it for `tmux` (previously, `screen`) and `zsh` (previously, `bash`) and I am very happy I did. Each one cost me about a Sunday and a half, but they've been paying rewards ever since.
I have. But that’s one inconsistency and an edge case I rarely need to worry about.
Plus we are talking about grep here, not sed. I created my own “sed” because I got fed up with dealing with different implementations of existing seds. If others use it or don’t use it then that’s their choice, not mine.
> As for whether flags are consistent across different tools, well that's a total crapshoot. `find` uses `-L` for following symlinks while `grep` uses `-R`. Ironically, both `fd` and ripgrep use `-L`, so they are more consistent than the relevant superset-plus-POSIX tooling on at least that front.
UNIX tools are a mess too. My point wasn’t that their flags are more sane than modern tools. It’s that I’ve already committed to memory the flags for the tools I use daily.
> Each one cost me about a Sunday and a half, but they've been paying rewards ever since
I spend my free time writing open source tools that solve problems I run into (like fixing the limitations with existing UNIX shells, and presently writing a new terminal emulator with a ton of features existing ones neglect). So I don’t have time to learn new tools to solve problems I don’t have.
Edit:
I just want to add to my previous comment that some of my open source tools do make use of your fantastic Go libraries, such as the Unicode width library.
So I’ve really valued your contributions to open source even though I don’t personally use ripgrep specifically.
I am not sure I have ever gotten traditional "find" to do what I want on the first try, and I've had a lot of first tries. At some point you have to ask yourself, if I haven't achieved zen in X years with Y tool, maybe the problem is the tool and not me?
Why are we acting like we’re still in the ’80s and can only use tools that existed then?
I have two young children plus maintain several open source projects (like yourself) and a full time job too.
Time isn’t infinite so I focus my energy on the stuff that makes the biggest impact.
It’s not like I’m ungrateful for your contributions to open source and if you’d look at some of the stuff I’ve been building you’d see we are pretty like minded with regards to modernising the command line.
I'm not trying to convince you to do that. I already told you that your philosophy is reasonable. But I absolutely find it disappointing.
Anyway, yes, I have a kid. And all sorts of other things competing for my time. So I don't get much time to do serendipitous learning. But I try when I can. I cited two examples, but that's over a period of many years. I don't have time to do it often.
> you’d see we are pretty like minded with regards to modernising the command line
I bet we are. Maybe you are a lot more open to learning new things than you are letting on in your comments here. :-)
https://murex.rocks
Edit:
> I bet we are. Maybe you are a lot more open to learning new things than you are letting on in your comments here.
Oh absolutely I’m open to learning new things.
Just recently I’ve been learning about tmux control mode (eg what iTerm2 uses for tmux integration) because I wanted to bypass tmux terminal emulation for specific escape codes, meaning I can then draw native elements inside a terminal window, such as spreadsheets, images, and code folding, while still having full tmux capabilities.
That was a “fun” ride. I plan on writing a blog about it at some point because there’s some features about tmux that I think others would be surprised to learn.
But I gave it ten minutes and ported part of https://github.com/BurntSushi/dotfiles/blob/bedf3598f2501ad5... to https://github.com/BurntSushi/dotfiles/blob/bedf3598f2501ad5...
Not much, but it took me some time to get the basics down.
One thing that stood out to me was that startup times are kinda brutal. I'm not sure if that's intended or if it's something about my environment:
That would probably be a deal breaker for me. I got murex through the AUR: https://aur.archlinux.org/packages/murex
> Oh absolutely I’m open to learning new things.
Okay, then I'd say this is in high contrast with your comments above!
I guess you could liken it to PowerShell or the JVM in that the start-up is generally a one-time cost if you’re using it interactively.
I could certainly invest a little time on the start up though. It wouldn’t be impossible to improve things there.
> Okay, then I'd say this is in high contrast with your comments above!
Is it though? I didn’t say I’m averse to learning new things. Just that I don’t want to learn replacements for the tools I’ve already memorised.
But anyway, you’ve downloaded murex so I’ll give ripgrep a go. I’m sure it’ll become a staple tool for me now I’ve committed time to it :)
The error messages did look very nice.
I have been considering dipping my toes into a less standard shell over the past few years. It is so so so hard to break away from the ubiquitous bullshit that most of us are stuck with. I have no love for the Bourne shell and its derivatives, although zsh is some nice lipstick on a very ugly pig.
The other candidates I'm aware of are fish, nushell and oils. I think nushell is the "most" different, but the last time I tried it, I bounced off of it for reasons I can't remember.
And you’re right that it wasn’t a fair trade asking you to look at Murex in exchange for ripgrep (which is nice by the way!) but I respect that you did take a look nonetheless.
I guess it does make sense now that I think about it that ripgrep wouldn't do in-place edits. If ripgrep never performs writes, there's never a chance of a mistake in usage or bug in the software clobbering files in bulk.
Yeah that's exactly why it doesn't.
It is true that the `-r/--replace` flag can replace a number of `sed` and `awk` use cases. It has for me at least.
I've switched to sd[1] because it basically just works as I expect every time.
[1]: https://github.com/chmln/sd
It's legit as simple as `fd -e png -x optimize-png {}`. The only thing I don't like about fd is that for some reason it kind of forces you to do `fd . Downloads` if you just want everything in Downloads, which equates to `fd {pattern} dir1 dir2`. I wish you could omit the pattern sometimes.
Overall, I use AI shell completion so it's much smoother.
What’s the workflow like for AI shell completion?
How does it know which flag to complete for you? Do you write a description in native language (eg English) and it completes the entire command line? Or is it more one flag at a time?
Got it from here
https://x.com/arjie/status/1575201117595926530?s=46
That gives me some ideas to try myself.
For example, I'm used to glob patterns, but the glob flag (-g) works differently in fd and rg. I think that fd's -g flag does not use "full-path" globbing while rg's -g does (or the other way around). To get fd to use rg-style globs, it also needs the -p flag, which rg also recognizes, but there it has a completely different meaning that has nothing to do with globbing/filename matching.
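A sketch of the wart as I understand it (hypothetical patterns; verify against your versions of both tools):

    fd -g '*.rs'             # fd: the glob is matched against the file name only
    fd -p -g '**/src/*.rs'   # fd: with -p (--full-path) the glob matches the whole path
    rg -p foo                # rg: -p means --pretty, nothing to do with paths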
I guess I'm used to the warts at this stage, like I had gotten used to the warts on find and grep all those years ago.
It would be difficult or impossible to fix these inconsistencies at this stage without breaking backward compatibility.
fd - https://terminaltrove.com/fd/
bat - https://terminaltrove.com/bat/
numbat - https://terminaltrove.com/numbat/
hyperfine - https://terminaltrove.com/hyperfine/
hexyl - https://terminaltrove.com/hexyl/
we make a real effort to ensure that you can see how to install them, along with screenshots.
https://asciinema.org/
I think terminaltrove takes these from the projects themselves instead of creating them on their own.
Would be cool if you had mise commands - be it just

    mise use -g fd

or, for other tools that aren't in their registry, how to use the backends, like

    mise use -g cargo:xyztool
I could see bat being useful only as the last program on a long pipe chain, but I really like xxd so I will pass on hexyl.
For me, anything that isn’t a drop-in replacement for the OG tools isn’t worth the friction. I use ripgrep inside VS Code but vanilla grep on the command line because of years of muscle memory.
That said, I don’t care what language a tool is written in as long as it works. One of my favorite Unix tools is GNU Stow, and it’s written in Perl. Even if these Rust tools were drop-in replacements, I probably wouldn’t bother installing them manually. As a user, the speed improvements and memory safety don’t really matter to me.
There are other languages, like Go, where memory safety is guaranteed as well, and Go’s performance is more than adequate for tooling—with the added benefit of getting more engagement from the community. So I’m not entirely convinced by this “Rust is the savior” narrative.
That said, if macOS or Ubuntu decided to hot-swap the OG tools with Rust alternatives that behave exactly like their predecessors, I probably wouldn’t complain—as long as it doesn’t disrupt my workflow.
But that's exactly the reason to use the newer tools -- they just make more sense -- especially fd over find. I've been using UNIX for over thirty years and find just never clicked with me.
    fd -t f -X rm {} \; ^ cache
Which makes me really nervous, so usually I fall back to using find:
    find cache -type f -delete
Maybe this is foolproof for me only because I’ve been using find for decades. Is there a version of this for fd that inspires more confidence?
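The suggested command was elided; presumably something of this shape:

    fd -t f . cache -X rm --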
This ensures that even if you have filenames starting with -, they won't be interpreted as options for rm. For even more peace of mind you may want to turn on -a for absolute paths, although I don't see an example right now where using relative paths would go wrong.
    Usage: fd [OPTIONS] [pattern] [path]...
So, I presumed the path had to go last.
Eh, there are a lot of tools where it actually does kind of matter. I suspect for a lot of invocations of tools like `fd` and `rg`, they'll be done before an equivalent written in java has even had its JVM spin fully up.
There's _tons_ of Java software, but it somehow never managed to make a dent in the CLI space.
> To me, there is no reason to ever use “find”. If I’m on a new system, I just install fd and carry on.
I guess I should finally have a look at how to replace my `find $path -name "*.$ext" -exec nvim {} +` habit … turns out it's `fd -e $ext -X "nvim" "" $path`
If I didn't know how to use "find" and "grep" (though I prefer rg) then I'd be at a disadvantage in these situations. Also command-line git.
It's why I learned to use Vim well, though I daily Emacs.
"The uutils project reimplements ubiquitous command line utilities in Rust. Our goal is to modernize the utils, while retaining full compatibility with the existing utilities. We are planning to replace all essential Linux tools."
https://uutils.github.io/
uutils is being adopted in Ubuntu 25.10:
https://www.theregister.com/2025/03/19/ubuntu_2510_rust/
I never managed to use find because I always had to look up command line arguments. I would always find a different way to solve my problem (e.g. Midnight Commander).
I use fd all the time.
A better interface makes a big difference.
There are a few reasons you might still want to switch. In fd's case:
- It respects .gitignore files (as well as similar .fdignore files that aren't git specific), which can help you find what you care about without a lot of noise, or having to pass a lot of exclude rules to find
- it can search in parallel, which can significantly reduce latency when searching large directories.
However, there are also reasons you might want to keep using find:
- fd can't do everything find can. fd is intended to replace the most common use cases with a simpler interface, but there are many less common cases that require find's greater flexibility. In fact, I still use find sometimes because of this.
- find is more likely to already be installed
Recently I remembered and installed it. Not too hard to install (although you need to use third party repos sometimes).
And then - voila - a lot of convenience for dinosaurs from the Norton Commander era like myself, who can't remember CLI tool syntax that well.
What about https://zolk3ri.name/cgit/zpkg/? A lot of improvements have been done behind the scenes apparently (rollback, proper states, atomicity, etc.), but I am not sure when he is willing to publish.
I personally use it as-is when I am compiling stuff myself and the ZPG_DST is ~/.local/. It works well for keeping track of programs that I compile and build myself.
But there is a big BUT! Lately I've had to grep/find through huge nested dirs and found rg to be an order of magnitude faster. I had to get comfortable with retraining the muscle memory. Worth the effort.
Some of these new shiny tools are meh for my taste. Delta, for instance. Or the Helix editor. But it is personal. Overall I love the competition. It seems like an industry once full of innovators and tinkerers is lacking some shake-up.
    ? toolname <option>

E.g.

    ? git branch

which gives me common examples of git branch.
It aliases to tldr which is normally up to date with new tools.
See also cheat.sh etc.
https://tldr.sh/
I understand the point about muscle memory but I think that was more of a concern in the days before we had it easy with instant internet answers and now LLMs (eg GitHub copilot command line) doing our boring thinking for us.
I know fd has options to not ignore things, but I can never remember them, so I just go back to find because I know it'll search everything.
It can search multiple indexed NTFS drives in milliseconds. Indexing usually takes a few seconds since it works directly on the NTFS structures (and it integrates with Total Commander).
You just want `fd -u`. Or in ripgrep's case, `rg -uuu`.
`fd`, I believe, got `-u` from ripgrep. And I can say that ripgrep got its `-u` from `ag`.
I wish the flags between ripgrep and fd lined up as I think that's what confuses me (can't remember now haha).
    find . | grep what_i_am_looking_for
Because I can never remember how find's arguments work. I like the integrated xargs-like behavior as well.
One thing I did not see in there was how fd handles symlink directory traversal. I searched the whole readme for it, and only found options to match on symlinks or not.
Or just press F1 if you use my shell.
https://murex.rocks/user-guide/interactive-shell.html#autoco...
The other value-add for alternative shells like my own is better support in the language for handling structured data (i.e. not treating everything as a dumb byte stream).
Ultimately though, productivity is all about familiarity and if you’re also super efficient in zsh then I’m not going to try and convince you that some shiny new tool is better.
(That’s GNU Grep, just to be clear: https://www.gnu.org/software/grep/manual/grep.html#File-and-...)
I’m 100% on board with this recent-ish trend toward new replacement utilities that may or may not retain all of the flexibility of the originals but are vastly easier to use for common cases.
Why would you love them just because they're in Rust?
I'd like to praise the author of this fd for not having "Rust" all over the web page or in the HN link, actually.
The Rust inquisition would get far less pushback if, instead of threatening people that god will kill a kitten every time you use a non-kosher programming language, they'd promote software that improves on existing tools (even in opinionated ways, like this fd) instead of having as its main feature "but it's written in Rust!".
While you’re out here complaining, the Rust community has built a truly impressive array of improved, human-focused command line tools (rg, bat, fd, hyperfine, delta, eza, and on and on) and a bunch of best-in-class libraries for building tools (regex comes to mind).
For example, ripgrep shouldn't merge find and grep (actually, I use a personal wrapper around `find ... -exec grep ... {} +` because my only beef with this is that find's syntax to exclude stuff is horrible, https://git.sr.ht/~q3cpma/scripts/tree/master/item/find_grep). Or you see something like https://github.com/shssoichiro/oxipng/issues/629 because people don't even know how to use xargs...
Feels like if someone did a RIIR on ImageMagick, it'd be able to talk with Instagram and Flickr by itself or process RAW files like Darktable. Would probably have some kind of tagging system too.
GNU grep discards the Unix philosophy in all sorts of places too. Like, why does it even bother with a -r flag in the first place? Did people back then not know how to use xargs either? I mean, POSIX knew not to include it[1], so what gives?
What's actually happening here is that what matters is the user experience, and the Unix philosophy is a means, not an end, to improve the user experience. It's a heuristic. A rule of thumb to enable composition. But it doesn't replace good judgment. Sometimes you want a little more Unix and sometimes you want a little less.
Besides, you can drop ripgrep into a shell pipeline just like you would grep. All that Unix-y goodness is still there.
[1]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/g...
Who ever claimed that GNU was a proponent of the UNIX philosophy? Certainly not me.
> What's actually happening here is that what matters is the user experience, and the Unix philosophy is a means, not an end, to improve the user experience
Yes and no. The expression "user experience" implies that all users are the same and benefit from the same philosophy of software interface design.
> Besides, you can drop ripgrep into a shell pipeline just like you would grep. All that Unix-y goodness is still there.
The "Unix-y goodness" isn't only defined by what it can do, but also what it can't do. I don't want to turn this into a suckless rant, so I won't. Also, not having a mode giving it POSIX grep compatibility really flies in the face of that claim: I want my scripts to work on BSD, MacOS and even Solaris/AIX without having to install stuff.
------
I know I may sound very smug but believe me when I say that I really like some of these utils and the care put into them (and I know you're the author of ripgrep) and that I'm no UNIX OpenBSD user (Plan 9 exists for a reason, and I favor CL over C/Go any time of the day). But I really see them as optimizing for/targeting the VSCode/Zed crowd, instead of the vim/emacs one, if you know what I mean.
Nothing was specific to GNU in that comment. `-r` is available in BSD grep too. You are being needlessly pedantic and avoiding the actual points being made while doing so.
> Yes and no. The expression "user experience" implies that all users are the same and benefit from the same philosophy of software interface design.
No, it doesn’t. You have literally made that up out of whole cloth, but it isn’t how anyone else I’m aware of defines “user experience”.
As BurntSushi pointed out, ripgrep is even designed to adapt to different means of use: as a standalone command, as a utility in a pipe, and others. Improving the usability for one use-case isn’t necessarily at the cost of others.
> I want my scripts to work on BSD, MacOS and even Solaris/AIX without having to install stuff.
So your very reasonable desire to stick with POSIX for shell scripts… means we shouldn’t ever make new tools? I’m genuinely failing to understand your point here.
In fact, having a “grep compatibility flag” would very literally be the opposite of the UNIX philosophy you’re holding up elsewhere. ripgrep isn’t grep. It isn’t trying to be grep. It’s something new and different and (in some use-cases) better. The old thing still exists and if you want to use it for whatever reason… just do that?
Where's the actual point in "others do it too"?
> No, it doesn’t. You have literally made that up out of whole cloth, but it isn’t how anyone else I’m aware of defines “user experience”.
That's not a question of definition, but of use in an English sentence. If you say "this is good for user experience", you implicitly group all users together. I say some users prefer tools that do only one thing. ripgrep itself acknowledges that "despite initially not wanting to add every feature under the sun to ripgrep, over time, ripgrep has grown support for most features found in other file searching tools". And that's fine! But not for everybody.
> As BurntSushi pointed out, ripgrep is even designed to adapt to different means of use: as a standalone command, as a utility in a pipe, and others. Improving the usability for one use-case isn’t necessarily at the cost of others.
If I like the "finding files with auto filtering" part of ripgrep, how can I use it to feed another grep or even another tool? If it had been designed with the aforementioned philosophy, it would have been "ripfind" + "ripgrep" and maybe a "ripfg" pretty interface on top.
> So your very reasonable desire to stick with POSIX for shell scripts… means we shouldn’t ever make new tools? I’m genuinely failing to understand your point here.
> In fact, having a “grep compatibility flag” would very literally be the opposite of the UNIX philosophy you’re holding up
See my other reply: I love new tools, but "UNIX-like goodness" is also about portability (as in SUS), not just the philosophy. More of a misunderstanding than anything else, really.
A flag would be impossible, but introspection based on argv[0] (like Busybox) wouldn't be that far-fetched. But yeah, clearly ripgrep isn't trying to replace only grep, that was a mistake on my part.
As an addendum, the "UNIX philosophy" had a lot of crap parts (e.g. bytes as universal interchange format) and UNIX itself is a very poor implementation of said philosophy (after all, g/re/p itself is pointless porcelain over (s)ed, heh), I'm not sure discussing it seriously without specifying which part (like "do one thing") is very productive.
`rg --files`
Is your mind blown yet?
> is also about portability
Except for Windows.
> Except for Windows.
Of course POSIX portability doesn't apply to OSes which intentionally ignore it. I think I have another flame appreciating guy on the other side of the conversation =)
At this point it sounds like you're making a purely aesthetic/cultural judgment, rather than a technical one. ripgrep works just fine with emacs; I have "C-c p s r" bound to "run ripgrep in current project" and use it almost every day.
Which is fine, but now your position is more "old man yelling at clouds" than it is something specific about the Rust tools. (And to be clear, I'm not above yelling at clouds now and then.)
> Also, not having a mode giving it POSIX grep compatibility really flies in the face of that claim
No... it doesn't? ripgrep being compatible with the Unix philosophy is not the same as it being POSIX compatible. I didn't say you could use ripgrep in any place you could use grep in a drop-in compatible way. I said you could use it in shell pipelines. As in, it inter-operates with other tooling in a Unix-y way.
And besides, I don't know of any grep that has such a mode. How do you know you aren't using a grep flag that isn't available in other grep implementations?
> But I really see them as optimizing for/targeting the VSCode/Zed crowd, instead of the vim/emacs one, if you know what I mean.
Nope, I don't. I've been a vim (now neovim) user for decades. And neovim even uses ripgrep by default over grep if it's installed.
Obviously. The topic was Rust CLI tools, that's why my post focused on them. Indeed, my criticism is broader.
> Likely to just about all of GNU's tooling, for example. And probably BSD's and busybox's tooling too. Even busybox isn't above implementing POSIX+some, and AFAIK none of them provide a strict "POSIX only" mode.
This isn't about POSIX or not POSIX, this is about introducing yet another pure convenience feature for something that can easily be achieved with other accompanying (guaranteed to be on the concerned platform) tools. POSIX doesn't create features anyway, it's about describing things that have already become standard; the Austin Group bug tracker is quite explicit about this.
> No... it doesn't? ripgrep being compatible with the Unix philosophy is not the same as it being POSIX compatible.
Well, just a question of interpretation of "Unix-like goodness". UNIX is both a philosophy and a portability standard (since POSIX == SUSv4). And as I said, doing one thing and doing it well is another facet of that philosophy; finding files != searching in them.
> And besides, I don't know of any grep that has such a mode. How do you know you aren't using a grep flag that isn't available in other grep implementations?
Compatibility != strict rejection of anything else, quite obviously.
> Nope, I don't. I've been a vim (now neovim) user for decades. And neovim even uses ripgrep by default over grep if it's installed.
Which is why I said vim instead of neovim (even if neovim is clearly better).
------
I think you misunderstand my position because I like a bit of snark: compromising on the "do one thing" part of the UNIX philosophy isn't necessarily something horrible, but it's not an imaginary issue either. I could give other examples (like how the pretty clean xsv became https://github.com/dathere/qsv) but it really doesn't matter because I think it's more of a popular CLI "apps" thing than Rust alone, even if there's community overlap.
My point is that you are treating an ideal ("do one thing and doing it well") as an end instead of a means. Moreover, you are treating it as if there is any actual choice in the matter. In reality, the tooling most of us use on the command line violates the Unix philosophy in some way or another. So the question is not "should I adhere to the Unix philosophy?" but "what makes a good user experience?"
An example of that idea I use every day is https://codemadness.org/sfeed-simple-feed-parser.html, where feed parsing, serialization and visualization are completely separated into small orthogonal binaries with very few options. It could have been one fat "sfeed" or something like its newsboat instead. (shilling it a bit more there: https://world-playground-deceit.net/blog/2024/10/sfeed-setup...)
FWIW, I’ve been using *nix for about 25 years, and had zero problems adapting to ripgrep. You kept -i and -v, which honestly covers 90% of common usage.
I don't usually like doing this, but if I had to step out of my wheelhouse and guesstimate, I think it's probably just some form of nostalgia reasoning. People remember the "simpler" times and just want to have it again. But of course, the "simpler" times are less a reflection of reality and more a reflection of your perception of the world. IMO anyway. (I have the same sort of inclination, but I try to recognize it for what it is: personal bias. Others seem unable to do this and mistake this personal bias for objective reality.)
Both are “must haves” as far as I’m concerned.
That's not it. People love the new rust tools because they're great tools with sane defaults. They happen to be written in Rust most of the time and that's it so people use this to describe them.
especially in the case of fd, it's way faster than GNU find

sorry if that annoys people, but a Rust tool starts with high trust in my mind that it will fly

PS: to show I'm not in full fanboy mode, I'm often surprised that fzf, a brilliant and performant tool, is written in Go, but I know, it's not Rust :)
I generally say that anything under 500ms is good for commands that aren't crunching data, and even Python CLI tools can come in under that number without too much effort.
The Azure CLI is a good example of this. It's been a year or two since I used it, so maybe it's improved since then, but it used to take a second or two just to run `az --help`, which infuriated me.
If you own a slow Python CLI, look into lazily importing libraries instead of unconditionally importing at the top of each Python file. I've seen that help a lot for slow Python CLIs at $DAYJOB
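One way to see where a slow Python CLI spends its start-up time is CPython's built-in import profiler (`mycli` here is a hypothetical module name):

    # prints one line per import to stderr, with self/cumulative microseconds
    python -X importtime -c 'import mycli' 2> import-times.log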
... also the only tool written in Rust that made it to HN lately that does not scream "I'm written in Rust!" everywhere :)
It also aims to improve on gnu find, not to reimplement it because it's in a non approved language.
It's what the Rust evangelists aren't.
Of course, then we get the question, ok, why not use a program that just provides those options? But, I think all the extra functionality of tar is nice to have sitting off there in the background; I’d have to look it up if I actually wanted to use it, but it is all there in the documentation, and nicely compatible with the files produced by my memorized commands.
That is, if before there is something at /some/path but not /another/path, after running
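    ln -s /some/path /another/path    # (reconstructed sketch) /another/path now points at /some/path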
there will be something there (same as cp). If the first argument isn’t an absolute path, it must be relative to the second argument, and not to pwd.
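For example (a reconstructed sketch):

    ln -s ./foo ./bar/baz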
./bar/baz will be a symlink whose literal contents are `./foo`, so it will look for foo in the same directory (bar), rather than in bar’s parent directory. This is totally backwards from how other utilities behave. Which is completely understandable if you know what it’s doing under the hood. But it is counterintuitive and surprising.
Not a solution for all problems, but it works for me most of the time.
Sometimes you semantically want absolute paths (I want to symlink a thing to /bin/foo), sometimes you want relative paths (I want to symlink a thing to something in a nested directory).
Another piece of this parallel is that (with cp and mv) you can omit naming the destination file - you often just name the directory:
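    cp /some/dir/report.pdf ~/Documents/    # (hypothetical names) the destination file name is omitted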
The corresponding shortcut with ln is not naming the destination at all, and the symlink is created in the working directory, as in the sketch below. Both of the above are my most common usage patterns, and the abbreviation of the second argument in both helps reinforce the parallel between cp, mv and ln.
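    ln -s /some/dir/report.pdf    # (hypothetical path) creates ./report.pdf in the working directory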
I remember only two flags for tar. That's it: c for create, x for extract. I use it like so:
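    tar cf - some-dir | gzip > some-dir.tar.gz    # create (sketch; 'f -' writes the archive to stdout)
    gunzip < some-dir.tar.gz | tar xf -           # extract (sketch; 'f -' reads it from stdin)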
That's all. The core operations taking up only this much headspace leaves the door open to slowly start to retain all its other useful options. For example, -v, which means verbose almost everywhere, thankfully means verbose here too, so I add that when I want verbosity.
Similarly, I find it easy to retain -t, which lists the contents. I do this quite often for stuff from the internet -- I don't like it when I untargz in ~/Downloads and it untargz's it in the same directory instead of doing everything inside a parent directory (preferably with the same name as the archive)
Bonus: separating out the gzip like this makes it easy to remember how to use zstd if you want to, just pipe to zstd instead! "Unix philosophy" and all that.
I agree with you about `ln`, I can never seem to remember if it's source or target that comes first.
https://ss64.com/mac/mdfind.html
0: https://en.wikipedia.org/wiki/Spotlight_(Apple)#macOS
Oh well, that's what they invented aliases for in the first place.
Interestingly, the author is listed as one of the maintainers of `fd` as well.
It supports all the common grep options and you can split view in the -Q mode and it is smart enough to open your $editor at the line number it's talking about.
Try it, this is the workflow I've been waiting for.
Also lf. https://github.com/gokcehan/lf You've been wanting this for years as well.
No really. Just scroll: https://github.com/gokcehan/lf/wiki/Integrations ... https://github.com/chmouel/zsh-select-with-lf these things will change your life. I promise.
Check out yazi, it's the nicest TUI file manager I have used: https://yazi-rs.github.io/
By the way, this reminds me of the Norton Commander days ...
curl https://raw.githubusercontent.com/chmouel/zsh-select-with-lf... | llm -x "Convert this zsh script to bash" > maybe-bash.sh
seems to work for me.
I would recommend trying home-manager, or just plain nix profiles before going all-in on NixOS, it's very easy to add Nix on top of any Linux (and MacOS to an extent).
This way you still have your tried and true base system and you can manage things somewhat like you're used to (and use venv and whatever bullshit package manager everyone invents). A lot of common tools will work poorly on NixOS (venv, npm...) and while they have Nix alternatives that are "better", it's DIFFERENT.
I run NixOS on desktop and laptop but I wouldn't recommend starting with it, you can benefit from all packages on any distro by just installing Nix.
Also home-manager adoption can be incremental.
ADHD end note: home-manager is perfect for CLI tool installation and config. You must unconfigure "secure path" in sudo though, otherwise your CLI tools won't work with sudo.
So not dual-licensed (MIT OR Apache-2.0), but MIT AND Apache-2.0? That's unusual.
Later: it seems they intended dual-licensing, and botched the wording: https://github.com/sharkdp/fd/issues/105
I eventually gave up! That's how ergonomic find is :)
I feel like awk could be so much better.
Maintainability >>>>>>> terseness.
It's not 1976 where every cycle matters.
Hyperfine is another underrated gem that is just so great.
[0] https://github.com/luckman212/alfred-pathfind/
`fd` isn't one of those.
`fd` is amazing.
It is simpler, faster and easier to grok than `find`; the command syntax is a straight-up improvement; it comes with sane defaults, which makes the easy things and common use cases simple while the harder ones are way more accessible. It uses colored output in an actually useful way, its manpage is more structured and easier to read...
and MOST IMPORTANTLY
...it is capable of running commands on search results with parallel execution out of the box, so no need to get GNU parallel into the command.
THIS is the kind of renewal of classic command line utilities we need!
Literally the only pain point for me is that it doesn't use glob patterns by default, and that is easily fixed by a handy alias:
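    alias fd='fd --glob'    # (sketch) -g/--glob switches fd to glob-based matching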
I also should have made it clear that my comment also wasn't so much about the search (although the parallel search is absolutely a nice-to-have)...it was about the `-x, --exec` being automatically in parallel.
A common use case is to find all files matching some criteria, and then perform the same operation on all of them, e.g. find all session logs older than N days and then compress them, or convert all wav files in a directory tree to mp3.
If the operation is computationally expensive, using more than one core speeds things up considerably. With `find`, the way to do that was by piping the output to GNU parallel.
With `fd` I can just use `-x, --exec` and it automatically spins up threads to handle the operations, unless instructed not to.
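For the mp3 case above, a sketch (assuming ffmpeg is installed; fd's {.} placeholder is the result path minus its extension):

    fd -e wav -x ffmpeg -i {} {.}.mp3    # one ffmpeg job per result, run in parallel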
It's a nice option to have when you need it.
Haven't found a way to use these tools in a user-local way.
For my own homemade tools, I put the binary in my dotfiles and they end up in .local/bin, and I can use them by default. I can do that because I don't need to update them.
I like `fd -o {user}` and `fd --newer {time}` to find files recently changed or owned by others.
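For instance (a sketch; 'alice' is a hypothetical user):

    fd --owner alice    # files owned by alice
    fd --newer 1d       # files modified within the last day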
This is a VERY fast file searching tool. It's crazy good.
[0] https://news.ycombinator.com/item?id=31489004
I use fd all the time.
Simply having a better command line interface is a big deal.
E.g.,
The other day I was searching for a PDF I downloaded three years ago from a certain company.
Simply ran `find ~/Downloads -iname '<company-name>'` and it immediately popped up.
rg would not have helped me there. Even if I used something like rga, it would have taken significantly longer to parse the contents of every file in my home directory.
File extensions - or their names - mean absolutely nothing with ELF. Maybe $APPLICATION decides to use the filename to store non-ASCII/Unicode parity data... because it was developed in a closet with infinite funding. Who knows. Who's to say.
Contrived, yes, but practical. Imposing isn't. The filesystem may contain more than this can handle.
Regardless, the point is moot because `fd` handles the filenames gracefully; you just need to use a different flag [0].
[0]: https://news.ycombinator.com/item?id=43412190
It's also what made Python 3 very impractical when it originally came around. It wasn't fixed until several versions in, despite being a common complaint among actual users.
[1] https://docs.rs/regex/latest/regex/#opt-out-of-unicode-suppo...
For >99.99% of usecases, file paths are textual data, and people do expect to view them as text. And it's high time that kernels should start enforcing that they act as text, because it constitutes a security vulnerability for a good deal of software while providing exceedingly low value.
While I support the UTF-8 everywhere movement with every fiber of my body, that still sounds like a hard sell for all vintage computer enthusiasts, embedded developers, and anyone else, really.
The reality is that non-UTF-8 filenames already break most modern software, and it's probably more useful for the few people who need to care about it to figure out how to make their workflows work in a UTF-8-only filename world rather than demanding that everybody else has to fix their software to handle a case where there kind of isn't a fix in the first place.
(I'm the author of ripgrep, and this is my way of gently suggesting that "filenames aren't even text" isn't an especially useful model.)
As for "filenames aren't even text" not being a useful model, to me text is a `&str` or `String` or `OsString`, filenames are a `Path` or `PathBuf`. We have different types for paths & strings because they represent different things, and have different valid contents. All I mean by that is the types are different, and the types you use for text shouldn't be the same as the types you use for paths.
> Are the contents of files text?
It is perhaps the most prescient of all. What is the OS interface for files? Does it tell you, "This is a UTF-8 encoded text file containing short human readable lines"? No, it does not. All you get is bytes, and if you're lucky, you can maybe infer something about the extension of the file's path (but this is only a convention).
How do you turn bytes into a `&str`? Do you think ripgrep converts an entire file to `&str` before searching it? Does ripgrep even do UTF-8 validation at all? No no no, it does not.
I'd suggest giving https://burntsushi.net/bstr/#motivation-based-on-concepts and the crate docs of https://docs.rs/bstr/latest/bstr/ a read.
To be clear, there is no perfect answer here. You've got to do the best with what you've got. But the model I work with is, "treat file contents and file paths as text until you heuristically believe otherwise." But I work on Unix CLI tooling that needs to be fast. For most people, I would say, "validate file contents and file paths as text" is the right model to start with.
> but IMO it should be pointed out in the man page
Docs can always be improved, sure, but that is not what I'm trying to engage with you about. :-)
`std::path::Path` isn't necessarily a better design. I mean, on some days, I like it. On other days, I wonder if it was a mistake because it creates so much ceremony. And in many of those cases, the ceremony is totally unwarranted.
And I'm saying this as someone who has been adjudicating Rust's standard library API since Rust 1.0.
https://github.com/jakeogh/angryfiles
But if you do, fd still supports it. You just can't use the `-e/--extension` convenience:
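    touch "$(printf 'foo.\xff')"    # (sketch) a file name that is not valid UTF-8
    fd '(?-u:\xFF)'                 # finds it by matching the raw 0xFF byte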
That is, `fd` requires that the regex patterns you give it be valid UTF-8, but that doesn't mean the patterns themselves are limited to only matching valid UTF-8. You can disable Unicode mode and match raw bytes via escape sequences (that's what `(?-u:\xFF)` is doing). So, as a matter of fact, the sibling comments get the analysis wrong here. `fd` doesn't assume all of its file paths are valid UTF-8. As demonstrated above, it handles non-UTF-8 paths just fine. But some conveniences, like specifying a file extension, do require valid UTF-8.
Here, invalid UTF-8 is passed via command-line arguments. If it is desired to support this, the correct way is to use args_os (https://doc.rust-lang.org/beta/std/env/fn.args_os.html), which gives an iterator that yields OsString.
Nobody cares that valid filenames can be anything except the null byte and /. Tell me one valid use case for a non-UTF-8 filename.
But yeah I suppose you would need support for all the other foreign-language encodings that came in between -- UCS-2 for example.
But basically nobody does that. GLib (which drives file handling for all GTK apps and various others) doesn't support anything other than UTF-8 filenames. At that point I'd consider the "migration" done and dusted.
In a UTF-8-path-only world, what I would do is have a mount option that says that the pathnames on disk are Latin-1 (so that \xff is mapped to U+00FF in UTF-8, whose exact binary representation I'm too lazy to work out right now), and let the people doing archaeology on that write their own tools to remap the resulting mojibake pathnames into more readable ones. Not the cleanest solution, but there are ways to support non-UTF-8 disks even with UTF-8-only pathnames.
It is the bane of my existence, but many programs support all the Latin-1 and other file name encodings that are incompatible with UTF-8, so users expect _your_ programs to work too.
Now if you want me to actually _display_ them all correctly, tough turds...
You wish to find and delete them all, now that they've turned your home directory into a monstrosity.
fd does that for English only. See the III/iii case in my comment; iii capitalizes to İİİ in Turkish, there's no way to have fd respect that.
That's false. Counter-example:
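    touch MÜNCHEN.txt    # (sketch; the original counter-example was elided)
    fd münchen           # matches: smart case plus Unicode simple case folding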
Your Turkish example doesn't work with `fd` because `fd` doesn't support specific locales or locale specific tailoring for case insensitive matching. It only supports what Unicode calls "simple case folding." It works for things far beyond English, as demonstrated above, but definitely misses some cases specific to particular locales.See: https://github.com/rust-lang/regex/blob/master/UNICODE.md#rl...
And then Unicode itself for more discussion on the topic: https://unicode.org/reports/tr18/#Simple_Loose_Matches
TR18 used to have a Level 3[1] with the kind of locale-specific custom tailoring support found in GNU's implementation of POSIX locales, but it was so fraught that it was retracted completely some years ago.
[1]: https://unicode.org/reports/tr18/#Tailored_Support
The original problem that led me to thinking about this, was that I wanted to do a search that would include all files below the current directory, except if they were within a specific subdirectory.
(Why not just `grep -v` for that directory? Because the subdir contained millions of files — in part, I was trying to avoid the time it would take find(1) just to do all the system calls required to list the directory!)
And yes, this is possible to specify in find(1). But no, it's not ergonomic:
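    find . -path ./huge -prune -o -type f -print    # (sketch; assuming the busy subdir is ./huge)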
You can add parentheses, if you like. (But you don't have to. It seems find(1) has implicit Pratt parsing going on.)

This incantation entirely changed my perspective on find(1) as a tool. Until I learned this, I had never understood exactly why find(1) nags you to put its arguments in a specific order. But this makes "what find(1) does under the covers" very clear: it's assembling your expression into a little program that it executes in the context of each dirent it encounters on its depth-first traversal. This program has filtering instructions, like `-path`, and side-effect instructions, like `-print`. You can chain as many of each as you like, and nest them arbitrarily within (implicit or explicit) logical operators.
After some further experimentation, I learned that find's programs are expression-based; that every expression implicitly evaluates to either true or false; and that any false return-value anywhere in a sub-expression is short-circuiting for the rest of that sub-expression. (You could think of every instruction as being chained together with an implicit &&.) This means that this won't print anything:
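    find . -false -print    # (sketch) GNU find's -false always evaluates false, so -print is never reached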
In other words, my original expression above, translated to a C-like syntax, is just a chain of short-circuiting boolean operators.

As soon as I realized this, I suddenly became very annoyed with find(1)'s actual syntax. Why is find(1) demanding that I write in this weird syntax with leading dashes, various backslash-escaped brackets, etc.? Why can't I "script" find(1) on the command-line, the same way I'd "script" Ruby or Perl on the command-line? Or heck, given that this whole thing is an expression — why not Lisp? I do understand why find(1) was designed the way it was — it's to allow for shell variable interpolation, and to rely on shell argument tokenization. But it's pretty easy to work around this need — just push both of these concerns "to the edge" (i.e. outside the expression itself. Like SQL statement binding!)
• Support $VAR syntax for plain references to exported env-vars.
• Support positional binding syntax (?1 ?2 ?3) for positional arguments passed after the script.
• Support named binding syntax (?foo) for positional arguments after the script that match the pattern of a variable assignment (a la make(1) arguments).
I don't know about you, but I personally find this grammar 10x easier to remember than what find(1) itself has going on.

Also, this is still a find-style syntax, but my bfs utility supports -exclude [3]. So you can write
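    bfs -exclude -name huge    # (sketch; reusing the hypothetical ./huge subdir)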
which is a bit more ergonomic.

[1]: https://github.com/raforg/rawhide
[2]: https://github.com/jhspetersson/fselect
[3]: https://github.com/tavianator/bfs/blob/main/docs/USAGE.md#-e...