Deno 2.4

119 hackandthink 67 7/7/2025, 8:44:26 AM deno.com ↗

Comments (67)

outlore · 3h ago
Really love the ideas behind Deno, and tried to do things the Deno way (Deno.json, JSR, modern imports, Deno Deploy) for a monorepo project with Next.js, Hono and private packages. Some things like Hono worked super well, but Next.js did not. Other things like types would sometimes break in subtle ways. The choice of deployment destination e.g. Vercel for Next also gave me issues.

Here is an example of a small papercut I hit (which might be fixed by now): https://github.com/honojs/hono/issues/1216

In contrast, Bun had less cognitive overhead and just "worked", even though it didn't feel as clean as Deno. Some things aren't perfect with Bun either, like the lack of a Bun runtime on Vercel.

WorldMaker · 1h ago
You picked a stack that is still very npm-centric, especially private npm packages. The sweet spot for doing things the Deno way still seems to be choosing stacks that themselves are very Deno and/or ESM-native. I've had some great experiences with Lume, for instance, and targeting things like Deno Deploy over Vercel. (JSR scores are very helpful at finding interesting libraries with great ESM support.) Obviously "start with a fresh stack" is a huge ask and not a great way to sell Deno, given how much time/effort investment exists in stacks like Next.js. But I think in terms of "What does Deno do best?" there's a sweet spot where you 0-60 everything in Deno-native/ESM-native tools.

Also, yeah, Deno's npm compatibility keeps getting better; as mentioned in these 2.4 release notes, there are a few new improvements. As another comment in these threads points out, for a full stack like the one you were trying, going package.json-first can give a better compatibility feeling than deno.json-first, even if the deno.json-first approach is the nicer/cleaner one long term, or when you can go 0-60 in Deno-native/ESM-native greenfields.

Ciantic · 3h ago
It works surprisingly well when used in npm compatibility mode, much like Bun is used.

Running `deno install` in a directory with a package.json will create a leaner version of node_modules, and running `deno task something` will run the scripts defined in `package.json`.

The Deno way of doing things is a bit problematic; I too find it is often a timesink where things don't work, and if you then have to escape back to node/npm it becomes a bigger hassle. Using Deno with package.json is easier.
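For anyone who hasn't tried that mode, a minimal sketch of what it looks like (package name, script, and dependency are hypothetical):

```json
{
  "name": "my-app",
  "scripts": {
    "dev": "node server.js"
  },
  "dependencies": {
    "hono": "^4.0.0"
  }
}
```

With this in place, `deno install` populates node_modules from the dependencies, and `deno task dev` runs the `dev` script.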

shepherdjerred · 3h ago
100%. I was all-in on Deno, but there were just too many sharp edges. In contrast, Bun just works.
voat · 9h ago
People underestimate the Node compatibility that Deno offers. I think the compat env variable will do a lot for adoption. Maybe a denon command or something could enable it automatically? Idk.
CuriouslyC · 7h ago
Honestly, I was bullish on Deno back in the day, but I don't see why I'd use it over Bun now.
jitl · 6h ago
Less segfault, improved security / capability model
tmikaeld · 5h ago
The security model is very underestimated imo; it will become very evident once more Bun projects reach production rather than staying experimental.
spiffytech · 6h ago
As a Bun user I don't really get segfaults anymore.
surajrmal · 5h ago
I've written C for years. The only time it is safe from crashes is when the code doesn't churn and has consistent timing between threads. Bun has constant feature churn and is constantly running on new hardware that provides novel timings. It is very unlikely to be crash-free any time soon.
pjmlp · 6h ago
I have yet to see a reason to fight IT and architects for having anything besides Node in CI/CD pipelines and container base images.
aseipp · 6h ago
Nice list of solid changes. I really like Deno for scripting random glue code; I use it most places (maybe with the exception of random machine learning stuff, where python/uv fits.) Looking forward to gRPC support later this year, too, for some of my long-tail use cases. And the bundle command looks nice!
blinkingled · 8h ago
Crazy that Deno is still not workable on FreeBSD because the Rust V8 bindings haven't been ported.
Mond_ · 8h ago
How big is the intersection of modern Javascript developers and FreeBSD users?
blinkingled · 8h ago
Not as big as Linux, but I know a few FreeBSD shops that run NodeJS apps, so it's not entirely crazy to think there are more, and that they would want to try Deno. Besides, making your OSS software compilable on *BSD/Linux/Mac/Win has historically been a good thing to do anyway.
whizzter · 7h ago
For a low-level runtime (i.e. V8 itself) I can accept a certain lag, since there might be some low-level differences in how signals, etc. behave.

However, for more generic code, Linuxisms often signal a certain "works-on-my-machine" mentality that might even hinder cross-distro compatibility, let alone getting things to work on Windows and/or macOS development machines.

I guess a Rust binding for V8 is a tad borderline: not necessarily low-level, but still an indicator that there's a lack of care for getting things to work on other machines.

surajrmal · 5h ago
Is it big enough to prioritize fixing though? The answer seems to be a no so far.
gr4vityWall · 4h ago
Node.js is (maybe surprisingly) used a lot in less common operating systems like FreeBSD and Illumos.
shrubble · 6h ago
It's more than a little surprising that portability between different Unices is not given more emphasis. "Back in the day" a program being portable between Sun Solaris, HP's HP-UX, Linux, FreeBSD was considered a sign of clean code.
ctz · 8h ago
Looks like it is in ports?
blinkingled · 8h ago
Trying to compile it now. It's 2.2.0, but better than nothing. I haven't seen any upstream Rust V8 patches for FreeBSD, so if it does compile, there may be out-of-tree ones in ports.
eranation · 8h ago
I believe the reason Deno is not more widely used in production environments is the lack of a standardized vulnerability database (other than running in 100% npm-compatibility mode, which puts many popular Deno packages out of scope). The issue is that there is no real centralized package manager (by design), which makes this challenging. Has there been any development in that direction?
TheDong · 7h ago
> I believe the reason Deno is not more widely used in production environments is the lack of a standardized vulnerability database

If this were a real blocker, then C/C++ wouldn't be used in production either, since both just lean on the language-agnostic CVE/GHSA/etc databases for any relevant vulnerabilities there... and C also heavily encourages just vendoring in entire files from the internet with no way to track down versions.

Anyway, doesn't "deno.lock" exist? Anyone who cares can opt in to that and use the versions in there to check vulnerability databases.

simantel · 1h ago
Wouldn't this also be a problem for Go, which just imports from URLs (mostly GitHub) as well?


impulser_ · 3h ago
Surprised they went with esbuild for bundling instead of the Rust-based Rolldown, which is about to hit v1.
mcraiha · 6h ago
I really like that the bundle subcommand is back. No need to use workarounds.
duesabati · 9h ago
I really love where Deno is going, it really is what Node should've been.

My only concern is that they lose patience with their hype-driven competition and start doing hype-driven stuff themselves.

forty · 7h ago
I thought that Deno was the hype-driven competition of nodejs ;)
deafpolygon · 9h ago
I keep hearing good things about Deno. It might just convince me to try js after all!
hn_throw2025 · 8h ago
These days it might be good to go straight to TS.
WorldMaker · 2h ago
Which is what the Deno defaults guide towards as well.
bugtodiffer · 8h ago
Don't
bflesch · 8h ago
Big fan of deno, congrats on shipping.

From a security standpoint it really irks me when projects prominently ask their users to do the `curl mywebsite.com/foo.sh | sh` thing. I know risk acceptance differs from person to person, but if you download a file before executing it, at least you or your antivirus can check what it actually does.

As supply chain attacks are a significant security risk for a node/deno stack application, `curl | sh` is a red flag that signals to me that the author of the website prefers convenience over security.

With a curl request executed directly, this can happen:

- the web server behind mywebsite.com/foo.sh provides malware for the first request from your IP, but when you request it again it will show a different, clean file without any code

- MITM attack gives you a different file than others receive

Node/deno applications using the npm ecosystem put a lot of blind trust in npm's servers, which are hosted by Microsoft and therefore easily MITM'able by government agencies.

When looking at the official Deno docs at https://docs.deno.com/runtime/getting_started/installation/ the second option they offer after `curl | sh` is the much more secure `npm install -g deno`. Here at least some file integrity checks and basic malware scanning are done by npm when downloading and installing the package.

Even though deno has excellent programmers working on the main project, the deno.land website might not always be as secure as the main codebase.

Just my two cents; I know it's a slippery slope in terms of security risk, but I cannot say that `curl | sh` is good practice.
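The download-first alternative can be rehearsed offline; here `install.sh` is a local stand-in for a script you would normally fetch with `curl -fsSL <url> -o install.sh`:

```shell
# create a stand-in installer (in the real flow, curl writes this file)
printf '#!/bin/sh\necho "installing..."\n' > install.sh

cat install.sh        # read what the script actually does before running it
sha256sum install.sh  # record a hash you can compare with other sources
sh install.sh         # execute only once you are satisfied
```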

dicytea · 6h ago
I really never understood the threat model behind this often repeated argument.

Most of these installation scripts are just simple bootstrappers that will eventually download and execute millions of lines of code authored and hosted by the same people behind the shell script.

You simply will not be capable of personally auditing those millions of lines of code, so this problem boils down to your trust model. If you have so little trust in the authors of the project that you'd suspect them of pulling absurdly convoluted ploys like:

> the web server behind mywebsite.com/foo.sh provides malware for the first request from your IP, but when you request it again it will show a different, clean file without any code

How can you trust them to not hide even more malicious code in the binary itself?

I believe the reason this flawed argument has spread like a mind virus throughout the years is that it is easy to do and easy to parrot in every mildly relevant thread.

It is easy to audit a 5-line shell script. But to personally audit the millions of lines of code behind the binary that the script will blindly download and run anyway? Nah, that's real security work, and no one wants to actually do hard work here. We're just here to score some easy points and signal to our peers that we're smart and security-conscious.

> which are hosted by microsoft, and therefore easily MITM'able by government agencies.

If your threat model includes government agencies maliciously tampering with your Deno binaries, you have far more things to worry about than just curl | sh.

gr4vityWall · 3h ago
I think bflesch's reasoning comes from the idea that the website developers may not hold the website to the same security standards as their software, not from a trust issue or from thinking the authors themselves are malicious.

FWIW, I don't have a strong opinion here, beyond liking Debian's model the most. Just felt it was worth pointing out the above.

captn3m0 · 18m ago
See the codecov incident, where exactly this happened: https://about.codecov.io/security-update/
CJefferson · 7h ago
The problem is getting new users onboarded. Telling people to use npm doesn't help if you don't have npm installed.

How do I install npm? The npm webpage tells me to go and install nvm. And that tells me to use `curl | sh`.

So using npm as a new user still requires a `curl | sh`, just in a different place.

pxc · 6h ago
If the actual installation process can be made simple, you can have users copy/paste the whole installation script rather than pulling it down with curl.

See for instance...

Setup instructions for Pkgsrc on macOS with the SmartOS people's binary caches: https://pkgsrc.smartos.org/install-on-macos/

Spack installation instructions: https://spack-tutorial.readthedocs.io/en/latest/tutorial_bas...

Guix setup used to look like this, but now they have a shell script for download. Even so, the instructions advise saving it first and walk you through what to expect, so you can have reasonable expectations while installing it.

Anyway, my point is that there are other ways to instruct people about the same kind of install process.

calrain · 7h ago
Security is either taken seriously, or it isn't.

If security shortcuts are taken here, trust nothing else.

geysersam · 7h ago
> much more secure `npm install -g deno`. Here at least some file integrity checks and basic malware scanning are done by npm when downloading and installing the package.

It boils down to the question "is it more likely that the attacker can impersonate or control npm's servers or our own servers?". If the answer to that question is "no", then `curl | sh` is not less secure than `npm install`.

This is security theater. If you're assuming an attacker can impersonate anyone on the internet, your only secure option is to cut the cable.

bugtodiffer · 8h ago
using deno isn't good security practice, their sandbox is implemented like stuff from the 90s
homebrewer · 8h ago
If you're writing server stuff, then at the coarse-grained level of isolation that Deno provides you're better off using just about anything else and restricting access to network/disks/etc. through systemd. Unlike Deno, it can restrict access to specific filesystem paths and network addresses (whitelist/blacklist, your choice), and you're not locked into using just Deno and not forced to write JS/TS.

See `man systemd.exec`, `systemd-analyze security`, https://wiki.archlinux.org/title/Systemd/Sandboxing
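A sketch of the kind of unit-file sandboxing meant here (the paths, address range, and service are placeholders, not a vetted policy):

```ini
# excerpt from a hypothetical service unit; see man systemd.exec for each knob
[Service]
ExecStart=/usr/bin/node /srv/app/server.js
NoNewPrivileges=yes
ProtectSystem=strict            # mount most of the filesystem read-only
ProtectHome=yes
ReadWritePaths=/srv/app/data    # whitelist the specific writable path
PrivateTmp=yes
IPAddressDeny=any               # deny-by-default networking...
IPAddressAllow=10.0.0.0/8      # ...then whitelist the ranges you need
```

`systemd-analyze security <unit>` then scores how much attack surface the unit still exposes.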

crabmusket · 7h ago
Deno can restrict access to filesystem files or directories, and to particular network domains; see the docs for examples: https://docs.deno.com/runtime/fundamentals/security/#file-sy...

However, in general I don't think Deno's permission system is all that amazing, and I am annoyed that people sometimes call it "capability-based" (I don't know if that ever came from the Deno team or just from misinformed third parties).

I do like that "deno run https://example.com/arbitrary.js" has a minimum level of security by default, and I can e.g. restrict it to read and write my current working dir. It's just less helpful for combining components of varying trust levels into a single application.

vorticalbox · 7h ago
> Unlike Deno, it can restrict access to specific filesystem paths and network addresses

Deno can do this via `--allow-read`/`--deny-read` and `--allow-write`/`--deny-write` for the file system.

You can do the same for the network with `--allow-net`/`--deny-net`.

https://docs.deno.com/runtime/fundamentals/security/#permiss...

mk12 · 7h ago
Bubblewrap is another convenient sandboxing tool for Linux: https://wiki.archlinux.org/title/Bubblewrap
bflesch · 8h ago
Is node "sandbox" different? Does it even have a sandbox?
oblio · 8h ago
Can you expand on this please? Also curious which 90s tech they're inspired by.
bugtodiffer · 8h ago
It matches strings instead of actually blocking things. That's how sandboxes were implemented when I was a kid.

E.g. --allow-net --deny-net=1.1.1.1

You cannot fetch "http://1.1.1.1", but any domain that resolves to 1.1.1.1 is a bypass...

It's crap security
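The class of bypass being described reduces to comparing the hostname string instead of the address it resolves to. A minimal sketch in TypeScript (a hypothetical checker, not Deno's actual implementation; 127.0.0.1 is in the deny list so the demo works offline with localhost):

```typescript
// Two ways to enforce a deny list: against the hostname *text*,
// and against the IP address the name actually resolves to.
import { lookup } from "node:dns/promises";

const denied = new Set(["1.1.1.1", "127.0.0.1"]);

// Naive check: pure string comparison, the weakness described above.
export function naiveAllows(host: string): boolean {
  return !denied.has(host);
}

// Post-resolution check: resolve first, then compare the real address.
export async function resolvedAllows(host: string): Promise<boolean> {
  const { address } = await lookup(host, { family: 4 });
  return !denied.has(address);
}
```

`naiveAllows("localhost")` passes even though the name resolves to a denied address; `resolvedAllows("localhost")` catches it.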

whizzter · 7h ago
If security principles are important, they should be deny-by-default with allow-lists rather than the other way around.

If the Deno runtime implements the fetch module itself, then post-resolution checking definitely should be done. That's more of a bug than a principled security lapse, though.

bugtodiffer · 4h ago
The thing is that this applies to all parts of the sandbox https://secfault-security.com/blog/deno.html
jeltz · 7h ago
That isn't 90s security, that is just bad code. And bad code was written in the 90s and is still written today.
methyl · 8h ago
Has any attack like this ever been seen in the wild? Not saying it's impossible, but I'm curious whether this vector has ever been successfully exploited.
bflesch · 8h ago
I'm sure there are cases where a website CMS was hacked and malware was served instead of the normal install script. The `curl | sh` approach has been around forever.

And depending on what "interesting" IP address you are coming from, the NSA/Microsoft/Apple will MITM your npm install / Windows update / iOS update accordingly.

Same in the Linux ecosystem: if you look at the maintainers of popular distributions, some of them had .ru / .cn email addresses before switching to more official addresses on the project domain. IMO this change of email addresses happened due to public pressure on Russia after the Ukraine invasion. With access to a distribution's main package-signing keys, you can serve special packages from your package mirror to interesting targets.

All of these scenarios are extremely hard to prove after the fact and the parties involved are not the type of people who do public writeups.

oblio · 8h ago
If the website CMS is hacked, they can just swap the installable binary for one that's hacked, too.
pcl · 7h ago
That's why downloading and then executing is preferable: as the GP pointed out, you or your machine's antivirus has an opportunity to inspect the file prior to execution, whereas that is not an option when the bytes are streamed directly to the interpreter.
jgalt212 · 7h ago
It would be great if curl could take a file-integrity hash value as a command-line argument.
lioeters · 6h ago
I'd like to practice verifying file integrity instead of running `curl | sh`. I see that sha256sum (or sha512sum) is the standard command people use.

    # Download package and its checksum
    curl -fsSLO https://example.com/example-1.0.0.tar.gz
    curl -fsSLO https://example.com/example-1.0.0.tar.gz.sha256

    # Verify the checksum
    sha256sum -c example-1.0.0.tar.gz.sha256
But if the server is compromised, wouldn't the malicious actor likely be able to serve a matching hash for their file?
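The flow can be rehearsed entirely offline with a stand-in file:

```shell
# stand-in for a downloaded release artifact
printf 'not a real tarball\n' > example-1.0.0.tar.gz

# record the checksum in the "<hash>  <filename>" format sha256sum -c expects
sha256sum example-1.0.0.tar.gz > example-1.0.0.tar.gz.sha256

# re-check the file against the recorded hash
sha256sum -c example-1.0.0.tar.gz.sha256
```

(And yes: if the checksum file is served by the same compromised host as the artifact, it proves nothing; the hash only adds assurance when it comes from a second channel.)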
troupo · 7h ago
> ask their users to do the `curl mywebsite.com/foo.sh | sh` thing.

Because it's easier than maintaining packages across 10+ package managers. And in the case of Linux, it might not require sudo to install something.

oulipo · 8h ago
How is it more or less good practice than running any untrusted binary on your system? The only extra risk would be the script download breaking midway and becoming a "dangerous script", e.g. an `rm -rf /some/path` truncated into `rm -rf /`. Other than that, it's the same as downloading any executable onto your laptop and running it; any of the attacks you described on the shell download would work with any other binary, which users routinely download and run.
whyever · 8h ago
All the attacks you described also apply to downloading and executing a file. I don't think `curl | sh` is worse in this regard.
bflesch · 8h ago
With a downloaded file, your antivirus can run automated checks on it, you can calculate a hash and compare the value with others who also downloaded the file, and you will notice if the file changes after you execute it.
davedx · 8h ago
If you download it first, you can at least eyeball what's been downloaded to check it doesn't start by installing a bitcoin miner
geysersam · 7h ago
How often do people do that when they install a package from npm, PyPI, or another package repository? In practice, never.
