> Notice I have changed the extension from .js to .mjs. Don’t worry, either extension can be used. And you are going to run into issues with either choice
As someone who has used module systems from Dojo to CommonJS to AMD to ESM, with webpack, esbuild, rollup, and a few others thrown in ... this statement hits hard.
SCLeo · 4h ago
Yeah, the CommonJS-to-ESM transition has been JavaScript's Python 2-to-3 transition, except the benefits are limited (at least compared to the hassle created).
Many libraries have switched to ESM only (meaning they don't support CommonJS), but even today, the best way to find the last CommonJS version of such a library is to go to the "versions" tab on npm and find the most downloaded version in the last month; chances are, that will be the last CommonJS version.
Yes, in a vacuum, ESM is objectively better than CommonJS, but how TC39 almost intentionally made it incompatible with CommonJS (via top-level await) is just bizarre to me.
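For what it's worth, CommonJS code can still consume an ESM-only package through dynamic `import()`, which is the standard interop path. A minimal sketch (using a built-in module as a stand-in for an ESM-only dependency):

```javascript
// From a CommonJS file, static `import` syntax is a syntax error,
// but dynamic import() works and returns a promise for the module.
async function loadEsmOnlyDep() {
  // "node:fs/promises" stands in here for an ESM-only library
  const mod = await import("node:fs/promises");
  return mod;
}

loadEsmOnlyDep().then((mod) => {
  console.log(typeof mod.readFile); // "function"
});
```

The cost is that the call site becomes async, which is exactly the "hassle" the comment above is describing.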
eyelidlessness · 1h ago
It had to be incompatible with CommonJS regardless of top level await. There is no imaginable scenario where browsers would ship a module system with synchronous request and resolution semantics. A module graph can be arbitrarily deep, meaning that synchronous modules would block page load for arbitrarily deep network waterfalls. That’s a complete non-starter.
Given that, top-level await is a sensible affordance, which you’d have to go out of your way to block because async modules already have the same semantics.
Recently, Node has compromised by allowing ESM to be loaded synchronously absent TLA, but that's only feasible because Node is loading those modules from the file system rather than any network-accessible location (and because it already has those semantics for CJS). That compromise makes sense locally, too. But it still doesn't make sense in a browser.
bubblyworld · 9h ago
Yeah, modules in jsland are just trauma... now we have import maps in the browser too. Let's see what kinds of fun we can have with those.
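For reference, an import map is just inline JSON that tells the browser how to resolve bare specifiers in module scripts, so bundler-style imports work natively (the CDN URL below is purely illustrative):

```html
<!-- Minimal import map: must appear before any module script that uses it. -->
<script type="importmap">
{
  "imports": {
    "lodash": "https://cdn.example.com/lodash-es/lodash.js"
  }
}
</script>
<script type="module">
  import _ from "lodash"; // "lodash" is resolved via the map above
</script>
```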
I haven't thought about that in years. I didn't realize it had been solved.
Browser support looks pretty good.
I guess now I have to figure out how to get this to play nice with Vite and TypeScript module resolution.... and now it's starting to hurt my brain again, great.
socalgal2 · 7h ago
There is more here that is likely to cause problems in the future. One is the author's use of var instead of let or const. var still works, but most JS devs have linters that ban its use. The issue is that var has function scope, not block scope. Most non-JS devs coming from other languages will eventually run into this.
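The classic way this bites people is closures inside a loop, since every iteration of a `var` loop shares a single function-scoped binding:

```javascript
// var is function-scoped: all three callbacks close over the same i,
// which has already reached 3 by the time they run.
const withVar = [];
for (var i = 0; i < 3; i++) withVar.push(() => i);
console.log(withVar.map((f) => f())); // [3, 3, 3]

// let is block-scoped: each iteration gets a fresh binding of j.
const withLet = [];
for (let j = 0; j < 3; j++) withLet.push(() => j);
console.log(withLet.map((f) => f())); // [0, 1, 2]
```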
Another issue with porting native apps: native apps are compiled for a specific platform and hardcoded to that platform's conventions. A good example is hardcoding Ctrl-C (copy) and Ctrl-V (paste) at compile time, which may work on Linux and Windows but doesn't work on Mac.
IIRC the way you're supposed to handle this on the web is to listen for copy and paste events. AFAIK Unity has this issue: they hardcoded Ctrl-C and Ctrl-V, so copy and paste don't work on Mac. Most games don't need copy and paste, but once in a while someone builds something that does, exports to the web, and runs into this issue.
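A sketch of the event-based approach (assumes a browser DOM; the handler names are illustrative). The browser fires the semantic `copy`/`paste` events for whatever shortcut the platform uses, Cmd on macOS and Ctrl elsewhere, so no keycodes are hardcoded:

```javascript
// Pull the plain-text payload out of a paste event.
function handlePaste(event) {
  return event.clipboardData.getData("text/plain");
}

// Put our own data on the clipboard during a copy event.
function handleCopy(event, textToCopy) {
  event.clipboardData.setData("text/plain", textToCopy);
  event.preventDefault(); // suppress the default copy so our data is used
}

// Guard so the sketch also loads outside a browser (e.g. under Node):
if (typeof document !== "undefined") {
  document.addEventListener("paste", (e) => console.log(handlePaste(e)));
  document.addEventListener("copy", (e) => handleCopy(e, "selected game text"));
}
```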
knallfrosch · 7h ago
Nice writeup! You definitely chose a pretty hard way, but the project setup is always the most complex part. Bonus points for immediately running into security/header issues, but my bet would have been CORS.
At $WORK, we're also building with emscripten/C++. We'll add WebGPU/shaders and WebAudio for bonus pain.
lerax · 10h ago
Masochist? That's much more sane than the JS clusterfuck ecosystem.
rapind · 4h ago
"Clusterfuck" is implicit and may be omitted.
divbzero · 9h ago
I always assumed that compiling code to run in the browser would be slow, but OP points out that this is not the case. As the Emscripten project describes:
> Thanks to the combination of LLVM, Emscripten, Binaryen, and WebAssembly, the output is compact and runs at near-native speed.
https://emscripten.org/
Last week I had never heard of Emscripten. Then, integrating SDL for a project, there were CMake callouts for APPLE, MSVC, and EMSCRIPTEN.
And here we are, seeing it again on HN within a few days.
I should put an afternoon aside for some deep diving on it for context.
scubbo · 4h ago
> Yellow bus syndrome in action for me today
There's a certain irony to being able to introduce you to the term "Baader-Meinhof Phenomenon" (which is the more-common name for what I assume you're referring to, as Google searches for "Yellow Bus Syndrome" didn't bring anything up for me). Now you know the name, you'll see it everywhere!
57473m3n7Fur7h3 · 4h ago
The colloquial term they were misremembering is “yellow car” effect.
phatskat · 3h ago
Funny, I always called it “the GTA effect” as in either Grand Theft Auto 1 or 2, one of the top-down ones, once you got a particular kind of car you would see more of that same car on the road. I don’t know if it was an optimization strategy or just me falling victim to the effect I ascribed to the game.
scubbo · 2h ago
Thank you, TI(2)L!
burningChrome · 7h ago
>> the output is compact and runs at near-native speed.
This is kind of subjective, no? I wonder what they consider "near-native speed"? I couldn't find any real numbers in their documentation.
gspencley · 6h ago
Not only is it subjective but V8 does so much to optimize JavaScript code that I wouldn't be surprised if the benefits for most applications were negligible anyway.
Although JavaScript is still nominally an interpreted language, it effectively gets "compiled" when the browser parses and runs the bundle. On the surface, the only thing WebAssembly automatically gets you is skipping that runtime compilation phase.
I might be talking out of my ass, so take this with a grain of salt, but I wouldn't be surprised if once we start collecting real data on this stuff, SOME WebAssembly code could actually run slower than just using JS code. My hypothesis is that if you're starting with non-JavaScript code, you might be doing things in that language that would be slower to do the same way in JavaScript. I'm thinking of things like Array.map(), .filter() etc. ... which are hyper-optimized in V8. If you're taking an algorithm from C code or something which then gets compiled to WebAssembly, it's not an automatic given that it's going to compile to WebAssembly that is just as optimized as what V8 would do when it comes across those API calls. Again, this is just a hypothesis and I could be way off base.
In any case, what we need is real world data. I have no doubt that for certain applications you can probably avoid land mines by hiring devs who are experienced building certain performance-critical things at a lower-level than your average JS dev... and their experience in those languages may transfer very well to the browser. In this scenario, you're not getting huge perf wins from using WebAssembly per-se... you're getting huge perf wins for not doing typical stupid, lazy, ignorant things that most average JS devs do ... like cloning large objects using the spread operator and then doing that over and over and over again "because immutability."
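In the spirit of "we need real world data", a rough micro-benchmark harness like the one below is how you'd start testing the `Array.map()`-vs-loop hypothesis. It is illustrative only: it makes no claim about which side wins, and real conclusions need careful benchmarking across engines and realistic workloads.

```javascript
// Time a function over several iterations; deliberately crude (no warmup,
// no statistical treatment), just enough to get a first-order number.
function timeIt(label, fn, iterations = 100) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(2)} ms`);
  return ms;
}

const data = Array.from({ length: 100_000 }, (_, i) => i);

timeIt("Array.prototype.map", () => data.map((x) => x * 2));
timeIt("hand-written loop", () => {
  const out = new Array(data.length);
  for (let i = 0; i < data.length; i++) out[i] = data[i] * 2;
  return out;
});
```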
aDyslecticCrow · 4h ago
WebAssembly is still a flavour of assembly. It only falls short of native performance because the interface to JavaScript has overhead. Every action in JavaScript incurs overhead due to dynamic types and objects, as well as dynamic memory allocation and garbage collection. Wasm can in theory ignore all of that and run as if it were compiled for the host system, except when it needs to interact with the JavaScript environment.
It's astonishing how fast JavaScript has become. But even if it were fully compiled, it would still be a language with higher overhead.
You can still write bad code, or compile a language with high overhead into WASM. It remains valuable for porting existing libraries into the browser and reducing bandwidth usage. But done properly with a fast compiled language like C or Rust... wasm can unlock some magical things for the web ecosystem.
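That JS/wasm boundary can be seen in miniature with a hand-assembled module. The bytes below encode a wasm module exporting `add(i32, i32) -> i32`; every call to `add()` from JavaScript crosses the boundary being discussed:

```javascript
// A tiny WebAssembly binary, written out by hand: one function,
// (i32, i32) -> i32, exported as "add".
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Synchronous instantiation is fine for a module this small.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const { add } = instance.exports;
console.log(add(2, 3)); // 5
```

The wasm body itself is three instructions; the per-call overhead lives entirely in the JS-to-wasm call path, which is why chatty interfaces across the boundary erode the "near-native" speed.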
broken_broken_ · 10h ago
Good article, I also have a C program (a compiler) that I would like to compile to webassembly to offer a playground web page, so that is good information. Thank you!
About the file system stuff: modern browsers ship with SQLite which is available to JavaScript (is it available to webassembly? No idea) so I would probably use that instead. Ideally you could use the sqlite API directly in C and emscripten would bridge the calls to the browser SQLite db. Something to investigate.
wmichelin · 11h ago
`var myLibraryInstance = away MyLibrary();`
udev4096 · 11h ago
What's with the use of port 48 for SSL? Any particular reason?
sebtron · 11h ago
Ah, that's a good question. It's kinda random (except that the name of the solver is "H48"). Setting up that web app required some extra HTTP headers (I explain that in the post), and the easiest way I found to do that without messing up the rest of my website was using a different port. https://h48.tronto.net redirects there too.
Later I looked into a better way to do this, but I could not fully work it out. I use OpenBSD's httpd (which does not support setting extra headers) together with relayd. At some point I'll take a look at this again, or I'll move the tool to another domain.