Show HN: Spark, An advanced 3D Gaussian Splatting renderer for Three.js
350 dmarcos 75 6/11/2025, 5:02:56 PM sparkjs.dev ↗
I'm the co-creator and maintainer of https://aframe.io/ and long time Web 3D graphics dev.
Super excited about new techniques to author / render / represent 3D. Spark is an open source library, built with some friends, that makes it easy to integrate Gaussian splats into your THREE.js scene. I hope you find it useful.
Looking forward to hearing what features / rendering techniques you would love to see next.
As an only-dabbling-hobbyist game developer who lacks a lot of 3D programming knowledge, the only feedback I can offer is that you might define what "Gaussian splatting" is somewhere on the GitHub repo or the website. Just the one-liner from Wikipedia helps me get more excited about the project and its potential uses: Gaussian splatting is a volume rendering technique that deals with the direct rendering of volume data without converting the data into surface or line primitives.
Super high performance clouds and fire and smoke and such? Awesome!
The performance seems amazingly good for the apparent level of detail, even on my integrated graphics laptop. Where is this technique most commonly used today?
Edit: typos
Probably an interesting use for a pretrained model to estimate scale based on common items seen in scenes (cars, doorframes, trees, etc…)
How do Babylon, A-Frame [1], Three.js, and PlayCanvas [2] compare, from those who have used them?
IIUC, PlayCanvas is the most mature, featureful, and performant, but it's commercial. Babylon is the more featureful 3D engine, whereas Three.js is fairly raw: though it has some nice stuff for animation, textures, etc., you're really building your own kit.
Any good experiences (or bad) with any of these?
OP, your demo is rock solid! What's the pitch for Aframe?
How do you see the "gaussian splat" future panning out? Will these be useful for more than visualizations and "digital twins" (in the industrial setting)? Will we be editing them and animating them at any point in the near future? Or to rephrase, when (or will) they be useful for the creative and gaming fields?
[1] https://github.com/aframevr/aframe
[2] https://playcanvas.com/
One of Spark's goals is exploring applications of 3D Gaussian splatting. I don't have all the answers yet, but compelling use cases are already developing quickly, e.g. photogrammetry / scanning, where splats represent high-frequency detail in an appealing and relatively compact way, as you can see in one of the demos (https://sparkjs.dev/examples/interactivity/index.html). There are great examples of video capture already (https://www.4dv.ai/). Looking forward to seeing new applications as we figure out better compression, streaming, relighting, generative models, LOD...
A-Frame hello world
  <html>
    <head>
      <script src="https://aframe.io/releases/1.7.1/aframe.min.js"></script>
    </head>
    <body>
      <a-scene>
        <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      </a-scene>
    </body>
  </html>
PlayCanvas Engine: https://github.com/playcanvas/engine
PlayCanvas Web Components: https://github.com/playcanvas/web-components
PlayCanvas React: https://github.com/playcanvas/react
The good:
1. Blender plugin for baked mesh animation export to stream asset is cool
2. the procedural texture tricks combined with displacement maps mean making reasonable looking in game ocean/water possible with some tweaking
3. adding 2D sprite swap out for distant objects is trivial (think Paper Mario style)
The bad:
1. burns GPU VRAM far faster than normal engines (dynamic paint bloats up fast when duplicating aliases, etc.)
2. JS burns CPU cycles, but the wasm support is reasonable for physics/collision
3. all resources are exposed to end users (expect unsophisticated cheaters/cloners)
The ugly:
1. mobile gpu support on 90% of devices is patchwork
2. baked lighting ymmv (we tinted the gpu smoke VFX to cheat volumetric scattering)
3. in-browser games essentially combine the worst aspects of browser memory waste and security-sandbox issues (audio sync is always bad in browser games)
Anecdotally, I would only recommend the engine for server-hosted transactional games (i.e. card or board games could be a good fit).
Otherwise, if people want something that is performant and doesn't look awful... then just use Unreal Engine and hire someone who has mastered efficient shader tricks. =3
In general, most iOS devices are forced to use/link Apple's proprietary JS VM implementation. While Babylon makes things easier, it has often had features nerfed by both Apple's iOS and Alphabet's Android. In the former case this is driven by the App Store walled garden, and in the latter by device design fragmentation.
I like Babylon in many ways too, but we have to acknowledge the limitations in deployment impacting end users. People often end up patching every update Mozilla/Apple/Microsoft pushes.
Thus, it is difficult to deploy something unaffected by platform-specific codecs, media syncing, and interface-hook shenanigans.
This coverage issue is trivial to handle in Unity, Godot, and Unreal.
The App store people always want their cut, and will find convenient excuses to nudge that policy. It is the price of admission on mobile... YMMV =3
Would you have any suggestions on what JS/TS package to use? I built a quick prototype in three.js but I am neither a 3D person nor a web dev, so I would appreciate your advice.
Examples:
- https://audiolabs-erlangen.de/media/pages/resources/MIR/2024...
- https://images.squarespace-cdn.com/content/v1/5ee5aa63c3a410...
1. Use a global fixed 16-bit 44.1 kHz stereo format and a raw uncompressed lossless codec (avoids GPU/hardware-codec and sound-card specific quirks)
2. Don't try to sync your audio to the gpu 24fps+ animations ( https://en.wikipedia.org/wiki/Globally_asynchronous_locally_... ). I'd just try to cheat your display by 10Hz polling a non-blocking fifo stream copy. ymmv
3. Try statically allocated fifo* buffers in wasm, and software mixers to a single output stream for local chunk playback ( https://en.wikipedia.org/wiki/Clock_domain_crossing )
* Recall that fixed-rate producers/consumers should lock relative phase when the garbage collector decides to ruin your day. Things like software FIR filters are also fine, and a single-thread pre-mixed output stream will eventually buffer through whatever abstraction the local users have set up (i.e. while the GC does its thing, playback sounds continuous).
Inside a VM we are unfortunately at the mercy of the garbage collector, and any assumptions JIT compiled languages make. Yet wasm should be able to push io transfers fast enough for software mixers on modern cpus.
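To illustrate point 3: a rough sketch of a statically allocated FIFO in plain JS (names hypothetical; a real version would live in wasm and feed the Web Audio API). The buffer is allocated once up front, the consumer never blocks, and underruns are zero-filled so playback stays continuous:

```javascript
// Fixed-capacity ring buffer of float samples, allocated once up front
// so steady-state playback never allocates (and so never provokes the GC).
class SampleFifo {
  constructor(capacity) {
    this.buf = new Float32Array(capacity);
    this.head = 0; // next write position
    this.tail = 0; // next read position
    this.size = 0; // samples currently buffered
  }
  // Producer side: push as many samples as fit; returns how many were accepted.
  push(samples) {
    let written = 0;
    while (written < samples.length && this.size < this.buf.length) {
      this.buf[this.head] = samples[written++];
      this.head = (this.head + 1) % this.buf.length;
      this.size++;
    }
    return written;
  }
  // Consumer side: non-blocking read into `out`; zero-fills on underrun so
  // the output stream keeps flowing even if the producer stalls.
  pop(out) {
    for (let i = 0; i < out.length; i++) {
      if (this.size > 0) {
        out[i] = this.buf[this.tail];
        this.tail = (this.tail + 1) % this.buf.length;
        this.size--;
      } else {
        out[i] = 0; // underrun: emit silence rather than blocking
      }
    }
  }
}
```

A software mixer would sum several of these into one output stream before handing chunks to the audio sink.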
Best of luck =3
And then select one-meter square patches of land... and one-meter cubes of spots with bushes...
And then make a "Minecraft-looking" world, repeating the grass block all over the place, with occasional dirt and bushes?
I'm guessing I'd need some pretty beefy hardware to render thousands of blocks...
Do you have any insights into the current performance bottlenecks? Especially around dynamic scenes. That particle simulation one seems to struggle but then improves dramatically when the camera is rotated, implying the static background is much heavier than it appears.
And as a counterpoint to the bottlenecks, that Sierpinski pyramid, procedurally, is brilliant.
There was a Gaussian-splat-based Matterport clone at last year's SIGGRAPH. Viewing a 2-bedroom apartment required streaming 1.5 GB.
Cool demo
Fancier compression methods are coming (e.g. SOGS). This is 30 MB!
https://vincentwoo.com/3d/sutro_tower/
Would the format work better if you made that cut-off at something like 1 sigma instead? Then instead of these blurry blobs you'd effectively be rendering ovals with hard edges. I speculate out loud that maybe you could get a better render with fewer hard-edged ovals than tons of blurry blobs.
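To make the speculation concrete, here is a toy comparison of the two falloff rules (the 3-sigma soft cutoff approximates what typical 3DGS rasterizers do; the 1-sigma hard edge is the variant proposed above; both functions are hypothetical illustrations, not any renderer's actual code):

```javascript
// d is the distance from the splat center in units of sigma
// (along the splat's own axes, so the footprint is an oval).

// Standard 3DGS-style soft falloff, evaluated out to a ~3-sigma footprint:
// alpha fades smoothly, which is what produces the blurry blob look.
function softAlpha(opacity, d) {
  return d <= 3 ? opacity * Math.exp(-0.5 * d * d) : 0;
}

// Hard-edged variant: full opacity inside 1 sigma, nothing outside,
// turning each splat into an oval with a crisp boundary.
function hardAlpha(opacity, d) {
  return d <= 1 ? opacity : 0;
}
```

The soft version still carries ~60% opacity at 1 sigma and a long translucent tail beyond it, which is exactly the blur the hard edge would trade away.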
I agree with you, though, that in general 3DGS is a worse representation for hard, flat, synthetic things with hard edges. But on the flip side, I would argue it's a better representation for many organic, real-world things: imagine fur, hair, or leaves on a tree. These are things 3DGS can render beautifully and photorealistically in a way that would require much, much more complex polygon geometry, texturing, and careful sorting and blending of semi-transparent texels. This is one reason why 3DGS has become so popular in scanning and 3D reconstruction: you just get much better results with smaller file sizes. When 3DGS first appeared, everyone was shocked by how photorealistically you could render things in real time on a mobile device!
But one final thought I want to add: with Spark it's not an either/or. You can have BOTH in the same Three.js scene and they will blend together perfectly via the Z-buffer. So you can scan the world around you and render it with 3DGS, and then insert your hard-edged robot character polygon meshes right into that world, and get the best of both!
https://trianglesplatting.github.io/
Do you foresee this project getting a Web Components API like A-Frame and Google's <model-viewer> component (https://modelviewer.dev/)?
On most newer devices the sorting can happen pretty much every frame with approx 1 frame latency, and runs in parallel on a Web Worker. So the sorting itself has minimal performance impact, and because of that Spark can do fully dynamic 3DGS where every splat can move independently each frame!
On some older Android devices it can be a few frames worth of latency, and in that case you could say it's amortized over a few frames. But since it all happens in parallel there's no real impact to the overall rendering performance. I expect for most devices the sorting in Spark is mostly a solved problem, especially with increasing memory bandwidth and shared CPU-GPU memory.
I've implemented a radix sort on the GPU to sort the splats (every frame), and I'm not quite happy with the performance yet. A radix sort (+ prefix scan) is quite involved, with lots of dedicated hierarchical compute shaders. I might have to get back to tuning it.
I might switch to float16s as well, but I'm a bit hesitant, as 1 million+ splats may exceed the precision of halfs.
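For reference, the usual trick that makes radix sort work on float depth keys is remapping the IEEE-754 bits to an order-preserving unsigned integer before the digit passes. A minimal CPU sketch (function names are my own; the GPU version would do the same remap, then one histogram + prefix scan per 8-bit digit):

```javascript
// Map IEEE-754 float32 bits to an order-preserving uint32 key:
// positive floats get the sign bit set; negative floats get all bits flipped,
// so unsigned integer order matches float order.
function floatToSortableUint(f) {
  const buf = new ArrayBuffer(4);
  new Float32Array(buf)[0] = f;
  const bits = new Uint32Array(buf)[0];
  return bits & 0x80000000 ? ~bits >>> 0 : (bits | 0x80000000) >>> 0;
}

// LSD radix sort: 4 stable counting-sort passes of 8 bits each,
// permuting splat indices alongside the keys.
function radixSortIndicesByDepth(depths) {
  const n = depths.length;
  let keys = new Uint32Array(n);
  for (let i = 0; i < n; i++) keys[i] = floatToSortableUint(depths[i]);
  let idx = Uint32Array.from({ length: n }, (_, i) => i);
  let keysTmp = new Uint32Array(n);
  let idxTmp = new Uint32Array(n);
  const count = new Uint32Array(256);
  for (let shift = 0; shift < 32; shift += 8) {
    count.fill(0);
    for (let i = 0; i < n; i++) count[(keys[i] >>> shift) & 0xff]++;
    // Exclusive prefix sum turns counts into output offsets per digit.
    let sum = 0;
    for (let b = 0; b < 256; b++) { const c = count[b]; count[b] = sum; sum += c; }
    for (let i = 0; i < n; i++) {
      const d = (keys[i] >>> shift) & 0xff;
      const dst = count[d]++;
      keysTmp[dst] = keys[i];
      idxTmp[dst] = idx[i];
    }
    [keys, keysTmp] = [keysTmp, keys];
    [idx, idxTmp] = [idxTmp, idx];
  }
  return idx; // splat indices sorted by ascending depth
}
```

With float16 keys the same remap idea applies to the 16 bits, halving the passes, which is part of the appeal despite the precision worry.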
I have spent countless hours playing with R3F, adding vertex and fragment shaders, and eventually giving up. The math is just tedious.
https://github.com/sparkjsdev/spark-react-r3f
The tradeoff is initial complexity (your WebGL "hello world" showing one object will include a shader and priming data arrays for that shader), but as a consequence of its design the API forces more computation into the GPU layer, so the fact that JavaScript is driving it matters very little.
THREE.js adds a nice layer of abstraction atop that metal.
WebGL2 isn't the best graphics API, but it allows anyone to write Javascript code to harness the GPU for compute and rendering, and run on pretty much any device via the web browser. That's pretty amazing IMO!
from your fellow gh accelerator friend, vinnie!
The OCP CAD viewer extension for build123d and cadquery models, for example, is also built on Three.js. https://github.com/bernhard-42/vscode-ocp-cad-viewer
Is this "I worked with some friends and I hope you find useful" or is it "So proud of the World Labs team that made this happen, and we are making this open source for everyone" (CEO, World Labs)?
https://x.com/drfeifei/status/1929617676810572234