Show HN: I built a synthesizer based on 3D physics

389 humbledrone 96 5/2/2025, 6:12:15 PM anukari.com ↗
I've been working on the Anukari 3D Physics Synthesizer for a little over two years now. It's one of the earliest virtual instruments to rely on the GPU for audio processing, which has been incredibly challenging and fun. In the end, predictably, the GUI for manipulating the 3D system actually ended up being a lot more work than the physics simulation.

So far I am only selling it direct on my website, which seems to be working well. I hope to turn it into a sustainable business, and ideally I'd have enough revenue to hire folks to help with it. So far it's been 99% a solo project, with (awesome) contractors brought in for some of the stuff that I'm bad at, like the 3D models and making instrument presets/videos.

The official launch announcement video is here: https://www.youtube.com/watch?v=NYX_eeNVIEU

But if you REALLY want to see what it can do, check out what Mick Cormick did with it on the first day: https://x.com/Mick_Gordon/status/1918146487948919222

I've kept a fairly detailed developer log about my progress on the project since October 2023, which might be of interest to the hardcore technical folks here: https://anukari.com/blog/devlog

I also gave a talk at Audio Developer Conference 2023 (ADC23) that goes deep into a couple of the problems I solved for Anukari: https://www.youtube.com/watch?v=lb8b1SYy73Q

Comments (96)

AaronAPU · 5h ago
Glad I’m not the only audio developer around here.

The landing page needs an immediate audio-visual demo. Not an embedded YouTube video but video.js or similar. Low friction: get across what it sounds and feels like immediately.

My 2 cents

kookamamie · 53m ago
Exactly. Had to scroll for ages to find anything to do with demo audio. A good demo song/track should be the first thing on the page, I think.
senbrow · 3h ago
1000% - I had to dig to be able to find something listenable
airstrike · 6h ago
Really cool stuff! I would suggest putting a 60-second video at the very top of the page that stitches together short clips of the many ways it is awesome.
nayuki · 11h ago
This reminds me of the reverse, where music drives 3D animations. I remember Animusic from the early 2000s.

https://en.wikipedia.org/wiki/Animusic , https://www.animusic.com/ , https://www.youtube.com/results?search_query=animusic , https://www.youtube.com/@julianlachniet9036/videos

humbledrone · 9h ago
I'm a huge fan of Animusic. I remember seeing it for the first time in some big fancy mall in LA and they had it projected on a wall, and I was blown away. It was absolutely an inspiration! Animusic-type ideas are a big part of why I made the 3D graphics fully user-customizable, for anyone who wants to go deep down that rabbit hole.
mjcohen · 10h ago
I have the first two Animusic reels (vhs and dvd) and thought they were great. Unfortunately, the creator scammed people by taking money for Animusic 3 and then not making anything.

Most of them are on youtube.

tarentel · 12h ago
Not sure I'll ever use this as it seems like a lot of work but wanted to say thank you for allowing me to download a demo without giving an email.

Also, even though I said I wouldn't use it, something that would be nice is a master volume, maybe I missed it. I often use VSTs standalone and being able to change the volume without messing with the preset would make it a bit easier to use.

Definitely the most interesting synth I've ever seen.

humbledrone · 12h ago
Thanks, yeah, it really should have a master volume -- you didn't miss it, it's just not there yet!
ssfrr · 7h ago
I’m very curious about your experience doing audio on the GPU. What kind of worst-case latency are you able to get? Does it tend to be pretty deterministic or do you need to keep a lot of headroom for occasional latency spikes? Is the latency substantially different between integrated vs discrete GPUs?
humbledrone · 6h ago
Short answer: it has been a big pain in the butt. The GPU hardware is mostly really great, but the drivers/APIs were not designed for such a low-latency use case. There's (for audio) a large overhead latency in kernel execution scheduling. I've had to do a lot of fun optimization in terms of just reducing the runtime of the kernel itself, and a lot of less-fun evil dark magic optimization to e.g. trick macOS into raising the GPU clock speed.

Long answer: I've written a fair bit about this on my devlog. You might check out these tags:

https://anukari.com/blog/devlog/tags/gpu https://anukari.com/blog/devlog/tags/optimization
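
If it helps to see why the scheduling overhead dominates at small buffers, here's a toy back-of-the-envelope calculation -- not Anukari code, and the 0.5 ms dispatch overhead and 2 us/sample kernel cost are made-up numbers just to illustrate the shape of the problem:

    // Toy model: fixed per-dispatch GPU overhead vs. the real-time budget
    // at different audio buffer sizes. Numbers are illustrative only.
    #include <cstdio>

    int main() {
        const double sampleRate = 48000.0;
        const double dispatchOverheadMs = 0.5;    // assumed fixed scheduling/round-trip cost
        const double kernelCostPerSampleUs = 2.0; // assumed simulation cost per sample
        const int bufferSizes[] = {64, 128, 256, 512, 1024};

        for (int bufferSize : bufferSizes) {
            double budgetMs = 1000.0 * bufferSize / sampleRate;
            double costMs = dispatchOverheadMs + bufferSize * kernelCostPerSampleUs / 1000.0;
            std::printf("buffer %4d: budget %6.2f ms, est. cost %6.2f ms (%s)\n",
                        bufferSize, budgetMs, costMs,
                        costMs < budgetMs ? "ok" : "underrun risk");
        }
    }

At 64 samples the fixed overhead alone is a big chunk of the ~1.3 ms budget, while at 512 and up it mostly disappears into the noise.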

ssfrr · 4h ago
Thanks for the extra info, I read through some of your entries on GPU optimization and it definitely seems like it's been a journey! Thanks for blazing the trail.
florilegiumson · 8h ago
Really cool to see GPUs applied to sound synthesis. Didn't realize that all one needed to do to keep up with the audio thread was to batch computations at the size of the audio buffer. I'm fascinated by the idea of doing the same kind of thing for continua in the manner of Stefan Bilbao: https://www.amazon.com/Numerical-Sound-Synthesis-Difference-...

Although I wonder if mathematically it’s the same thing …
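
For anyone curious, the core of a Bilbao-style scheme is tiny -- a minimal 1D finite-difference string (leapfrog update, fixed ends) looks roughly like this. It's a generic textbook sketch, not anything from Anukari:

    // Minimal 1D wave-equation string via finite differences.
    // lambda2 = (c*dt/dx)^2 must be <= 1 for stability (CFL condition).
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 100;
        const double lambda2 = 0.9;
        std::vector<double> prev(N, 0.0), cur(N, 0.0), next(N, 0.0);
        cur[N / 2] = 1.0;  // "pluck" the middle

        for (int step = 0; step < 200; ++step) {
            for (int i = 1; i < N - 1; ++i) {
                next[i] = 2.0 * cur[i] - prev[i]
                        + lambda2 * (cur[i + 1] - 2.0 * cur[i] + cur[i - 1]);
            }
            prev.swap(cur);   // fixed (Dirichlet) ends stay at zero
            cur.swap(next);
            std::printf("%f\n", cur[N / 4]);  // read a "pickup" point
        }
    }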

sitkack · 12h ago
I would love to watch (and listen) to a discussion between you and Noah from Audiocube, https://news.ycombinator.com/item?id=42877399 https://main.audiocube.app/ a 3d spatial DAW.
humbledrone · 9h ago
I have been peripherally aware of Audiocube for a while, and it seems ridiculous that he and I have not interacted in any way. Maybe I'll bug him sometime. :)
adzm · 8h ago
Note they are referring to Mick Gordon, who is notable for the recent DOOM soundtrack. DOOM Eternal has a truly phenomenal score. "Mick Cormick" is a typo, I believe.

Congratulations!!

humbledrone · 8h ago
Oh goodness, that's truly embarrassing that I typoed "Mick Cormick" instead of Mick Gordon. O_o I wonder if my brain somehow crossed wires with John Carmack. Thanks for the correction!
tbalsam · 6h ago
McCormick is a popular brand of seasonings hahaha

https://i5.walmartimages.com/seo/McCormick-Pure-Ground-Black...

throwaway7894 · 3h ago
Of note is the drama surrounding the Doom Eternal soundtrack: https://medium.com/@mickgordon/my-full-statement-regarding-d...
nyanpasu64 · 1h ago
The sounds seem to mostly be modulated sines with limited timbral variety? I'm not sure how https://youtu.be/NYX_eeNVIEU?t=179 got a harmonic series out of the building blocks.
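
My guess (purely speculation, I haven't looked at how the patches are built) is that phase-modulating one sine by another at an integer frequency ratio is enough: the sidebands land exactly on harmonics. A quick sketch of that effect:

    // Phase modulation with fm == fc produces a harmonic spectrum
    // (sidebands at fc +/- k*fm fall on integer multiples of fc).
    #include <cmath>
    #include <cstdio>

    int main() {
        const double pi = 3.14159265358979323846;
        const double sr = 48000.0, fc = 220.0, fm = 220.0, index = 2.0;
        for (int n = 0; n < 4800; ++n) {  // 100 ms of samples
            double t = n / sr;
            double y = std::sin(2.0 * pi * fc * t
                                + index * std::sin(2.0 * pi * fm * t));
            std::printf("%f\n", y);
        }
    }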
sunray2 · 12h ago
Thank you for this, it looks very cool!

Remind me of Korg's Berlin branch with their Phase8 instrument: https://korg.berlin/ . Life imitates art imitates life :)

I highly support and encourage this. Is there a way I could contribute to Anukari at all (I'm a physicist by day)? These kinds of advancements are the stuff I would live for! However I should stay rooted in what's possible or helpful: I'm not sure if this is open-source for example. As long as I could help, I'm game.

humbledrone · 12h ago
For the foreseeable future I'm just going to be working on stability/performance, but eventually I will get back to adding more cool physics stuff. It's not open-source, but certainly I'd enjoy talking to a real physicist (I'm something a couple notches below armchair-level). Hit me up at evan@anukari.com sometime if you like!
sunray2 · 12h ago
Thanks, will hit you up later!

I was using the demo just now: the sounds you get out of this are actually better than I expected! And I see what you meant in the videos about the editing being intuitive rather than abstract.

Although, I was often hitting 100% CPU with some presets, with the sound glitching accordingly, so I could only experiment in part. I'm on an M1 Pro; initially I set a 128-sample buffer size in Ableton, but most presets were glitching. I then set it to 2048 just to check for improvement, which helped, though that does seem a bit high. Maybe my audio settings are incorrect? I can give more info later if it helps you.

humbledrone · 9h ago
Yeah, performance at low buffer sizes is a big challenge. Generally I recommend 512 or higher, which I know is not great, but right now it's the most practical thing. The issue is that the computation is all done on the GPU, and there's a round-trip latency that has to be amortized. One day I'd like to convince Apple to work on the kernel scheduling latency...
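
(For a rough sense of scale: at 48 kHz, a 128-sample buffer is only about 2.7 ms per block, while 512 samples is about 10.7 ms, so the fixed round-trip overhead is a much smaller slice of the budget at 512.)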
imhoguy · 10h ago
This is so cool and has unlimited potential, like you could model real instruments, e.g. guitar to experiment with resonant chamber shapes, materials etc. Can't upvote enough on good old perpetual licensing model!
smolder · 1h ago
I have mixed feelings about all of my supposed clever ideas being executed on by other people way ahead of me, but this is cool and you have my respect.
jbverschoor · 1h ago
Man. “Sheet music is not portable”. I immediately thought about the Apple Vision Pro

The battery-less medium is something we desperately try to mimic

michaelhoney · 9h ago
So many of us have ideas for something cool and never build them. You did it. I salute you, you madman
modeless · 7h ago
Love physics based audio! Using the GPU is a great idea.

Another physical audio simulation I like is the engine sound simulator made by AngeTheGreat: https://youtu.be/RKT-sKtR970?si=t193nZwh-jaSctQM

humbledrone · 6h ago
His stuff is so incredibly cool. He has a video on physical modeling for trumpets using the GPU and for a second I thought he might be building a competitor! :)
corytheboyd · 13h ago
Whoa this looks really cool! I love how you made something physically 3D to stand out in a world full of 2D knobs and sliders… but it still has 2D sliders because those work the best for dialing things in with precision.
hamoid · 9h ago
Looks very fun :-) Does anyone notice issues with the sound quality? In many of the examples I hear clicking: sometimes as if the attack is too high, or as if there is some kind of aliasing or sample rate issue, or just clipping. Probably noticeable with headphones. For instance in the announcement video at 3:11 or in the "J.S. Bach, Prelude in C Major (BWV 846)" video between 4.4s and 7.2s. It's somewhat visible if I load that audio in Audacity and turn on the spectrogram view with these settings: Logarithmic, 200 to 6000 Hz. Algorithm: Reassignment, 1024, Blackman-Harris, 1. Colors: 50, 40, 50.

What's odd is that I hear the glitches in Firefox and in the file downloaded with yt-dlp, but not in Chromium. Is Google serving me bad audio on purpose?

Correction: some videos also have glitches on Chromium.

humbledrone · 8h ago
Screen-recording Anukari has been a bit of a challenge, as OBS works best while using GPU encoding, and also seems to do things that the GPU doesn't like in general (and Anukari uses the GPU). I suspect what you're hearing in the videos has to do with that. But also I'm sure that the model for the mic compression could be improved, and I'm not sure about the default attack time, etc.
fc417fc802 · 7h ago
If screen recording is actually the thing causing the issue you might try CPU encoding with one of the fast lossless codecs and doing the "real" encoding in a second pass later. As a bonus, software encoding should also give a higher quality result. That does require an SSD and quite a bit of free space though.
siavosh · 9h ago
So beautiful - I wonder what kind of an instrument an AI can build, creating sounds never before heard...
brookst · 4h ago
I’m so tempted to buy, but some info is missing on the website:

- If I buy once can I run it on both my Windows desktop and MacBook travel computer?

- If so, are files compatible between them?

- What are GPU requirements on Windows? I’m sure it scales, but is a 3080 overkill or not enough?

mutagen · 2h ago
My account shows 3 devices available to install on, and I can disable computers on demand. It runs well on my M1 and my 3060, and handles all but the most demanding assemblies even on my little work laptop with onboard Intel graphics.

I assume files are compatible, presets are the same on both MacOS and Windows.

ziddoap · 5h ago
I'm not really familiar with audio stuff, but holy do I ever appreciate the write-ups you've done. This is absolutely fascinating stuff. I'm eager to keep reading. The video from Mick Gordon was awesome, too.

Congratulations on the launch, and best of luck!

gregschlom · 13h ago
Absolutely awesome! I know nothing about music production but I want to play with it just for fun. Maybe a very simplified, web-based version for people who just want to play a bit? Would be awesome.

Congrats on the hard work and the launch, in any case!

Edit: I see you have a demo mode, that's great! Exactly what I was looking for

bufferoverflow · 11h ago
Dang it. I am working on the same thing, but in 2D.
sunray2 · 11h ago
Don't be discouraged! It might even be that 2D is better than 3D in this case: it's all about how it sounds, right? And if a 2D simulation can be less expensive than a 3D while sounding just as good or better, it works in your favour!

I think that's the real key to this stuff: what makes these things actually sound good?

humbledrone · 9h ago
There are a lot of advantages to 2D -- you could simulate more objects and more complex interactions with lower computational demands, and as other comments say, it will likely be way easier to build a better GUI. Think about how many compressor VSTs there are out there, and still people keep making them! And a 2D Anukari could be much more different from 3D Anukari than most compressors are from one another.
IshKebab · 10h ago
I think 2D is probably the better move for this thing. 3D doesn't really open many possibilities that you can't do in 2D, and it adds loads of UI awkwardness.
chadcmulligan · 7h ago
3D is cooler, but 2D is easier for people to use; there was some research on it, though I don't have it at hand

edit: a hn thread https://news.ycombinator.com/item?id=19961812

dylanz · 8h ago
This is insane. I've used tons of virtual synths in my life but this is by far the coolest I've ever seen. Mick's video was amazing!
an_aparallel · 10h ago
Hey Evan, just wondering: can you import 3D models into this environment? I'm still pining for a less "code"-driven environment for this than Max/MSP modalsys.
humbledrone · 9h ago
You can replace all the 3D models and skyboxes. Everything is in open formats, wrapped up in zip files. See:

https://anukari.com/support/faq#custom-skyboxes https://anukari.com/support/faq#custom-skins

AFAIK nobody has attempted this yet, so the write-up might not be perfect. If you try it, let me know how it goes!

an_aparallel · 8h ago
Awesome. Say I import a bugle model, for argument's sake: does Anukari support a wind stream? I don't see air/wind listed as an object like you see bow and pluck
humbledrone · 6h ago
No wind-type model so far, though the bow model can do some rather flute-y sounds (in fact for the current bow model, I might even argue that it is better for pan flute stuff than sounding like a realistic bow).
an_aparallel · 1h ago
Thanks Evan. Is that because wind is complex to model?
erwincoumans · 4h ago
Very nice to see. Maybe nice in VR, streaming to Quest 3/Apple Vision Pro (OpenVR/WebXR?)
peteforde · 5h ago
I am your target market, and I can't wait to try out the demo once I'm back from Superbooth and allowed to be distracted even a little.

... but I wanted to say that even with all of the glowing feedback, US$70 for a beta v1 soft synth is a big enough ticket that it will be off-putting to some and difficult to afford for others. Yes, there are many [much] more expensive virtual instruments, and this occupies a pretty unique space. But if you're open to feedback, this is my initial gut reaction.

One thing I am surprised by is that there's no mention of VR/AR ambitions. When I fantasize about 3D instruments, I do so in the context of wanting to interact with them in a space I inhabit. Does this speak to you as well?

junon · 11h ago
Ha! I've had this idea for ages but never had the urge to make it. I'll have to play with it, thanks for sharing!
sneak · 1h ago
Consider putting the purchase CTA inside the app, and just having a giant DOWNLOAD NOW button above the fold. The download for the demo and the real one should be the same download.

Then I don’t have to make a buying decision up front - I can get it on my computer and running first in all cases.

titaphraz · 9h ago
Seriously cool! A Linux build on the horizon?
humbledrone · 8h ago
rvba · 12h ago
The website should have a better YouTube video, and it should be at the beginning
humbledrone · 12h ago
You are 100% correct. It turns out that I am an OK engineer, and a terrible marketer. There is a LOT that I need to improve on the site and the videos.
mkl · 10h ago
Every one of those pretty-looking screenshots should have a play button with a few seconds of audio. It's really strange to market a synthesiser with visuals instead of sound.
tbalsam · 10h ago
Yes, it's a synthesizer -- you may know it inside and out, but having demo videos showing what it can do will help people with no context get that quick "ahhh, that makes sense" moment. :)
pierrec · 4h ago
>a terrible marketer

I wouldn't go that far; apart from this point, the landing page is excellent.

shannifin · 12h ago
Some little audio examples would also be nice so visitors don't have to scroll through the video to hear them.

Still, awesome work!

imhoguy · 10h ago
Yeah, all the cool 3D pictures should play demo videos on click!
royal__ · 8h ago
This is crazy, incredible work.
dfedbeef · 12h ago
Any chance we'll get a Linux VST or CLAP?
humbledrone · 12h ago
Linux: I very much want to do this, but unfortunately it's lower priority than getting things rock-solid on windows/mac. It's not a gigantic amount of work, but it's not trivial. Hopefully I can do it once things calm down with the Beta.

CLAP: I'm using the JUCE framework for plugin integrations, which doesn't currently support CLAP. But their roadmap says that the next major version will support CLAP, and I will definitely implement that in Anukari. Not sure when JUCE 9 comes out though, it could be a while.

thrtythreeforty · 11h ago
There's an external plugin to emit a CLAP plugin for JUCE 8: https://github.com/free-audio/clap-juce-extensions

It works quite well, but it's also reasonable to wait for official framework support.

humbledrone · 9h ago
Thanks, that could be good if the JUCE support is going to be way off.
ghawkescs · 11h ago
Incredible work and a very creative product. I can't wait to see what is created using Anukari.
chaosprint · 3h ago
Very impressive. Is it built with JUCE?
exodust · 2h ago
I like the perpetual license, no AI, and customisable 3D models and animations. This last feature hopefully opens up potential for making creatively synchronised graphics such as animating the expression - say the mouth shape on a 3D face in response to modulation. I wonder if the animation needs to be a fixed amount of frames or length?
ww520 · 12h ago
Wow. This is so cool. It opens up a different approach to the problem.
nprateem · 3h ago
Your beta video assumes I know why I should care and jumps straight into technicals.

This is potentially new to producers. Tell them why they should care first.

akomtu · 6h ago
At a glance, this looks like a bunch of coupled oscillators. A natural extension of this idea is strings: a 1D array of oscillators modelling a wave equation. For example, a piano sound can be modelled by attaching a basic oscillator to one end of a string and a mic to the other end. The string and the oscillator push each other, creating the piano tone. Real pianos use 3 such strings with different properties.

Another idea. What if you make a circular string and attach 1 or more oscillators at random points? Same idea as above, but more symmetric. This "sound ring" instrument may produce unreal sounds.
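
A rough sketch of the "drive one end, listen at the other" idea as a mass-spring chain -- toy numbers chosen for stability at 48 kHz, nothing to do with how Anukari actually integrates things:

    // 1D chain of unit masses and springs, semi-implicit Euler.
    // One end is driven by a sine; the "mic" reads near the far (fixed) end.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 64;
        const double pi = 3.14159265358979323846;
        const double dt = 1.0 / 48000.0, k = 2.0e5, damping = 0.999;
        std::vector<double> pos(N, 0.0), vel(N, 0.0);

        for (int n = 0; n < 4800; ++n) {  // 100 ms
            pos[0] = 0.01 * std::sin(2.0 * pi * 110.0 * n * dt);  // driven end
            for (int i = 1; i < N - 1; ++i) {
                double force = k * (pos[i - 1] - 2.0 * pos[i] + pos[i + 1]);
                vel[i] = damping * (vel[i] + force * dt);
            }
            for (int i = 1; i < N - 1; ++i) pos[i] += vel[i] * dt;
            std::printf("%f\n", pos[N - 2]);  // "mic" output
        }
    }

The circular-string version is the same update with wrap-around indexing instead of fixed ends.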

Eduard · 8h ago
is this about sound generation? Because I didn't find any sample sounds on this wall of text and pictures.
humbledrone · 8h ago
Sorry about that. It's just me working on it, and so far my personal tenet is that the product itself is the top priority, and all else including marketing comes second, so none of the website/youtube/etc are as good as I'd like. Possibly I'll soon have money to hire help with some of that, or I'll get the product to a place where I'm happy enough to work more on marketing myself.
throwpoaster · 13h ago
Very neat!

Is the simulation deterministic?

humbledrone · 12h ago
Yep! I have a lot of unit/integration tests that were a lot easier to write and more reliable because the simulation is fully deterministic. It does produce slightly different results on different GPUs due to small differences in the FP operations. (For this application it's really beneficial to let the compiler go crazy with FP re-ordering to get speed.)
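
A tiny illustration of the kind of difference FP re-ordering makes (just the general effect, not Anukari's kernels):

    // Float addition is not associative, so letting the compiler/GPU
    // re-associate sums changes the low-order bits of the result.
    #include <cstdio>

    int main() {
        float a = 1e8f, b = -1e8f, c = 1.0f;
        std::printf("(a+b)+c = %g\n", (a + b) + c);  // 1
        std::printf("a+(b+c) = %g\n", a + (b + c));  // 0, c is absorbed into b
    }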
fsckboy · 12h ago
does it benefit from or even require top end GPUs to get the best results?
humbledrone · 9h ago
mutagen sums up my experience pretty well -- I have tested it on a laptop with a pretty wimpy Intel Iris chipset and it definitely works, but you might be limited in terms of how complex of a preset you can run. But there is a LOT of fun to be had with small presets so it may still be worthwhile. Be sure to install the absolute most up-to-date drivers from Intel directly, and use a larger buffer size.
mutagen · 10h ago
I've been able to run it on an Intel laptop with integrated video. I haven't been able to test the most complex models / presets, I might give that a shot this weekend and see where it falls apart.
pjbk · 13h ago
Cool idea. Sounds like AAS Chromaphone on steroids.

fractallyte · 11h ago
I second the recommendation: even if you don't have a Twitter/X account, find some way to watch Mick Gordon's session!

I find it hysterically funny, but at the same time, it really shows what this synth is capable of.

Excellent!

TheOtherHobbes · 9h ago
Fun :)
carterschonwald · 12h ago
This is super super duper cool. Thx for sharing
drcongo · 12h ago
I like the look of this, do you have any plans to release it for iPad?
humbledrone · 12h ago
The thought has definitely occurred to me. It's always been in the back of my mind on the "if it shows some success..." list. Glad to hear there's interest, I think it would be really fun if I implemented proper multi-touch. There are some other details I'd need to think through, though, since right now it mostly assumes you have a MIDI keyboard, but on an iPad it's just begging for touch controls for a lot of that stuff.
endofreach · 11h ago
I'd focus on iPad and an awesome multitouch experience. App Store sales are easier, and I'd bet Apple would spotlight it.
yapyap · 13h ago
It definitely looks really cool! As an outsider to audio stuff with an okay amount of knowledge, I'm curious about the workflow for sure.
humbledrone · 12h ago
You can get a sense for the workflow in the quick-start tutorial: https://www.youtube.com/watch?v=Gva-BVuWbUA. It doesn't cover the advanced features but it should give you an idea of what it's like!
anigbrowl · 10h ago
Not supported: Intel-based Macs

Boo

TheOtherHobbes · 9h ago
I don't think Intel Macs have the grunt for this. Physical modelling of complex networks is pretty intensive.

You can take it a stage further and model networks of complex shapes like metal plates. That gets even more interesting because you get multiple resonant modes.

In the limit you could use finite element modelling to create precise simulations of acoustic instruments - like all of the strings in a piano, all of the dampers, the resonator, and the wooden enclosure.

But that's a brute force way to do it, and there isn't nearly enough compute available to make it happen in real time. (You might be able to do it on a supercomputer. I'm not aware of anyone who's tried.)

anigbrowl · 8h ago
I feel like you're overlooking the fact that it's GPU-based (sorta the whole point), and that's why it also runs on Windows.
newobj · 13h ago
Whoa this is sick
badmonster · 13h ago
looks really cool! congrats!
vid43 · 12h ago
This is so cool.