Show HN: Whispering – Open-source, local-first dictation you can trust

37 points by braden-w | 11 comments | 8/18/2025, 4:52:29 PM | github.com
Hey HN! Braden here, creator of Whispering, an open-source speech-to-text app.

I really like dictation. For years, I relied on transcription tools that were almost good, but they were all closed-source. Even many that claimed to be “local” or “on-device” were still black boxes that left me wondering where my audio really went.

So I built Whispering. It’s open-source, local-first, and most importantly, transparent with your data. All your data is stored locally on your device. For me, the features were good enough that I left my paid tools behind (I used Superwhisper and Wispr Flow before).

Productivity apps should be open-source and transparent with your data, but they also need to match the UX of paid, closed-source alternatives. I hope Whispering is near that point. I use it for several hours a day, from coding to thinking out loud while carrying pizza boxes back from the office.

Here’s an overview: https://www.youtube.com/watch?v=1jYgBMrfVZs, and here’s how I personally am using it with Claude Code these days: https://www.youtube.com/watch?v=tpix588SeiQ.

There are plenty of transcription apps out there, but I hope Whispering adds some extra competition from the OSS ecosystem (one of my other OSS favorites is Handy https://github.com/cjpais/Handy). Whispering has a few tricks up its sleeve, like a voice-activated mode for hands-free operation (no button holding), and customizable AI transformations with any prompt/model.
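To give a flavor of what a voice-activated mode boils down to, here is a minimal energy-threshold sketch in TypeScript. This is a hypothetical illustration of the general technique, not Whispering's actual implementation (production VADs such as Silero or WebRTC VAD are far more robust):

    // Hypothetical energy-threshold voice activation -- NOT Whispering's
    // actual code. Start when a frame is loud enough, stop only after a
    // sustained stretch of silence so brief pauses don't cut you off.
    type VadState = { speaking: boolean; silentFrames: number };

    const THRESHOLD = 0.01;  // RMS level that counts as speech (tune per mic)
    const HANG_FRAMES = 30;  // ~1s of silence at 32ms frames before stopping

    function rms(frame: Float32Array): number {
      let sum = 0;
      for (const s of frame) sum += s * s;
      return Math.sqrt(sum / frame.length);
    }

    function step(state: VadState, frame: Float32Array): VadState {
      if (rms(frame) > THRESHOLD) return { speaking: true, silentFrames: 0 };
      const silentFrames = state.silentFrames + 1;
      return { speaking: state.speaking && silentFrames < HANG_FRAMES, silentFrames };
    }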

Whispering used to be in my personal GH repo, but I recently moved it into a larger project called Epicenter (https://github.com/epicenter-so/epicenter), which I should explain a bit...

I’m basically obsessed with local-first open-source software. I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it.
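To make that concrete, here is a rough sketch of what reading from such a shared folder could look like, assuming the better-sqlite3 package. The folder layout, file names, and table schema below are invented for illustration and are not Epicenter's actual format:

    // A minimal sketch of the "shared folder" idea: plaintext any app can
    // read, plus a SQLite file any app can query. Paths, filenames, and the
    // table below are invented for illustration -- not Epicenter's schema.
    import Database from "better-sqlite3";
    import { readFileSync, readdirSync } from "node:fs";
    import { join } from "node:path";

    const root = join(process.env.HOME ?? ".", ".epicenter"); // hypothetical location

    // Plaintext notes: grep-able, diff-able, no lock-in.
    for (const name of readdirSync(join(root, "notes"))) {
      console.log(name, readFileSync(join(root, "notes", name), "utf8").slice(0, 80));
    }

    // Structured data: any tool can open the same SQLite file read-only.
    const db = new Database(join(root, "epicenter.db"), { readonly: true });
    console.log(db.prepare("SELECT id, text, created_at FROM transcripts LIMIT 5").all());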

Whispering is the first app in this effort. It doesn't tap into that shared memory yet, but it's getting there. I'll probably write more about the bigger picture soon, but mainly I just want to make software and let it speak for itself (no pun intended in this case!), so this is my Show HN for now.

I just finished college and was about to move back in with my parents and work on this instead of getting a job…and then I somehow got into YC. So my current plan is to cover my living expenses and use the YC funding to support maintainers, our dependencies, and people working on their own open-source local-first projects. More on that soon.

Would love your feedback, ideas, and roasts. If you would like to support the project, star it on GitHub here (https://github.com/epicenter-so/epicenter) and join the Discord here (https://go.epicenter.so/discord). Everything’s MIT licensed, so fork it, break it, ship your own version, copy whatever you want!

Comments (11)

dumbmrblah · 14m ago
I’ve been using Whispering for about a year now, and it has really changed how I interact with the computer. I make sure to buy mice or keyboards that have programmable hotkeys so that I can use the shortcuts for Whispering. I can’t go back to regular typing at this point; it just feels super inefficient. Thanks again for all your hard work!
glial · 13m ago
This is wonderful, thank you for sharing!

Do you have any sense of whether this type of model would work with children's speech? There are plenty of educational applications that would value a privacy-first, locally deployed model. But my understanding is that Whisper performs pretty poorly with younger speakers.

braden-w · 45m ago
For those checking out the repo this morning, I'm in the middle of a release that adds Whisper C++ support!

https://github.com/epicenter-so/epicenter/pull/655

After this pushes, we'll have far more extensive local transcription support. Just fixing a few more small things :)

solarkraft · 54m ago
Cool! I just started becoming interested in local transcription myself.

If you add Deepgram listen API compatibility, you can do live transcription via either Deepgram (duh) or OWhisper: https://news.ycombinator.com/item?id=44901853

(I haven’t gotten the Deepgram JS SDK working with it yet; currently awaiting a response from the maintainers.)
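For context, Deepgram's listen API is a streaming WebSocket that accepts raw audio and emits JSON transcripts, which is why a compatible local server can be a drop-in swap. Here is a rough TypeScript sketch using the ws package; the local OWhisper address below is a guess, not a documented endpoint:

    // Sketch of why "listen API compatibility" is handy: the same streaming
    // client can point at Deepgram's hosted API or a compatible local server.
    import WebSocket from "ws";

    const useLocal = process.env.USE_LOCAL === "1";
    const url = useLocal
      ? "ws://localhost:8080/v1/listen?encoding=linear16&sample_rate=16000" // hypothetical OWhisper address
      : "wss://api.deepgram.com/v1/listen?encoding=linear16&sample_rate=16000";

    const ws = new WebSocket(url, {
      headers: useLocal ? {} : { Authorization: `Token ${process.env.DEEPGRAM_API_KEY}` },
    });

    ws.on("open", () => {
      // A real client would stream mic audio; here, 100ms of 16kHz mono silence.
      ws.send(Buffer.alloc(3200));
    });

    ws.on("message", (data) => {
      const transcript = JSON.parse(data.toString()).channel?.alternatives?.[0]?.transcript;
      if (transcript) console.log(transcript);
    });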

braden-w · 47m ago
Thank you for checking it out! Coincidentally, it's on the way:

https://github.com/epicenter-so/epicenter/pull/661

In the middle of a huge release that sets up FFmpeg integration (OWhisper needs very specifically formatted files), but hoping to add this after!

Johnny_Bonk · 44m ago
Great work! I've been using Willow Voice, but I think I will migrate to this (much cheaper). They do have great UI/UX, though: you just hit a key to start recording and the transcription goes into whatever text input you want. I haven't installed Whispering yet but will do so. P.S.
braden-w · 10m ago
Amazing, thanks for giving it a try! Let me know how it goes and feel free to message me any time :) Happy to add any features that you miss from closed-source alternatives!
newman314 · 32m ago
Does Whispering support semantic correction? I was unable to find confirmation while doing a quick search.
braden-w · 15m ago
Hmm, we support prompts at two levels: 1. the model level (Whisper supports a "prompt" parameter that sometimes works) and 2. the transformations level (inject the transcribed text into a prompt and get the output from an LLM of your choice). Unsure how else semantic correction could be implemented, but I'm always open to expanding the feature set greatly over the next few weeks!
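To illustrate those two levels, here is a sketch using OpenAI's SDK as a stand-in (Whispering lets you pick your own provider and model, so this is not its internal code; the file name, models, and prompts are placeholders):

    // The two levels above, sketched with OpenAI's Node SDK as a stand-in.
    import OpenAI from "openai";
    import { createReadStream } from "node:fs";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // 1. Model level: Whisper's `prompt` parameter can bias spelling/jargon.
    const transcription = await openai.audio.transcriptions.create({
      file: createReadStream("recording.wav"),
      model: "whisper-1",
      prompt: "Vocabulary: Whispering, Epicenter, SQLite, local-first.",
    });

    // 2. Transformation level: pipe the transcript through any LLM for cleanup.
    const corrected = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "Fix grammar and homophones; otherwise keep the wording." },
        { role: "user", content: transcription.text },
      ],
    });

    console.log(corrected.choices[0].message.content);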
joshred · 5m ago
They might not know how Whisper works. I suspect that the answer to their question is 'yes', and the reason they can't find a straightforward answer in your project is that the answer is so obvious to you that it's hardly worth documenting.

Whisper transcribes by transforming audio data into LLM-style text output. The transcripts generally have proper casing and punctuation, and can usually stick to a specific domain based on the surrounding context.

codybontecou · 6m ago
Now we just need text-to-speech so we can truly interact with our computers hands-free.