Launch HN: Hyprnote (YC S25) – An open-source AI meeting notetaker
Source code: https://github.com/fastrepl/hyprnote Demo video: https://hyprnote.com/demo
We built Hyprnote because some of our friends told us that their companies banned certain meeting notetakers due to data concerns, or they simply felt uncomfortable sending data to unknown servers. So they went back to manual note-taking - losing focus during meetings and wasting time afterward.
We asked: could we build something just as useful, but completely local?
Hyprnote is a desktop app that transcribes and summarizes meetings on-device. It captures both your mic input and system audio, so you don't need to invite bots. It generates a summary based on the notes you take. Everything runs on local AI models by default, using Whisper and HyprLLM. HyprLLM is our proof-of-concept model fine-tuned from Qwen3 1.7B. We learned that summarizing meetings is a very nuanced task and that a model's raw intelligence (or parameter count) doesn't matter THAT much. We'll release more details on evaluation and training once we finish the 2nd iteration of the model (it's still not that good; we can make it a lot better).
Whisper inference: https://github.com/fastrepl/hyprnote/blob/main/crates/whispe...
AEC inference: https://github.com/fastrepl/hyprnote/blob/main/crates/aec/sr...
LLM inference: https://github.com/fastrepl/hyprnote/blob/main/crates/llama/...
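To make the flow concrete, here's a rough sketch of what the on-device pipeline looks like. This is not the actual Hyprnote code (the real implementations live in the crates linked above); transcribe and generate are hypothetical stand-ins for the Whisper and HyprLLM inference calls.

    // Rough sketch of the on-device flow, not the actual Hyprnote code.
    // `transcribe` and `generate` are hypothetical stand-ins for the
    // Whisper and HyprLLM inference crates linked above.

    fn transcribe(_mixed_audio: &[f32]) -> String {
        // Whisper runs locally over the mixed mic + system-audio stream.
        "SPEAKER 1: Let's ship the Linux build next sprint.".to_string()
    }

    fn generate(prompt: &str) -> String {
        // HyprLLM (fine-tuned from Qwen3 1.7B) also runs locally.
        format!("## Summary\n- (model output for a {}-char prompt)", prompt.len())
    }

    fn enhance(raw_notes: &str, audio: &[f32]) -> String {
        let transcript = transcribe(audio);
        // The user's chicken-scratch notes steer what the summary focuses on.
        let prompt = format!(
            "You are a meeting notetaker.\n\nUser notes:\n{raw_notes}\n\n\
             Transcript:\n{transcript}\n\nEnhance the notes into a full summary."
        );
        generate(&prompt)
    }

    fn main() {
        let audio = vec![0.0f32; 16_000]; // placeholder: 1 s of silence at 16 kHz
        println!("{}", enhance("- linux build?\n- pricing", &audio));
    }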
We also learned that for some folks, having full control over their data was as important as privacy. So we support custom endpoints, allowing users to bring their company's internal LLM. For teams that need integrations, collaboration, or admin controls, we're working on an optional server component that can be self-hosted. Lastly, we're exploring ways to make Hyprnote work like VS Code, so you can install extensions and build your own workflows around your meetings.
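On the custom-endpoint point: "bring your own LLM" typically just means pointing the app at an OpenAI-compatible chat-completions server inside your network. Here's a minimal sketch of what that request looks like; the URL, model name, and key are placeholders, and the real configuration is done in the app's settings.

    // Minimal sketch of sending a summarization request to a company-internal,
    // OpenAI-compatible endpoint instead of a public provider. The URL, model
    // name, and key are placeholders; the real configuration is done in the app.
    // Assumes reqwest (with "blocking" + "json" features) and serde_json.

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let body = serde_json::json!({
            "model": "internal-llm", // whatever your gateway exposes
            "messages": [
                { "role": "system", "content": "Summarize the meeting transcript." },
                { "role": "user", "content": "SPEAKER 1: ... (transcript here)" }
            ]
        });
        let resp = reqwest::blocking::Client::new()
            .post("https://llm.internal.example.com/v1/chat/completions")
            .bearer_auth(std::env::var("INTERNAL_LLM_KEY")?)
            .json(&body)
            .send()?
            .text()?;
        println!("{resp}");
        Ok(())
    }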
We believe privacy-first tools, powered by local models, are going to unlock the next wave of real-world AI apps.
We're here and looking forward to your comments!
The social proof logo list is an old scheme on the growth hacking checklist. There was a time when it was supposed to mean the company had purchased the software. Now it just means they knew someone who worked at those companies who said they’d check it out.
At this point, when I visit a small product’s landing page and see the logo list the first thing I think of is that they’re small and desperate to convince me they’re not.
Of course, dishonesty is as old as time, but these last couple of years have been hard to watch…
If the users are from those companies, this is not lying.
If they added logos for companies their users are not from, it would be lying.
Adding a logo to your webpage has started to follow different patterns depending on the stage of the company.
Early stage companies show things like "people at X, Y, Z use our product!" (showing logos without permission), whilst later stage ones tend to show logos after asking for permission, and with more formal case studies.
They may not have asked for permission to show these logos, but that's not the same thing as lying.
It may be technically accurate, but it's still a lie.
Do you really believe all of those companies allow employees to install pre-release software on their computers which records company meetings and interacts with a long list of 3rd party APIs? I doubt it.
They could have had people who are employed by these companies use it on their personal computers for some purpose, but the implication they’re trying to make is that those companies have chosen this software. That’s a lie.
It does not interact with any 3rd party APIs (except analytics, which can be opted out of). It uses local AI models; no data leaves the user's device. That's what makes it possible for users in large orgs to try it.
> but the implication they’re trying to make is that those companies have chosen this software.
We used "Our *Users* are Everywhere" to avoid that implication. It is not typical B2B software, but open-source desktop app that individuals can use.
Help me understand what this means
The current growth hacking play is to have people look through their personal network to find friends who work at those companies, then to have those friends say they’ll try the software
So it's likely not even organic signups. It's being pushed to friends and friends-of-friends who are unknowingly being used for their company affiliation.
There are individuals within the orgs who have used the app, giving us feedback through calls, or even paying for individual licenses.
Well, it looks a lot like you're playing word games to get clout-by-association that you don't necessarily deserve. That doesn't seem like something an authentic person (or people) would try to do. Are the other claims about your team and software equally unserious?
I hope, since it's open source, you are thinking about exposing an API / hooks for downstream tasks.
I’m the opposite: If something is expected to accurately summarize business content, I want to use the best possible model for it.
The difference between a quantized local model that can run on the average laptop and the latest models from Anthropic, Google, or OpenAI is still very significant.
Calendar integration would be nice to link transcripts to discrete meetings.
Please add more details here: https://github.com/fastrepl/hyprnote/issues/1203
For calendar, we have native Apple Calendar integration on macOS.
Also Linux issue pointer: https://github.com/fastrepl/hyprnote/issues/67#issuecomment-...
I made it required to prevent accidentally shipping the app without any analytics/error tracking. (Analytics can be opted out of.)
For ex, https://github.com/fastrepl/hyprnote/blob/327ef376c1091d093c...
EDIT: Prod -> release
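For illustration, one way to enforce this kind of guard is to fail the release build unless an analytics key is injected. This is a sketch of the idea only, not the actual check (which lives at the link above); opting out still happens at runtime in the app's settings.

    // Sketch of the idea only (the actual check lives at the link above):
    // a release build fails unless an analytics key is injected, so the app
    // can't accidentally ship without error tracking. Opting out still
    // happens at runtime in the app's settings.
    const POSTHOG_API_KEY: &str = match option_env!("POSTHOG_API_KEY") {
        Some(key) => key,
        None => {
            if cfg!(debug_assertions) {
                "" // fine for local dev builds
            } else {
                panic!("set POSTHOG_API_KEY for release builds")
            }
        }
    };

    fn main() {
        println!("analytics key configured: {}", !POSTHOG_API_KEY.is_empty());
    }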
Given your target market, have you considered looking at Bugsink? [0] It's Sentry-compatible. Still not local, but at least you won't have to additionally ask your customers to trust Sentry/PostHog.
Disclosure: that's me
[0] https://www.bugsink.com/
It would be great if you could include in your launch message how you plan to monetize this. Everybody likes open-source software, and local-first is excellent too, but if you mention YC then everybody also knows that there is no free lunch. So what's coming down the line would be good to know before deciding whether to give it a shot or just move on.
We have a Pro license implemented in our app. Some non-essential features like custom templates or multi-turn chat are gated behind a paid license. (A custom STT model will also be included soon.) There's still no sign-up required. We use keygen.sh to generate offline-verifiable license keys. Currently, it's priced at $179/year.
For business:
If they want to self-host some kind of admin server with integrations, access control, and SSO, we plan to sell a business license.
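For those curious how offline-verifiable license keys work in general: the vendor signs a small license payload once, and the app checks it locally against an embedded public key, with no network call or sign-up. Here's a generic sketch of that shape; it is not keygen.sh's exact key format.

    // Generic sketch of offline license verification: the vendor signs a small
    // license payload once, and the app checks it locally against an embedded
    // public key, with no network call. This is the general shape of what
    // keygen.sh-style offline keys enable, not its exact key format.
    // Assumes ed25519-dalek 2.x (with the "rand_core" feature) and rand 0.8.

    use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};

    fn license_is_valid(vendor_key: &VerifyingKey, payload: &[u8], sig: &Signature) -> bool {
        vendor_key.verify(payload, sig).is_ok()
    }

    fn main() {
        // The vendor signs the license once, at purchase time...
        let signing_key = SigningKey::generate(&mut rand::rngs::OsRng);
        let payload = br#"{"plan":"pro","expires":"2026-01-01"}"#;
        let sig = signing_key.sign(payload);

        // ...and the app only ships the public key, so verification works offline.
        let vendor_key: VerifyingKey = signing_key.verifying_key();
        println!("license valid: {}", license_is_valid(&vendor_key, payload, &sig));
    }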
Let's actively not support software that chooses anti-security.
the reason we’re gating the admin server under a business license is less about profiting off sso and more about drawing a line between individual and organizational use. it includes a bunch of enterprise-specific features (sso, access control, integrations, ...) that typically require more support and maintenance.
that said, the core app is fully open-source and always will be - so individuals and teams who don’t need the admin layer can still use it freely and privately, without compromising security.
we’ll keep listening and evolving the model - after all, we're still very early and flexible. appreciate the pushback.
(edit: added some more words to reinforce our flexibility)
I find myself often using otter.ai - because while it's inferior to Whisper in many ways, and anything but on-device, it's able to show words on the live transcript with minimal delay, rather than waiting for a moment of silence or for a multi-second buffer to fill. That's vital if I'm using my live transcription both to drive async summarization/notes and for my operational use in the same call, to let me speed-read to catch up to a question that was just posed to me while I was multitasking (or doing research for a prior question!)
It sometimes boggles me that we consider the latency of keypress-to-character-on-screen to be sacrosanct, but are fine with waiting for a phrase or paragraph or even an entire conversation to be complete before visualizing its transcription. Being able to control this would be incredible.
Doing it locally is hard, but we expect to ship it very soon. Please join our Discord (https://hyprnote.com/discord) if you are interested in following updates.
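For reference, one common way to get low-latency partial results with local Whisper (similar in spirit to whisper.cpp's streaming example, and not necessarily what Hyprnote will ship) is to re-transcribe a sliding window of recent audio every few hundred milliseconds and overwrite the previous hypothesis. A sketch, with transcribe as a hypothetical stand-in for the local model call:

    // One common low-latency pattern (similar in spirit to whisper.cpp's
    // streaming example), not necessarily what Hyprnote will ship: keep a
    // sliding window of recent audio, re-transcribe it every few hundred
    // milliseconds, and overwrite the previous partial hypothesis.
    // `transcribe` is a hypothetical stand-in for local Whisper inference.

    use std::collections::VecDeque;
    use std::io::Write;

    const SAMPLE_RATE: usize = 16_000;
    const WINDOW_SECS: usize = 10;
    const STEP_MS: u64 = 500;

    fn transcribe(window: &[f32]) -> String {
        format!("(partial hypothesis over {} samples)", window.len())
    }

    fn main() {
        let mut window: VecDeque<f32> = VecDeque::new();
        loop {
            // In the real app this chunk would come from mic + system-audio capture.
            let chunk = vec![0.0f32; SAMPLE_RATE * STEP_MS as usize / 1000];
            window.extend(chunk);
            while window.len() > SAMPLE_RATE * WINDOW_SECS {
                window.pop_front();
            }
            let hypothesis = transcribe(window.make_contiguous());
            // "\r" overwrites the previous partial line instead of appending,
            // so words show up with roughly STEP_MS of latency.
            print!("\r{hypothesis}");
            std::io::stdout().flush().ok();
            std::thread::sleep(std::time::Duration::from_millis(STEP_MS));
        }
    }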
From a business perspective, and as someone looking also into the open-source model to launch tools, I'd be interested though how you expect revenue to be generated?
Is it solely relying on the audience segment that doesn't know how to hook up the API manually to use the open-source version? How do you calculate this, since by pushing it via open source/GitHub you would think that most people exposed to it are technical enough to just run it from source.
Hope that makes sense.
But because MacWhisper does not store transcripts or do much with them (other than giving you export options), there are some missed opportunities: I'd love to be able to add project tags to transcripts, so that any new transcript is summarized with the context of all previous transcript summaries that share the same tag. Thinking about it maybe I should build a Logseq extension to do that myself as I store all my meeting summaries there anyway.
Speaker detection is not great in MacWhisper (at least in my context where I work mostly with non native English speakers), so that would be a good differentiation too.
automated meeting detection - working on this. push to transcribe - want to understand more about this. (could we talk more over at our discord? https://hyprnote.com/discord)
if you're using logseq, we'd love to build an integration for you.
finally, speaker identification is a big challenge for us too.
so many things to do - so exciting!
But your home page makes it look like you already have it. I just tried it in a 30-minute meeting with 20 people, and it put the entire conversation under a single speaker, in a single paragraph.
However, due to various challenges and priority changes, we haven't been able to do so yet. We'll update the landing page soon.
no. actually it is pretty good :)
https://github.com/fastrepl/hyprnote/blob/d0cb0122556da5f517...
this is invalid on Mac mini. Should be fixed today.
/Applications/Hyprnote.app/Contents/MacOS/Hyprnote
The thing I think some enterprise customers are worried about in this space is that in many jurisdictions you legally need to disclose recording - having a bot join the call can do that disclosure - but users hate the bot and it takes up too much visibility on many of these calls.
Would love to learn more about your approach there
Something that I think is interesting about AI note taking products is focus. How does it choose what's important vs what isn't? The better it is at distinguishing the signal from the noise, the more powerful it is. I wonder if there is an in-context learning angle here where you can update the model weights (either directly or via LoRA) as you get to know the user better. And, of course, everything stays private and on-device.
The idea of Hyprnote is that you write chicken-scratch raw notes during the meeting (whatever you think is important), and the AI enhances the note based on them.
On-device learning is interesting too. For example, Gboard: https://arxiv.org/abs/2305.18465
And yes - we are open to this too
Almost all of our meetings are hybrid in this way, and it's a real pain having almost half of the meeting be identified as a single individual talking because the mic is hooked up to their machine.
It's a total dealbreaker for us, and we won't use such tools until that problem is solved.
If you are interested, you can join our Discord and follow updates. :) https://hyprnote.com/discord
I'll look forward to the Linux version.
Is there any chance of a headless mode? (I.e. start, and write transcript to stdout with some light speaker diarization markup. e.g. "Speaker1: text")
maybe. we might be able to add an extension system where each extension can have that info and do whatever it wants within the app.
> I'll look forward to the Linux version.
We have an open issue: https://github.com/fastrepl/hyprnote/issues/67 You might want to subscribe to it!
Either everyone is in the same physical room, or everyone is remote.
The quality of communication plummets in the hybrid case:
* The physical participants have much higher bandwidth communication than those who are remote — they share private expressions and gestures to the detriment of remote.
* The physical participants have massively lower latency communications. In all-online meetings, everyone can adjust and accommodate the small delays; in hybrid meetings it often locks out remote participants, who are always just a little behind or have less time to respond.
* The audio quality of remote participants is significantly worse, which I have seen result in their comments being treated as less credible.
* Remote participants usually get horrible audio quality from those sharing a mic in the room. No one ever acknowledges this, but it dramatically impacts ability to communicate.
The second tool is likely a hardware limitation: a multi-cam/mic array with beamforming capability to deconstruct overlapping sounds.
Is the future goal of Hyprnote specifically meeting notes and leaning into features around meeting notes, or more general note taking and recall features?
We actually have "export to Obsidian". I think you can pair Hyprnote nicely with Obsidian.
Screenshot: https://github.com/user-attachments/assets/5149b68d-486c-4bd...
You need this plugin installed in Obsidian first: https://github.com/coddingtonbear/obsidian-local-rest-api
Obsidian export code 1:
https://github.com/fastrepl/hyprnote/blob/d0cb0122556da5f517...
Obsidian export code 2:
https://github.com/fastrepl/hyprnote/tree/main/plugins/obsid...
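Under the hood the export talks to that plugin's local REST API: a PUT to /vault/<path> creates or updates a note. A minimal sketch, assuming the plugin's optional non-encrypted HTTP server is enabled on its default port; check the plugin settings for your actual port and API key.

    // Minimal sketch of writing a note through obsidian-local-rest-api:
    // a PUT to /vault/<path> creates or updates the file. Assumes the plugin's
    // optional non-encrypted HTTP server is enabled on its default port; check
    // the plugin settings for your actual port and API key.
    // Requires reqwest with the "blocking" feature.

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let api_key = std::env::var("OBSIDIAN_API_KEY")?; // from the plugin's settings
        let markdown = "# Weekly sync\n\n- enhanced note body here\n";
        let resp = reqwest::blocking::Client::new()
            .put("http://127.0.0.1:27123/vault/Meetings/Weekly%20sync.md")
            .bearer_auth(api_key)
            .header("Content-Type", "text/markdown")
            .body(markdown)
            .send()?;
        println!("status: {}", resp.status());
        Ok(())
    }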
Only for speech to text though.
Paid subscription, not open source.
HOWEVER, I was extremely disturbed to find out that Granola was automatically making each of my notes folders public to the entire organization. Including 1:1s, you guessed it.
Yeah, really shook my trust in this product by a lot.
ref: https://github.com/daangn/stackflow
We're putting a lot of effort into making it run smoothly on local machines. There are no signups, and the app works without any internet connection after downloading models.
Would I be able to create an extension that could do this?
ELI5 sounds useful.
MMSS sounds terrifying though, honestly.
mmss was something that a lot of users suggested - they wanted to be saved from public humiliation
https://goodsnooze.gumroad.com/l/macwhisper
any interest in the Cluely-style live conversation help/overlay?
Looks cool, I'll wait for the Linux version and try it.
Integration of automatic translations could be an interesting business plan value-add. Branching out into CRM things also makes sense to me.
Good luck, keep shipping.
For macOS: it's what we dogfood, and also lots of AI models are Mac-optimized + Macs are generally more powerful than the average Windows machine, so they're better suited to the on-device approach.
:(
But none of it should prevent someone from just using it (GPL does not mean any usage data is being made "public").
Even tried making a Teams meeting bot, but Teams doesn't give live audio to developers unless you are a special partner.
Glad you made this. Will play around
https://github.com/fastrepl/hyprnote/blob/main/packages/util....
Monetization:
For individuals: We have a Pro license implemented in our app. Some non-essential features like custom templates or multi-turn chat are gated behind a paid license. (A custom STT model will also be included soon.) There's still no sign-up required. We use keygen.sh to generate offline-verifiable license keys. Currently, it's priced at $179/year.
For business: If they want to self-host some kind of admin server with integrations, access control, and SSO, we plan to sell a business license.
A few random bits of realtime feedback:
You have an icon with the Finder face labeled "Open Finder view." I would expect this to open the app's data folder in the macOS Finder. Instead, it opens an accessory window with some helpful views such as calendar view. I'd encourage you to find another name for that window, because it's too confusing to call it "Finder" (especially with the icon).
I'd also add a menu item for Settings (and Command-comma shortcut) in the Application menu.
You also need a dark mode at some point.
Finally, I'm not sure where note files end up. Seeing that there's an Obsidian integration, I would love an option to save notes in Markdown format into a folder of my choice. I'm an iA Writer user, and would love to have meeting notes go directly into my existing notes folder.
I'll let you know how the actual functionality is working for me after my next few meetings!
we do have a settings shortcut already! be sure to test it out :)
dark mode - noted as well.
we save our notes in db.sqlite that can be found in: ~/Library/Application\ Support/com.hyprnote.stable
this decision was made because we have three documents - raw note, enhanced note, and transcript - assigned to a meeting note.
would love to create an iA integration for you - or just a simple way to export MD for the time being
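in the meantime, since everything is in a plain SQLite file, you can pull notes out yourself. a rough sketch: the table and column names below are hypothetical, so inspect the real schema first (e.g. sqlite3 db.sqlite ".schema").

    // Rough sketch of pulling notes out of db.sqlite yourself until a folder
    // export exists. The table and column names here are hypothetical -- check
    // the real schema first with: sqlite3 db.sqlite ".schema"
    // Requires the rusqlite crate.

    use rusqlite::Connection;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let home = std::env::var("HOME")?;
        let db = format!("{home}/Library/Application Support/com.hyprnote.stable/db.sqlite");
        let conn = Connection::open(db)?;

        // Hypothetical layout: one row per meeting with its enhanced note.
        let mut stmt = conn.prepare("SELECT title, enhanced_note FROM sessions")?;
        let rows = stmt.query_map([], |row| {
            Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?))
        })?;

        for row in rows {
            let (title, note) = row?;
            std::fs::write(format!("{title}.md"), note)?;
        }
        Ok(())
    }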
join our discord for more updates! https://hyprnote.com/discord