I love the idea and I would like to build something like this. But the few attempts I have made using Whisper locally have so far been underwhelming. Has anyone gotten results with small Whisper models that are good enough for a use case like this?
Maybe I've just had a bad microphone.
codethief · 21m ago
> Maybe I've just had a bad microphone.
Yeah, I would definitely double-check your setup. At work we use Whisper to live-transcribe-and-translate all-hands meetings and it works exceptionally well.
catapart · 4h ago
Man, I'd really love it if this were just a product/app I could download and use a UI to configure/teach.
But this guide gives me what I need to make that, I think, so a big thank you for this!
thedeep_mind · 5h ago
This is great, thanks for putting this together.
Haven't followed it through yet, but does this model run successfully on an iPhone?
My 9 year old ran a Qwen 0.6B model using ollama quite well, anything else was too slow to offer a good UX.
parpfish · 2h ago
Oh, a nine year old PHONE.
I was thinking there was a fourth grader out there deploying models when at that age I was still learning multiplication tables.
NetOpWibby · 2h ago
My son just turned 9 today so I was like, "Wow! I wonder if my kid would be interested in doing this?"
SparkyMcUnicorn · 3h ago
MLC[0] indicates that it can run models in the 8B range on iOS, but 1-3B sounds more reasonable to me.
I’ve noticed recently (maybe I missed an announcement) that Siri now functions locally for at least some commands. Try putting an Apple Watch in airplane mode and asking it to set a timer or a reminder.
cadamsdotcom · 2h ago
Why hasn’t Apple taken a look at the data and then hardcoded handlers for the top ~1000 usages?
mrcwinn · 5h ago
Cool project and nice write-up!
mystified5016 · 4h ago
Does Apple even allow you to replace Siri with another assistant? For the longest time on android, all non-Google assistants were crippled by not being able to listen in the background or use the assistant hardkey, gestures, or shortcuts. I'm not sure if the Google assistant still has privileges others don't, but I wouldn't be surprised in the least.
matthewfcarlson · 3h ago
Part of the problem is that the wake word “hey siri” is actually handled by a separate coprocessor (AOP) with the model compiled into the firmware. While anything is technically possible, it isn’t as simple as just letting the Google app run in the background, since the AP is asleep when any of these gestures happen. You could probably set up the action button on the side to open an assistant, but that’s going to be a less pleasant experience (app might not be open, etc.).
Same with Android phones - a super-specific hardcoded phrase is much easier to fit within the power budgets required for an "always on" part of the device.
It's why a manufacturer (like Samsung) can change that sort of thing on their devices, but it's not realistically something an end user (or even an app) can customize in software. It's not some "arbitrary" limitation.
layer8 · 1h ago
I think people would be fine with having to call it Siri if only they could replace the actual assistant.
dangus · 24m ago
I presume you could pretty easily use the new-ish action button to run a custom shortcut that brings up an alternative assistant app.
jedisct1 · 4h ago
More or less. This is what Perplexity does.
bronco21016 · 1h ago
I saw an article about this and downloaded the Perplexity app, but I was unable to figure out whether it's true. Do I need a paid tier? I just quickly worked through the free sign-up and couldn't sort it out. The demo looked really slick. Is it worth pursuing?
caust1c · 6h ago
So build your own crappy agent-assistant?
In earnest though, I'm certain we'll see a community replacement of Siri by end-of-year if the iPhone permissions model allows it or there's some workaround. IDK what the limitations are here but I'm eagerly awaiting the community to step in where Siri has failed.
[0] https://llm.mlc.ai/docs/deploy/ios.html#bring-your-own-model
Details are listed below
https://machinelearning.apple.com/research/hey-siri
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something." - https://news.ycombinator.com/newsguidelines.html
(Your comment would be fine without that first bit.)
- time of day
- calendar date
- weather
- set a timer
- simple math calculation
That’s 90% of the functionality right there.
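For what it's worth, routing that short list of intents doesn't even need a model. A minimal sketch in Python (the handler logic and keyword rules here are illustrative, not any real Siri API):

```python
# Toy keyword-based intent router covering the five commands listed above.
# Matching rules and responses are made up for illustration.
import datetime
import re

def handle(utterance: str) -> str:
    text = utterance.lower()
    # Timer must be checked via regex before the bare "time" keyword,
    # since "timer" contains "time" as a substring.
    m = re.search(r"timer for (\d+) (second|minute|hour)", text)
    if m:
        return f"Timer set for {m.group(1)} {m.group(2)}s"
    if re.search(r"\btime\b", text):
        return datetime.datetime.now().strftime("It's %H:%M")
    if "date" in text or "day is it" in text:
        return datetime.date.today().strftime("Today is %A, %B %d")
    if "weather" in text:
        return "Fetching the forecast..."  # a real handler would call a weather API
    m = re.search(r"what(?:'s| is) ([\d\s+\-*/().]+)", text)
    if m:
        try:
            # Restricted eval as a toy calculator; digits/operators only per the regex.
            return str(eval(m.group(1), {"__builtins__": {}}))
        except Exception:
            pass
    return "Sorry, I can't help with that."
```

Something this dumb obviously breaks on phrasing variations, which is presumably where even a tiny on-device model earns its keep.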