Show HN: AI Baby Monitor – local Video-LLM that beeps when safety rules break
Why?
When we bought a crib for our daughter, the first thing she tried was climbing over the rail :/ I got a bit paranoid about constantly watching over her, so I thought of a helper that can *actively* watch the baby, while parents could stay *semi-actively* alert. It’s meant to be an additional set of eyes, and *not* a replacement for the adult. Thus, just a beep sound and not phone notifications.
How it works
* *stream_to_redis.py* captures video stream frames → Redis streams
* *run_watcher.py* pulls the latest N frames, injects them + the rules into a prompt and hits a local *vLLM* server running *Qwen 2.5 VL*
* Model returns structured JSON (`should_alert`, `reasoning`, `awareness_level`)
* If `should_alert=True` → `playsound` beep
* Streamlit page displays both the camera feed and the LLM logs (a rough sketch of the watcher loop is below)
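Roughly, the watcher loop looks like this. This is a minimal sketch: the stream key, field name, model id and port are illustrative assumptions rather than the repo's actual values, and it assumes the model returns bare JSON.

```python
# Minimal sketch of the watcher loop. The stream key, field name, model id
# and port are illustrative assumptions, not necessarily the repo's values.
import base64
import json
import time

import redis
import requests
from playsound import playsound

RULES = "Alert if the baby is climbing over the crib rail or leaves the crib."
VLLM_URL = "http://localhost:8000/v1/chat/completions"  # local vLLM server
N_FRAMES = 4

r = redis.Redis()

def latest_frames(n: int) -> list[bytes]:
    # Newest n JPEG frames from the Redis stream, returned oldest-first.
    entries = r.xrevrange("video_frames", count=n)
    return [fields[b"jpeg"] for _, fields in reversed(entries)]

def ask_model(frames: list[bytes]) -> dict:
    # Multimodal chat request: rules as text, frames as base64 data URLs.
    content = [{
        "type": "text",
        "text": f"Rules:\n{RULES}\n"
                "Answer with JSON: should_alert, reasoning, awareness_level.",
    }]
    for jpg in frames:
        b64 = base64.b64encode(jpg).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})
    resp = requests.post(VLLM_URL, json={
        "model": "Qwen/Qwen2.5-VL-7B-Instruct",
        "messages": [{"role": "user", "content": content}],
    }, timeout=60)
    # Assumes the model complies and returns bare JSON in the message content.
    return json.loads(resp.json()["choices"][0]["message"]["content"])

while True:
    verdict = ask_model(latest_frames(N_FRAMES))
    if verdict.get("should_alert"):
        playsound("beep.wav")  # just a beep, no phone notifications
    time.sleep(1)
```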
Yes, I tried hard to stress that it's in no way a replacement for an adult, just an additional pair of eyes as a safeguard.
Everyone else's criticism is effectively entirely warranted, even if it's just a hobby project, if only because the criticisms are mostly pretty mild and reasonable things like "don't try to automate parenting with tech that still isn't ready yet".
Also, who knows? With enough popularity, OP might launch this into a startup.
How on earth would this project, by itself, cause harm?
Sure, inattentive parents can lead to children hurting themselves, but that can happen while browsing Gmail, or even when Frigate is set up. Does that mean Gmail and Frigate cause harm to children? Obviously not.
There are actually hundreds of applications for this basic idea. Common sense applied to a video feed.
I am the developer and happy to answer questions.
You can basically set up your own instructions and your own observation solutions... you can imagine everything from security to farm operations; the sky's the limit.
(Just get a fence is the conclusion)
We have 50 ft of lake shore with neighbors on either side. Assuming I fenced my 50 ft, there is still a path around said fence on either side.
At the very most I could gate the dock, but again, there are about 8 other docks readily available.
I had a friend who almost lost his kid despite a pool with a self-closing gate: an object happened to get caught in the latch when someone was leaving the pool, and their 2-year-old capitalized on this and nearly got themselves killed. They had to perform CPR and rush to the hospital.
A backup system that could alert is just another layer of the security onion.
A long time ago I built a cat detector to keep my cat out of the baby play area; this was before modern AI systems, and I'm sure it could be so much better now. https://www.twilio.com/en-us/blog/baby-proofing-raspberry-pi...
If your crib is unsafe or undersized, the solution is to make it safe by adding guardrails or buying a safer/correctly sized model. Or remove the hazard entirely by putting their bed on the floor.
Adding AI/tech to catch deficiencies is not the right way to go about it. You're willing to risk injury/death in case the AI is wrong, you don't hear the beep, or there are too many false positives and you end up ignoring it?
This is the same thing as self-driving. It seems good enough that it takes your attention away - until it doesn't. And it's a general-purpose AI, not a model specifically trained to catch the issue.
What OP has is a fully self-hosted, private video feed that can alert on more sophisticated events like:
- is anything happening that shouldn’t be?
- are things not where they’re supposed to be?
- has anything fallen? (plants, things into the pool)
Good job, OP. I’m going to take a look at this in the next couple of months.
Btw, it doesn't really sound like the problem needs video as an input to the LLM. Feels like sending a single image is okay. So that makes it less demanding(?)
About the hard numbers, it's tough to test it quantitatively, because there's not a lot of data for babies in danger :D and I hope it stays that way
In general, I'm hoping that the open models will get better, there has been a lot of acceleration in video modality recently
Are "regular" baby monitors any more complicated than a dumbed down cheapest you can build it walkie-talkie? Society really needs to stop wanting other people to be responsible for their actions. The choice of what devices you use on your kids should first and foremost be on you. AI or no AI. Fear mongering with literal "someone think of the kids" is getting old, IMO.
* Summer Infant Baby Monitor Overheating Settlement – $10 million after reports of overheating monitors leading to fire hazards.
* Angelcare Monitor Recall Lawsuit – $7 million settlement due to defective cord placement that led to strangulation risks.
* Levana Baby Monitor Overheating Lawsuit – $5.5 million awarded in cases of monitors causing burns to children.
* VTech Baby Monitor Battery Defect Settlement – $6.2 million after reports of exploding batteries causing fire risks.
* Motorola Baby Monitor Signal Failure Class Action – $4.8 million settlement after claims of poor signal reception leading to missed emergencies.
* Owlet Smart Sock Monitor Lawsuit – $6.5 million awarded due to inaccurate heart rate readings that caused false alarms and panic among parents.
* Graco Digital Monitor Lawsuit – $5 million settlement after a lawsuit citing defective monitors that stopped functioning during critical moments.
* Philips Avent Baby Monitor Lawsuit – $4.2 million after several reports of overheating and potential fire hazards.
* Samsung Baby Monitor Fire Hazard Settlement – $3.5 million awarded due to incidents of overheating leading to home fires.
* Infant Optics Monitor Class Action – $4 million settlement after claims of faulty batteries and wiring causing sudden shutdowns during use.
https://www.personalinjurysandiego.org/product-liability/saf...
Just checked Infant Optics Monitor Class Action and also didn't find anything.
Maybe get a proper crib then?!
But you can still run the inference remotely; changing that should just be a matter of changing the server address.
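Something like this, roughly (the environment variable name and address are just examples, not necessarily what the repo uses):

```python
import os

# Point the watcher at a remote vLLM server (e.g. a GPU box on the LAN)
# instead of localhost; the request itself stays exactly the same.
VLLM_URL = os.environ.get(
    "VLLM_URL", "http://192.168.1.50:8000/v1/chat/completions"
)
```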
Might be a cool project to do with a cheap microphone and an SBC.
I have had regular first responder refresher courses at work every two years - and I have to say: I always relearned something, because not having done it in the meantime (luckily no one needed my first aid) meant I had forgotten quite a bit in those two years.
Especially how it feels to administer CPR, and how to position the person.
So, not sure where you are located, but in Germany you can volunteer to become a first aid responder for your company - they love that, because above a certain company size they must have enough of us. And you get a certificate every time you retrain (you need to do this regularly, every two years, as said).
We even once had a baby mannequin for training, but I was not able to test it that time (as it was not mandatory, I wanted the parents - or those planning for kids - to actually get the chance).
Thank you for volunteering.
Also, if I accept your premise that a baby sleeping is inherently dangerous, then this is just an added layer of safety. It doesn’t remove safety.
But really, safety is a pretty lame way of framing this. What you want is more hours asleep, so you have this thing try to hypnotize the baby with lights, sound, and vibration, and it only alerts the parent(s) if it fails. Eventually it could just straight up talk to your three-year-old, read it a second bedtime story, dispense a cup of water, convince it not to hang out with the bad kids at school, etc.
It's not because it's going to be good for us.
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
p.s. If your comment was intended as a joke on the word 'nanny' then I take this back.
(Thank you, yes, baby monitor -> nanny state)
Nap time? Overnight?
https://news.ycombinator.com/item?id=44087499
I am the developer :)
That is a demo, of course, but I think what sets LLM tools like this apart from what came before is that, implemented correctly, the user gets to decide what it is and can change its meaning at any time - in other words, what it should be looking for at any moment.
That is, of course, only if the solution is implemented correctly.
There is immense potential for these kinds of capabilities if they are done in a way that leaves the specific use-case implementation up to users.
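For illustration, switching use cases could be as simple as swapping the rule text that gets injected into the prompt (these presets are purely hypothetical, not from the project):

```python
# Hypothetical rule presets: changing what the system "is" amounts to
# swapping which prompt text gets injected into the request.
RULE_PRESETS = {
    "crib": "Alert if the baby climbs the rail or stands up in the crib.",
    "pool": "Alert if a small child is inside the pool fence with no adult nearby.",
    "farm": "Alert if any livestock are outside the marked paddock.",
}
rules = RULE_PRESETS["crib"]  # switch the key to change what it watches for
```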
Surely you didn’t have the baby in the same room as an awake adult 24 hours per day for 730 straight days, right?
(Of course you should compare this to humans, but in any case, do the math! And get a good lawyer.)
In other words, if you want AI to take over tasks from humans, you either do it well or not at all.
Tesla Autopilot could steer you into a concrete wall; a baby monitor with some LLM attached to it does not directly harm the child.
This is stupid
It is amazing/horrifying to me how many people are intent on reinventing the reasons why we have UL, the FDA, the FCC, traffic laws, seatbelts, electrical codes, fire marshals, and unions.
It's not going to solve all problems.
Parent 2: Nah, the baby monitor would have warned us.
(I just can't eat a whole one.)