Launch HN: Embedder (YC S25) – Claude Code for Embedded Software

32 points by bobwei1 | 8/15/2025, 5:38:05 PM | 13 comments
Hey HN - We’re Bob and Ethan from Embedder (https://embedder.dev), a hardware-aware AI coding agent that can write firmware and test it on physical hardware.

Here’s a demo in which we integrate a magnetometer for the Pebble 2 smartwatch: https://www.youtube.com/watch?v=WOpAfeiFQkQ

We were frustrated by the gap between coding agents and the realities of writing firmware. We'd ask Cursor to, say, write an I2C driver for a new sensor on an STM32, and it would confidently spit out code that used non-existent registers or HAL functions from the wrong chip family. It had no context, so it would just guess, and the guesses were almost always wrong.

Even when it wrote the right code, the agent had no way of interacting with the board: the developer had to test it manually, then prompt the agent again to fix any bugs they found. That manual loop makes current tools a poor fit for embedded work.

That’s why we’re building Embedder, a hardware-aware coding agent optimized for embedded work: it understands your datasheets and schematics, and it can flash and test code on your hardware.

First, you give it context: upload datasheets, reference manuals, schematics, or any other documentation to our web console, and the coding agent automatically pulls in that context when it executes tasks from the command line.

Second, Embedder can directly interact with your hardware to close the development loop. The agent uses a serial console just like a regular developer would, reading output from your board to verify behavior. To track down more complex bugs or identify hardware issues, it can also launch a debugging agent optimized for step-through debugging workloads that talks to local or remote gdbservers.
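
As a rough sketch of what that looks like on the hardware side (the serial device, probe and target configs, and ELF path below are placeholders for your own setup, not Embedder-specific commands), the agent works against the same tools a developer would use:

  # Watch firmware logs over the board's serial console
  picocom -b 115200 /dev/ttyACM0

  # Expose a local gdbserver for an ST-Link-attached STM32F4
  # (OpenOCD listens for GDB clients on port 3333 by default)
  openocd -f interface/stlink.cfg -f target/stm32f4x.cfg

  # Any GDB client can then connect and step through the running firmware
  arm-none-eabi-gdb build/firmware.elf -ex "target extended-remote localhost:3333"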

You can try it out today. It’s an npm package you can install and run from your terminal:

  npm i -g @embedder/embedder && embedder

It's free for the rest of this month while we're in beta. After that, we're planning a usage-based model for individual developers and a team plan with more advanced features.

We’d love to get feedback from the community or hear about your experiences with embedded development. We’ll be in the comments to respond!

Comments (13)

btown · 33m ago
You’ll probably hear people say things like “this could just be an MCP server and a prompt to use it.” To that I’d say: just remember that infamous 2007 Dropbox comment: https://news.ycombinator.com/item?id=9224

If you can make the developer experience simple enough that it becomes standard practice, you can go really far. Good luck!

0x457 · 17m ago
Well, it depends on whether it does anything novel under the hood and not just Model + System Prompt + Tools + User Input... like 99% of agents being built right now.

etgibbs · 11m ago
users seem to like the dashboard for managing projects. hoping we can further differentiate ourselves through UX. thanks for the kind words!

NotBoolean · 32m ago
I’ve always found AI agents a bit lacking in embedded, but I’ll test this out.

You said in your demo that by uploading the data sheet you completely remove hallucinations. How have you achieved this? I’ve found AIs still hallucinate even when given documentation.

etgibbs · 23m ago
strict grounding protocol + planning phase, mostly by prompting and forcing attention through citations. it tends to think longer than other coding agents but the results are usually better. let me know what you think.

bangaladore · 43m ago
This is a more general question, but:

What company would be comfortable with giving out schematics, source code, etc... to third parties like this or AI Model providers like Anthropic, etc...

Privacy policy aside, this just seems like a statistical guarantee that sensitive IP leaks at some point (not specifically pointing at this company, but at this space in general). Or does nobody care?

NotBoolean · 17m ago
Embedder’s privacy policy is very clear that they keep your information.

https://embedder.dev/privacy-policy

“Content Data

When you use our services, we collect:

Any files or data you upload
Any generated code or data”

etgibbs · 8m ago
we store the files you upload for indexing

we don't store generated code / data, but it does pass through our API to the model provider. we store the usage metadata for billing purposes

etgibbs · 25m ago
great question. we've found most code/docs are offline in the embedded space (for good reason) so our approach going forward is going to be more FDE/on-prem for enterprise users. they asked for self-hosted, BYOK, local indexing, etc. and I think this is something that can differentiate us

for consumer users we have a zero-retention policy with the model providers, and we use repo mapper to index your code locally, but as you pointed out these APIs are a black box so no guarantees

Terretta · 9m ago
Your policy and this comment conflict.

etgibbs · 5m ago
need to update the policy, this was for the original version of the app, which used a deterministic algorithm to generate driver code that we stored in the cloud so you could download the files. thanks for pointing this out.

lennxa · 44m ago
how are you going about this? do you intend to train/finetune your own models, or scaffold frontier models with prompts+tools?

etgibbs · 30m ago
currently we scaffold frontier models. the product is basically a context layer with custom tools that enable hardware interaction. we've tossed around the idea of pre-training/fine-tuning, but new models are being released so fast it doesn't make sense to build anything other than a wrapper