Launch HN: BitBoard (YC X25) – AI agents for healthcare back-offices
We were early employees at Forward, which provided primary care across the US. To scale, we relied on thousands of remote contractors to do repetitive administrative work like reconciling patient records and scheduling follow-ups based on care plans. It was a huge bottleneck: expensive, error-prone, and always pulling attention away from clinical care. Our software solutions were always too brittle, never managing to handle the variance in the clinical data we worked with.
AI, applied well, can perform many of the tasks we used to do manually. So we decided to take another crack at the problem: build today what we would have liked to have back then, and help clinics use it.
Clinics send us their SOPs (Standard Operating Procedures; for example, “prep a patient chart from these records before a visit”), and we turn them into AI agents that do the work. These agents act like remote contractors: they log into EHRs, navigate internal tools, and complete tasks in the background. Unlike classical RPA, we build in verification and deterministic checks, so customers can confirm the work was done right. Unlike low-code tools, there’s nothing new to learn: customers don’t have to touch a UI or maintain logic. They just hand us the task, and we do it. Clinicians don’t want more screens! Screens erode attention and cause weird operational bottlenecks because someone has to drive them. Our product is built to address this.
Here’s a demo video: https://www.youtube.com/watch?v=t_tQ0fYo85g. We’re not self-serve yet, but we deploy with customers within days of onboarding. We’re working on speeding that up.
One of our early customers is a fast-growing obesity medicine group. Their MAs (medical assistants) were spending 15 to 20 minutes per patient just entering intake form data into the EHR; that one task was eating 30% of their MA time. We took it over in a week. It’s now fully automated, they’ve cleared the backlog, and visits have sped up.
A few technical problems are specifically relevant to building healthcare agents:
- Unreliable interfaces: many EHRs and clinic tools don’t follow modern web standards, making automation brittle. We’ve forked browser-use to solve some of these challenges. We’re working on analogous infrastructure to let agents operate on desktops and across a wide range of APIs.
- Verification: in healthcare, tasks need to be provably correct. We embed deterministic checks into each workflow so agents can confirm the task was completed as expected and the output is accurate (a minimal sketch of what we mean follows this list).
- Workflow generation: clinic SOPs are written in natural language and vary widely, yet they describe the actual process that works for each clinic. The challenge is translating them into executable workflows without losing that fidelity.
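To make the verification point concrete, here’s a minimal sketch of the kind of deterministic post-entry check we mean. Everything here (field names, the shape of the data, the helper itself) is illustrative, not our actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class VerificationReport:
        missing: list = field(default_factory=list)      # expected but absent
        mismatched: list = field(default_factory=list)   # present but wrong value
        extraneous: list = field(default_factory=list)   # entered but never asked for

        @property
        def passed(self) -> bool:
            return not (self.missing or self.mismatched or self.extraneous)

    def verify_entry(source: dict, entered: dict, expected: set) -> VerificationReport:
        """Deterministically compare source-of-truth intake data against
        what the agent actually entered in the EHR."""
        report = VerificationReport()
        for key in expected:
            if key not in entered:
                report.missing.append(key)
            elif entered[key] != source.get(key):
                report.mismatched.append(key)
        report.extraneous = [k for k in entered if k not in expected]
        return report

    # Hypothetical example: one field never made it into the EHR.
    source = {"dob": "1980-04-02", "allergies": "penicillin", "phone": "5550100123"}
    entered = {"dob": "1980-04-02", "allergies": "penicillin"}
    print(verify_entry(source, entered, {"dob", "allergies", "phone"}).missing)  # ['phone']

The idea is that a task doesn’t count as done unless a report like this passes.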
We charge per task, based on complexity. We’re HIPAA compliant, audit-logged, and operate in a zero-retention environment unless auditing requires otherwise.
A meaningful part of the work is building trust in a high-stakes environment like healthcare. Part of that is making the product reliable. Another part is educational: learning how to introduce a new concept like “agents” to clinics. We’re working out the right ways to describe them, to onboard them, to measure them. Endearingly, one of our customers’ agents is named “Robert Ott”, and they refer to him by name in their weekly updates like he’s a member of the team :) We’re learning a lot and have a long way to go.
We’d love to meet folks who (1) work in medical groups or health systems and want to offload repetitive work, or (2) are building in this space and want to trade notes. We’re happy to share everything we’ve learned so far.
This is a big space, and there’s a lot to learn from the personal stories of clinicians, technologists, administrators, and more. What do you make of it? We’d love to hear from you.
I didn't see this in your demo, how is this being implemented? You're entering fairly important information into EHRs like allergies and medications, that's more than just high-stakes, that's medical malpractice territory. I think you folks are really pushing the limits of the FDA's medical device exemptions for administrative work here. Are you working with the FDA on any kind of AI medical device certification?
What EHRs do you integrate with?
We're also not operating autonomously: 100% of our outputs are reviewed by the clinical team shortly after entry, as part of their regular process. That feedback loop is essential, both for safety and for evolving the system responsibly.
Among EHRs, we currently work with Athena, though we also do a lot of work on isolated file stores that our customers create for us.
We verify two things after each entry:

- That all expected fields were entered correctly.
- That no unexpected or extraneous data was added.
When we have access to a direct data source (like an API or file store), verification is simpler — we can do deterministic checks and directly confirm the values.
We're also building a validation library for common field types (dates, contact info, etc.) to enforce constraints and avoid propagating unverifiable or out-of-range values. A sketch of the flavor of checks is below.
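As a taste of what that library does, here’s a minimal sketch; the specific rules and field types are illustrative, not our production schema:

    import re
    from datetime import date, datetime

    def validate_dob(value: str) -> date:
        """Parse an ISO date and reject out-of-range values
        (future dates, implausibly old patients)."""
        dob = datetime.strptime(value, "%Y-%m-%d").date()
        if dob > date.today() or dob.year < 1900:
            raise ValueError(f"date of birth out of range: {value}")
        return dob

    PHONE_RE = re.compile(r"^\+?1?\d{10}$")

    def validate_phone(value: str) -> str:
        """Normalize US phone numbers to bare digits before entry."""
        digits = re.sub(r"[\s().-]", "", value)
        if not PHONE_RE.match(digits):
            raise ValueError(f"unparseable phone number: {value}")
        return digits

    print(validate_phone("(555) 010-0123"))  # -> 5550100123

The point is to fail closed: a value that doesn’t validate never propagates.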
This is one of the most grueling, labor-intensive, boring, error-prone, hate-my-job areas possible. It makes perfect sense for agents to perform these tasks, but as with everything else, there will be a labor impact.
What we’re currently looking at is staff scheduling, because somehow that involves different software for doctors vs. nurses, and the staff builds a spreadsheet and then enters it into yet other software. Totally whack how much time and effort they spend on that.
Are there specific kinds of clinics that are an especially good fit for you? Are you seeing any patterns in the kinds of clinics that are relatively eager to adopt an AI product like yours?
I don't have any feedback on what you're up to, I just think it's interesting!
Compared to 3 or 4 years ago, clinicians are much more open to AI. They've heard of ChatGPT or ambient scribes, and they often come to us with specific ideas of what they want AI to solve. Talking to them is one of my favorite parts of the job.
That said, we also hear a lot of requests from groups that we have to turn down. Sometimes we can't guarantee success, or the product just isn’t ready for that use case. As an example, a ton of clinical interfaces only work on desktops, which we'd like to support but don't yet. We're hoping to grow into those over time.
We're evaluating Cua (https://www.ycombinator.com/companies/cua) to containerize our agents; I'm a fan so far. We're also putting computer-use agents from OpenAI and Anthropic to the test. Many legacy EHRs don't run in the browser, and we have to meet them there. I think we're a few months away from things working reliably and efficiently.
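For anyone who hasn't played with these yet, kicking off Anthropic's computer-use agent looks roughly like the sketch below. The model name, display size, and prompt are illustrative; the real work is the harness loop that executes each returned action against a sandboxed desktop and feeds screenshots back:

    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()

    # Ask the model to drive a virtual desktop; it replies with tool_use
    # blocks (click, type, screenshot, ...) for your harness to execute.
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=[{
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }],
        messages=[{"role": "user", "content": "Open the patient intake screen."}],
        betas=["computer-use-2024-10-22"],
    )
    for block in response.content:
        if block.type == "tool_use":
            print(block.name, block.input)  # e.g. computer {'action': 'screenshot'}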
We're evaluating several of the top models (both open and closed) for browser navigation (Claude's winning at the moment) and PDF extraction. Since we're performing repetitive tasks, the goal is to make our workflows RL-able. Being able to rely on OSS models will help a lot here.
We're building our own data sets and evaluations for many of the subtasks, using OpenAI's evals (https://github.com/openai/evals) as a framework to guide our own tooling.
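The core of that kind of tooling is small. Here's a minimal exact-match eval loop in the spirit of evals' basic Match class; the JSONL sample format is simplified, and `complete` stands in for whatever model call you're grading:

    import json

    def run_exact_match_eval(samples_path: str, complete) -> float:
        """Score a model on labeled subtasks: one JSON object per line,
        e.g. {"input": "DOB on this form: ...", "ideal": "1980-04-02"}."""
        total = correct = 0
        with open(samples_path) as f:
            for line in f:
                sample = json.loads(line)
                total += 1
                if complete(sample["input"]).strip() == sample["ideal"].strip():
                    correct += 1
        return correct / total if total else 0.0

    # Usage: accuracy = run_exact_match_eval("field_extraction.jsonl", my_model_fn)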
Apart from that, we write in TypeScript, Python, and Golang. We use Postgres for persistence (nothing fancy here). We host on AWS, and might go on-premises for some customers. We plan on investing a lot into our workflow system as the backbone of our product.
I prefer open source when possible. Everything's new and early, and many things require source changes that others might not be able to prioritize.
Edit: one thing I'd love to find a good solution for is reliably extracting handwriting from PDF documents. Clinicians have to do this a ton to keep the trains running on time, and being able to digitize that knowledge on the go would be huge.
Very open to ideas here. We're seeing great tools and products come up by the day, including from our own YC batch.
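One obvious baseline is to render each page and ask a vision model to transcribe it. A rough sketch, with pdf2image (Poppler) and gpt-4o as stand-ins; whether any current model is reliable enough on messy clinical handwriting is exactly the open question:

    import base64
    import io

    from openai import OpenAI                 # pip install openai
    from pdf2image import convert_from_path   # pip install pdf2image (needs poppler)

    client = OpenAI()

    def transcribe_handwriting(pdf_path: str) -> list[str]:
        """Render each PDF page to an image and ask a vision model
        to transcribe any handwriting, page by page."""
        pages = []
        for image in convert_from_path(pdf_path, dpi=300):
            buf = io.BytesIO()
            image.save(buf, format="PNG")
            b64 = base64.b64encode(buf.getvalue()).decode()
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[{
                    "role": "user",
                    "content": [
                        {"type": "text",
                         "text": "Transcribe all handwritten text on this page. "
                                 "Mark anything illegible as [unreadable]."},
                        {"type": "image_url",
                         "image_url": {"url": f"data:image/png;base64,{b64}"}},
                    ],
                }],
            )
            pages.append(response.choices[0].message.content)
        return pages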
Another learning is that once you free up clinician time, they quickly find higher-leverage tasks. It shows how overloaded the system is, and that there's pent-up demand to make it better.