Launch HN: WorkDone (YC X25) – AI Audit of Medical Charts
We got interested in this problem when we saw how often small documentation slip-ups can snowball into huge financial, legal, and even life-threatening outcomes. Sometimes it's just a mistyped medication time or a missing discharge note - basic stuff - but when you're dealing with claims and regulatory rules, a minor error can trigger an automatic denial. A copy-paste error on a discharge note will eventually be uncovered by the insurance provider and cost a stressful appeal. By the time an overworked clinical or compliance team discovers it, it's usually too late to just fix it. Our own experiences hit close to home: Dmitry's family member faced grave consequences from a misread lab result, and Sergey comes from a family of medical professionals that has battled these issues up close.
Here's our demo if you'd like to take a look: https://www.loom.com/share/add16021bb29432eba7f3254dd5e9a75
Our solution is a set of AI agents that plug directly into a clinic or hospital EHR/EMR system. As clinicians go about their daily routines, WorkDone continuously monitors the records. If it spots something that looks off - like a missing signature or a suspicious timestamp - it asks the responsible staff member to double-check and correct it on the spot. We want to prevent errors from becoming big headaches and wasted hours down the road. Technically, this involves running a secure event listener on top of EHR APIs and applying a group of coordinated AI agents that has been loaded with clinical protocols and payor rules and fine-tuned on historical claim denials and regulatory guidelines. The moment the model flags a potential error, an agent nudges the user to clarify or confirm. If it's a genuine mistake, we request correction approval from the provider, fix it right away, and store an audit trail for compliance. We are extending the approach to finding conflicting medications or prescribed treatments.
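To make the flow concrete, here is a minimal sketch of what a listener-plus-rules pass over chart records could look like. Everything here is my own illustration - the `check_chart`/`audit_loop` names, the two rules, and the chart schema are hypothetical, not WorkDone's actual API; in their system the rule checks would be agent/model calls rather than hand-written conditions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Flag:
    chart_id: str
    rule: str
    detail: str

def check_chart(chart: dict) -> list:
    """Run lightweight documentation checks over one chart record.

    The two rules below (missing signature, note timestamped after
    discharge) are illustrative stand-ins for the clinical/payor rule
    sets described above; timestamps are ISO-8601 strings.
    """
    flags = []
    if not chart.get("signature"):
        flags.append(Flag(chart["id"], "missing_signature",
                          "note has no clinician signature"))
    discharged = chart.get("discharged_at")
    for note in chart.get("notes", []):
        if discharged and note["timestamp"] > discharged:
            flags.append(Flag(chart["id"], "suspicious_timestamp",
                              "note written after discharge: " + note["timestamp"]))
    return flags

def audit_loop(charts, notify):
    """Event-listener-style pass: flag issues, nudge staff via `notify`,
    and record every flag in an audit trail for compliance."""
    trail = []
    for chart in charts:
        for flag in check_chart(chart):
            notify(flag)  # ask the responsible staff member to confirm/correct
            trail.append((datetime.now(timezone.utc).isoformat(), flag))
    return trail
```

In a real integration, `charts` would come from polling or subscribing to the EHR API in read-only mode, and `notify` would send the email/dashboard nudge described below.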
What's different about our approach from AI tools for hospital revenue management is this focus on near-real-time intervention. Most tools detect errors after the claim has already been submitted, so compliance teams end up firefighting. We think the best place to fix something is in the flow of work itself. One common question about the use of AI in the medical/health field is: what if the AI hallucinates or gets something wrong? In our case, since the tool is flagging possible errors and its primary effect is to get extra human review, there's no impact on anything health-critical like treatments. Rather, the risk is that too many false positives could waste staff members' valuable time. For pilots, we are starting with a read-only mode in which we use the API only to retrieve data, and we can see that the QA we built into the agent orchestration layer does a pretty good job of spotting common documentation mistakes even in lengthy charts (for instance, a multi-day hospital stay).
We’re in the early stages of refining our system, and we’d love feedback from the community. If you have ideas on integrating with EHRs, experiences with compliance tools, or just general insights about working in healthcare environments, we’re all ears. We’re also on the lookout for early users - particularly rehabs, small clinics and hospitals - willing to give our AI a try and tell us where it needs improvement.
Thanks for reading, and let us know what you think!
I have extensive experience with it and am willing to help.
Information is at https://worldvista.org, https://hardhats.org, and https://va.gov/vdl
They have been aware for a few years that many clinicians aren’t documenting their work in the best way for billing. The current solution is to have an annual talk given by the one billing expert in their department pointing out where people often lose revenue due to poor documentation.
Not all the doctors attend this talk. There is no internal process for measuring subsequent improvements quantitatively. There are 85 doctors in her group.
Anyway, this is just to say that something automated to help doctors document their work in a billing-friendly way seems powerful. But for my wife's group, the issue doesn't seem to be denied claims or "errors" per se - more omissions and suboptimal documentation due to lack of knowledge, or lack of follow-through on knowledge that is only occasionally communicated.
The goal of your wife's hospital here is to increase revenue, and the outcome, AI-assisted or not, is more accurate visit notes, which leads to more accurate billing, which would lead to higher costs to the patient for the same medical care.
If that's right, I suppose a truer reflection of the medical care provided is a good unto itself, but I have to say I don't love the outcome as someone who's a patient and not a shareholder (401k notwithstanding).
To start, we integrate with Kipu and Athena - it just happened that our first clients are rehabs and clinics that use these two.
Good point on the desire to stay in the EHR for the review workflow - this is our vision, which could be achieved with widgets in specific EHRs, but that is down the road. Once a mistake is identified, we notify clinical professionals via standard channels like email and also keep a dashboard with the list of 'topics' inside our portal.
Oops - it's my job to catch that as the editor. I've added it above now. Thanks for the heads-up!
1. How often is the cause of a denied insurance claim a documentation error vs an intentional denial from an insurance company (either an automated system or medical reviewer)?
2. This feels very conceptually similar to an AI review bot, but the threshold for false positives feels higher. What does the process look like for double checking a false positive in the agent orchestration layer?
1. It really depends on the clinical specialty, but the average is around 25% (e.g., 250M claims denied a year because of documentation mistakes). We work with rehabs, where this ratio is above 50%.
2. It's triple checking - run the analysis twice and then verify the conclusion, 3+ separate agent calls.
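For anyone curious what that looks like mechanically, here's a rough sketch of a run-twice-then-verify orchestration. The `analyze` and `verify` callables stand in for separate LLM agent calls and are my own hypothetical interfaces, not WorkDone's actual orchestration code:

```python
def triple_check(chart_text, analyze, verify):
    """Run the analysis twice with independent agent calls, keep only the
    findings both runs agree on, then have a verifier call confirm each
    surviving finding - at least three separate agent calls in total.

    `analyze(text)` returns a set of candidate findings (strings);
    `verify(finding, text)` returns True if the finding holds up.
    """
    candidates = analyze(chart_text) & analyze(chart_text)  # agreement filter
    return sorted(f for f in candidates if verify(f, chart_text))
```

In production each callable would be a separate model call with its own prompt; the set intersection is a crude stand-in for reconciling two free-form analyses.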
Medical offices where more than 50% of the denied claims are because of documentation mistakes? I'm confused why they are still operating. Is this not malpractice of some kind?
"A cost of doing business" means they do not care that over half their denials are not real. You are working with businesses that do not care about humans and will use your tool to extract more profit. I'd rather they go out of business if they cannot get 50% of their claims done in a way that doesn't get them auto-rejected.
How'd you guys find your initial users and figure out rehab clinics were a good place to start?
Glad to see something like this taking shape, and hopefully helping small businesses (and larger ones too) get a leg up on these insurance companies - and, of course, avoiding the real-world medical dangers you've mentioned.
Kudos on launching.
Is this even possible in the EU with the GDPR and its stricter rules on medical data?
(1) Don't confuse medical errors with claims errors. Your claims-amplifying customers don't really care about medical errors; they're mainly just optimizing their extraction from government and insurance payment systems. (And the vast majority of medical errors take significant skill to detect - beyond even complicated decision support systems.)
For claims errors, I would rather the system provide feedback to Epic's EHR engineers than try to block providers. Epic IT should be getting regular reports that prompt them to fix their UI issues.
But then I care more about fixing the Epic UI than claims.
(2) Epic and other EHRs are an epic UI failure (not surprising, since they were driven not by user need but by top-down requirements). Epic has random and super-complicated form UIs, forcing users into convoluted multi-step workflows to say something trivial.
Today in medicine, the logistics of interfacing with Epic and other EHRs take longer than the actual care. (Just imagine having to use a compiler that took longer than you did to write the code.) It's the scourge of medical care today.
In that context, imagine: you want to build a system that argues with providers when they're done, based on AI logic completely separate from the Epic system logic? It's hard to imagine a better way to make a bad situation worse.
What would be a huge benefit instead is an AI tester for Epic: something that could generate all the ways users might see the UI and need to use it, and quantify all the confusing and unnecessary visuals and steps, to actually measure usability. Think user modeling and fuzzing coupled with progressive pruning of workflows, with actual metrics of system and workflow complexity.
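As a toy illustration of the workflow-complexity-metric half of this idea (the UI graph model and tasks are my own assumptions, nothing Epic-specific): model the UI as a directed graph of screens/fields, where each edge is one user action, and score a workflow by the minimum number of actions it requires. A fuzzer would then enumerate many such tasks and flag the expensive ones.

```python
from collections import deque

def min_actions(ui_graph, start, goal):
    """Fewest user actions to get from one screen/field to another, with
    the UI modeled as a directed graph (nodes = screens, edges = single
    clicks or keystrokes). BFS gives the shortest path in unit steps."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, steps = queue.popleft()
        if node == goal:
            return steps
        for nxt in ui_graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return None  # goal unreachable

def workflow_cost(ui_graph, tasks):
    """Sum the minimum action counts over (start, goal) tasks - one crude
    metric of how much clicking a set of common workflows demands."""
    return sum(min_actions(ui_graph, s, g) for s, g in tasks)
```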
That usability testing would probably be useful in other domains, too. But starting with Epic would be good because it has so many UI errors (high signal to noise), and saving time for highly-paid, highly-blocked users translates directly to dollars. You could sell it to every Epic integrator in the US. Those customers are easy to find and target, they have strong needs, and they can work with you as your technology evolves (and filter the inaccuracies). By giving them objective measures of usability/complexity, you simplify their design space and give them a clean way to measure system improvement, reducing the level of politics they endure.
Along the way you would build AI models over user interactions (instead of tokens). Then you could build interactive auto-completing UIs that work based on session observation and voice alone.
Unless I'm searching or researching, I don't want AI to replicate what's been said. I want AI to anticipate what I'm doing, and afford me the choices I need to make. That's exactly the model of diagnosis and treatment.
As for usability feedback to Epic engineers, they generally aren't interested in what you have to say. I mean Epic does make product changes based on customer feedback but they don't listen to input from random other companies and they certainly don't pay for it. Their culture is more that they know the correct way to do things and customers should change their processes to fit the software.
https://www.acquired.fm/episodes/epic-systems-mychart