Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection

36 points by bpcrd | 20 comments | 8/18/2025, 3:16:34 PM | realitydefender.com
Hi HN! This is Ben from Reality Defender (https://www.realitydefender.com). We build real-time multimodal and multi-model deepfake detection for Fortune 100s and governments all over the world. (We even won the RSAC Innovation Showcase award for our work: https://www.prnewswire.com/news-releases/reality-defender-wi...)

Today, we’re excited to share our public API and SDK, allowing anyone to access our platform with 2 lines of code: https://www.realitydefender.com/api

Back in W22, we launched our product to detect AI-generated media across audio, video, and images: https://news.ycombinator.com/item?id=30766050

That post kicked off conversations with devs, security teams, researchers, and governments. The most common question: "Can we get API/SDK access to build deepfake detection into our product?"

We’ve heard that from solo devs building moderation tools, fintechs adding ID verification, founders running marketplaces, and infrastructure companies protecting video calls and onboarding flows. They weren’t asking us to build anything new; they simply wanted access to what we already had so they could plug it in and move forward.

After running pilots and engagements with customers, we’re finally ready to share our public API and SDK. Now anyone can embed deepfake detection with just two lines of code, starting at the low price of free.

https://www.realitydefender.com/api

Our new developer tools support detection across images, voice, video, and text — with images and voice available on the free tier. If your product touches KYC, UGC, support workflows, communications, marketplaces, or identity layers, you can now embed real-time detection directly in your stack. It runs in the cloud, and longstanding clients using our platform have also deployed on-prem, at the edge, or on fully airgapped systems.

SDKs are currently available in Python, Java, Rust, TypeScript, and Go. The first 50 scans per month are free, with usage-based pricing beyond that. If you’re working on something that requires other features or streaming access (like real-time voice or video), email us directly at yc@realitydefender.com
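
To give a concrete sense of that, here’s a rough sketch of a scan in Python. This is illustrative only: the package, method, and field names below are placeholders rather than our documented interface, so check the docs at https://www.realitydefender.com/api for the real thing.

    # Illustrative sketch only -- names are placeholders, not the documented SDK interface.
    from realitydefender import RealityDefender  # assumed package/class name

    client = RealityDefender(api_key="YOUR_API_KEY")                 # hypothetical constructor
    result = client.detect_file("suspicious_onboarding_selfie.jpg")  # hypothetical scan call

    print(result.verdict, result.score)  # e.g. a manipulated/authentic label plus a confidence score (assumed fields)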

Much has changed since 2022. The threats we imagined back then now show up in everyday support tickets and incident reports. We’ve seen voice deepfakes targeting bank call centers to commit real-time fraud; fabricated documents and AI-generated selfies slipping through KYC and IDV onboarding systems; fake dating profiles, AI-generated marketplace sellers, and “verified” influencers impersonating real people. Political disinformation videos and synthetic media leaks have triggered real-world legal and PR crises. Even reviews, support transcripts, and impersonation scripts are increasingly AI-generated.

Detection has also proven harder than we expected when we began in 2021: new generation methods emerge every few weeks and invalidate prior assumptions. That’s why we’re committed to building every layer of this ourselves. We don’t license or white-label detection models; everything we deploy is built in-house by our team.

Since our original launch, we’ve worked with tier-one banks, global governments, and media companies to deploy detection inside their highest-risk workflows. But we always believed the need wasn’t limited to large institutions; it’s everywhere. It showed up in YC office hours, in early bug reports, and in group chats after our last HN post.

We’ve taken our time to make sure this was built well, flexible enough for startups, and battle-tested enough to trust in production. The API you can use today is the same one powering many of our enterprise deployments.

Our goal is to make Reality Defender feel like Stripe, Twilio, or Plaid — an invisible, trusted layer that you can drop into your system to protect what matters. We feel deepfake detection is a key component of critical infrastructure, and like any good infrastructure, it should be modular, reliable, and boring (in the best possible way).

Reality Defender is already in the Zoom marketplace and will be on the Teams marketplace soon. We will also power deepfake detection for identity workflows, support platforms, and internal trust and safety pipelines.

If you're building something where trust, identity, or content integrity matter, or if you’ve run into weird edge cases you can’t explain, we’d love to hear from you.

You can get started here: https://realitydefender.com/api

Or you can try us for free in two different ways:

1) 1-click add to Zoom / Teams to try in your own calls immediately.

2) Email us up to 50 files at yc@realitydefender.com and we’ll scan them for you — no setup required.

Thanks again to the HN community for helping launch us three years ago. It’s been a wild ride, and we’re excited to share something new. We live on HN ourselves and will be here for all your feedback. Let us know what you think!

Comments (20)

taneq · 2h ago
Yeah but does it actually work, though? There have been a lot of online tools claiming to be "AI detectors" and they all seem pretty unreliable. Can you talk us through what you look for, the most common failure modes and (at suitably high level) how you dealt with those?
bpcrd · 1h ago
We've actually deployed to several Tier 1 banks and large enterprises already for various use cases (verification, fraud detection, threat intelligence, etc.). The feedback we've gotten so far is that our technology is highly accurate and a useful signal.

In terms of how our technology works, our research team has trained multiple detection models to look for specific visual and audio artifacts that the major generative models leave behind. These artifacts aren't perceptible to the human eye / ear, but they are actually very detectable to computer vision and audio models.

Each of these expert models gets combined into an ensemble system that weighs all the individual model outputs to reach a final conclusion.
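
As a rough illustration (toy code, not our production system), the ensemble step boils down to something like this -- every name and weight below is made up for the example:

    # Toy sketch of the ensemble idea above -- not our production code.
    # Each expert model emits a probability that its artifact type is present;
    # the ensemble combines those with per-model weights into one verdict.
    def ensemble_verdict(scores: dict[str, float], weights: dict[str, float],
                         threshold: float = 0.5) -> tuple[str, float]:
        total = sum(weights[name] for name in scores)
        combined = sum(scores[name] * weights[name] for name in scores) / total
        label = "LIKELY_MANIPULATED" if combined >= threshold else "LIKELY_AUTHENTIC"
        return label, combined

    # Example: visual experts are confident, the audio expert is not (weights illustrative).
    label, score = ensemble_verdict(
        scores={"face_artifacts": 0.92, "frequency_artifacts": 0.71, "audio_artifacts": 0.40},
        weights={"face_artifacts": 0.5, "frequency_artifacts": 0.3, "audio_artifacts": 0.2},
    )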

We've got a rigorous process of collecting data from new generators, benchmarking them, and retraining our models when necessary. Often retrains aren't needed though, since our accuracy seems to transfer well across a given deepfake technique. So even if new diffusion or autoregressive models come out, for example, the artifacts tend to be similar and are still caught by our models.

I will say that our models are most heavily benchmarked on convincing audio/video/image impersonations of humans. While we can return results for items outside that scope, we've tended to focus training and benchmarking on human impersonations since that's typically the most dangerous risk for businesses.

So that's a caveat to keep in mind if you decide to try out our Developer Free Plan.

asail77 · 1h ago
Give it a try for yourself. It's free!

We have been working on this problem since 2020 and have created and trained an ensemble of AI detection models that work together to tell you what is real and what is fake!

AlecSchueler · 13m ago
It's sadly not often enough I see a young company doing work that I feel only benefits society, but this is one of those times, so thank you and congratulations.
seanw265 · 1h ago
How do you prevent bad actors from using your tools as a feedback loop to tune models that can evade detection?
bpcrd · 41m ago
We see who signs up for Reality Defender and monitor traffic patterns and other anomalies that tell us when an account is violating our terms of service. Also, our free tier is capped at 50 scans a month, which isn't enough for an attacker to extract any tangible learnings or tactics they could use to bypass our detection models.
lja · 51m ago
You would need thousands to tens of thousands of images, not just 50, to produce an adversarial network that could use the API as a check.

If someone wanted to buy it, I'm sure Reality Defender has protections, especially because you can predict adversarial guesses.

It would be trivial for them to build "this user is sending progressively more realistic, rapid responses" if they haven't built that already.
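
Toy sketch of that heuristic (hypothetical, not anyone's real anti-abuse logic):

    # Flag accounts whose submissions arrive rapidly and whose "fake" scores trend
    # steadily downward, i.e. someone tuning a generator against the detector.
    from statistics import correlation  # Python 3.10+

    def looks_like_probing(timestamps: list[float], fake_scores: list[float],
                           max_gap_s: float = 60.0, min_trend: float = 0.8) -> bool:
        if len(fake_scores) < 5:
            return False
        rapid = all(b - a <= max_gap_s for a, b in zip(timestamps, timestamps[1:]))
        # Strong negative correlation between attempt index and fake score means the
        # submissions are getting progressively harder to detect.
        trend = correlation(list(range(len(fake_scores))), fake_scores)
        return rapid and trend <= -min_trend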

primitivesuave · 1h ago
First want to say that I sincerely appreciate you working on this problem. The proliferation of deepfakes is something that virtually every technology industry is dealing with right now.

Suppose that deepfake technology progressed to the point where it is still detectable by your technology, but is impossible for the naked eye. In that scenario (which many would call an eventuality), wouldn't you also be compelled to serve as an authoritative entity on the detection of deepfakes?

Imagine a future politician who is caught on video doing something scandalous, or a court case where someone is questioning the veracity of some video evidence. Are the creators of deepfake detection algorithms going to testify as expert witnesses, and how could they convince a human judge/jury that the output of their black box algorithm isn't a false positive?

BananaaRepublik · 2h ago
Won't this just become the fitness function for training future models?
bee_rider · 1h ago
Just based on the first post, where they talk about their API a bit, it sounds like a system hosted on their machines(?). So, I assume AI trainers won’t be able to run it locally, to train off it.

Although, I always get a bad smell from that sort of logic, because it feels vaguely similar to security through obscurity in the sense that it relies on the opposition now knowing what you are doing.

chrisweekly · 1h ago
now -> not
bee_rider · 1h ago
True, haha. Although “the opposition now knowing what you are doing” is the big danger for this sort of scheme!
darenfrankel · 1h ago
I worked in the fraud space and could see this being a useful tool for identifying AI generated IDs + liveness checks. Will give it a try.
Grimblewald · 1h ago
I feel like a much easier solution is enforcing data provenance: SSL for media, i.e. sign a hash of the content and attach it to the metadata. The problem with AI isn't that it's AI; it's that people can invest little effort to sway things with undue leverage. A single person can look like hundreds with significantly less effort than before. The problem with AI content is that it makes abuse of public spaces much easier. Forcing people to take credit for what they publish makes things easier (not solved), kind of like email. Being able to block media by domain would be a dream, but spam remains an issue.

So, tie content to domains. A domain vouching for content works just like that content having been a webpage or email from said domain. A signed hash in the metadata is backwards compatible, and it's easy to make browsers etc. display warnings on unsigned content, content from new domains, blacklisted domains, and so on.
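
Rough sketch of what I mean (illustrative only; a real provenance scheme would be far more involved):

    # The publishing domain signs a hash of the media; the signature travels in metadata.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    domain_key = Ed25519PrivateKey.generate()      # in practice: the domain's long-lived key
    media_bytes = b"<raw media bytes>"             # in practice: the file contents
    digest = hashlib.sha256(media_bytes).digest()

    signature = domain_key.sign(digest)            # ship in metadata alongside "signed-by: example.com"

    # A browser/client fetches example.com's public key (cf. DKIM for email), verifies,
    # and warns on unsigned content or unknown/blacklisted domains.
    domain_key.public_key().verify(signature, digest)  # raises InvalidSignature if tampered with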

The benefit here is that while we'll have more false negatives, unlike something like this tool it does not cause real harm on false positives, which will be numerous if it wants to be better than simply making someone accountable for media.

AI detection cannot work, will not work, and will cause more harm than it prevents. Stuff like this is irresponsible and dangerous.

m4tthumphrey · 2h ago
I feel like this will be the next big cat-and-mouse saga after ad-blockers:

1) Produce AI tool

2) Tool gets used for bad

3) Use anti-AI/AI detection to avoid/check for AI tool

4) AI tool introduces anti-anti-AI/detection tools

5) Repeat


abhisek · 2h ago
About time. Much needed. I just wish this was open source and built in public.

On my todo list to build a bot that finds sly AI responses for engagement farming

candiddevmike · 2h ago
On a 2k desktop using Chrome, your website font/layout is way too big, especially your consent banner--it takes up 1/3 of the screen.
viggity · 2h ago
I find that most companies find that to be a feature, not a bug. People are more likely to hit accept if they can only see a small chunk of the content.
colehud · 2h ago
And the scrolling behavior is infuriating
caxco93 · 1h ago
please do not hijack the scroll wheel