Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection
Today, we’re excited to share our public API and SDK, allowing anyone to access our platform with two lines of code: https://www.realitydefender.com/api
Back in W22, we launched our product to detect AI-generated media across audio, video, and images: https://news.ycombinator.com/item?id=30766050
That post kicked off conversations with devs, security teams, researchers, and governments. The most common question: "Can we get API/SDK access to build deepfake detection into our product?"
We’ve heard that from solo devs building moderation tools, fintechs adding ID verification, founders running marketplaces, and infrastructure companies protecting video calls and onboarding flows. They weren’t asking us to build anything new; they simply wanted access to what we already had so they could plug it in and move forward.
After running pilots and engagements with customers, we’re finally ready to share our public API and SDK. Now anyone can embed deepfake detection with just two lines of code, starting at the low price of free.
https://www.realitydefender.com/api
Our new developer tools support detection across images, voice, video, and text — with the former two available as part of the free tier. If your product touches KYC, UGC, support workflows, communications, marketplaces, or identity layers, you can now embed real-time detection directly in your stack. It runs in the cloud, and longstanding clients using our platform have also deployed on-prem, at the edge, or on fully airgapped systems.
SDKs are currently available in Python, Java, Rust, TypeScript, and Go. The first 50 scans per month are free, with usage-based pricing beyond that. If you’re working on something that requires other features or streaming access (like real-time voice or video), email us directly at yc@realitydefender.com.
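To give a feel for what that looks like, here’s a minimal sketch in Python. The package, class, method, and result field names here are illustrative placeholders rather than the SDK’s documented interface, so check the docs linked above for the exact calls:

```python
# Illustrative placeholders only, not the documented SDK surface.
from realitydefender import RealityDefender  # assumed package name

client = RealityDefender(api_key="YOUR_API_KEY")
result = client.detect_file("suspect_selfie.jpg")  # images and voice are on the free tier

print(result.status, result.score)  # e.g. "MANIPULATED", 0.97
```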
Much has changed since 2022. The threats we imagined back then now show up in everyday support tickets and incident reports. We’ve witnessed voice deepfakes targeting bank call centers to commit real-time fraud; fabricated documents and AI-generated selfies slipping through KYC and IDV onboarding systems; and fake dating profiles, AI-generated marketplace sellers, and “verified” influencers impersonating real people. Political disinformation videos and synthetic media leaks have triggered real-world legal and PR crises. Even reviews, support transcripts, and impersonation scripts are increasingly generated by AI.

Detection remains harder than we expected when we began in 2021. New generation methods emerge every few weeks and invalidate prior assumptions. That’s why we’re committed to building every layer of this ourselves: we don’t license or white-label detection models; everything we deploy is built in-house by our team.
Since our original launch, we’ve worked with tier-one banks, governments around the world, and media companies to deploy detection inside their highest-risk workflows. But we always believed the need wasn’t limited to large institutions; it showed up in YC office hours, in early bug reports, and in group chats after our last HN post.
We’ve taken our time to make sure this was built well, flexible enough for startups, and battle-tested enough to trust in production. The API you can use today is the same one powering many of our enterprise deployments.
Our goal is to make Reality Defender feel like Stripe, Twilio, or Plaid — an invisible, trusted layer that you can drop into your system to protect what matters. We feel deepfake detection is a key component of critical infrastructure, and like any good infrastructure, it should be modular, reliable, and boring (in the best possible way).
Reality Defender is already in the Zoom marketplace and will be on the Teams marketplace soon. We will also power deepfake detection for identity workflows, support platforms, and internal trust and safety pipelines.
If you're building something where trust, identity, or content integrity matters, or if you’ve run into weird edge cases you can’t explain, we’d love to hear from you.
You can get started here: https://realitydefender.com/api
Or you can try us for free in two different ways:
1) One-click add to Zoom or Teams to try it in your own calls immediately.
2) Email us up to 50 files at yc@realitydefender.com and we’ll scan them for you — no setup required.
Thanks again to the HN community for helping launch us three years ago. It’s been a wild ride, and we’re excited to share something new. We live on HN ourselves and will be here for all your feedback. Let us know what you think!
So, tie content to domains. A domain vouching for a piece of content works like that content having been a webpage or email from said domain. A signed hash in the metadata is backwards compatible, and it’s easy to make browsers etc. display warnings on unsigned content, content from new domains, blacklisted domains, and so on (see the sketch below).
The benefit here is that while we’ll have more false negatives, unlike something like this tool, it doesn’t cause real harm on false positives, which will be numerous if this tool wants to be better than simply making someone accountable for media.
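A minimal sketch of that scheme in Python, assuming the domain distributes an Ed25519 public key through some verifiable channel (a hypothetical /.well-known/ endpoint or DNS record, not shown here):

```python
# Sketch: a domain vouches for content by signing its SHA-256 hash; the
# (domain, signature) pair rides along in the file's metadata, and a
# browser verifies it against the key published by that domain.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: example.com signs the hash of the raw media bytes.
domain = "example.com"
private_key = Ed25519PrivateKey.generate()  # in practice, the domain's long-lived key
public_key = private_key.public_key()       # the part the domain publishes

content = b"...raw image/video bytes..."
signature = private_key.sign(hashlib.sha256(content).digest())

# Consumer side: recompute the hash and verify the metadata signature
# against the key fetched for `domain`; failure means unsigned or
# tampered, and the UI can warn accordingly.
try:
    public_key.verify(signature, hashlib.sha256(content).digest())
    print(f"vouched for by {domain}")
except InvalidSignature:
    print("warning: unsigned or tampered content")
```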
AI detection cannot work, will not work, and will cause more harm than it prevents. Stuff like this is irresponsible and dangerous.
We have been working on this problem since 2020 and have created and trained an ensemble of AI detection models that work together to tell you what is real and what is fake!
1) Produce AI tool
2) Tool gets used for bad
3) Use anti-AI/AI detection to avoid/check for AI tool
4) AI tool introduces anti-anti-AI/detection tools
5) Repeat
On my to-do list: build a bot that finds sly AI responses for engagement farming.