Launch HN: Reducto Studio (YC W24) – Build accurate document pipelines, fast
We started Reducto when we realized that so many of today’s AI applications require good-quality data. Everyone knows that good inputs lead to better outputs, but 80% of the world’s data is still trapped inside things like messy PDFs and spreadsheets. Raunak and I launched a really early MVP for parsing and extracting from unstructured documents, and were lucky to get a lot of interest from technical teams once they realized the accuracy was something they hadn’t seen before.
We started by just releasing an API for engineers to build with, but over time we realized that an accurate API was only part of the puzzle. Our customers wanted to easily set up multi-step pipelines, evaluate and iterate on performance within their use case, and work with non-engineering teammates who were also involved in the real-world document processing flow.
That’s why we’re launching Reducto Studio, a web platform that sits on top of our APIs for users to build and iterate on end-to-end document pipelines.
With Studio, you can:
- Drop an entire file set and get per-field and per-document accuracy scores against your eval data (a rough sketch of the scoring follows below).
- Auto-generate and continuously optimize extraction schemas to hit production-grade quality fast.
- Save every run, iterate on parse/extract configs, and compare results side-by-side.
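To make the eval piece concrete: behind the Studio UI, per-field scoring boils down to something like the sketch below. This isn't our SDK or our actual scoring code (the field names and the exact-match comparison are simplified assumptions), but it shows the shape of what produces those accuracy numbers.

    # Minimal sketch: score extracted records against hand-labeled eval data,
    # field by field. Field names and exact-match comparison are illustrative.
    from collections import defaultdict

    def per_field_accuracy(predictions, labels):
        """predictions/labels: {doc_id: {field_name: value}}"""
        correct = defaultdict(int)
        total = defaultdict(int)
        for doc_id, truth in labels.items():
            pred = predictions.get(doc_id, {})
            for field, expected in truth.items():
                total[field] += 1
                # Exact match after whitespace stripping; a real eval would also
                # normalize dates, currency formats, etc.
                if str(pred.get(field, "")).strip() == str(expected).strip():
                    correct[field] += 1
        return {field: correct[field] / total[field] for field in total}

    # e.g. {"invoice_number": 0.98, "total_amount": 0.91, "due_date": 0.87}

Studio builds on that idea with the per-document rollups and score tracking across saved runs described above.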
You can see some examples here (https://studio.reducto.ai) or you can watch this walkthrough: https://www.loom.com/share/b243551741c642c6a594c00353fcecb3.
If you’d like to upload your own documents, you can log in and do so as well - we don’t make you book a demo or put down a payment to try it.
Thanks for reading and checking it out! This is only the first step for Studio, so we’d love feedback on anything: UX rough edges (we know they’re there!), features that would make evaluations better for you, hard documents you’ve had trouble with, or anything else about wrangling unstructured data.
But the real pain is always in the second and third batch, when formats change subtly. If Reducto becomes the system that adapts without you babysitting it, that's where it may win. Continuity is the moat, imo, among the competitors.
From the typography and layout to the line-work, down to how the gradients in the (currently in-fashion) large logotype at the bottom of the footer are tied in with texture.
Was it in-house or an agency? I'd love to see more of whoever's work it was.
https://www.datalab.to/
If you want to do a side-by-side with your use case, we'd be happy to set you up with free trial access.
We'll edit that to make it more clear
https://github.com/mathpix/mpxpy
Disclaimer: I'm the founder. Reducto does cool stuff on post processing (and other input formats), but some people have told me Mathpix is better at just getting data out of PDFs accurately.
YC seems to fund quite a few document extraction companies, even within the same batch:
- Pulse (YC W24): https://www.ycombinator.com/companies/pulse-3
- OmniAI (YC W24): https://www.ycombinator.com/companies/omniai
- Extend (YC W23): https://www.ycombinator.com/companies/extend
How do you differentiate from these? And how do you see the space evolving as LLMs commoditize PDF extraction?
Generally speaking, my view on the space is that this was crowded well before LLMs. We've met a lot of the folks that worked on things like drivers for printers to print PDFs in the 1990s, IDP players from the last few decades, and more recent cloud offerings.
The context today is clearly very different than it was in the IDP era though (human process with semi-structured content -> LLMs are going to reason over most human data), and so is the solution space (VLMs are an incredible new tool to help address the problem).
Given that, I don't think it's surprising that companies inside and outside of YC have pivoted into offering document processing APIs over the past year. We don't see differentiation coming from feature set alone, since that'll converge over time; instead we focus primarily on accuracy, reliability, and scalability, all three of which are substantially affected by last-mile improvements. I think the best testament I have to that is that the customers we've onboarded are very technical, and as a result are very thorough when choosing the right solution for them. That includes a company-wide rollout at one of the 4 biggest tech companies, one of the 3 biggest trading firms, and a big set of AI product teams like Harvey, Rogo, ScaleAI, etc.
At the end of the day I don't see VLM improvements as antagonistic to what we're doing. We already use them a lot for things like agentic OCR (correcting mistakes from our traditional CV pipeline). On some level our customers aren't just choosing us for PDF -> markdown; they're onboarding with us because they want to spend more of their time on the things that are downstream of having accurate data, and I expect there'll be room for us to make that even more true as models improve.
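To make the agentic OCR point concrete, here's a heavily simplified sketch of the pattern (not our actual pipeline; the model name, the 0.85 threshold, and the region object are all placeholders): only the spans the traditional CV pipeline is unsure about get escalated to a VLM for a second read.

    # Illustrative only: re-check low-confidence OCR regions with a VLM.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def correct_with_vlm(crop_png_bytes, ocr_text):
        image_b64 = base64.b64encode(crop_png_bytes).decode()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any vision-capable model works for the sketch
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": f"OCR read this region as:\n{ocr_text}\n"
                             "Return the corrected text only."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    def maybe_correct(region):
        # `region` is a hypothetical object with .confidence, .image (PNG bytes),
        # and .text from the traditional CV/OCR pass.
        if region.confidence < 0.85:  # threshold is an arbitrary example
            region.text = correct_with_vlm(region.image, region.text)
        return region

The point of gating on confidence is that the VLM only sees the small fraction of regions the deterministic pipeline flags as uncertain.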
I assume y'all launched before this to select partners? Or perhaps this is a new product on top of the core product?
Congrats! Keep at it!
To clarify, our API was already fully launched and in prod with customers when we raised our series A. This launch is specifically for the platform we're building around the API :)
In this case, the Reducto team seems to have cloned us down to the small details [1][2], which is a bit disappointing to see. But imitation is the best form of flattery I suppose! We thought deeply about how to build an ergonomic configuration experience for recursive type definitions (which is deceptively complex), and concluded that a recursive spreadsheet-like experience would be the best form factor (which we shipped over a year ago).
> "How do you see the space evolving as LLMs commoditize PDF extraction?"
Having worked with a ton of startups & F500s, we've seen that there's still a large gap for businesses in going from raw OCR outputs -> document pipelines deployed in prod for mission-critical use cases. LLMs and VLMs aren't magic, and anyone who goes in expecting 100% automation is in for a surprise.
The prompt engineering / schema definition is only the start. You still need to build and label datasets, orchestrate pipelines (classify -> split -> extract), detect uncertainty and correct with human-in-the-loop, fine-tune, and a lot more. You can certainly get close to full automation over time, but it takes time and effort, and that's where we come in. Our goal is to give AI teams all of that tooling on day 1, so they hit their accuracy targets quickly and focus on the complex downstream post-processing of that data.
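As a toy sketch of what that orchestration layer looks like (placeholder function names and an arbitrary confidence threshold, nothing to do with our production code):

    # Toy sketch: classify -> split -> extract, with low-confidence results
    # routed to a human review queue instead of straight to downstream systems.
    def process(document, classify, split, extract, review_queue, threshold=0.9):
        doc_type = classify(document)              # e.g. "invoice", "bank_statement"
        for section in split(document, doc_type):  # one logical record per section
            fields, confidence = extract(section, doc_type)
            if confidence < threshold:
                # Human-in-the-loop: queue for correction; the corrected labels
                # are what later feed dataset building and fine-tuning.
                review_queue.append((section, fields))
            else:
                yield fields
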
[1] https://dub.sh/ojv9b7p
[2] https://dub.sh/X7GFlDd
A schema builder with nested array fields has been part of our playground (and nearly every structured extraction solution) for a very long time and is just not something that we even view as a defining part of the platform.
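For anyone following along, a nested array field just means a schema along these lines (illustrative field names, written here as plain JSON Schema rather than any particular product's format):

    # An extraction schema where one field is an array of nested objects.
    invoice_schema = {
        "type": "object",
        "properties": {
            "invoice_number": {"type": "string"},
            "total_amount": {"type": "number"},
            "line_items": {                      # the nested array field
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "description": {"type": "string"},
                        "quantity": {"type": "integer"},
                        "unit_price": {"type": "number"},
                    },
                },
            },
        },
    }
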
It's not a big deal at the end of the day, and we're excited to see what we can both deliver for customers. Congrats on the launch!