Show HN: Airweave – Let agents search any app
A couple of months ago we were building agents that interacted with different apps, and we were frustrated when they struggled to handle vague natural language requests like "resolve that one Linear issue about missing auth configs", "if you get an email from an unsatisfied customer, reimburse their payment in Stripe", or "what were the returns for Q1 based on the financials sheet in gdrive?". The agent would inefficiently chain together loads of function calls to find the data, or not find it at all and hallucinate.
We also noticed that, even as the rise of MCP created more demand for agents to interact with external resources, most agent dev tooling focused on function calling and actions rather than search. We were annoyed by the lack of tooling that let agents semantically search workspace or database contents, so we started building Airweave as an internal solution. After positive reactions from coworkers and other agent builders, we decided to open-source it and pursue it full time.
Airweave connects to productivity tools, databases, and document stores via their APIs and transforms their contents into searchable knowledge bases, accessible to the agent through a standardized search interface exposed via REST or MCP. When using MCP, Airweave essentially builds a semantically searchable MCP server on top of the resource. The platform handles the entire data pipeline, from connection and extraction to chunking, embedding, and serving. To keep knowledge current, it syncs automatically on configurable schedules and detects changes through content hashing.
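To make that concrete, here's a minimal sketch of what querying a synced collection over REST might look like. The base URL, the /collections/{id}/search path, the query parameters, and the response shape are illustrative assumptions, not the exact Airweave API; the content_hash helper just shows the idea behind the hash-based change detection mentioned above.

    import hashlib
    import requests

    AIRWEAVE_URL = "http://localhost:8001"   # assumed local deployment URL
    COLLECTION = "support-workspace"          # hypothetical collection id

    def content_hash(text: str) -> str:
        # Change detection idea: only re-chunk/re-embed a document when its hash changes.
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def search(query: str, limit: int = 5) -> list[dict]:
        # Hypothetical REST call against the standardized search interface.
        resp = requests.get(
            f"{AIRWEAVE_URL}/collections/{COLLECTION}/search",
            params={"query": query, "limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("results", [])

    if __name__ == "__main__":
        for hit in search("Linear issue about missing auth configs"):
            print(hit.get("source"), hit.get("score"), str(hit.get("text", ""))[:80])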
We built it with support for white-labeled multi-tenancy to provide OAuth2-based integration across multiple user accounts while maintaining privacy and security boundaries. We're also actively working on permission-awareness (i.e., RBAC on the data) for the platform.
Happy to share learnings and get insights from your experiences. Looking forward to your comments!
Don’t they just adapt existing APIs to the MCP protocol, basically just wrapping them?
You can compare it to how coding agents like Cursor work. This is the usual pattern you see:
- The first step is reading your prompt.
- Then it goes through all the attached files and searches your codebase.
- The last step is to make code file edits.
Non-coding agents that use "regular" MCP servers completely miss the second step. It's very hard to go from a natural language instruction to a chain of API calls that actually works and doesn't end up in hallucination.
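Roughly, the missing second step looks like handing the agent one semantic-search tool instead of a pile of raw API wrappers. Here's a minimal sketch using the MCP Python SDK's FastMCP; the server name, tool name, and the backend search endpoint are hypothetical placeholders, not Airweave's actual MCP server.

    import requests
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("workspace-search")  # hypothetical server name

    @mcp.tool()
    def search_workspace(query: str, limit: int = 5) -> list[dict]:
        """Semantically search synced workspace content (Linear, Gmail, gdrive, ...)."""
        # Hypothetical backend endpoint; in practice this is whatever serves the embedded chunks.
        resp = requests.get(
            "http://localhost:8001/search",
            params={"query": query, "limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("results", [])

    if __name__ == "__main__":
        # The agent calls search_workspace(...) first, then decides which actions to chain.
        mcp.run()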