Show HN: EnrichMCP – A Python ORM for Agents
EnrichMCP is built on top of MCP (Model Context Protocol) and acts like an ORM, but for agents instead of humans. You define your data model using SQLAlchemy, APIs, or custom logic, and EnrichMCP turns it into a type-safe, introspectable interface that agents can discover, traverse, and invoke.
It auto-generates tools from your models, validates all I/O with Pydantic, handles relationships, and supports schema discovery. Agents can go from user → orders → product naturally, just like a developer navigating an ORM.
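To give a feel for the SQLAlchemy path, here's a minimal sketch: the models are ordinary SQLAlchemy declarative classes, and the EnrichMCP wiring is left as comments because the calls shown there are placeholders rather than the real API (the docs have the real thing):

```python
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str]
    orders: Mapped[list["Order"]] = relationship(back_populates="user")


class Order(Base):
    __tablename__ = "orders"

    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
    product: Mapped[str]
    user: Mapped["User"] = relationship(back_populates="orders")


# From here, EnrichMCP reflects the models and their relationships into typed,
# discoverable MCP tools. The wiring is omitted because the exact calls belong
# to the library's docs, not this sketch:
#   app = ...   # create the EnrichMCP app
#   ...         # register Base's models with it
#   app.run()
```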
We use this internally to let agents query production systems, call APIs, apply business logic, and even integrate ML models. It works out of the box with SQLAlchemy and is easy to extend to any data source.
If you're building agentic systems or anything AI-native, I'd love your feedback. Code and docs are here: https://github.com/featureform/enrichmcp. Happy to answer any questions.
That doesn't actually solve the problem. What you really need is access to internal systems. The agent should be able to look up the order, check the courier status, pull the restaurant's delay history, and decide whether to issue a refund. None of that lives in documentation. It lives in your APIs and databases.
LLMs aren't limited by reasoning. They're limited by access.
EnrichMCP gives agents structured access to your real systems. You define your internal data model using Python, similar to how you'd define models in an ORM. EnrichMCP turns those definitions into typed, discoverable tools the LLM can use directly. Everything is schema-aware, validated with Pydantic, and connected by a semantic layer that describes what each piece of data actually means.
You can integrate with SQLAlchemy, REST APIs, or custom logic. Once defined, your agent can use tools like get_order, get_restaurant, or escalate_if_late with no additional prompt engineering.
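To make that concrete, here's a minimal sketch of the style I mean. The field names, the sla_minutes parameter, and the stubbed data fetch are illustrative, and I've left out the registration decorators entirely; the point is that tools are just typed Pydantic models plus plain async functions carrying your business logic:

```python
from datetime import datetime, timedelta, timezone

from pydantic import BaseModel, Field


class Order(BaseModel):
    """A customer order, as the agent sees it."""

    id: int = Field(description="Order ID")
    placed_at: datetime = Field(description="When the order was placed")
    delivered_at: datetime | None = Field(default=None, description="Delivery time, if delivered")
    restaurant_id: int = Field(description="Restaurant that took the order")


async def get_order(order_id: int) -> Order:
    """Fetch one order from the internal orders service (stubbed here)."""
    # In a real server this calls your API or database.
    return Order(id=order_id, placed_at=datetime.now(timezone.utc), restaurant_id=1)


async def escalate_if_late(order_id: int, sla_minutes: int = 45) -> bool:
    """Business logic exposed as a tool: escalate orders that blew the SLA."""
    order = await get_order(order_id)
    late = order.delivered_at is None and (
        datetime.now(timezone.utc) - order.placed_at > timedelta(minutes=sla_minutes)
    )
    if late:
        ...  # open a support ticket, kick off the refund workflow, etc.
    return late
```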
It feels less like stitching prompts together and more like giving your agent a real interface to your business.
Couldn't you just point the agent at the database and give it a way to see:
- what tables are there
- the table schemas and relationships
Based on that, the agent could easily query the tables to extract info. Not sure why we need a "framework" for this.
Time-to-solution and quality would be my guess. In my experience, putting high-level details about how the information is organized at the start of the context, and then explaining the tools for exploring the schema or accessing data, produces much more consistent results than making each inference query the system and build its own world view before it can even start figuring out how to answer your question.
It's a bit like the difference between giving you a book with a table of contents and an index, and giving you the same book with neither, where all you can do is basic text search over the whole thing.
Obviously, it can (and sometimes will) hallucinate and make up reasons for why it's using a tool. The thing is, we don't really have true LLM explainability, so this is the best we can do.
What is your experience with non-trivial DB schemas?
We also generate a few tools specifically for explaining the data model to the LLM. It works quite well, even on complex schemas.
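If you were hand-rolling the equivalent, it's essentially one tool that dumps the entity graph as text the model reads before it starts touching data. A rough sketch with plain Pydantic introspection (not the generated tool itself):

```python
from pydantic import BaseModel


def describe_data_model(entities: dict[str, type[BaseModel]]) -> str:
    """Render entities, fields, and descriptions as plain text for the LLM."""
    lines: list[str] = []
    for name, model in entities.items():
        lines.append(f"Entity {name}: {model.__doc__ or 'no description'}")
        for field_name, field in model.model_fields.items():
            lines.append(f"  - {field_name} ({field.annotation}): {field.description or ''}")
    return "\n".join(lines)
```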
The use case is more transactional than analytical, though we've seen it used for both.
I recommend running the openai_chat_agent in examples/ (it also supports Ollama for local runs), connecting it to the shop_api server, and asking it something like: "Find and explain fraud transactions".
If I explain the semantic graph, entities, relationships, etc. with proper documentation and descriptions, you'd be able to reason about it much faster and more accurately.
A Postgres schema might only give you a data type, a column name, and a table name, versus all the rich metadata that would be required in EnrichMCP.
Also, a generic DB question, but can you protect against resource overconsumption? Like if the junior/agent writes a query with 100 joins, can a marshal kill the process or time it out?
If you use the lower-level EnrichMCP API (without SQLAlchemy), you can fully control all the retrieval logic and add things like rate limiting, not dissimilar to how you'd solve this problem with a traditional API.
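As a concrete sketch of what that can look like (plain asyncio, nothing EnrichMCP-specific), you could wrap any resolver in a hard timeout and a row cap:

```python
import asyncio
from collections.abc import Awaitable, Callable

QUERY_TIMEOUT_S = 5.0
MAX_ROWS = 1_000


async def guarded(fetch: Callable[[], Awaitable[list[dict]]]) -> list[dict]:
    """Run a retrieval callable with a hard timeout and a row cap,
    so one runaway query can't monopolize the database."""
    rows = await asyncio.wait_for(fetch(), timeout=QUERY_TIMEOUT_S)
    return rows[:MAX_ROWS]
```

On the database side, Postgres's statement_timeout gives you the same kill switch even if the application layer misses it.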
MCP is the new IoT, where S stands for security /s
I guess you also need per-user contexts, so that access to user data depends on the user's auth and the agent can only access that user's data.
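Something like this is what I mean, purely illustrative rather than anything the library ships: the authenticated user travels with every tool call, and resolvers only ever query within that scope.

```python
from dataclasses import dataclass


@dataclass
class RequestContext:
    """Built from the caller's auth token and passed to every tool call."""

    user_id: int


async def fetch_orders(owner_id: int) -> list[dict]:
    # Stand-in for the real query: SELECT ... FROM orders WHERE owner_id = :owner_id
    return []


async def get_my_orders(ctx: RequestContext) -> list[dict]:
    """The agent can only ever see the calling user's own orders."""
    return await fetch_orders(owner_id=ctx.user_id)
```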
But this same concern exists for employees at big corps. If I work at Google, I probably can't access arbitrary data, so I can't leak it.
Auth/security is an interesting area for MCP. As of yesterday, a new spec revision was released that turns MCP servers into OAuth resource servers. There's still a lot more work to do on the MCP upstream side, but we're keeping up with it and plan a deeper integration for AuthZ support once upstream enables it.
You could also build an EnrichMCP server that calls your Django server manually.
It's also Python.
How do you handle PII or other sensitive data that the LLM shouldn’t know or care about?
It's also addressed directly in the README. https://github.com/featureform/enrichmcp?tab=readme-ov-file#...
I know LLMs can be scary, but this is the same problem that any ORM or program that handles user data would deal with.
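The usual pattern is the same as in any serialization layer: keep the sensitive columns out of the model the agent sees, or mask them on the way out. A minimal sketch with plain Pydantic (not EnrichMCP's own mechanism, which the README section above covers):

```python
from pydantic import BaseModel, Field, field_serializer


class CustomerView(BaseModel):
    """What the agent is allowed to see; the raw row also holds SSN, card numbers, etc."""

    id: int
    email: str = Field(description="Masked before it ever reaches the model")

    @field_serializer("email")
    def mask_email(self, value: str) -> str:
        local, _, domain = value.partition("@")
        return f"{local[:1]}***@{domain}"
```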