Nexus: An Open-Source AI Router for Governance, Control and Observability

86 points by mitchwainer | 23 comments | 8/12/2025, 2:41:12 PM | nexusrouter.com ↗

Comments (23)

mitchwainer · 20h ago
Grafbase just launched Nexus, an open-source AI Router that unifies MCP servers and LLMs through a single endpoint. Designed for enterprise-grade governance, control, and observability, Nexus helps teams manage AI complexity, enforce policies, and monitor performance across their entire stack. Built to work with any MCP server or LLM provider out-of-the-box, Nexus is designed for developers who want to integrate AI with the same rigor as production APIs.
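
To make the single-endpoint idea concrete, here's a rough sketch of what this kind of routing tends to look like from the application side, assuming the router exposes an OpenAI-compatible chat completions API (the URL, port, and model name below are placeholders, not Nexus's actual defaults):

```python
# Rough sketch: point an OpenAI-compatible client at a local AI router instead
# of a provider directly. The address and model name are placeholders, and the
# provider credentials would live in the router's configuration, not the app.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed router address, not a real default
    api_key="unused-locally",
)

response = client.chat.completions.create(
    model="anthropic/claude-sonnet",  # the router maps this name to a configured provider
    messages=[{"role": "user", "content": "Summarize our deployment runbook."}],
)
print(response.choices[0].message.content)
```
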
CptanPanic · 19h ago
Sounds like litellm, which I use. I wonder how it compares?
vid · 19h ago
There is also https://github.com/maximhq/bifrost, which apparently overcomes some of litellm's performance issues and is easy to get going.
tomhoule · 18h ago
Yeah they definitely belong in the same space. Nexus is an LLM Gateway, but early on, the focus has been on MCP: aggregation, authentication, and a smart approach to tool selection. There is that paper, and a lot of anecdotal evidence, pointing to LLMs not coping well with a selection of tools that is too large: https://arxiv.org/html/2411.09613v1

So Nexus takes a tool-search-based approach to solving that, among other cool things.

Disclaimer: I don't work on Nexus directly, but I do work at Grafbase.

fbjork · 19h ago
Founder of Grafbase here.

Here are a few key differentiators vs LiteLLM today:

- Nexus does MCP server aggregation and LLM routing; LiteLLM only does LLM routing

- The Nexus router is a standalone binary that runs with minimal TOML configuration and, optionally, Redis; LiteLLM is a whole package with a dashboard, database, etc.

- Nexus is written in Rust; LiteLLM is written in Python

That said, LiteLLM is an impressive project, but we're just getting started with Nexus, so stay tuned for a steady barrage of feature launches in the coming months :)

SparkyMcUnicorn · 18h ago
What's the difference between "MCP Server Aggregation" and the litellm_proxy endpoint described here?

https://docs.litellm.ai/docs/mcp

tomhoule · 17h ago
The main difference is that while you can get Nexus to list all tools, by default the LLM accesses tools by semantic search: Nexus returns only the tools relevant to what the LLM is trying to accomplish. Also, Nexus speaks MCP to the LLM; it doesn't translate the way litellm_proxy seems to do (I wasn't familiar with it previously).
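
Roughly, the flow looks like this (a hypothetical sketch of the pattern, not Nexus's actual MCP interface; the tool names and payload shapes are made up): the model first calls a search tool exposed by the aggregator, and only then calls whichever tool comes back.

```python
# Hypothetical sketch of the search-then-execute flow an MCP aggregator can expose.
# Tool names and payload shapes are illustrative, not Nexus's real interface.

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Stand-in for the aggregator's side of the conversation."""
    if name == "search_tools":
        # A real aggregator would run a semantic search over every registered
        # tool description; this is hard-coded to keep the example small.
        return {"tools": [{
            "name": "github_create_issue",
            "description": "Create an issue in a GitHub repository",
            "parameters": {"repo": "string", "title": "string"},
        }]}
    if name == "github_create_issue":
        return {"status": "created", "title": arguments["title"]}
    raise ValueError(f"unknown tool: {name}")

# Turn 1: the model narrows the tool catalogue to what it actually needs.
found = handle_tool_call("search_tools", {"query": "open a bug report in GitHub"})
# Turn 2: the model calls the tool it just discovered.
result = handle_tool_call("github_create_issue", {"repo": "acme/app", "title": "Login page 500s"})
print(found)
print(result)
```
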
johntash · 10h ago
It looks like you're planning on monetizing this (which is totally fine!), do you have any plans on what the enterprise version would do differently?
echelon · 9h ago
And isn't OpenRouter already open source?
evolve2k · 19h ago
As in Torment Nexus? Wow.
fbjork · 19h ago
Ha
makita34 · 17h ago
Seems quite similar to the commercial nexos.ai platform, which also focuses on routing, governance, and observability for AI workloads, but as a proprietary solution rather than open source.
fbjork · 17h ago
From what I can tell they don’t offer a self-hosted router?
mbrumlow · 19h ago
I thought it was a phone for developers :/
fbjork · 19h ago
That phone was discontinued :)
bentogrizz · 19h ago
This is cool
fbjork · 19h ago
What are you building?
owenthejumper · 18h ago
Another proxy?
fbjork · 16h ago
MCP aggregation is one of the big differentiators
barbazoo · 11h ago
I'm curious, what issue does that solve? I'm only working on agents that make tool calls via HTTP in a home-baked way, but I can't imagine how resolving the tools from two MCP servers is harder than from one.
fbjork · 2h ago
The issue is that when you have many MCP tools, the context becomes too large for the LLM. So Nexus indexes all the tools and lets you search for the right tool and then execute it.
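
As a rough sketch of that indexing idea (a generic pattern, not Nexus's implementation, which would more likely use an embedding model than TF-IDF):

```python
# Generic sketch: index tool descriptions and return only the closest matches
# for a query. TF-IDF keeps the example dependency-light; a production router
# would more likely use embeddings for the semantic search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tools = {
    "github_create_issue": "Create an issue in a GitHub repository",
    "slack_post_message": "Post a message to a Slack channel",
    "jira_search_tickets": "Search Jira tickets matching a text query",
    "pagerduty_trigger": "Trigger a PagerDuty incident for an on-call rotation",
}

vectorizer = TfidfVectorizer().fit(tools.values())
matrix = vectorizer.transform(tools.values())

def search_tools(query: str, top_k: int = 2) -> list[str]:
    """Return the names of the tools whose descriptions best match the query."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    ranked = sorted(zip(tools, scores), key=lambda pair: pair[1], reverse=True)
    return [name for name, score in ranked[:top_k] if score > 0]

print(search_tools("open a bug report in GitHub"))  # e.g. ['github_create_issue']
```
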
barbazoo · 11h ago
> There is no problem that can't be solved by another level of indirection.

David Wheeler