Seems quite similar to the commercial nexos.ai platform, which also focuses on routing, governance, and observability for AI workloads, but as a proprietary solution rather than open source.
fbjork · 19m ago
From what I can tell they don’t offer a self-hosted router?
mitchwainer · 3h ago
Grafbase just launched Nexus, an open-source AI Router that unifies MCP servers and LLMs through a single endpoint. Designed for enterprise-grade governance, control, and observability, Nexus helps teams manage AI complexity, enforce policies, and monitor performance across their entire stack.
Built to work with any MCP server or LLM provider out-of-the-box, Nexus is designed for developers who want to integrate AI with the same rigor as production APIs.
CptanPanic · 2h ago
Sounds like litellm, which I use. I wonder how it compares?
vid · 2h ago
There is also https://github.com/maximhq/bifrost which apparently overcomes some performance issues of litellm and is easy to get going.
tomhoule · 1h ago
Yeah, they definitely belong in the same space. Nexus is an LLM gateway, but early on the focus has been on MCP: aggregation, authentication, and a smart approach to tool selection. There's a paper, plus a lot of anecdotal evidence, pointing to LLMs not coping well when the set of available tools gets too large: https://arxiv.org/html/2411.09613v1
So Nexus takes a tool-search-based approach to solving that, among other cool things.
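To make that concrete, here's a rough, self-contained sketch of what tool search over an aggregated catalogue can look like: embed each tool description once, embed the incoming request, and only surface the top-k closest tools to the model. This is purely my own toy illustration (made-up tool names, hand-written 3-dimensional "embeddings"), not Nexus's actual implementation:

    // Hypothetical illustration of embedding-based tool selection,
    // not Nexus's code. Assumes tool descriptions and the incoming
    // request have already been embedded by some model.
    struct Tool {
        name: &'static str,
        embedding: Vec<f32>, // precomputed embedding of the tool description
    }

    fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
        let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
        let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
        let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
        if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
    }

    /// Return the k tools closest to the request instead of handing
    /// the model the full catalogue.
    fn select_tools<'a>(tools: &'a [Tool], request: &[f32], k: usize) -> Vec<&'a Tool> {
        let mut scored: Vec<(f32, &Tool)> = tools
            .iter()
            .map(|t| (cosine_similarity(&t.embedding, request), t))
            .collect();
        scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap_or(std::cmp::Ordering::Equal));
        scored.into_iter().take(k).map(|(_, t)| t).collect()
    }

    fn main() {
        // Toy vectors for demonstration only.
        let tools = vec![
            Tool { name: "github_create_issue", embedding: vec![0.9, 0.1, 0.0] },
            Tool { name: "postgres_run_query", embedding: vec![0.1, 0.9, 0.1] },
            Tool { name: "slack_post_message", embedding: vec![0.0, 0.2, 0.9] },
        ];
        let request = vec![0.85, 0.2, 0.05]; // e.g. "open a bug report on the repo"
        for tool in select_tools(&tools, &request, 1) {
            println!("selected tool: {}", tool.name);
        }
    }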
Disclaimer: I don't work on Nexus directly, but I do work at Grafbase.
fbjork · 2h ago
Founder of Grafbase here.
Here are a few key differentiators vs LiteLLM today:
- Nexus does MCP server aggregation and LLM routing; LiteLLM only does LLM routing
- The Nexus router is a standalone binary that runs with a minimal TOML configuration and, optionally, Redis (a hypothetical config sketch follows after this list); LiteLLM is a whole package with a dashboard, database, etc.
- Nexus is written in Rust; LiteLLM is written in Python
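For a sense of what a minimal TOML configuration might look like, here's a hypothetical sketch — the key names below are invented for illustration, so check the Nexus docs for the real schema:

    # Purely illustrative; key names are hypothetical, not the real Nexus schema.

    # Aggregate several MCP servers behind one endpoint.
    [mcp.servers.github]
    url = "https://example.com/github-mcp"

    [mcp.servers.postgres]
    command = "postgres-mcp"

    # Route LLM traffic through the same gateway.
    [llm.providers.openai]
    api_key_env = "OPENAI_API_KEY"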
That said, LiteLLM is an impressive project. We're just getting started with Nexus, so stay tuned for a steady barrage of feature launches in the coming months :)
SparkyMcUnicorn · 1h ago
What's the difference between "MCP Server Aggregation" and the litellm_proxy endpoint described here? https://docs.litellm.ai/docs/mcp
The main difference is that while you can get Nexus to list all tools, by default the LLM accesses tools via semantic search: Nexus returns only the tools relevant to what the LLM is trying to accomplish. Also, Nexus speaks MCP to the LLM; it doesn't translate the way litellm_proxy seems to (I wasn't familiar with it previously).