> Server authors working on large systems likely already have an OAuth 2.0 API.
I think this biases towards sufficiently large engineering organizations where OAuth 2.0 was identified as necessary for some part of their requirements. In most organizations, they're still using `x-<orgname>-token` headers and the like to do auth.
I'm not sure that there's a better / easier way to do auth for this use case, but it does present a significant hurdle to adoption for those who have an API (even one ready for JSON-RPC!) that is practically ready to be exposed via MCP.
motorest · 2h ago
> I think this biases towards sufficiently large engineering organizations where OAuth 2.0 was identified as necessary for some part of their requirements. In most organizations, they're still using `x-<orgname>-token` headers and the like to do auth.
I don't think that's it. Auth is a critical system in any organization, and larger organizations actually present more resistance to change, particularly in business-critical areas. If anything, smaller orgs have an easier time migrating critical systems such as authentication.
nip · 4h ago
> Further, one of the issues with remote servers is tenancy
Excellent write-up and understanding of the current state of MCP
I’ve been waiting for someone to point it out. This is in my opinion the biggest limitation of the current spec.
What is needed is a tool invocation context that is provided at tool invocation time.
Such a tool invocation context would carry information for authorization and authentication, but also for tracing the original “requester”: think of it as “tool invoked on behalf of user identity”.
This of course implies an upstream authnz system that feeds these details and more.
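To make that concrete, here is a rough sketch of what such a context could look like. To be clear, none of this exists in the current MCP spec; every name below (`InvocationContext`, `onBehalfOf`, `traceId`, etc.) is hypothetical, purely for illustration:

```typescript
// Hypothetical shape -- NOT part of the current MCP spec.
// All field names here are illustrative only.
interface InvocationContext {
  onBehalfOf: {
    subject: string;   // stable user identifier from the upstream IdP
    tenantId: string;  // which tenant this request belongs to
  };
  authorization: string; // e.g. a bearer token scoped to this invocation
  traceId: string;       // correlates the call across the audit trail
}

interface ToolInvocation {
  name: string;                       // the tool being invoked
  arguments: Record<string, unknown>; // arguments produced by the model
  context: InvocationContext;         // injected by the host, never by the model
}

// A server-side handler could then authorize and audit per invocation:
async function handleInvocation(inv: ToolInvocation): Promise<void> {
  // 1. verify inv.context.authorization against the upstream authnz system
  // 2. enforce tenant isolation via inv.context.onBehalfOf.tenantId
  // 3. record inv.context.traceId so every call is attributable to a user
}
```

The key property is that the context is attached by the trusted host at invocation time, so the model can never fabricate an identity.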
If you’re interested in this topic, my email is in my bio: I’m the architect of our multi-tenant tool-calling implementation that we’ve been running in production for the past year with enterprise customers where authnz and auditability are key requirements.
jensneuse · 4h ago
The way we've solved this in our MCP gateway (OSS) is that the user first needs to authenticate against our gateway, e.g. by obtaining a valid JWT from their identity provider, which we validate using JWKS. Now when they use a tool, they must send their JWT, so the LLM always acts on their behalf. This supports multiple tenants out of the box. (https://wundergraph.com/mcp-gateway)
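Not their actual code, but the pattern is straightforward to sketch with the `jose` library (the issuer URL, audience, and claim names below are placeholders):

```typescript
import { createRemoteJWKSet, jwtVerify } from 'jose';

// Keys are fetched (and cached) from the identity provider's JWKS endpoint.
// The URL is a placeholder for whatever IdP the tenant uses.
const jwks = createRemoteJWKSet(
  new URL('https://idp.example.com/.well-known/jwks.json'),
);

// Called by the gateway on every tool request carrying a bearer JWT.
async function authenticate(token: string) {
  const { payload } = await jwtVerify(token, jwks, {
    issuer: 'https://idp.example.com', // placeholder issuer
    audience: 'mcp-gateway',           // placeholder audience
  });
  // The verified claims identify who the LLM is acting on behalf of.
  return { subject: payload.sub, tenant: payload.tenant };
}
```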
Yoric · 2h ago
Is this really hard to code?
I mean, converting a tool-less LLM into a tool-using LLM is a few hundred lines of code, and then you can plug all your tools, with whichever context you want.
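For reference, the core loop really is small. A provider-agnostic sketch (`chatCompletion` is a stand-in for whatever LLM API you use; real signatures will differ):

```typescript
type ToolCall = { name: string; arguments: Record<string, unknown> };
type ModelTurn = { text?: string; toolCalls?: ToolCall[] };
type Message = { role: string; content: string };

// Stand-in for your provider's chat API; not a real library call.
declare function chatCompletion(
  messages: Message[],
  toolSchemas: object[],
): Promise<ModelTurn>;

// Registry of locally implemented tools, keyed by name.
const tools: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  get_time: async () => new Date().toISOString(),
};

async function run(userPrompt: string): Promise<string> {
  const messages: Message[] = [{ role: 'user', content: userPrompt }];
  for (;;) {
    const turn = await chatCompletion(messages, [/* JSON schemas for tools */]);
    // No tool calls means the model produced its final answer.
    if (!turn.toolCalls?.length) return turn.text ?? '';
    for (const call of turn.toolCalls) {
      const impl = tools[call.name];
      const result = impl ? await impl(call.arguments) : `unknown tool: ${call.name}`;
      // Feed each result back so the model can keep reasoning with it.
      messages.push({ role: 'tool', content: result });
    }
  }
}
```

What a sketch like this leaves out, of course, is exactly the per-user auth and tenancy context discussed above.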
tomrod · 4h ago
I wish this were critical, but it is an ad for MCP.run.
palmfacehn · 1h ago
Personally, I'm not a fan. I thought the proponent's view might stimulate a discussion.
nip · 4h ago
It’s both, in my opinion, and discussions can stem from the linked article.
Many come to HN also for the comments
hirsin · 2h ago
Touching on tenancy and the "real" gaps in the spec does help push the discussion in a useful direction.
https://vulnerablemcp.info/ is a good collection of the immediately obvious issues with the MCP protocol that need to be addressed. There are a couple of low blows in there that feel a bit motivated to make MCP look worse, but it's a good starting point overall.
smitty1e · 30m ago
Serious question:
If doing an extended, service-level session (like a GPT interaction) with a server known beforehand, would it make sense to set up a keypair and manage the interaction over SSH?
Restated: are we throwing away a lot of bandwidth establishing TLS trust for the more general HTTP?
owebmaster · 2h ago
This post has too many "shameless plugs" to be taken seriously.