MCP Specification – version 2025-06-18 changes

94 points by owebmaster | 38 comments | 6/18/2025, 11:59:47 PM | modelcontextprotocol.io

Comments (38)

neya · 2h ago
One of the biggest lessons for me while riding the MCP hype was that if you're writing backend software, you don't actually need MCP. Architecturally, it doesn't make sense. At least not in Elixir, anyway. One server per API? That actually sounds crazy if you're doing backend. That's 500 different microservices for 500 APIs. After working with 20 different MCP servers, it finally dawned on me: good ole' function calling (which is what MCP is under the hood) works just fine. And each API can just be its own module instead of a server. So, no need to keep yourself updated on the latest MCP spec, nor to update 100s of microservices because the spec changed. Needless complexity.
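[The "each API is a module, not a server" pattern can be sketched in plain Python; all function and tool names below are hypothetical stand-ins, not real vendor APIs:]

```python
# Sketch of plain function calling: each vendor API lives in its own
# module/function inside one process, instead of one MCP server per API.
# All names here are illustrative.

def youtube_search(query: str) -> list[str]:
    """Stand-in for a call to one vendor's API."""
    return [f"video result for {query!r}"]

def calendar_list_events(day: str) -> list[str]:
    """Stand-in for a call to another vendor's API."""
    return [f"event on {day}"]

# One registry maps the tool names exposed to the LLM onto local functions.
TOOLS = {
    "youtube_search": youtube_search,
    "calendar_list_events": calendar_list_events,
}

def dispatch(name: str, arguments: dict) -> object:
    """Execute the tool call the model requested -- no extra server needed."""
    return TOOLS[name](**arguments)
```

[Under this scheme, the 501st API is one more entry in `TOOLS`, not one more microservice.]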
aryehof · 1h ago
It really is a standard protocol for connecting clients to models and vice versa. It’s not there to just be a container for tool calls.

leonidasv · 2h ago
I always saw MCPs as a plug-and-play integration for enabling function calling without incurring API costs when using Claude.

If you're using the API and not in a hurry, there's no need for it.

mindwok · 1h ago
“each API can just be its own module instead of a server”

This is basically what MCP is. Before MCP, everyone was rolling their own function calling interfaces to every API. Now it’s (slowly) standardising.

neya · 45m ago
If you search for MCP integrations, you will find tons of MCP "servers", which are basically entire servers for just one vendor's API (sometimes just for one of their products, e.g. YouTube). That's the go-to default right now, instead of just one server with 100 modules. The MCP protocol itself exists just to make it easier to communicate with the LLM clients that users can install. But if you're doing backend code, there is no need to use MCP for it.
throwaway314155 · 1h ago
> One server per API? That actually sounds crazy if you're doing backend

Not familiar with Elixir, but is there anything prohibiting you from just making a monolith MCP server combining multiple disparate APIs/backends/microservices, as you were doing previously?

Further, you won't get the various client-application integrations merely by using tool calling; to me those integrations are the "killer app" of MCP (as a sibling comment touches on).

(I do still have mixed feelings about MCP, but in this case MCP sorta wins for me)

neya · 50m ago
> just making a monolith MCP combining multiple disparate API

This is what I ended up doing.

The reason I thought I had to do it the "MCP way" was the tons of YouTube videos about MCP that just kept saying how awesome a protocol it is, and that everyone should be using it, etc. Once I realized it's actually more consumer-facing than backend-facing, it made much more sense why it became so popular.

swalsh · 1h ago
Elicitation is a big win. One of my favorite MCP servers is an SSH server I built; it lets me automate about 90% of the server tasks I need done. I handled authentication via a config file, but that's kind of a pain to manage when I want to access a new server.
eadmund · 3h ago
It is mostly pointless complexity, but I’m going to miss batching. It was kind of neat to be able to say ‘do all these things, then respond,’ even if the client can batch the responses itself if it wants to.
lsaferite · 3h ago
I agree. JSON-RPC batching has always been my "gee, that's neat" feature and seeing it removed from the spec is sad. But, as you said, it's mostly just adding complexity.
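[For context, a JSON-RPC 2.0 batch was simply an array of request objects sent as one message; the 2025-06-18 spec removes support for this. A minimal illustration, using two real MCP method names:]

```json
[
  {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
  {"jsonrpc": "2.0", "id": 2, "method": "resources/list"}
]
```

[As the commenters note, a client can still issue these as two separate requests and correlate the responses itself; batching only saved round-trip framing.]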
dend · 4h ago
I am just glad that we now have a simple path to authorized MCP servers. Massive shout-out to the MCP community and folks at Anthropic for corralling all the changes here.
jjfoooo4 · 3h ago
What is the point of a MCP server? If you want to make an RPC from an agent, why not... just use an RPC?
dend · 9m ago
The analogy that was used a lot is that it's essentially USB-C for your data being connected to LLMs. You don't need to fight 4,532,529 standards - there is one (yes, I am familiar with the XKCD comic). As long as your client is MCP-compatible, it can work with _any_ MCP server.
antupis · 1h ago
It is easier to communicate and sell that we have this MCP server that you can just plug and play vs some custom RPC stuff.
refulgentis · 3h ago
Not everyone can code, and not everyone who can code is allowed to write code against the resources I have.
nsonha · 2h ago
you have to write code for the MCP server, and code to consume it too. It's just that the LLM vendors decided to build the consuming side in, which people question, since they could just as well have done the same for OpenAPI, gRPC, and whatnot, instead of a completely new thing.
elliotbnvl · 4h ago
Fascinated to see that the core spec is written in TypeScript and not, say, an OpenAPI spec or something. I suppose it makes sense, but it’s still surprising to see.
lovich · 2h ago
Why is it surprising? I use TypeScript a lot, but I would never have thought to have this insight, so I must be missing some language-design context.
gotts · 2h ago
Very glad to see MCP Specification rapid improvement. With each new release I notice something that I was missing in my MCP integrations.
Aeolun · 1h ago
Funny that changes to the spec require a single approval before being merged xD
jjfoooo4 · 3h ago
It's very hard for me to understand what MCP solves aside from providing a quick and dirty way to prototype something on my laptop.

If I'm building a local program, I am going to want tighter control over the toolsets my LLM calls have access to.

E.g. an MCP server for Google Calendar. MCP is not saving me significant time - I can access the same APIs the MCP server can. I probably need to carefully instruct the LLM on when and how to use the Google Calendar calls, and I don't want to delegate that to a third party.

I also do not want to spin up a bunch of arbitrary processes in whatever runtime environment the MCP is written in. If I'm writing in Python, why do I want my users to have to set up a typescript runtime? God help me if there's a security issue in the MCP wrapper for language_foo.

On the server, things get even more difficult to justify. We have a great tool for having one machine call a process hosted on another machine without knowing its implementation details: the RPC. MCP just adds a bunch of opinionated middleware (and security holes).

lsaferite · 3h ago
> It's very hard for me to understand what MCP solves

It's providing a standardized protocol to attach tools (and other stuff) to agents (in an LLM-centric world).
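[Concretely, the standardization is at the wire level: every MCP tool invocation has the same JSON-RPC shape regardless of vendor, so one client implementation works against any server. The method name `tools/call` is from the spec; the tool name and arguments below are illustrative:]

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {"city": "Berlin"}
  }
}
```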

8n4vidtmkvmk · 1h ago
What I don't get is why all the MCPs I've seen so far are commands instead of using the HTTP interface. Maybe I'm not understanding something, but with that you could spin up 1 server for your org and everyone could share an instance without messing around with different local toolchains
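[The spec does define an HTTP transport alongside stdio commands, which enables exactly this shared-instance setup. A sketch of what a client configuration pointing at one org-wide remote server might look like - the exact config format and field names vary by client, and the server name and URL here are hypothetical:]

```json
{
  "mcpServers": {
    "org-tools": {
      "url": "https://mcp.example.internal/mcp"
    }
  }
}
```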
refulgentis · 3h ago
I agree vehemently, I'm sort of stunned how...slow...things are in practice. I quit my job 2 years ago to do LLM client stuff and I still haven't made it to Google calendar. It's handy as a user to have something to plug holes in the interim.

In the limit, I remember some old saw about how everyone had the same top 3 rows of apps on their iPhone home screen, but the last row was all different. I bet IT will be managing, and dev teams will be making, their own bespoke MCP servers for years to come.

throwaway314155 · 1h ago
If I understand your point correctly - the main bottleneck for tool calling/MCP is that, until recently, the models themselves were relatively terrible at calling anything but the tools they were fine-tuned to work with. Even with the latest developments, any given MCP server has a variable chance of success, just due to the nature of LLMs only learning the most common downstream tasks. Further, LLMs _still_ struggle when you give them too many tools to call. They're poor at assessing the correct tool to use when given tools with overlapping functionality or similar function names/args.

This is what people mean when they say that MCP should maybe wait for a better LLM before going all-in on this design.

refulgentis · 1h ago
Not in my opinion; it works fine in general - it wrote 2,500 lines of tests for me over about 30 min tonight.

To your point that this isn't trivial or universal, there's a sharp gradient that you wouldn't notice if you're just opining on it as opposed to coding against it - e.g. I've spent every waking minute since mid-December on MCP-like territory, and it still bugs me how much worse every model is than Claude at it. It sounds like you have similar experience, though perhaps you're not as satisfied with Claude as I am.

cubancigar11 · 3h ago
It is a protocol. If I have to list a bunch of files on my system, I don't call a REST server. In the same way, MCP is not for you doing your own stuff; it is for other people to do stuff on your server by way of tools.
freed0mdox · 4h ago
What MCP is missing is a reasonable way to do async callbacks, where you can have the MCP server query the model with a custom prompt and the results of some operation.
lherron · 4h ago
lsaferite · 3h ago
That was my thought as well.

My main disappointment with sampling right now is the very limited scope. It'd be nice to support some universal tool calling syntax or something. Otherwise a reasonably complicated MCP Server is still going to need a direct LLM connect.

refulgentis · 3h ago
Dumb question: in that case, wouldn't it not be an MCP server? It would be an LLM client with the ability to execute tool calls made by the LLM?

I don't get how MCP could create a wrapper for all possible LLM inference APIs or why it'd be desirable (that's an awful long leash for me to give out on my API key)

lsaferite · 3h ago
An MCP Server can be many things. It can be as simple as an echo server or as complex as a full-blown tool-calling agent and beyond. The MCP client sampling feature is an interesting one: it's designed to let the primary agent, the MCP Host, offer up some subset of the LLM models it has access to for the MCP Servers it connects with. That allows the MCP Server to make LLM calls that are mediated (or not, YMMV) by the MCP Host. As I said above, the feature is very limited right now, but still interesting for some simpler use cases. Why would you do this? So you don't have to configure every MCP Server you use with LLM credentials, and so the particulars of exactly which model gets used stay under your control. That lets the MCP Server worry about the business logic and not about how to talk to a specific LLM provider.
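[A sampling request from server to host looks roughly like this - the method name `sampling/createMessage` and the field names are from the spec, while the message text and preference values are illustrative:]

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {"role": "user", "content": {"type": "text", "text": "Summarize this record."}}
    ],
    "modelPreferences": {"intelligencePriority": 0.5},
    "maxTokens": 200
  }
}
```

[The host picks the concrete model and can show the request to the user before forwarding it, which is where the mediation mentioned above happens.]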
refulgentis · 3h ago
I get the general premise but am uncertain as to if it's desirable to invest more in inverting the protocol, where the tool server becomes an LLM client. "Now you have 2 protocols", comes to mind - more concretely, it upends the security model.
atlgator · 3h ago
The async callbacks are in your implementation. I wrote an MCP server so customers could use an AI model to query a Databricks SQL catalog. The queries were all async.
ashwinsundar · 3h ago
Why does MCP need to support this explicitly? Is it hard to write a small wrapper that handles async callbacks? (Serious question)
TOMDM · 4h ago
Maybe the Key Changes page would be a better link if we're concerned with a specific version?

https://modelcontextprotocol.io/specification/2025-06-18/cha...

tomhow · 4h ago
OK we've changed the URL and the title to that, thanks!
lyjackal · 4h ago
Agree, thanks for the link. I was wondering what actually changed. The resource links and elicitation look like useful functionality.