MCP's Disregard for 40 Years of RPC Best Practices

101 points by yodon · 40 comments · 8/9/2025, 2:42:10 PM · julsimon.medium.com

Comments (40)

ComplexSystems · 30m ago
I thought this article was going to be a bunch of security theater nonsense - maybe because of the relatively bland title - but after reading it I found it incredibly insightful, particularly this:

> MCP discards this lesson, opting for schemaless JSON with optional, non-enforced hints. Type validation happens at runtime, if at all. When an AI tool expects an ISO-8601 timestamp but receives a Unix epoch, the model might hallucinate dates rather than failing cleanly. In financial services, this means a trading AI could misinterpret numerical types and execute trades with the wrong decimal precision. In healthcare, patient data types get coerced incorrectly, potentially leading to wrong medication dosing recommendations. Manufacturing systems lose sensor reading precision during JSON serialization, leading to quality control failures.

Having worked with LLMs every day for the past few years, it is easy to see every single one of these things happening.
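
Concretely (a hedged sketch - the guard is mine, nothing MCP mandates), the difference between "fails cleanly" and "model hallucinates dates" can be a few lines at the boundary:

    // Fail cleanly on a Unix epoch where an ISO-8601 string is
    // expected, instead of handing the model something to "interpret".
    const ISO_8601 = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/;

    function parseIsoTimestamp(value: unknown): Date {
      if (typeof value !== "string" || !ISO_8601.test(value)) {
        throw new Error(`expected ISO-8601 timestamp, got ${JSON.stringify(value)}`);
      }
      return new Date(value);
    }

    parseIsoTimestamp("2025-08-09T14:42:10Z"); // ok
    parseIsoTimestamp(1754750530);             // throws instead of guessing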

I can practically see it playing out now: there is some huge incident of some kind, in some system or service with an MCP component somewhere, with some elaborate post-mortem revealing that some MCP server somewhere screwed up and output something invalid, the LLM took that output and hallucinated god knows what, its subsequent actions threw things off downstream, etc.

It would essentially be a new class of software bug caused by integration with LLMs, and it is almost sure to happen when you combine it with other sources of bugs: human error, the total lack of error checking or exception handling that LLMs are prone to (they just hallucinate), a bunch of gung-ho startups "vibe coding" new services on top of the above, etc.

I foresee this being followed by a slew of Twitter folks going on endlessly about AGI hacking the nuclear launch codes, which will probably be equally entertaining.

tomrod · 2m ago
We already have PEBKAC - problem exists between keyboard and chair.

LLMs are basically automating PEBKAC

throwawaymaths · 24m ago
i mean, isn't all this stuff up to the MCP author to return a reasonable error to the agent and ask it to repeat the call with amendments to the JSON?
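
For what it's worth, the spec's tool results do allow exactly that: a result can carry isError: true plus text the model can act on. A rough sketch (the tool and its check are made up):

    // A tool handler that returns an MCP-style error result (content +
    // isError) so the agent can retry with amended JSON, rather than
    // surfacing a protocol-level failure.
    type ToolResult = {
      content: { type: "text"; text: string }[];
      isError?: boolean;
    };

    function getPrice(args: { timestamp?: unknown }): ToolResult {
      if (typeof args.timestamp !== "string" || Number.isNaN(Date.parse(args.timestamp))) {
        return {
          isError: true,
          content: [{
            type: "text",
            text: "timestamp must be an ISO-8601 string, e.g. 2025-08-09T14:42:10Z",
          }],
        };
      }
      return { content: [{ type: "text", text: `price at ${args.timestamp}: 101.25` }] };
    }
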
dotancohen · 9m ago
Yes. And this is where culture comes in. The cultures of discipline in the C++ and JavaScript communities are at opposite ends of the spectrum. The concern here is that the culture of interfacing with AI tools, such as MCP, is far closer to the discipline of the JavaScript community than to that of the C++ community.
nativeit · 6m ago
What's your point? It's up to a ship's captain to keep it afloat, doesn't mean the hundreds of holes in the new ship's hull aren't relevant.
GeneralMayhem · 11m ago
> MCP promises to standardize AI-tool interactions as the “USB-C for AI.”

Ironically, it's achieved this - but that's an indictment of USB-C, not an accomplishment of MCP. Just like USB-C, MCP is a nigh-universal connector with very poorly enforced standards for what actually goes across it. MCP's inconsistent JSON parsing and lack of protocol standardization is closely analogous to USB-C's proliferation of cable types (https://en.wikipedia.org/wiki/USB-C#Cable_types); the superficial interoperability is a very leaky abstraction over a much more complicated reality, which IMO is worse than just having explicitly different APIs/protocols.

rickcarlino · 1h ago
> SOAP, despite its verbosity, understood something that MCP doesn’t

Unfortunately, no one understood SOAP back.

(Additional context: Maintaining a legacy SOAP system. I have nothing good to say about SOAP and it should serve as a role model for no one)

pjmlp · 24m ago
I have plenty of good stuff to say, especially since REST (really JSON-RPC in practice) and GraphQL seem to always be catching up to features the whole SOAP and SOA ecosystems already had.

Unfortunately as usual when a new technology cycle comes, everything gets thrown away, including the good parts.

SoftTalker · 1h ago
I have found that any protocol whose name includes the word "Simple" is anything but. So waiting for SMCP to appear....
sirtaj · 4m ago
I recall two SOAP-based services refusing to talk to each other because one nicely formatted the XML payload and the other didn't like that one bit. There is a lot we lost when we went to JSON, but no, I don't look back at that stuff with any fondness.
yjftsjthsd-h · 27m ago
I dunno, SMTP wasn't bad last time I had to play with it. In actual use it wasn't entirely trivial, but most of that happened at layers that weren't really the mail transfer protocol's fault (SPF et al.). Although I'm extremely open to that being one exception in a flood of cases where you are absolutely correct :)
divan · 39m ago
No, the letter S in MCP is reserved for "Security")
cyberax · 53m ago
This is a hilarious but apt description of SOAP: https://harmful.cat-v.org/software/xml/soap/simple

And I actually like XML-based technologies. XML Schema is still unparalleled in its ability to compose and verify the format of multiple document types. But man, SOAP was such a beast for no real reason.

Instead of a simple spec for remote calls, it turned into a spec that described everything and nothing at the same time. SOAP supported all kinds of transport protocols (SOAP over email? Sure!), RPC with remote handles (like CORBA), regular RPC, self-describing RPC (UDDI!), etc. And nothing worked out of the box, because the nitty-gritty details of authentication, caching, HTTP response code interoperability and other "boring" stuff were just left as an exercise to the reader.

AnotherGoodName · 17m ago
I'll give a different viewpoint: I hate everything about XML. In fact, one of the primary issues with SOAP was the XML. It never worked well across SOAP libraries. E.g. the .NET and Java SOAP libraries have huge Stack Overflow threads asking "why is this incompatible", and everything required very tightly specified schemas. Tight specification might sound reasonable, but it became a flaw: there were no reasonable common defaults, hence the complaints about SOAP's verbosity and the work needed to make it function.

Part of this is the nature of XML. There's a million ways to do things. Should some data be an attribute of the tag, or another tag, or the body between the tags? HTML, a sibling of XML from the SGML family, has the same problem; e.g. you can seriously specify <font face="Arial">text</font> rather than have the font as a property of the wrapping tag. There's a million ways to specify everything and anything, and that's why XML makes a terrible data-interchange format. The reader and writer must have the exact same schema in mind, and there's no way to fall back on a default when there's simply no single correct way to do things in XML. So everything had to be specified very, very precisely, which added huge amounts of work that a non-XML format with decent defaults would not need.

This became a huge problem for SOAP and is why I hate it. Every implementation had different default ways of handling even the simplest data structures passed between them, and they were never compatible unless you took weeks to specify the schema down to a fine-grained level.

In general, XML is problematic due to the lack of clear canonical ways of doing pretty much anything. You might say "but I can specify it with a schema", and to that I say: my problem with XML is that you need a schema for even the simplest use case in the first place.

zorked · 1h ago

> CORBA emerged in 1991 with another crucial insight: in heterogeneous environments, you can’t just “implement the protocol” in each language and hope for the best. The OMG IDL generated consistent bindings across C++, Java, Python, and more, ensuring that a C++ exception thrown by a server was properly caught and handled by a Java client. The generated bindings guaranteed that all languages saw identical interfaces, preventing subtle serialization differences.
Yes, CORBA was such a success.
cortesoft · 41m ago
Yeah, the modern JSON centered API landscape came about as a response to failures of CORBA and SOAP. It didn’t forget the lessons of CORBA, it rejected them.
pjmlp · 23m ago
And then rediscovered why we need schemas in CORBA and SOAP, or orchestration engines.
cyberax · 31m ago
And now we're getting a swing back to sanity. OpenAPI is an attempt to formally describe the Wild West of JSON-based HTTP interfaces.

And its complexity and size now are rivaling the specs of the good old XML-infused times.

antonymoose · 1h ago
To be charitable, you can look at a commercially unsuccessful project and appreciate its technical brilliance.
cyberax · 39m ago
CORBA got a lot of things right. But it was, unfortunately, a child of late-80s telecom networks mixed with OOP hype.

It baked in the core assumptions that the network is transparent, reliable, and symmetric: you could create an object on one machine, pass a reference to it to another machine, and everything was supposed to just work.

Which is not what happens in the real world, with timeouts, retries, congested networks, and crashing computers.

Oh, and the CORBA C++ bindings had been designed before the STL was standardized, so they are a crawling horror; other languages were better.

zwaps · 31m ago
The author seems to fundamentally misunderstand how MCPs are going to be used and deployed.

This is really obvious when they talk about tracing and monitoring, which seem to be the main points of criticism anyway.

They bemoan that they can't trace across MCP calls, assuming somehow that there would be a person administering all the MCPs. Of course each system has tracing in whatever fashion fits that system. They are just not the same system, nor owned by the same people, let alone the same companies.

Same with monitoring cost. Oh, you can’t know who racked up the LLM costs? Well of course you can; these systems are already in place and there are a million ways to do this. It has nothing to do with MCP.

Reading this, I think it's rather a blessing to start fresh, without the learnings of 40 years of failed protocols or whatever.

abtinf · 1h ago
I wish someone would write a clear, crisp explanation for why MCP is needed over simply supporting swagger or proto.
dragonwriter · 1h ago
OpenAPI (or its Swagger predecessor) or Proto (I assume by this you mean protobuf?) don't cover what MCP does. It could have layered over them instead of using JSON-RPC, but I don't see any strong reason why they would be better than JSON-RPC as the basis (Swagger has communication assumptions that don't work well with MCP's local use case; protobuf doesn't cover communication at all and would require additional consideration in the protocol layered over it.)

You'd still need basically the entire existing MCP spec to cover the use cases if it replaced JSON-RPC with Swagger or protobuf, plus additional material to cover the gaps and complications that that switch would involve.

vineyardmike · 30m ago
Proto has a full associated spec (gRPC) covering communication protocols and structured service definitions. MCP could easily have built upon these and gotten a lot “for free”. Generally, gRPC is better than JSON-RPC (see below).

I agree that swagger leaves a lot unplanned. I disagree about the local use case because (1) we could just run local HTTP servers easily and (2) I frankly assume the future of MCP is mostly remote.

Returning to JSON-RPC: it's a poorly executed RPC protocol. Here is an excellent Hacker News thread on it, but the TL;DR is that parsing JSON is expensive and complex, modern services are built from tons of tools (e.g. load balancers), and making those tools parse JSON is very expensive. Many people in the thread below mention alternative ways to implement JSON-RPC, but those depend on new clients.

https://news.ycombinator.com/item?id=34211796
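
To make the framing concrete: everything a middlebox might want to route on (the method, the tool name) lives inside the JSON body, so a load balancer has to parse the payload before it can make any decision. A sketch of an MCP tools/call envelope (the tool name and arguments are invented):

    // JSON-RPC 2.0 envelope as MCP uses it for a tool call. Note that
    // "method" and the target tool are in the body, not the URL, so an
    // L7 proxy must deserialize JSON just to route or rate-limit.
    const request = {
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: {
        name: "get_price",
        arguments: { timestamp: "2025-08-09T14:42:10Z" },
      },
    };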

nurettin · 14m ago
MCP supports streaming responses. You could implement that with polling and session state, but that's an inefficient hack.
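
Roughly what the non-hack version looks like from a client (a hedged sketch; endpoint and tool are invented): with the streamable HTTP transport, a POSTed JSON-RPC request can be answered with an SSE stream instead of a single response body:

    // Read a streamed tool-call response chunk by chunk.
    async function callStreaming(): Promise<void> {
      const res = await fetch("http://localhost:3000/mcp", {
        method: "POST",
        headers: {
          "content-type": "application/json",
          accept: "application/json, text/event-stream",
        },
        body: JSON.stringify({
          jsonrpc: "2.0",
          id: 1,
          method: "tools/call",
          params: { name: "slow_report", arguments: {} },
        }),
      });
      const reader = res.body!.getReader();
      const decoder = new TextDecoder();
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        // Each chunk carries zero or more "data: {...}" SSE events.
        console.log(decoder.decode(value, { stream: true }));
      }
    }

    callStreaming();
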
nikanj · 1h ago
MCP is new
zombiwoof · 2m ago
Basically, a bunch of vibe coders at an Anthropic hackathon used Claude to poop out MCP.
mockingloris · 1h ago
I read this thrice:

> When OpenAI bills $50,000 for last month’s API usage, can you tell which department’s MCP tools drove that cost? Which specific tool calls? Which individual users or use cases?

It seems to be a game of catch-up for most things AI. That said, my school of thought is that certain technologies are just too big to be figured out early on - web frameworks, blockchain, ... - but the gap starts to shrink eventually. With AI, we'll just have to keep sharing ideas and caution like you have here. Such interesting times we live in.

BLanen · 20m ago
As I've been saying.

MCP is not a protocol. It doesn't protocolize anything of use. It's just "here are some symbols, do with them whatever you want", leaving it at that, but then advertising that as a feature of its universality. It provides about as much of a protocol as TCP, just rebuilt on top of five OSI layers, again.

It's not a security issue, it's an ontological issue.

dragonwriter · 13m ago
> MCP discards this lesson, opting for schemaless JSON with optional, non-enforced hints.

Actually, MCP uses a normative TypeScript schema (and, from that, an autogenerated JSON Schema) for the protocol itself, and the individual tool calls are also specified with JSON Schema.

> Type validation happens at runtime, if at all.

That's not a consequence of MCP "opting for schemaless JSON" (which it factually does not); for tool calls, it's a consequence of MCP being a discovery protocol where the tools, and thus the applicable schemas, are discovered at runtime.

If you are using MCP as a way to wire up highly static components, you can do discovery against the servers once they are wired up, statically build the clients around the defined types, and build your toolchain to raise errors if the discovery responses change in the future. But that's not really the world MCP is built for. Yes, that means that a toolchain concerned about schema enforcement needs to use and apply the relevant schemas at runtime. So, um, do that?
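
A minimal sketch of "so, um, do that", with Ajv as one off-the-shelf JSON Schema validator (the schema below stands in for whatever a real tools/list response advertises):

    // Compile the inputSchema a server advertises and enforce it at
    // runtime, instead of treating it as a non-enforced hint.
    import Ajv from "ajv";
    import addFormats from "ajv-formats"; // enables "date-time" etc.

    const ajv = new Ajv();
    addFormats(ajv);

    // Stand-in for one tool's inputSchema from a tools/list response.
    const inputSchema = {
      type: "object",
      properties: { timestamp: { type: "string", format: "date-time" } },
      required: ["timestamp"],
      additionalProperties: false,
    };

    const validate = ajv.compile(inputSchema);

    export function checkedArguments(args: unknown): unknown {
      if (!validate(args)) {
        // Fail cleanly instead of letting the model improvise.
        throw new Error(`schema violation: ${ajv.errorsText(validate.errors)}`);
      }
      return args;
    }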

btown · 48m ago
If you want the things mentioned in this article, I highly recommend looking at https://github.com/modelcontextprotocol/modelcontextprotocol... and https://modelcontextprotocol.io/community/sep-guidelines and participating in the specification process.

Point-by-point for the article's gripes:

- distributed tracing/telemetry - open discussion at https://github.com/modelcontextprotocol/modelcontextprotocol...

- structured tool annotation for parallelizability/side-effects/idempotence - this actually already exists at https://modelcontextprotocol.io/specification/2025-06-18/sch... but it's not well documented in https://modelcontextprotocol.io/specification/2025-06-18/ser... - someone should contribute to improving this! (see the sketch after this list)

- a standardized way in which the costs associated with an MCP tool call can be communicated to the MCP Client and reported to central tracking - nothing here I see, but it's a really good idea!

- serialization issues e.g. "the server might report a date in a format unexpected by the client" - this isn't wrong, but since the consumer of most tool responses is itself an LLM, there's a fair amount of mitigation here. And in theory an MCP Client can use an LLM to detect under-specified/ambiguous tool specifications, and could surface these issues to the integrator.
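
To make the annotations bullet concrete, here is roughly what those hints look like on a tool definition in the 2025-06-18 schema (the tool itself is invented; the hint names are the spec's):

    // Tool definition carrying the behavior hints from the current schema:
    // readOnlyHint / destructiveHint / idempotentHint / openWorldHint
    // ride along as `annotations` next to the JSON Schema for inputs.
    const orderStatusTool = {
      name: "get_order_status",        // hypothetical tool
      description: "Look up an order by id",
      inputSchema: {
        type: "object",
        properties: { id: { type: "string" } },
        required: ["id"],
      },
      annotations: {
        readOnlyHint: true,     // no side effects
        idempotentHint: true,   // safe to retry (and to parallelize)
        destructiveHint: false,
        openWorldHint: false,   // closed domain, unlike e.g. web search
      },
    };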

Now, I can't speak to the speed at which Maintainers and Core Maintainers are keeping up with the community's momentum - but I think it's meaningful that the community has momentum for evolving the specification!

I see this post in a highly positive light: MCP shows promise because you can iterate on these kinds of structured annotations, in the context of a community that is actively developing their MCP servers. Legacy protocols aren't engaging with these problems in the same way.

self_awareness · 1h ago
What's new?

- Electron disregards 40 years of best deployment practices,

- Web disregards 40 years of best GUI practices,

- Fast CPUs and lots of RAM disregard 40 years of best software optimization techniques,

there are probably many more examples.

xg15 · 30m ago
Yeah, and all three have evidently made software more shitty. More profitable and easier to develop, sure, but also much more unpleasant to use.
al2o3cr · 4h ago
IMO worrying about type-safety in the protocol when any string field in the reply can prompt-inject the calling LLM feels like putting a band-aid on a decapitation, but YMMV
ComputerGuru · 3h ago
They’re 100% orthogonal issues.
calvinmorrison · 46m ago
MCP, a.k.a. WSDL for REST.
gjsman-1000 · 1h ago
… or we’ll just invent MCP 2.0.

On that note: some of these “best practices” arguably haven't worked out. “Be conservative with what you send, liberal with what you receive” has turned even decent protocols into a dumpster fire, so why keep the charade going?

jmull · 57m ago
Right...

Failed protocols such as TCP adopted Postel's law as a guiding principle, and we all know how that worked out!

dragonwriter · 9m ago
A generalized guiding principle works in one particular use case, so this proves it is a good generalized guiding principle?
rcarmo · 1h ago
I’d rather we ditched MCP and used something that could leverage Swagger instead….