Show HN: Representing Agents as MCP Servers
Today we're launching a significant update: Agents as MCP servers.
Currently "agentic" behavior exists only on the MCP client side – clients like Claude or Cursor use MCP servers to solve tasks. With this update, Agents can be MCP servers themselves, so that any MCP client can invoke, coordinate and orchestrate agents the same way it does with any other MCP server.
This paradigm shift enables:

1. Agent Composition: Build complex multi-agent systems over the same base protocol (MCP).
2. Platform Independence: Use your agents from any MCP-compatible client.
3. Scalability: Run agent workflows on dedicated infrastructure, not just within client environments.
4. Customization: Develop your own agent workflows and reuse them across any MCP client.
How an agent server is implemented:
We’ve implemented this in mcp-agent with Workflows. Each workflow is an agent application that can interact with other MCP servers (e.g. summarizing GitHub issues → Slack message). mcp-agent exposes workflows as MCP tools on an MCP Agent Server [5]:
- workflows/list – list available workflows
- workflows/{WorkflowName}/run – execute the workflow (async)
- workflows/{WorkflowName}/get_status – check workflow status
- workflows/{WorkflowName}/resume – resume a paused workflow (e.g. with human input)
- workflows/{WorkflowName}/cancel – terminate the workflow
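To make the shape of this concrete, here is a minimal sketch of an MCP client driving these tools with the official MCP Python SDK. The tool names come from the list above; the workflow name (IssueSummarizer), the launch command, and the argument/response shapes (args, run_id, status) are illustrative assumptions, not mcp-agent's exact schema:

    import asyncio
    import json

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Assume the agent server runs as a local stdio process (command is illustrative).
        params = StdioServerParameters(command="uv", args=["run", "agent_server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # Agent discovery is just MCP tool discovery.
                tools = await session.list_tools()
                print([t.name for t in tools.tools])

                # Kick off a workflow asynchronously (hypothetical workflow name and args).
                run = await session.call_tool(
                    "workflows/IssueSummarizer/run",
                    {"args": {"repo": "lastmile-ai/mcp-agent"}},
                )
                run_id = json.loads(run.content[0].text)["run_id"]

                # Poll the server until the workflow finishes.
                while True:
                    status = await session.call_tool(
                        "workflows/IssueSummarizer/get_status", {"run_id": run_id}
                    )
                    payload = json.loads(status.content[0].text)
                    if payload.get("status") in ("completed", "failed", "cancelled"):
                        print(payload)
                        break
                    await asyncio.sleep(2)

    asyncio.run(main())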
We’ve also implemented Temporal for durable execution [6], so agent workflows can be paused, resumed and retried in production settings.
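For intuition, here is roughly how Temporal's primitives line up with the pause/resume semantics above. This is a sketch, not mcp-agent's actual internals, and the workflow and signal names are hypothetical:

    from typing import Optional

    from temporalio import workflow

    @workflow.defn
    class IssueSummaryWorkflow:
        def __init__(self) -> None:
            self._human_input: Optional[str] = None

        @workflow.signal
        def resume(self, human_input: str) -> None:
            # Backs the workflows/{WorkflowName}/resume tool: delivers the user's input.
            self._human_input = human_input

        @workflow.run
        async def run(self, repo: str) -> str:
            # ... call other MCP servers (as Temporal activities) to summarize issues ...
            # Durably wait, possibly for days, until a human responds; Temporal
            # persists the workflow state so it survives worker restarts.
            await workflow.wait_condition(lambda: self._human_input is not None)
            return f"Summary for {repo} approved with input: {self._human_input}"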
This demo [7] shows Claude invoking an MCP agent server, running workflows when appropriate, and polling for status. In other words, it shows agentic behavior on both the MCP client side and the MCP server side.
We're excited about the potential this unlocks—especially as more applications become MCP-compatible clients. We'd love your feedback and ideas!
[1] - https://news.ycombinator.com/item?id=42867050
[2] - https://github.com/lastmile-ai/mcp-agent
[3] - https://www.anthropic.com/research/building-effective-agents
[4] - https://github.com/github/github-mcp-server
[5] - https://github.com/lastmile-ai/mcp-agent/tree/main/examples/...
[6] - https://github.com/lastmile-ai/mcp-agent/tree/main/examples/...
[7] - https://youtu.be/pLe2GAjEoYs [DEMO]
This paradigm feels like the obvious next step for agents. It more closely models human interaction (to the degree that this is desirable) and unlocks a lot of optimizations and powerful functionality.
It is going to be an exciting rest of the year!
Our plan is to handle auth the same way the MCP spec outlines it (https://modelcontextprotocol.io/specification/2025-03-26). The key thing is to send authorization requests back to the user in a structured way. For example, if Agent A invokes Agent B, which requires user approval for executing a tool call, that authorization request needs to be piped back to the client and then propagated back to the agent.
This is technically possible to do with the MCP protocol as it exists today, but I think we will want to add that support in mcp-agent itself so it is easy to pause an agent workflow waiting for authentication/authorization.
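Concretely, with the workflow tools above, the approval round-trip could look like the sketch below (session is the ClientSession from the earlier example; the status and resume payload shapes are assumptions, not part of any spec):

    import json

    async def handle_authorization(session, workflow_name: str, run_id: str) -> None:
        status = await session.call_tool(
            f"workflows/{workflow_name}/get_status", {"run_id": run_id}
        )
        payload = json.loads(status.content[0].text)

        if payload.get("status") == "paused" and "authorization_request" in payload:
            # Surface the structured request (tool name, arguments, scopes) to the user.
            request = payload["authorization_request"]
            answer = input(f"Downstream agent wants to call {request['tool']}. Approve? [y/N] ")

            # Propagate the decision back down to the paused agent workflow.
            await session.call_tool(
                f"workflows/{workflow_name}/resume",
                {"run_id": run_id, "payload": {"approved": answer.lower() == "y"}},
            )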
One nice property of representing agents as MCP servers is that agent discovery is the same as server discovery.
I don't know of a use case where agent chains are so deeply recursive that they become unmanageable.
I almost think of mcp-agents as a modern form of scripting – we have agent workflows (e.g. generating a summary of new GitHub issues and posting on Slack), and exposing them as MCP servers has enabled us to use them in our favorite MCP clients.
The nice thing about representing agents as MCP servers is we can leverage distributed tracing via OTEL to log multi-agent chains. Within the agent application, mcp-agent tracing follows the LLM semantic conventions from OpenTelemetry (https://opentelemetry.io/docs/specs/semconv/gen-ai/). For any MCP server that the agent uses, we propagate the trace context along.
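As a sketch of what that looks like in code (the llm_call and mcp_call_tool callables and the _meta attachment are illustrative; the gen_ai.* attributes come from the OpenTelemetry GenAI semantic conventions):

    from opentelemetry import trace
    from opentelemetry.propagate import inject

    tracer = trace.get_tracer("mcp_agent_demo")

    def summarize_and_post(llm_call, mcp_call_tool):
        # Record the LLM step using the GenAI semantic conventions.
        with tracer.start_as_current_span(
            "chat claude-3-7-sonnet",
            attributes={
                "gen_ai.operation.name": "chat",
                "gen_ai.request.model": "claude-3-7-sonnet",
            },
        ):
            summary = llm_call("Summarize today's new GitHub issues")

        # Propagate the current trace context to the downstream MCP server so
        # its spans join the same distributed trace.
        carrier = {}
        inject(carrier)  # e.g. {"traceparent": "00-<trace_id>-<span_id>-01"}
        return mcp_call_tool("slack_post_message", {"text": summary}, meta=carrier)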