Claude, come up with a protocol for communicating between AI agents and IDEs/editors. Create node, python and rust libraries. Create a website with a landing page
bsimpson · 1h ago
Honestly, I'm tempted to see if Gemini can write a Sublime Text plugin that implements this protocol.
Feels like a lot of mindshare has shifted towards VSCode, so that's where the tooling investment goes. I don't want to be forced off of subl because new tools stop supporting it - it's a great editor, and it's not sponsored by a massive company.
pickledish · 16m ago
Yeah agreed!
As you suggest, I've had a moderately successful time trying to get AI to write its own Sublime Text plugins so our favorite editor doesn't get left behind, so might be cool to try with this too?
https://github.com/pickledish/llm-completion
Shouldn’t be too hard. I wrote this emacs plugin to do similar things but without this protocol - https://github.com/kgthegreat/protagentic . Heavily used AI assisted coding for it.
ivape · 2h ago
lol. So hedonistic.
vlaaad · 1h ago
I love this idea; I hope it gains traction. One thing that is not clear to me is file search vs unsaved files. It's common for agents to use, e.g., ripgrep to search the file system. But if the communication protocol includes read/write access to unsaved files, there is a desync in terms of accuracy: rg can't search unsaved files.
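The desync is easy to see in a toy model. The sketch below is purely hypothetical (the `open_buffers` mapping and the `search` function are illustrations, not part of any real protocol): a search routed through the editor can prefer unsaved buffer contents, while a disk-only tool like rg only ever sees what's on disk.

```python
def search(pattern: str, open_buffers: dict, disk: dict) -> list:
    """Return paths whose *effective* contents match, preferring unsaved buffers.

    open_buffers maps path -> current (possibly unsaved) buffer text;
    disk maps path -> on-disk text. Both names are assumptions for illustration.
    """
    hits = []
    for path, on_disk_text in disk.items():
        text = open_buffers.get(path, on_disk_text)  # unsaved buffer wins
        if pattern in text:
            hits.append(path)
    return hits

disk = {"a.py": "def old_name(): pass"}
buffers = {"a.py": "def new_name(): pass"}  # renamed in the editor, not yet saved

assert search("new_name", buffers, disk) == ["a.py"]  # editor-aware search finds it
assert search("new_name", {}, disk) == []             # a disk-only tool like rg misses it
```

A protocol-level `fs/read` that returns buffer contents would let agent-side tools close exactly this gap.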
cube2222 · 6h ago
Fingers crossed for this - it seems like Zed is kinda “going back to the roots” (of working on collaboration) and leaving this in place to disrupt the agentic IDE category (and make themselves not have to spend time on competing in it).
Curious to see how adoption among cli agents will go (it’s nice to see Gemini cli already in).
The level of competition in the LLM and coding assistant market is always nice to see, and this only helps to make costs of switching between offerings even smaller.
giancarlostoro · 6h ago
I'm basically sold on Zed, it has everything I have wanted from an editor for years, and that's without the amazing other things that they added that I wasn't even envisioning. For years I've prototyped a few different editors because of frustration with the status quo. There's a lot of work that goes into a good editor, and Zed has definitely done the legwork.
I welcome them openly collaborating.
WD-42 · 2h ago
Seriously. I really hope this puts an end to the crappy VS Code forks so Zed can start getting the credit it’s due. These ai editors have really sucked all the air out of the room.
IBM announced in March 2025 its Agent Communication Protocol (ACP) but is now abandoning the ACP name and merging ACP efforts with Google’s Agent2Agent (A2A) protocol at the Linux Foundation. The ACP team is winding down as the industry backs A2A for open, community-driven AI agent interoperability under Linux Foundation governance. This move aims to unify protocols and avoid fragmentation in AI agent standards.
https://lfaidata.foundation/communityblog/2025/08/29/acp-joi...
That seems odd. Even with an A2A protocol, don’t you still need to standardize a client “surface” or “API” or whatever, so agents can describe IDE actions they want to trigger in the expected terms over that protocol?
Or is A2A like USB, where it acts as both a registry of, and “standardized standardization process” for, suites of concrete message types for each use-case?
Like, yeah, when a "client" drives an "agent", that's no different than what any generic "agent" would be doing to drive an "agent"; an IDE or what-have-you can just act as the "parent agent" in that context.
But when an "agent" is driving a "client", that's all about the "agent" understanding that the "client" isn't just some generic token-driven inference process, but an actual bundle of algorithms that does certain concrete things, and has to be spoken to a certain way to get it to do those concrete things.
I had assumed that IBM's older ACP was in large part concerned with formalizing that side of interoperation. Am I wrong?
greatgib · 5h ago
The big question is why it can't just be an LSP server, or an extension to the LSP protocol, that provides everything the LLM might need.
adobrawy · 4h ago
MCP also started as JSON-RPC over stdio. With solutions like GitHub Codespaces, devcontainers, or "background agents", I wonder if we'll see the development of JSON over SSE.
Currently, my environment uses Claude Code on bare metal, and my application runs in a container, and the agent can do "docker compose exec backend" without any restrictions (YOLO).
My biggest obstacles to adopting workflows with git worktree are the need to share the database engine (local resource constraints) and the initial migration time. Offloading to the cloud might be interesting for that.
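For reference, the stdio transport mentioned above is very little machinery. This is a minimal sketch of newline-delimited JSON-RPC framing (LSP instead uses `Content-Length` headers; both framings are common), with an in-memory pipe standing in for stdin/stdout:

```python
import io
import json

def write_message(stream, obj):
    """Frame one JSON-RPC message as a single newline-delimited line."""
    stream.write(json.dumps(obj) + "\n")

def read_message(stream):
    """Read and decode the next framed message, or return None at EOF."""
    line = stream.readline()
    return json.loads(line) if line else None

# Round-trip through an in-memory pipe standing in for stdio:
pipe = io.StringIO()
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}})
pipe.seek(0)
msg = read_message(pipe)
assert msg["method"] == "initialize"
```

Swapping the transport for SSE or WebSockets changes only the read/write functions; the message shapes stay the same, which is presumably why remote "background agent" setups are a natural extension.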
lukaslalinsky · 2h ago
I really wish Anthropic would adopt this for Claude Code, and I hope it provides at least the kind of integration they have with VS Code. It would be nice if it had access to at least the diagnostics from the editor.
mg · 6h ago
I'm fine with treating AI like a human developer:
I ask AI to write a feature (or fix a bug, or do a refactoring) and then I read the commit. If the commit is not to my liking, I "git reset --hard", improve my prompt and ask the AI to do the task again.
I call this "prompt coding":
https://www.gibney.org/prompt_coding
This way, there is no interaction between my coding environment and the AI at all. Just like working with a human developer does not involve them doing anything in my editor.
Disposal8433 · 5h ago
> Nowadays, it is better to write prompts
Very big doubt. AI can help for a few very specific tasks, but the hallucinations still happen, and making things up (especially APIs) is unacceptable.
wongarsu · 5h ago
In languages with strong compile-time checks (like say rust) the obvious problems can mostly be solved by having the agent try to compile the program as a last step, and most agents now do that on their own. In cases where that doesn't work (more permissive languages like python, or http APIs) you can have the AI write tests and execute them. Or ask the AI to prototype and test features separately before adding them to the codebase. Adding MCP servers with documentation also helps a ton.
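The compile-or-test verification step described above amounts to a small loop: run the project's check command, and on failure hand the output back to the agent as its next prompt. A minimal sketch (the `verify` helper is an illustration; the actual command would be whatever the project uses, e.g. `cargo check` or `pytest`):

```python
import subprocess
import sys

def verify(cmd: list) -> tuple:
    """Run a compile or test command; return (passed, combined output).

    On failure, the output is what you would feed back to the agent
    so it can fix hallucinated APIs or broken code itself.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

# A trivially passing and a trivially failing "build", for illustration:
ok, _ = verify([sys.executable, "-c", "print('builds')"])
bad, log = verify([sys.executable, "-c", "import no_such_module"])
assert ok and not bad
assert "no_such_module" in log  # exactly the error a hallucinated import produces
```

The failing case is the interesting one: an import of a nonexistent module fails loudly, which is why hallucinated libraries get caught at this step rather than in review.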
The real issues I'm struggling with are more subtle: unnecessary code duplication, code that seems useful but is never called, doing the right work but in the wrong place, security issues, performance issues, not implementing the prompt correctly when it's not straightforward, implementing the prompt verbatim when a closer inspection of the libraries and technologies used reveals a much better way, etc. Mostly things you will catch in code review if you really pay attention. But whether that's faster than doing the task yourself greatly depends on the task at hand.
Disposal8433 · 4h ago
> the obvious problems can mostly be solved by having the agent try to compile the program
The famous "It compiles on my machine." Is that where engineering is going? Spending $billions to get the same result as the laziest developer ever?
wongarsu · 3h ago
If it compiles on my machine then the library and all called methods exist and are not hallucinated. If it runs on my machine then the called external APIs exist and are not hallucinated
That obviously does not mean that it's good software. That's why the rest of my comment exists. But "AI is hallucinating libraries/APIs" is something that can be trivially solved with good software practices from the 00s, and that the AI can resolve by itself using those techniques. It's annoying for autocomplete AI, but for agents it's a non-issue
mg · 5h ago
Do others here encounter that problem? I never do. I can't remember the last time I saw a hallucination in a commit.
Maybe it's because the libraries I use are made from small files which easily fit into the context window.
brulard · 1h ago
Same here, very low hallucination rate, and it can pretty quickly correct itself (Claude Code). To force it to use recent versions of libraries instead of old ones, it's good to require that specifically in CLAUDE.md; having a docs MCP (like context7) can also help.
NitpickLawyer · 5h ago
> but the hallucinations still happen, and making things up (especially APIs) is unacceptable.
The new models are much better at reading the codebase first, and sticking to "use the APIs / libraries already included". Also, for new libraries there's context7 that brings in up-to-date docs. Again, newer models know how to use it (even gpt5-mini works fine with it).
sigseg1v · 5h ago
What size of codebases are we talking here? I've had a lot of issues trying to do pretty much anything across a 1.7 million LOC codebase and generally found it faster to use traditional IDE functionalities.
I've had much more success with things under 20k LOC but that isn't the stuff that I really need any assistance with.
salomonk_mur · 5h ago
Hard disagree. LLMs are now incredibly good for any coding task (with popular languages).
Disposal8433 · 4h ago
You can't disagree with facts. Every time I try to give a chance to all those LLMs, they always use old APIs, APIs that don't exist, or mix things up. I'll still try that once a month to see how it evolves, but I have never been amazed by the capabilities of those things.
> with popular languages
Don't know, don't care. I write C++ code and that's all I need. JS and React can die a painful death for all I care as they have injected the worst practices across all the CS field. As for Python, I don't need help with that thanks to uv, but that's another story.
dingnuts · 50m ago
If you want them to not make shit up, you have to load up the context with exactly the docs and code references that the request needs. This is not a trivial process and ime it can take just as long as doing stuff manually a lot of the time, but tools are improving to aid this process and if the immediate context contains everything the model needs it won't hallucinate any worse than I do when I manually enter code (but when I do it, I call it a typo)
there is a learning curve, it reminds me of learning to use Google a long time ago
quotemstr · 3h ago
What's your explanation for why others report difficulty getting coding agents to produce their desired results?
And don't respond with a childish "skill issue lol" like it's Twitter. What specific skill do you think people are lacking?
zarzavat · 2h ago
I don't let it commit. I almost always need to modify its work in one way or another.
Most of the time I don't bother reprompting it either. If it doesn't understand the first time then I'm better off making the change myself, rather than sink time into a cycle of reprompting and rejection.
On the other hand, if it understands what I want the first time I have more confidence that it's on the same wavelength as me.
baggiponte · 7h ago
Really hope for this to get traction so I’m not bound to the usual IDE
xmorse · 5h ago
Zed should start improving the diff view; it's one of the worst. It does not even have word-level diff highlighting or a split diff view.
nicce · 5h ago
Only reason I use VSCode these days is Git Merge Editor.
hari-trata · 4h ago
I don’t see why we need so many protocols. In such a greenfield tech, many are eager to define rules. There’s already a protocol called AG-UI that does a similar thing, but even its purpose isn’t entirely clear to me.
Rather than rushing to create standards, I think the focus should be on building practical implementations, AI-centric UI components that actually help developers design more AI-friendly interfaces. Once the space matures and stabilizes, that’s when standardization will make more sense. Right now, it feels too early.
I do agree it seems like too many protocols. I wonder why this couldn't be MCP. But regarding AG-UI, as its name suggests it's for UI: streaming from a backend agent to a client frontend. I've actually been working a lot with it this week.
One great reason is to avoid M*N problem: https://matklad.github.io/2022/04/25/why-lsp.html
acomagu · 4h ago
I would love it, but please don't add JSON-RPC to the world...
It's too heavy for editor.
mi_lk · 3h ago
Is there a better option?
quotemstr · 2h ago
Anyone else noticing an uptick in confidently stated nonsense on HN? To write that JSON-RPC is "too heavy for editor", you have to not only misunderstand the cost of JSON encoding (trivial) but also the frequency of editor-tool interaction (seldom) and the volume of data transferred (negligible). In addition, you have to look at LSP, MCP, and other JSON-y protocols and say "yep. That's where the UI latency is. Got it." (Nope.)
Are people who assert that JSON-RPC is "heavy" writing code we all rely on?
willm · 2h ago
Harsh. But fair. JSON-RPC is one of the leanest protocols out there.
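The claim about encoding cost is easy to check directly. A rough micro-benchmark (numbers will vary by machine; the message shape is a made-up stand-in for a full-buffer sync, on the large side for editor traffic):

```python
import json
import time

# A hypothetical full-buffer sync payload, ~30 KB of text:
msg = {"jsonrpc": "2.0", "id": 1, "method": "textDocument/didChange",
       "params": {"uri": "file:///tmp/example.py", "text": "x = 1\n" * 5_000}}

t0 = time.perf_counter()
for _ in range(1_000):
    decoded = json.loads(json.dumps(msg))  # one full encode+decode cycle
elapsed = time.perf_counter() - t0

assert decoded == msg  # the round trip is lossless
# 1000 cycles of a ~30 KB payload finish in a fraction of a second on any
# recent machine, i.e. microseconds per message, nowhere near UI latency.
```

And editor-agent traffic is orders of magnitude less frequent than 1000 messages per second, so the per-message cost disappears entirely.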
qiine · 5h ago
The folks that make CodeCompanion are doing interesting things!
johnhamlin · 7h ago
I’m going to be hungry all day, every day if this catches on and we call it ACP
falcor84 · 6h ago
Arroz con Pollo? Now that's an abbreviation I've not heard in a long time.
jasonjmcghee · 3h ago
Another discussion on the same topic from 3 days ago:
https://news.ycombinator.com/item?id=45038710
I really hope that ACP will get the necessary traction from the community, even if I suspect that many standards currently proposed in the LLM field will be abandoned soon.
https://xkcd.com/927/
I see they've also caught the RFC2119 bug. This "MUST", "SHALL", "MAY" thing is a linguistic blight on our standards landscape and should be eradicated. HTML5 is written without this unnecessary level of linguistic pretense and works fine.
If your proposed spec is full of "SHALL", "MUST", and "MAY", I'm going to dock you ten points of credibility from the outset. It's a signal you've set out to imitate the vibes of seminal RFC specs without independently considering the substance.
PhilipRoman · 1h ago
I don't know if your complaint is specifically about the capitalization, but I find these clear rules useful when interpreting specifications.
faangguyindia · 6h ago
when is it coming to vscode?
fijiaarone · 6h ago
The protocol for interacting with code is files and text.
You need to define an interface for AI to click buttons? Or to create keyboard macros that simulate clicking buttons? We are all doomed!
baggiponte · 6h ago
I’m afraid you missed the mark a bit :(
This is the equivalent of LSP but for coding agents. So any editor does not have to rebuild an interface to support each and every new one.
jmull · 2h ago
The question is why…
The purpose of an IDE is to pull together the tools a developer needs to develop software… first, reading/navigating and writing code, next running and debugging code, then (potentially) a variety of other tools: profilers, doc browsers, etc… all into a unified UI.
But coding agents seem to already be able to use the command line and mcp quite well to do this.
So why mediate using these tools through an IDE (over a new protocol) rather than just using the tools directly through the command line or mcp? It’s two extra levels of indirection, so there needs to be a really good reason to do it.
There may very well be some actual problem this solves, but I don’t know what it is.
quotemstr · 2h ago
There's a long history of REPL interaction protocols decoupling development environments from tools and making things nicer overall. This thing is an entry in the same tradition. (That said, I'm not convinced its author knew the tradition existed.)
It's like the Jupyter protocol, from which people do derive utility.
lvl155 · 3h ago
Why don’t they focus on IDE first? I still can’t use their IDE as my daily. They also need to hire someone for UI. It took me 10 minutes to figure out how to bring up their AI chat window.