The idea of swarming multiple agents on a task isn't new, and honestly we haven't seen it really work in practice. We've tested multi-agent systems extensively with OpenHands [1] and have never seen a meaningful bump in benchmark scores despite the massive increase in complexity. There's nothing that many different agents can do that a single generalist can't accomplish on its own.
That said, they can potentially get you a speedup if you have a neatly separable task, and can parallelize the work. But it doesn't lead to some quantum leap in what agents are able to accomplish unsupervised.
I do think some form of multi-agent workflow is going to become important over the next few years, but more because it fits our mental model of the world rather than being some big technological unlock.
This isn't multi-agents at all. In fact, if you read the article closely, you'll see the author goes into detail to explain how this system is different from multi-agents. That is exactly why the author calls it an "Agency": it is fundamentally different from multi-agents.
I agree that multi-agent doesn't work in practice. But this isn't that.
ajskxbdbndd · 46m ago
How is this different from multiple agents? Are you saying using different models for different parts of the task is a fundamental difference from using one model for different parts of the task?
Using different models for different things isn’t new at all. The article seems like an excuse to get some marketing out there (and it’s poor at that - they got me looking at what was built with their product but I can’t see the actual code. Feels scammy.)
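To illustrate why model-per-subtask routing isn't a fundamental architectural difference, here's a minimal sketch (model names are illustrative placeholders, not anything from the article): the "many models" version is just a lookup inside the same single orchestration loop.

```python
# Sketch: routing steps to different models is a dispatch table,
# not a new architecture.
def pick_model(step: str) -> str:
    routes = {"plan": "big-reasoning-model", "edit": "fast-cheap-model"}
    return routes.get(step, "default-model")

def run_task(steps: list[str]) -> list[tuple[str, str]]:
    # Same control flow whether one model or many handles the steps.
    return [(step, pick_model(step)) for step in steps]

print(run_task(["plan", "edit", "review"]))
```

Swap every entry in `routes` for the same model and nothing about the system's structure changes, which is the crux of the question above.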
resiros · 7h ago
Same observation we had talking to many of our users. The trick to building reliable systems is to minimize complexity, not the other way around.
I think the theoretical value of multi-agents is collaboration with external agents (outside your code base). Other than that, there are very few use cases where it makes sense (e.g. https://www.anthropic.com/engineering/built-multi-agent-rese... ), and building and debugging them takes much, much longer and is much harder. So unless you have the resources, it's not worth the trouble.
bsenftner · 7h ago
Sounds like three-card monte. Sounds like the fast but short thinkers are running out of analogies; they might actually have to think and realize that the common assessment of AI as an automation technology is not correct. It's a muse and a Socratic mentor, a lobotomy when tasked to think for you, and a Rube Goldberg machine when applied to automation.
resiros · 7h ago
1. Why is the author calling these agencies? He is talking about multi-agent systems, a space with research spanning decades (see https://en.wikipedia.org/wiki/Multi-agent_system ). Renaming this to agencies is weird.
2. Creating single-agent systems is already quite tricky. The best practices and LLMOps workflows are far from mature. Jumping to multi-agent systems is very early, imo. My suggestion to any builder in this space is to start simple, very simple, and then add complexity, instead of building a house of cards.
suninsight · 4h ago
He is NOT talking about multi-agent systems, which is exactly why he is calling it an Agency. The author goes to great lengths to explain why this is NOT a multi-agent system, because it can easily be misunderstood as one.
ygritte · 7h ago
Can we stop it with the AI spam ads on HN already?
ColinEberhardt · 7h ago
Until some of the significant flaws of agents are addressed (hallucination, explainability, bias), I'm not really all that interested in extending this model further.
Agentic AI definitely works for software engineering because we have suitable mitigations for its limitations. It is unclear what those mitigations might be in other fields of application.
[1] https://github.com/All-Hands-AI/OpenHands