Show HN: Agno – A full-stack framework for building Multi-Agent Systems

70 points by bediashpreet | 19 comments | 6/2/2025, 1:18:40 AM | github.com

Comments (19)

fcap · 9h ago
In my opinion, to really lift off here you need to make sure we can use these agents in production. That means the complete supply chain has to be considered. Deployment is the heavy part; most people can already run it locally. If you close that gap, people will be able to adopt it en masse. I am totally fine with you monetizing it as a cloud service, but give us full docs covering everything from code and testing to monitoring and deployment. And one more thing: show what the framework is capable of. What can I do with it? Lots of videos and use cases here. Every single one needs to be pushed out.
maxtermed · 18h ago
I've been using this framework for a while; it's really solid IMO. It abstracts just enough to make building reliable agents straightforward, but still leaves lots of room for customization.

The way agent construction is laid out (with a clear path for progressively adding tools, memory, knowledge, storage, etc.) feels very logical.

Definitely lowered the time it takes to get something working.
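
Roughly, the progression looks like this (a sketch based on Agno's public examples; module paths and parameters may differ between versions):

    from agno.agent import Agent
    from agno.models.openai import OpenAIChat
    from agno.tools.duckduckgo import DuckDuckGoTools

    # Start with a bare agent...
    agent = Agent(model=OpenAIChat(id="gpt-4o"), markdown=True)

    # ...then layer on tools (and later memory, knowledge, storage) as needed.
    agent = Agent(
        model=OpenAIChat(id="gpt-4o"),
        tools=[DuckDuckGoTools()],
        instructions="Cite your sources.",
        markdown=True,
    )
    agent.print_response("What changed in Python 3.13?", stream=True)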

bediashpreet · 14h ago
Thank you for using Agno and the kind words!
lerchmo · 19h ago
One thing I don't understand about these agent frameworks… Cursor, Claude, Claude Code, Cline, v0… all of the large production agents with leaked prompts use XML function calling, and it seems like these frameworks all only support native JSON-schema function calling. This is maybe the most important decision, and from my experience native tool calling is just about the worst option.
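
Roughly what I mean by the two styles (schematic, not any particular vendor's exact format):

    # Native "JSON schema" tool calling: the API returns a structured tool-call
    # object that the client dispatches directly.
    native_style = {
        "name": "read_file",
        "arguments": {"path": "main.py"},
    }

    # XML-style calling: the tool call is just text in the completion, and the
    # client parses it out (the approach the leaked production prompts use).
    xml_style = """
    <read_file>
    <path>main.py</path>
    </read_file>
    """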
JimDabell · 1d ago
> At Agno, we're obsessed with performance. Why? because even simple AI workflows can spawn thousands of Agents. Scale that to a modest number of users and performance becomes a bottleneck.

This strikes me as odd. Aren’t all these agents pushing tokens through LLMs? The number of milliseconds needed to instantiate a Python object and the number of kilobytes it takes up in memory seem irrelevant in this context.

bediashpreet · 14h ago
You’re right, inference is typically the bottleneck and it’s reasonable to think the framework’s performance might not be critical. But here’s why we care deeply about it:

- High Performance = Less Bloat: As a software engineer, I value lean, minimal-dependency libraries. A performant framework means the authors have kept the underlying codebase lean and simple. For example: with Agno, the Agent is the base class and lives in 1 file, whereas with LangChain you get 5-7 layers of inheritance. Another example: when you install crewai, it installs the kubernetes library (along with half of PyPI). Agno comes with very few required dependencies (I think fewer than 10).

- While inference is one part of the equation, parallel tool execution, async knowledge search, and async memory updates improve the entire system's performance. Because we're focused on performance, you get a top-of-the-line experience without thinking about it; it's a core part of our philosophy.

- Milliseconds Matter: When deploying agents in production, you're often instantiating one or even multiple agents per request (to limit data and resource access); see the sketch at the end of this comment. At moderate scale, like 10,000 requests per minute, even small delays can impact user experience and resource usage.

- Scalability and Cost Efficiency: High-performance frameworks help reduce infrastructure costs, enabling smoother scaling as your user base grows.

I'm not sure why you would NOT want a performant library. Sure, inference is a big part of the picture (and isn't in our control), but I'd definitely want to use libraries from engineers who value performance.
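
To make the per-request point concrete, here's a minimal sketch assuming FastAPI on top of Agno's Agent and OpenAIChat; the endpoint and model choice are just for illustration:

    from fastapi import FastAPI
    from agno.agent import Agent
    from agno.models.openai import OpenAIChat

    app = FastAPI()

    @app.post("/ask")
    async def ask(question: str):
        # A fresh agent per request keeps data and resource access scoped to
        # that request, so construction cost is paid on every single call.
        # At 10,000 requests/minute, instantiation time and per-agent memory add up.
        agent = Agent(model=OpenAIChat(id="gpt-4o-mini"))
        response = await agent.arun(question)
        return {"answer": response.content}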

gkapur · 16h ago
If you are running things locally (I would think especially on the edge, whether or not the LLM is local or in the cloud), this would matter. Or if you are running some sort of agent orchestration where the output of LLMs is streaming, it could possibly matter?
sippeangelo · 21h ago
I'm really curious what simple workflows they've seen that spawn THOUSANDS of agents?!
bediashpreet · 14h ago
In general we instantiate one or even multiple agents per request (to limit data and resource access). At moderate scale, like 10,000 requests per minute, even small delays can impact user experience and resource usage.

Another example: there's a large Fortune 10 company that has built an agentic system to sift through data in spreadsheets; they create 1 agent per row to validate everything in that row. You might be able to see how that scales to thousands of agents per minute.
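
Roughly what that pattern looks like (a sketch; the file, prompt, and model are hypothetical):

    import csv

    from agno.agent import Agent
    from agno.models.openai import OpenAIChat

    with open("vendors.csv") as f:
        for row in csv.DictReader(f):
            # One short-lived agent per row: thousands of rows per run means
            # thousands of agent instantiations, so creation cost matters.
            agent = Agent(model=OpenAIChat(id="gpt-4o-mini"))
            result = agent.run(f"Validate this record and flag any issues: {row}")
            print(result.content)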

onebitwise · 1d ago
I feel the cookbook is a little messy. I would love to see an example using collaborative agents, like an editorial team that writes articles based on searches and topic experts (just as an example).

Might it be better to have a separate repo for examples?

Btw great project! Kudos

bediashpreet · 14h ago
Thank you for the feedback and the kind words.

Agree that the cookbooks have gotten messy. Not an excuse, but sharing the root cause: we're building very, very fast and putting examples out for users quickly. We also maintain backwards compatibility, so sometimes you'll see 2 examples doing the same thing.

I'll make it a point to clean up the cookbooks and share more examples under this comment. Here are 2 to get started:

- Content creator team: https://github.com/agno-agi/agno/blob/main/cookbook/examples...

- Blog post generator workflow: https://github.com/agno-agi/agno/blob/main/cookbook/workflow...

Both are easily extensible. Always available for feedback at ashpreet[at]agno[dot]com

maxtermed · 18h ago
Good point. The cookbook can be hard to navigate right now, but that's mostly because the team is putting out a tremendous amount of work and updating things constantly, which is a good problem to have.

This example might be close to what you're describing: https://github.com/agno-agi/agno/blob/main/cookbook/workflow...

It chains agents for web research, content extraction, and writing with citations.

I used it as a starting point for a couple projects that are now in production. It helped clarify how to structure workflows.
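
If it helps, here's a stripped-down version of the idea using plain agents (the linked cookbook example uses Agno's workflow machinery; this just shows the chaining, and the prompts and models are placeholders):

    from agno.agent import Agent
    from agno.models.openai import OpenAIChat
    from agno.tools.duckduckgo import DuckDuckGoTools

    researcher = Agent(
        model=OpenAIChat(id="gpt-4o-mini"),
        tools=[DuckDuckGoTools()],
        instructions="Search the web and return the most relevant sources with URLs.",
    )
    writer = Agent(
        model=OpenAIChat(id="gpt-4o"),
        instructions="Write a well-structured article, citing the provided sources.",
    )

    topic = "state of open-source agent frameworks"
    research = researcher.run(f"Find recent sources on: {topic}")
    article = writer.run(f"Topic: {topic}\n\nSources:\n{research.content}")
    print(article.content)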

bosky101 · 11h ago
Your first 2 examples in your README involve single agents. These are a waste of time. We don't need yet another LLM API call wrapper. An agentic system with just 1 tool/agent is pointless.

Thankfully your third example, halfway down, does have one with 3 agents. It may have helped to have a judge/architect agent.

Not clear about the infra required or used.

It would help to have helper functions to get and set session state/memory. Being able to bootstrap from JSON could be a good feature.

It would help to have different agents with different LLMs, to show that you have thought things through.

Why should spawning 1000s of agents even be in your benchmark? Since when did we start counting variables? Maybe saying each agent takes X memory/RAM would suffice, because everything is subjective and can't be generalized.

Consider a REST API that can do what the examples did, via curl?

Good luck!

idan707 · 19h ago
Over the past few months, I've transitioned to using Agno in production, and I have to say, the experience has been nothing short of fantastic. A huge thank you for creating such an incredible framework!
bediashpreet · 14h ago
Thank you for the kind words <3
ElleNeal · 23h ago
I love Agno, they make it so easy to build agents for my Databutton application. Great work guys!!
bediashpreet · 14h ago
Thank you for the kind words <3
LarsenCC · 21h ago
This is awesome!
bediashpreet · 14h ago
<3