Building Effective AI Agents

273 points by Anon84 | 53 comments | 6/17/2025, 5:50:05 PM | anthropic.com ↗

Comments (53)

simonw · 7h ago
This article remains one of the better pieces on this topic, especially since it clearly defines which definition of "AI agents" they are using at the start! They use: "systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks".

I also like the way they distinguish between "agents" and "workflows", and describe a bunch of useful workflow patterns.

I published some notes on that article when it first came out: https://simonwillison.net/2024/Dec/20/building-effective-age...

A more recent article from Anthropic is https://www.anthropic.com/engineering/built-multi-agent-rese... - "How we built our multi-agent research system". I found this one fascinating, I wrote up a bunch of notes on it here: https://simonwillison.net/2025/Jun/14/multi-agent-research-s...

kodablah · 58m ago
I believe the definition of workflows in this article is inaccurate. Workflows in modern engines do not take predefined code paths, and agents are effectively the same as workflows in these cases. The redefinition of workflows seems to be an attempt to differentiate, but for the most part an agent is nothing more than a workflow: a loop that dynamically invokes things based on LLM responses. Modern workflow engines are very dynamic.
sothatsit · 16m ago
I think the distinction is more about the "level of railroading".

Workflows have a lot more structure and rules about information and control flow. Agents, on the other hand, are often given a set of tools and a prompt. They are much more free-form.

For example, a workflow might define a fuzzy rule like "if customer issue is refund, go to refund flow," while an agent gets customer service tools and figures out how to handle each case on its own.

To me, this is a meaningful distinction to make. Workflows can be more predictable and reliable. Agents have more freedom and can tackle a greater breadth of tasks.
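The distinction is easy to make concrete. A minimal sketch, with the LLM and tools stubbed out and every name hypothetical: the workflow's routing lives in code the developer wrote, while the agent's routing lives in the model's choices.

```python
# Hypothetical sketch: workflow routing is hard-coded by the developer,
# agent routing is decided by the model at runtime.

def workflow_route(issue: str) -> str:
    # Workflow: the code path is predefined.
    if "refund" in issue.lower():
        return "refund_flow"
    if "shipping" in issue.lower():
        return "shipping_flow"
    return "general_flow"

def agent_route(issue: str, tools: dict, llm) -> str:
    # Agent: the model itself picks the next tool, in a loop,
    # until it declares the task done.
    transcript = [issue]
    while True:
        choice = llm(transcript, list(tools))  # model returns a tool name or "done"
        if choice == "done":
            return transcript[-1]
        transcript.append(tools[choice](transcript[-1]))
```

The workflow is predictable by construction; the agent's behavior depends entirely on what the model decides to call.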

swyx · 3h ago
one half of the authors of Building Effective Agents also came by AIE to do a well received talk version of this article: https://www.youtube.com/watch?v=D7_ipDqhtwk
smoyer · 6h ago
The article on the multi-agent research is awesome. I do disagree with one statement in the Building Effective AI Agents article - building your initial system without a framework sounds nice as an educational endeavor, but the first benefit you get from a good framework is the easy ability to try out different (and cross-vendor) LLMs
miki123211 · 3h ago
This is why you use a library (not a framework) that provides an abstraction over different LLMs.

I'm personally a fan of litellm, but I'm sure alternatives exist.
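For illustration, the pattern such libraries implement is a single `completion()` entry point routed by model name. A minimal home-grown sketch of that idea (the provider functions below are stubs standing in for real vendor SDK calls, not litellm's actual internals):

```python
# Sketch of the abstraction-library pattern: one completion() entry point,
# routed by model-name prefix. Provider functions are stubs, not real SDKs.

def _call_openai(model, messages):
    return f"[openai:{model}] " + messages[-1]["content"]

def _call_anthropic(model, messages):
    return f"[anthropic:{model}] " + messages[-1]["content"]

PROVIDERS = {
    "gpt": _call_openai,
    "claude": _call_anthropic,
}

def completion(model: str, messages: list) -> str:
    # Route on the model name, so swapping vendors is a one-string change.
    for prefix, fn in PROVIDERS.items():
        if model.startswith(prefix):
            return fn(model, messages)
    raise ValueError(f"no provider registered for {model!r}")
```

Swapping `"gpt-4o"` for `"claude-sonnet"` in the caller is then the entire cross-vendor migration.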

koakuma-chan · 4h ago
Does anyone know which AI agent framework Anthropic uses? It doesn't seem like they ever released one of their own.
ankit219 · 2h ago
From what it looks like, it's one main LLM (the orchestrator you send your query to) that calls other LLMs via tool calls. The tools are capable of calling LLMs too, and can have specific instructions, but mostly it's the orchestrator deciding what they should be researching and assigning them specific subqueries. There is a limited depth / number of levels of search queries too; see the prompt they use[1]

One cool example of this in action is seen when you use Claude Code and ask it to search something. In a verbose setting, it calls an MCP tool to help with search. The tool returns a summary of the results with the relevant links (not the raw search result text). A similar method, albeit more robust, is used when Claude is doing deep research as well.

[1]: https://github.com/anthropics/anthropic-cookbook/blob/main/p...

rockwotj · 3h ago
Just write the for loop to react to tool calls? It’s not very much code.
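That loop really is small. A hedged sketch, with illustrative message shapes rather than any specific vendor's API:

```python
# Illustrative agent loop: ask the model, run the tool it requests,
# feed the result back, repeat until it answers directly.
# Message shapes here are hypothetical, not a real vendor schema.

def run_agent(llm, tools: dict, user_msg: str, max_steps: int = 10):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = llm(messages)                    # ask the model what to do next
        if reply.get("tool") is None:            # no tool call => final answer
            return reply["content"]
        result = tools[reply["tool"]](**reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("agent did not finish within max_steps")
```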
koakuma-chan · 2h ago
They mentioned hand offs, sub agents, concurrent tool calls, etc. You could write that yourself, but you would be inventing your own framework.
juddlyon · 7h ago
Thank you for the extra notes, this is top of mind for me.
chaosprint · 5h ago
Half a year has passed, and that feels like a long time in AI. I read this article repeatedly a few months ago, but now I think agent development has clearly hit a bottleneck. Even the latest Gemini seems to have regressed.
jsemrau · 4h ago
(1) Running multiple agents is expensive, which hurts ROI. My DeepSearch agent for stocks uses 6 agents, and each query costs about 2 USD.

(2) Multi-agent orchestration is difficult to control.

(3) The more capable the model, the lower the need for multi-agents.

(4) The less capable the model, the higher the business case for narrow AI.

EGreg · 5h ago
What exactly makes them regress?

Why can’t they just fork swarms of themselves, work 24/7 in parallel, check work and keep advancing?

amelius · 5h ago
Because they are not intelligent. (And this is a good definition of it).
m3kw9 · 5h ago
They have a hard time solving prompt injection issues, and that's one of the bottlenecks
AvAn12 · 6h ago
How do agents deal with task queueing, race conditions, and other issues arising from concurrency? I see lots of cool articles about building workflows of multiple agents - plus what feels like hand-waving around declaring an orchestrator agent to oversee the whole thing. And my mind goes to whether there needs to be some serious design considerations and clever glue code. Or does it all work automagically?
simonw · 6h ago
The standard for "agents" is that tools run in sequence, so no need to worry about concurrency. Several models support parallel tool calls now, where the model can say "Run these three tools" and your harness can choose to run them in parallel or sequentially before passing the results back to the model as the next step in the conversation.

Anthropic are leaning more into multi-agent setups where the parent agent might delegate to one or more sub-agents which might run in parallel. They use that trick for Claude Code - I have some notes on reverse-engineering that here https://simonwillison.net/2025/Jun/2/claude-trace/ - and expand on that in their write-up of how Claude Research works: https://simonwillison.net/2025/Jun/14/multi-agent-research-s...

It's still _very_ early in figuring out good patterns for LLM tool-use - the models only got really great at using tools in about the past 6 months, so there's plenty to be discovered about how best to orchestrate them.
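For the "run these three tools" case, the harness-side fan-out is only a few lines. A sketch under assumed call/result shapes (not any particular vendor's schema):

```python
# Sketch of a harness handling a parallel tool-call turn: the model
# requests several tool calls at once, the harness runs them concurrently
# and returns all results together. Call/result shapes are illustrative.
from concurrent.futures import ThreadPoolExecutor

def run_tool_calls(calls: list, tools: dict) -> list:
    # Each call: {"id": ..., "name": ..., "args": {...}}
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(tools[c["name"]], **c["args"]) for c in calls]
        return [
            {"tool_call_id": c["id"], "content": f.result()}
            for c, f in zip(calls, futures)
        ]
```

Because each result carries its call id, the harness can return them in any completion order and the model can still match them up.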

jsemrau · 4h ago
"The standard for "agents" is that tools run in sequence"

I don't think that's correct. The benefit of agents is that they can use tools on the fly, ideally the right tool at the right time.

E.g., "Which number is bigger, 9.11 or 9.9?" -> agent uses a calculator tool. Or "What is the annual 2020-2023 revenue for Apple?" -> Financial Statements MCP.

samtheprogram · 3h ago
Nothing you said contradicts the quote. When they say in sequence, they don’t mean “in a previously defined order”, they mean “not in parallel”.
svachalek · 5h ago
I'm not sure we're at "great" yet. Gemini 2.5 Pro fails maybe 50% of the time for me at even generating a syntactically valid tool call.
simonw · 4h ago
Are you using Gemini's baked in API tool calling mechanisms or are you prompting it and telling it to produce specific XML/JSON?
mediaman · 3h ago
What do you recommend for this? I've actually had good luck having them create XML, even though you're "supposed" to use the native tool calling in a JSON schema. There seems to be far fewer issues with getting JSON syntax correct.
simonw · 2h ago
I'm using their native tool calling: https://github.com/simonw/llm-gemini/commit/a7f1096cfbb73301... - it's been working really well for me so far.
0x457 · 5h ago
I can only talk about the Codex web interface. I had a very detailed refactoring plan for a project, but it was too long to complete in one go, so I used the "ask" feature to split it up into multiple tasks and group them by "which tasks can be executed concurrently".

It split them up the way they would be split up in real life, but in real life there is an assumption that the people working on those tasks are going to communicate with each other. The way it generated tasks resulted in a HUGE loss of context (my plan was hella detailed).

I was willing to spend a few more hours trying to make it work rather than doing the work myself. I opened another chat and split it up into multiple sequential tasks, with a detailed prompt for each task (why, what, how, validation, update-documentation reminder, etc).

Anyway, an orchestrator might work on some super simple tasks, much smaller than those articles lead you to believe.

daxfohl · 6h ago
Nothing works automagically. You still have to build in all the operational characteristics that you would for any traditional system. It's deceptively easy to look at some AI agent demos and think "oh, I can replace my team's huge mess of spaghetti code with a few clever AI prompts!" And it may even work for the first couple use cases. But all that code is there for a reason, and eventually it'll have to be reckoned with. Once you get to the point where you're translating all that code directly into the AI prompt and hoping for no hallucinations, you know you've lost the plot.
whattheheckheck · 14m ago
Then wtf is the point of this?
rdedev · 5h ago
This is why I am leaning towards making the LLM generate code that operates on tool calls instead of having everything in JSON.

Hugging Face's smolagents library makes the LLM generate Python code where tools are just normal Python functions. If you want parallel tool calls, just prompt the LLM to do so; it should take care of synchronizing everything. Of course there is the whole issue around executing LLM-generated code, but we have a few solutions for that
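The core mechanism is just executing model-emitted source with tools bound as ordinary functions. A deliberately naive sketch of that idea (real systems like smolagents run this in a sandboxed interpreter; bare `exec()` here is for illustration only):

```python
# Sketch of the code-as-actions idea: the model emits Python source and
# the harness executes it with tools in scope as plain functions.
# WARNING: exec() on model output is unsafe outside a sandbox.

def run_generated_code(source: str, tools: dict):
    namespace = dict(tools)        # tools are just callables in scope
    namespace["results"] = []      # generated code appends its outputs here
    exec(source, namespace)        # illustrative only; sandbox this in practice
    return namespace["results"]
```

Because the "tool calls" are ordinary function calls inside one program, the model can compose, loop over, and parallelize them with normal Python instead of a JSON protocol.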

gk1 · 6h ago
In at least the case for coding agents the emerging pattern is to have the agents use containers for isolating work and git for reviewing and merging that work neatly.

See for example the container use MCP which combines both: https://github.com/dagger/container-use

That’s for parallelizing coding work… I’m not sure about other kinds of work. I still see people using workflow builder tools like n8n, Zapier, and maybe CrewAI.

cmsparks · 6h ago
Frankly, it's pretty difficult. Though, I've found that the actor model maps really well onto building agents. An instance of an actor = an instance of an agent. Agent to agent communication is just tool calling (via MCP or some other RPC)

I use Cloudflare's Durable Objects (disclaimer: I'm biased, I work on MCP + Agent things @ Cloudflare). However, I figure building agents probably maps similarly well onto any actor style framework.
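Stripped of any framework, the mapping is: one mailbox per agent instance, one loop draining it, and inter-agent communication is just posting to another mailbox. A stdlib-only sketch (not Durable Objects' actual API):

```python
# Minimal actor sketch: each agent instance is an actor with a private
# mailbox; "agent to agent communication" is sending to another mailbox.
import queue
import threading

class Actor:
    def __init__(self, handle):
        self.mailbox = queue.Queue()
        self.handle = handle           # callback: (actor, message) -> None
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:            # sentinel: shut down
                break
            self.handle(self, msg)
            self.mailbox.task_done()

    def send(self, msg):
        self.mailbox.put(msg)
```

One thread per mailbox means each agent processes its messages strictly in order, which sidesteps most intra-agent concurrency issues by construction.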

pyman · 6h ago
Should the people developing AI agent protocols be exploring decentralised architectures, using technologies like blockchain and peer-to-peer networks to distribute models and data? What are the trade-offs of relying on centralised orchestration platforms owned by large companies like Amazon, Cloudflare or NVIDIA? Thanks
daxfohl · 5h ago
That's more of a hobbyist thing I'd say. Corporations developing these things will of course want to use some centralized system that they trust. It's more efficient, they have more control over it, it's easier for average people to use, etc.

A decentralized thing would be more for individuals who want more control and transparency. A decentralized public ledger would make it possible to verify that your agent, the agents it interacts with, and the contents of their interactions have not been altered or compromised in any way, whereas a corporate-owned framework could not provide the same level of assurance.

But technically, there's no advantage I can think of for using a public distributed ledger to manage interactions. Agent tasks are pretty ephemeral, so unlike digital currency, there's not really a need to maintain a complete historical log of every action forever. And as far as providing tools for dealing with race conditions, blockchain would be about the least efficient way of creating a mutex imaginable. So technically, just like with non-AI apps, centralized architecture is always going to be a lot more efficient.

pyman · 4h ago
Good points. I agree that for most companies using centralised systems offers more advantages because of efficiency, control and user experience, but I wasn't arguing that decentralisation is better technically, just wondering if it might be necessary in the long run.

If agents become more autonomous and start coordinating across platforms owned by different companies, it might make sense to have some kind of shared, trustless layer (maybe not blockchain but something distributed, auditable and neutral).

I agree that agent tasks are ephemeral, but what about long lived multi-agent workflows or contracts between agents that execute over time? In those cases transparency and integrity might matter more.

I don't think it's one or the other. Centralised systems will dominate in the short term, no doubt about that, but if we're serious about agent ecosystems at scale, we might need more open coordination models too.


nurettin · 6h ago
If I had to deal with "AI agent concurrency", I would get them to submit their requests to a queue and process those sequentially.
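That scheme is a few lines with the stdlib: a single worker thread drains a shared queue, so no two requests can ever race. A minimal sketch:

```python
# Sketch of serializing agent requests: everything goes through one queue
# and one worker, so requests are processed strictly one at a time.
import queue
import threading

def start_worker(handler):
    q = queue.Queue()
    def worker():
        while True:
            item = q.get()
            if item is None:          # sentinel: shut down
                break
            handler(item)
            q.task_done()
    threading.Thread(target=worker, daemon=True).start()
    return q
```

Any number of agents can `put()` concurrently; ordering and mutual exclusion come for free from the single consumer.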
spenczar5 · 7h ago
(December 2024, which somehow feels an eternity ago)
nico · 2h ago
> Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024

https://news.ycombinator.com/item?id=44260988

nahsra · 7h ago
Yes, but it's held up really well in my opinion! I use this piece constantly as a reference and I don't feel it's aged. It reframed Anthropic as "the practical partner" in the development of AI tools.
mellosouls · 5h ago
Discussed at the time:

https://news.ycombinator.com/item?id=42470541

Building Effective "Agents", 763 points, 124 comments

suyash · 7h ago
I think the Agent hype has come down now
kevinventullo · 6h ago
Now it’s all about AI Agencies
bredren · 3h ago
It’s helpful, but I think Anthropic should be offering non-technical versions of this.

For example, a marketing group is interested in agents but needs a guide on how to spec them at a basic level.

There is a figure toward the end and an appendix that starts to drive at this.

Even though it’s new, “how to build them” is an implementation concern.

btbuildem · 5h ago
> use simple, composable patterns

It's somehow incredibly reassuring that the "do one thing and do it well" maxim has held up over decades. Composability ftw.

evertedsphere · 2h ago
in case someone from anthropic is reading this: could you please add a bit of padding on the outside of the page? at least on a phone screen, the text covers the entire width of the screen from edge to edge
NetRunnerSu · 4h ago
The entire discussion around agent orchestration, whether centralized or multi-agent, seems to miss the long-term economic reality. We're debating architectural patterns, but the real question is who pays for the agent's continuous existence.

Today, it's about API calls and compute. Tomorrow, for any truly autonomous, long-lived agent, it will be about a continuous "existence tax" levied by the platform owner. The orchestrator isn't just a technical component; it's a landlord.

The alternative isn't a more complex framework. It's a permissionless execution layer—a digital wilderness where an agent's survival depends on its own resources, not a platform's benevolence. The debate isn't about efficiency; it's about sovereignty.

simonw · 4h ago
Which definition of "AI agent" are you talking about here? This sounds like some kind of replacement for a human in a position of authority?
sixhobbits · 3h ago
this is just AI slop, what's the point of posting stuff like this here?
bgwalter · 2h ago
They are so desperate that they start writing about LLM patterns now. Is an agentic LLM framework a Code Factory? Or perhaps a Code Factory Factory?

Or is it like a burrito (meme explanation of Monads when they were the latest hype)?

gregorymichael · 7h ago
One of my favorite AI How-tos in the last year. Barry and Erik spend 80% of the post saying ~”eh, you probably don’t need agents. Just build straightforward deterministic workflows with if-statements instead.”

And then, when you actually do need agents, don’t over complicate it!

This post also introduced the concept of an Augmented LLM — an LLM hooked up to tools, memory, and data — which is a useful abstraction for evolving LLM use beyond fancy autocomplete.

“An augmented LLM running in a loop” is the best definition of an agent I’ve heard so far.

iLoveOncall · 5h ago
> These frameworks make it easy to get started by simplifying standard low-level tasks like calling LLMs, defining and parsing tools, and chaining calls together. However, they often create extra layers of abstraction that can obscure the underlying prompts and responses, making them harder to debug. They can also make it tempting to add complexity when a simpler setup would suffice.

> We suggest that developers start by using LLM APIs directly

Best advice of the whole article by far.

It's insane that people use whole frameworks to send what is essentially an array of strings to a webservice.

We've removed LangChain and LangGraph from our project at work because they are literally worthless, just adding complexity and making you write MORE code than if you didn't use them because you have to deal with their whole boilerplate.

fennecbutt · 2h ago
I suppose langflow also falls into this bucket.

I still think it has a definite use case in regularising all of your various flows into a common format.

Sure, I could write some code to get SD to do all the steps to generate an image, or write some shader code. But it's so much more organised to use comfy-UI, or a shader graph, especially if I have n>1 flows/tasks, and definitely while experimenting with what I'm building.

deadbabe · 6h ago
When an AI agent completes a task, why not have it save the workflow used to accomplish that task, so that the next time it sees a similar input it feeds it to a predefined series of tools, avoiding LLM decision-making in between tool calls?

And then eventually, with enough sample inputs, create simple functions that can recognize what tools should be used to process a type of input? And only fallback to an LLM agent if the input is novel?
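A sketch of that caching idea, with a deliberately crude input signature and a stubbed agent interface (the agent here is assumed, for illustration, to return both its answer and a trace of the tool calls it made):

```python
# Sketch: cache the tool sequence an agent used, keyed by a coarse input
# signature, and replay it on similar inputs; fall back to the full agent
# only for novel inputs. The signature function is a crude stand-in.

def signature(task: str) -> str:
    return " ".join(sorted(set(task.lower().split())))[:64]

class WorkflowCache:
    def __init__(self, agent):
        self.agent = agent             # fallback: agent(task) -> (result, trace)
        self.cache = {}                # signature -> list of (tool_name, args)

    def run(self, task: str, tools: dict):
        sig = signature(task)
        if sig in self.cache:
            result = None
            for name, args in self.cache[sig]:  # replay recorded sequence
                result = tools[name](*args)
            return result
        result, trace = self.agent(task)        # novel input: run the agent
        self.cache[sig] = trace
        return result
```

The hard part in practice is the signature: too coarse and you replay the wrong workflow, too fine and you never get a cache hit.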

0x457 · 5h ago
You somewhat can do this. I use neo4j as a knowledge database for agents, and it has processes and tasks described.
revskill · 7h ago
So an agent is just a monoid in the category of monads ?