There's a series of agents out recently (Claude Code, Manus, Deep Research) which execute tasks over longer time horizons particularly well
At the core of it, it's just an LLM running in a loop calling tools... but when you try to do this naively (or at least, when I try to do it) the LLM struggles with doing long/complex tasks
So how do these other agents accomplish it?
These agents all do similar things, namely:
1. They use a planning tool
2. They use sub agents
3. They use a file-system-like thing to offload context
4. They have a detailed system prompt (prompting isn't dead!)
I don't think any of these things individually is novel... but I also don't think they're commonplace when building agents. And the combination of them is (I think) an interesting insight!
Would love any feedback :)
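As a sketch of what the four ingredients above might look like wired together, here is a toy loop. Everything here is a hypothetical stand-in (the tool names, the `call_llm` runner, the action format), not the actual deepagents or Claude Code implementation:

```python
# Toy version of the pattern: a detailed system prompt, a planning (todo)
# tool, a sub-agent tool, and an in-memory "file system" for offloading
# context. `call_llm` is a hypothetical stand-in for a real model call that
# returns either a tool call or a final answer.

SYSTEM_PROMPT = "You are a careful agent. Plan first, then work step by step."

files: dict[str, str] = {}   # virtual file system for offloading context
todos: list[str] = []        # the plan, kept outside the message history

def write_todos(items: list[str]) -> str:
    todos[:] = items
    return f"Recorded {len(items)} todos"   # tool result fed back to the LLM

def write_file(path: str, content: str) -> str:
    files[path] = content
    return f"Wrote {path}"

def run_subagent(task: str) -> str:
    # In a real system this starts a fresh LLM loop with its own, smaller
    # context; here it is just a placeholder.
    return f"subagent finished: {task}"

TOOLS = {"write_todos": write_todos, "write_file": write_file,
         "run_subagent": run_subagent}

def agent_loop(user_task, call_llm, max_steps=10):
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_task}]
    for _ in range(max_steps):
        action = call_llm(messages)          # tool call or final answer
        if action["type"] == "final":
            return action["content"]
        result = TOOLS[action["name"]](**action["args"])
        messages.append({"role": "tool", "content": result})
    return "gave up"
```

The point of the sketch is that none of the four pieces changes the core while loop; they just shape what ends up in the model's context each turn.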
web-cowboy · 5h ago
As I think through this, I agree with others mentioning that "deep agents" still sounds a lot like agents+tools. I guess the takeaway for me is:
1. You need a good LLM for base knowledge.
2. You need a good system prompt to guide/focus the LLM (create an agent).
3. If you need some functionality that doesn't make any decisions, create a tool.
4. If the agent + tools flows get too unwieldy, break them down into smaller domains by spawning sub agents with focused prompts and (fewer?) tools.
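Point 4 could be sketched like this. The tool functions, sub-agent names, and the `run_loop` runner are all made up for illustration; the only idea being shown is that each sub-agent gets a narrow prompt and a subset of the parent's tools:

```python
# Hypothetical sketch of spawning sub-agents with focused prompts and
# restricted tool sets. None of these names come from a real framework.

def search_docs(query: str) -> str: return f"results for {query}"
def run_tests(path: str) -> str: return f"tests passed in {path}"
def edit_file(path: str, text: str) -> str: return f"edited {path}"

ALL_TOOLS = {"search_docs": search_docs, "run_tests": run_tests,
             "edit_file": edit_file}

SUBAGENTS = {
    # each sub-agent sees only the tools its domain needs
    "researcher": {"prompt": "You only research. Do not edit files.",
                   "tools": ["search_docs"]},
    "tester":     {"prompt": "You only run and report tests.",
                   "tools": ["run_tests"]},
}

def spawn_subagent(name: str, task: str, run_loop) -> str:
    spec = SUBAGENTS[name]
    tools = {t: ALL_TOOLS[t] for t in spec["tools"]}
    # fresh context: the sub-agent never sees the parent's history
    return run_loop(system=spec["prompt"], task=task, tools=tools)
```

The restriction is the whole trick: a smaller tool list plus a focused prompt shrinks the decision space the sub-agent has to reason over.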
_andrei_ · 6h ago
ah, deep agents = agents with planning + agents as tools => so regular agents.
i hate how LangChain has always tried to make things that are simple seem very complicated, and all the unnecessary new terminology and concepts they've pushed, but whatever sells LangSmith.
noodletheworld · 4h ago
This matches my expectations.
Now that it's increasingly clear that writing MCP servers isn't a winning strategy, people need a new way to jump on the bandwagon as easily as possible.
Writing your own agent like Gemini and Claude Code is the new hotness right now.
- low barrier to entry (tick)
- does something reasonably useful (tick)
- doesn't require any deep AI knowledge or skill (tick)
- easy to hype (tick)
It's like “cursor but for X” but easier to ship.
We're going to see a tonne of coding agents built this way, but my intuition is, and what I've seen so far is, that they're not actually introducing anything novel.
Maybe having a quick start like this is good, because it drops the value of an unambitious direct claude code clone to zero.
Still work in progress, but I'm already using it to code itself. Feedback welcome.
revskill · 22m ago
Weird. The most interesting part is totally hidden: how you manage tool calls, from parsing to execution.
shmatt · 7h ago
At least from what I noticed, Junie from JetBrains was the first to use a very high-quality to-do list, and it quickly became my favorite
I haven't used it since it became paid, but back then Junie was slow and thoughtful, while Cursor was constantly re-writing files that worked fine, and Claude was somewhere in the middle
tough · 6h ago
Cursor added a UI for the todo list and encourages its agent to use it (it's great UX, but you can't really see a file of it).
Kiro from Amazon does both tasks (in tasks.md) and specs.
Too many tools soon; choose what works for you.
gsmt · 4h ago
Offloading context to a shared file system sounds good, but at what point does it start getting messy when multiple subagents work in parallel?
jayshah5696 · 7h ago
Sub agents adding context isolation is the real deal; the rest is just a LangGraph ReAct agent.
PantaloonFlames · 5h ago
This is valuable but not really a novel idea.
sabakhoj · 3h ago
Do subagents run in parallel?
revskill · 16m ago
No way, because they share the filesystem.
seabass · 8h ago
Is there more info on how the todo list tool is a noop? How exactly does that work?
It is relatively easy to get the agent to use it; most of the work for us is surfacing it in the UI.
JyB · 7h ago
Same question. I don’t understand what they mean by that.
It obviously seems pretty central to how Claude Code is so effective.
kjhughes · 7h ago
I thought they meant that it's a noop as a tool in the sense that it takes no external action. It seems nonetheless effective as a means of organizing reasoning and expressing status along the way.
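A minimal sketch of that reading, i.e. a tool whose only effect is echoing the model's own plan back into its context. This is a guess at the mechanism being discussed, not Claude Code's actual implementation:

```python
# Hypothetical "no-op" todo tool: nothing is stored or executed. The
# formatted list is simply returned as the tool result, so the model keeps
# seeing its own plan on later turns.

def todo_write(todos: list[dict]) -> str:
    lines = [f"[{'x' if t.get('done') else ' '}] {t['text']}" for t in todos]
    return "Current todos:\n" + "\n".join(lines)
```

Calling `todo_write([{"text": "read file", "done": True}, {"text": "fix bug"}])` just returns the checklist as text; the "work" the tool does is purely a context-management effect.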
kobstrtr · 7h ago
just for chain of thought, TodoWrite would be sufficient as a tool, wouldn't it?
lmeyerov · 6h ago
i think he means it's 'just' a thin concat
most useful prompt stuff seems 'simple' to implement ultimately, so it's more impressive to me that such a simple idea of TODO goes so far!
(agent frameworks ARE hard in serious settings, don't get me wrong, just for other reasons. ex: getting the right mix & setup is devilishly hard, as are infra layers below like multitenancy, multithreading, streaming, cancellation, etc.)
re: the TODO list, strong agree on criticality. it's flipped how we do louie.ai for stuff like speed running security log analysis competitions. super useful for preventing CoT from going off the rails after only a few turns.
a fun 'aha' for me there: nested todo's are great (A.2.i...), and easy for the LLM b/c they're linearized anyways
You can see how we replace Claude Code's prompts for our own internal vibe-coding usage, which helps with Claude's constant compactions as a heavy user (= assuages the issue of the ticking timer for a lobotomy): https://github.com/graphistry/louie-py/blob/main/ai/prompts/...
ttul · 7h ago
The context will contain a record that the tool call took place. The todo list is never actually fetched.
TrainedMonkey · 6h ago
My understanding is that it is basically a prompt about making a TODO list.
kobstrtr · 7h ago
If it was a noop, I feel like there wouldn't be a need to have TodoRead as a tool, since TodoWrite exists.
Would love to get more info on whether this is really a noop.
aabhay · 7h ago
My guess is the todo list is carried across “compress” points where the agent summarizes and restarts with fresh context + the summary
storus · 5h ago
"I hacked on an open source package (deepagents) over the weekend." Thanks but no thanks.
epolanski · 5h ago
Some of the biggest software in use today was hacked together in a few days in its first version. Git is a famous one.
owebmaster · 2h ago
Absolutely not. Linus had Git in his brain; it took a few days to write a first version, but multiple years of learning.
yawnxyz · 5h ago
most of these agents are still fundamentally simple while loops; it shouldn't really take longer than a weekend to get one built
SCUSKU · 5h ago
Hacker hacks on project and gets posted to Hacker News.
Commenter on Hacker News: No thanks, no hacking please.
storus · 5h ago
It's on LangChain's official page, a framework that looks like it was hacked together over a weekend by a fresh grad and that brought a lot of pain to agentic development, and this just feels like piling more pain on top of it.
I like it.
The author has done a pretty good job of reverse engineering Claude Code and explaining the architecture.
update: changed the link to a better repo
This is a better repo to learn about Claude Code internals:
https://github.com/ghuntley/claude-code-source-code-deobfusc...