Ask HN: RAG or shared memory for task planning across physical agents?
LLM-based software agents are getting pretty good at tool use, memory, and multi-step task planning. But I'm curious whether anyone is pushing this further into the physical world, specifically with robots or sensor-equipped agents.
For example:
Imagine Robot A observes that an item is in Zone Z, and Robot B later needs to retrieve it. How do they share that context? Is it via:
- A structured memory layer (like a knowledge graph)?
- Centralized state in a RAG-backed store?
- Something simpler (or messier)?
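To make that concrete, the simplest version I can picture is a centralized structured store that every robot reads and writes. A minimal Python sketch; every name in it (WorldState, Observation, record, locate) is hypothetical:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Observation:
        item: str
        zone: str
        observed_by: str
        observed_at: datetime

    class WorldState:
        """Centralized memory that every robot reads and writes."""
        def __init__(self):
            self._items: dict[str, Observation] = {}

        def record(self, obs: Observation) -> None:
            # Last-writer-wins; a real system needs conflict rules.
            self._items[obs.item] = obs

        def locate(self, item: str) -> Observation | None:
            return self._items.get(item)

    # Robot A records a percept; Robot B queries it later.
    state = WorldState()
    state.record(Observation("item-42", "zone-Z", "robot-A",
                             datetime.now(timezone.utc)))
    obs = state.locate("item-42")
    if obs:
        print(f"{obs.item} last seen in {obs.zone} by {obs.observed_by}")

Even at this size the coordination questions show up: who wins on conflicting writes, and how stale is too stale for a plan to trust.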
I'm experimenting with a shared knowledge graph as memory across agents, backed by RAG for unstructured input and queryable for planning, dependencies, and task dispatch (stripped-down sketch after the questions below). Would love to know:
- Is anyone else thinking about shared memory across physical agents?
- How are you handling world state, task context, or coordination?
- Any frameworks or lessons you’ve found helpful?
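For reference, the graph layer I'm playing with is conceptually just (subject, predicate, object) triples with wildcard queries; RAG only handles unstructured input before it lands in the graph. A stripped-down sketch, again with all names hypothetical:

    class GraphMemory:
        """Shared graph: a bag of (subject, predicate, object) triples."""
        def __init__(self):
            self._triples: set[tuple[str, str, str]] = set()

        def add(self, s: str, p: str, o: str) -> None:
            self._triples.add((s, p, o))

        def query(self, s=None, p=None, o=None):
            # None is a wildcard -- SPARQL-ish, but far simpler.
            return [t for t in self._triples
                    if (s is None or t[0] == s)
                    and (p is None or t[1] == p)
                    and (o is None or t[2] == o)]

    graph = GraphMemory()
    graph.add("item-42", "located_in", "zone-Z")   # from Robot A's percept
    graph.add("task-7", "requires", "item-42")     # from the planner
    graph.add("robot-B", "assigned_to", "task-7")

    # Task dispatch: what does robot-B's task need, and where is it?
    for _, _, item in graph.query(s="task-7", p="requires"):
        print(item, "->", graph.query(s=item, p="located_in"))

What appeals to me is that world state, task dependencies, and dispatch all fall out of the same query primitive instead of needing separate stores.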
Exploring this space and would really appreciate hearing from others who are building in or around it. Thanks!