Ask HN: RAG or shared memory for task planning across physical agents?

11 points by mbbah | 5/9/2025, 8:46:36 PM | 2 comments
LLM-based software agents are getting pretty good at tool use, memory, and multi-step task planning. But I’m curious whether anyone is pushing this further into the physical world, specifically with robots or sensor-equipped agents.

For example:

Imagine Robot A observes that an item is in Zone Z, and Robot B later needs to retrieve it. How do they share that context? Is it via:

  - A structured memory layer (like a knowledge graph)?
  - Centralized state in a RAG-backed store?
  - Something simpler (or messier)?

I’m experimenting with a shared knowledge graph as memory across agents, backed by RAG for unstructured input and queryable for planning, dependencies, and task dispatch.
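To make the Robot A / Robot B example concrete, here's a minimal sketch of the shared-memory idea: a triple store that one agent writes observations into and another queries during planning. All names here are illustrative, not from any real framework; a production version would presumably sit on a graph database with RAG over unstructured sensor logs feeding it.

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    """In-memory (subject, predicate, object) triples shared by agents."""
    triples: set = field(default_factory=set)

    def observe(self, subject, predicate, obj):
        # An agent records a fact about the world.
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        # Another agent asks for matching facts; None acts as a wildcard.
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

memory = SharedMemory()
memory.observe("item_42", "located_in", "zone_Z")               # Robot A
hits = memory.query(subject="item_42", predicate="located_in")  # Robot B
print(hits)  # [('item_42', 'located_in', 'zone_Z')]
```

The interesting questions start where this sketch stops: staleness (Robot A's observation may be outdated by the time Robot B acts), conflicting observations, and how unstructured RAG output gets distilled into triples like these.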

Would love to know:

  - Is anyone else thinking about shared memory across physical agents?
  - How are you handling world state, task context, or coordination?
  - Any frameworks or lessons you’ve found helpful?

I’m exploring this space and would really appreciate hearing from others who are building in or around it.

Thanks!

Comments (2)

muzani · 32d ago
There's an interesting experiment here, with a related blog: https://github.com/grapeot/devin.cursorrules?tab=readme-ov-f...

Basically, instead of a complex memory layer, it just uses .cursorrules as the memory. This was before MCPs, so it might be capable of more today.

scowler · 35d ago
We’ve been exploring typed task graphs as an alternative to shared memory. Turns coordination into lineage rather than state. Surprisingly scalable. Happy to compare notes.
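For what "coordination as lineage rather than state" might look like, here's a small sketch under my own assumptions (the types and names are hypothetical, not from the commenter's system): each task node records the upstream tasks whose outputs it consumes, so a retrieval task traces back to the observation that produced the location fact, instead of both agents mutating a shared store.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Task:
    name: str
    kind: str                        # e.g. "observe", "retrieve"
    output: str                      # the fact or artifact this task yields
    parents: Tuple["Task", ...] = () # upstream tasks this one depends on

    def lineage(self) -> List[str]:
        """Walk ancestors first, then self: the full provenance of this task."""
        seen: List[str] = []
        for p in self.parents:
            seen.extend(p.lineage())
        seen.append(self.name)
        return seen

observe = Task("observe_item", "observe", "item_42@zone_Z")
retrieve = Task("retrieve_item", "retrieve", "item_42@robot_B",
                parents=(observe,))
print(retrieve.lineage())  # ['observe_item', 'retrieve_item']
```

The appeal, as I read the comment, is that provenance comes for free and there's no shared mutable state to keep consistent; the trade-off is that anything not captured as a task output is invisible to downstream agents.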
