For AI agents, what's the bigger problem: context sharing or prompting?
exclusivewombat · 5/20/2025, 2:59:36 PM
Been building with LLM-based agents lately and keep running into a few recurring challenges:
1/ Prompting – making sure agents behave how you want without overly long, fragile instructions
2/ Context sharing – passing memory, results, and state across time or between agents w/o flooding the system
3/ Cost – tokens get expensive fast, especially as things scale
Curious what others think is the real bottleneck here, and any tips/tricks for solving it. Are you optimizing around token limits, memory persistence, or better prompt engineering?
Would love to hear how you’re thinking about this or if there’s a smarter approach we’re all missing. ty in advance!
Comments (2)
pokerpricey1 · 7h ago
commenting as a beginner here so take this with a grain of salt, but i’ve been running into the same thing. it feels like context sharing is the deeper problem, especially once multiple agents are involved or when you want memory over time. that said, i spend so much time pasting the same prompts, and the marketplaces of 'good' prompts don't have much. the time i spend hunting for those prompts is roughly equal to or greater than just iterating myself until i get a good llm response.
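one thing that helps with the repeated-pasting problem is keeping prompt templates in code rather than in a notes file. a minimal sketch (the names `TEMPLATES` and `render_prompt` are made up for illustration, not from any particular library):

```python
# Minimal sketch: keep reusable prompt templates in one place instead of
# re-pasting them by hand. TEMPLATES and render_prompt are hypothetical names.
TEMPLATES = {
    "summarize": "Summarize the following text in {n_sentences} sentences:\n{text}",
    "extract": "Extract all {entity_type} mentioned in:\n{text}",
}

def render_prompt(name: str, **kwargs) -> str:
    """Fill a named template with keyword arguments."""
    return TEMPLATES[name].format(**kwargs)

prompt = render_prompt("summarize", n_sentences=2, text="Agents share context...")
```

once they live in version control you can tweak one template and every agent call picks it up, instead of chasing down pasted copies.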
onlytime2tell · 8h ago
prompting can be annoying and fragile, but at least it’s deterministic to iterate on: you tweak, you retry, you get something. context sharing is trickier because when it fails, coordination between agents and across time breaks down. you either have to a) constantly refeed everything, which burns tokens and latency, or b) build some brittle external memory system that doesn’t really map to how the agents think.
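option (b) above doesn't have to be elaborate; even a tiny shared store with a rough token budget avoids refeeding everything on every call. a minimal sketch, assuming word count as a crude token proxy (the `ContextStore` class and its methods are hypothetical, not any real framework's API):

```python
# Minimal sketch of option (b): a tiny external memory agents can share,
# with a rough token budget so recalled context doesn't flood the window.
# ContextStore and its methods are hypothetical names for illustration.
from collections import deque

class ContextStore:
    def __init__(self, max_tokens: int = 500):
        self.max_tokens = max_tokens
        self.entries: deque = deque()  # (agent, text, approx token count)

    def add(self, agent: str, text: str) -> None:
        # Crude token estimate: whitespace-separated word count
        self.entries.append((agent, text, len(text.split())))
        # Evict oldest entries once the budget is exceeded
        while sum(t for _, _, t in self.entries) > self.max_tokens:
            self.entries.popleft()

    def recall(self) -> str:
        """Render shared memory as a prompt prefix any agent can consume."""
        return "\n".join(f"[{a}] {s}" for a, s, _ in self.entries)

store = ContextStore(max_tokens=10)
store.add("planner", "step one: fetch the data")
store.add("worker", "data fetched, 3 rows")
print(store.recall())
```

the eviction policy (drop oldest) is the naive choice; in practice you'd probably want relevance-based recall instead, which is where the "doesn't map to how the agents think" pain kicks in.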