Reducing LLM memory drift and what I missed in my first post
The issue: long LLM sessions lose context, repeat themselves, or drift off-topic. I wanted to see if structural prompts, not just clever phrasing, could help.
I collected user complaints from GitHub, Reddit, and Discord, and patterns emerged. I manually logged failure cases, tested response stability, and built a continuity schema that uses timestamped logs, intent tracking, reseed blocks, and enforced formatting.
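To make the schema concrete, here's a minimal sketch of those pieces in Python. The names (`SessionLog`, `reseed_block`, the intent labels) are my own illustration of the idea, not the protocol's actual API; the enforced formatting is just the fixed `- timestamp [intent] summary` line shape.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LogEntry:
    timestamp: str   # UTC ISO timestamp of the turn
    intent: str      # tracked intent label, e.g. "question", "follow-up", "correction"
    summary: str     # one-line summary of the turn

@dataclass
class SessionLog:
    entries: list = field(default_factory=list)

    def log(self, intent: str, summary: str) -> None:
        """Append a timestamped, intent-tagged entry for one session turn."""
        self.entries.append(LogEntry(
            timestamp=datetime.now(timezone.utc).isoformat(timespec="seconds"),
            intent=intent,
            summary=summary,
        ))

    def reseed_block(self, last_n: int = 5) -> str:
        """Build a block to paste at the top of a new (or drifting) session
        so the model re-anchors on recent context. Every line follows the
        same enforced format: "- <timestamp> [<intent>] <summary>"."""
        lines = ["[RESEED] Recent session context:"]
        for e in self.entries[-last_n:]:
            lines.append(f"- {e.timestamp} [{e.intent}] {e.summary}")
        return "\n".join(lines)

log = SessionLog()
log.log("question", "Asked for a schema to reduce LLM drift")
log.log("follow-up", "Requested the timestamped logging format")
print(log.reseed_block())
```

The reseed block is the part doing the anti-drift work: instead of hoping the model retains old context, you periodically re-inject a compact, consistently formatted summary of it.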
This isn’t a product, just a system to reduce drift and maintain clarity across sessions.
If you’ve worked on session continuity, prompt routing, or LLM tooling, I’d love to hear what’s worked (or not) for you.
Here’s the original post with more detail: https://news.ycombinator.com/item?id=44242043
GitHub link - https://github.com/Lyellr88/MARM-Protocol
Reddit post with the question that started it all - https://www.reddit.com/r/ArtificialInteligence/comments/1l294du/whats_the_one_thing_you_wish_your_ai_could_do/
Protocol launch post on Reddit - https://www.reddit.com/r/PromptEngineering/comments/1l7jtpn/i_analyzed_150_real_ai_complaints_then_built_a/