Ask HN: Can you take your AI's memory with you?

3 points by Manik_agg | 8/5/2025, 3:08:29 PM | 2 comments
You use ChatGPT, Claude, Gemini, and Grok for writing, coding, and research. But none of them know what the others learned about you. This is today's reality: your AI memory is vendor-locked.

Currently only Cursor, Grok, and ChatGPT have memory capabilities, with more to follow. But even once every AI system has its own memory, the problem will remain: each memory will be a silo.

I believe we need an independent, third-party memory layer for AI systems: one that can share context with any AI app or agent you use.

Many people on X are already discussing this memory-layer problem. Some are building memory MCPs that connect to various providers, but that only partially solves the issue, since an MCP is not deeply integrated with those systems.

An open standard for memory should:

- Add context to your memory from everywhere (PDFs, web pages, blogs, Linear, Notion, documents, meetings, etc.)
- Enable seamless context recall in any AI app you use (see the interface sketch below)
- Eliminate vendor lock-in: you own your personal memory
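
For concreteness, here is a minimal sketch of the surface such a standard might expose. Everything in it (MemoryItem, MemoryStore, the method names) is hypothetical, not an existing spec:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Protocol


@dataclass
class MemoryItem:
    """One remembered piece of context, tagged with where it came from."""
    text: str
    source: str  # e.g. "notion", "pdf", "meeting"
    created_at: datetime = field(default_factory=datetime.utcnow)


class MemoryStore(Protocol):
    """Hypothetical vendor-neutral interface any AI app could target."""

    def add(self, item: MemoryItem) -> str:
        """Ingest context from anywhere; returns a memory id."""
        ...

    def recall(self, query: str, limit: int = 5) -> list[MemoryItem]:
        """Return the items most relevant to the current conversation."""
        ...

    def export_all(self) -> list[MemoryItem]:
        """You own the data: dump everything for portability."""
        ...
```

Any vendor could implement MemoryStore; your memory would then travel between apps instead of living inside one of them.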

Do you think your AI memory should be owned by you, or should it remain vendor-locked with each platform?

Disclaimer: I am building one such third-party memory layer, CORE (https://github.com/RedPlanetHQ/core), so it's important for me to understand how you folks think about memory ownership.

Comments (2)

Rooster61 · 7h ago
Isn't lugging around all that memory "baggage" going to become cumbersome to the models we use? The more memory you bring along, the larger the footprint of what has to be fed into the context window.

Granted, in my mind, this basically just looks like RAGing in memory from model to model, and I may be looking at this over-simplistically. Is there a technique you have in mind that helps streamline the extra context needed?

Manik_agg · 7h ago
You're right: dumping all memory into the context window doesn't scale. But with CORE, we don't do that.

We use a reified knowledge graph for memory, where:

- Each fact is a first-class node (with timestamp, source, certainty, etc.)
- Nodes are typed (Person, Tool, Issue, etc.) and richly linked
- Activity (e.g. a Slack message) is decomposed and connected to relevant context (sketched below)
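
Roughly, a reified fact looks something like this. A simplified sketch with illustrative field names, not the exact schema:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Node:
    """A typed entity in the graph: Person, Tool, Issue, ..."""
    id: str
    type: str
    name: str


@dataclass
class Fact:
    """A reified fact: the statement is itself a first-class node,
    carrying provenance instead of living on an edge label."""
    statement: str
    timestamp: datetime
    source: str        # e.g. "slack", "linear", "meeting"
    certainty: float   # 0.0 to 1.0
    about: list[str] = field(default_factory=list)  # ids of linked Nodes


# A Slack message decomposed into typed entities plus a linked fact.
alice = Node("alice", "Person", "Alice")
rust = Node("rust", "Tool", "Rust")
fact = Fact(
    statement="Alice prefers Rust for systems work",
    timestamp=datetime(2025, 7, 20),
    source="slack",
    certainty=0.9,
    about=[alice.id, rust.id],
)
```

Because the fact is a node rather than an edge label, its provenance (timestamp, source, certainty) can be filtered on directly at retrieval time.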

This structure allows precise subgraph retrieval based on semantic, temporal, or relational filters, so only what's relevant is pulled into the context window. It's not just RAG over documents; it's graph traversal over structured memory. The model doesn't carry memory; it queries what it needs.
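
In the simplest case, that retrieval could reduce to something like the sketch below. Keyword matching stands in for real semantic search over embeddings, and the names are again illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Fact:
    statement: str
    timestamp: datetime
    source: str
    certainty: float
    about: list[str] = field(default_factory=list)  # linked entity ids


def recall(facts: list[Fact], *, topic: str | None = None,
           entity: str | None = None, since: datetime | None = None,
           min_certainty: float = 0.5, limit: int = 5) -> list[Fact]:
    """Pull only the relevant subgraph into the context window.

    semantic filter:   keyword match (a stand-in for embedding search)
    temporal filter:   only facts at or after `since`
    relational filter: only facts linked to a given entity id
    """
    hits = [
        f for f in facts
        if f.certainty >= min_certainty
        and (topic is None or topic.lower() in f.statement.lower())
        and (since is None or f.timestamp >= since)
        and (entity is None or entity in f.about)
    ]
    # Most recent, most certain facts first; cap what enters the prompt.
    hits.sort(key=lambda f: (f.timestamp, f.certainty), reverse=True)
    return hits[:limit]


facts = [
    Fact("Alice prefers Rust for systems work",
         datetime(2025, 7, 20), "slack", 0.9, about=["alice"]),
    Fact("Alice moved from the infra team to ML",
         datetime(2025, 8, 1), "linear", 0.8, about=["alice"]),
]
# Relational + temporal recall: everything linked to "alice" since July 2025.
recent = recall(facts, entity="alice", since=datetime(2025, 7, 1))
```

Only the handful of facts that survive the filters ever enters the prompt, which is what keeps the context-window footprint bounded.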

So yes, the memory problem is real, but reified graphs actually make it tractable.