Show HN: I replaced vector databases with Git for AI memory (PoC)
The insight: Git already solved versioned document management. Why are we building complex vector stores when we could just use markdown files with Git's built-in diff/blame/history?
How it works:
- Memories stored as markdown files in a Git repo
- Each conversation = one commit
- git diff shows how understanding evolves over time
- BM25 for search (no embeddings needed)
- LLMs generate search queries from conversation context

Example: Ask "how has my project evolved?" and it uses git diff to show actual changes in understanding, not just similarity scores.
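For anyone who wants the shape of it in code, here is a minimal sketch of the write/search loop with GitPython and rank-bm25. It is not the actual DiffMem code; the file layout and function names are made up for illustration.

```python
# Minimal sketch, not the DiffMem implementation; paths and function names are illustrative.
from pathlib import Path

from git import Repo          # GitPython
from rank_bm25 import BM25Okapi

REPO_DIR = Path("memory-repo")

def write_memory(repo: Repo, filename: str, text: str, message: str) -> None:
    """One markdown memory file per topic; one conversation = one commit."""
    path = REPO_DIR / filename
    path.write_text(text, encoding="utf-8")
    repo.index.add([str(path.resolve())])
    repo.index.commit(message)

def search_memories(query: str, k: int = 5) -> list[str]:
    """Plain BM25 keyword search over all markdown memories; no embeddings involved."""
    docs = sorted(REPO_DIR.glob("*.md"))
    bm25 = BM25Okapi([d.read_text(encoding="utf-8").lower().split() for d in docs])
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(zip(scores, docs), key=lambda pair: pair[0], reverse=True)
    return [str(doc) for _, doc in ranked[:k]]

repo = Repo.init(REPO_DIR)    # creates the directory and the repo on first run
write_memory(repo, "project.md", "# Project\nSwitched retrieval from embeddings to BM25.", "conversation 2025-01-05")
print(search_memories("why did retrieval change"))
```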
This is very much a PoC - rough edges everywhere, not production ready. But it's been working surprisingly well for personal use. The entire index for a year of conversations fits in ~100MB RAM with sub-second retrieval.
The cool part: You can git checkout to any point in time and see exactly what the AI knew then. Perfect reproducibility, human-readable storage, and you can manually edit memories if needed.
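As a rough illustration of the time-travel part (again illustrative, not DiffMem's actual API), GitPython can read a memory file exactly as it existed at any past commit:

```python
# Sketch: read what a memory file said at an earlier point in time (illustrative names).
from git import Repo

repo = Repo("memory-repo")
oldest = list(repo.iter_commits(paths="project.md"))[-1]    # earliest commit touching the file
old_text = (oldest.tree / "project.md").data_stream.read().decode("utf-8")
print(old_text)                                             # exactly what the AI "knew" back then
```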
GitHub: https://github.com/Growth-Kinetics/DiffMem
Stack: Python, GitPython, rank-bm25, OpenRouter for LLM orchestration. MIT licensed.
Would love feedback on the approach. Is this crazy or clever? What am I missing that will bite me later?
That is what's known in FAISS as a "flat" index: just one thing after another. And obviously you can query the key-value store that Git effectively is by primary key, and do atomic updates as you'd expect. In SQL land this is an unindexed column: you can do primary-key lookups on the table, or you can scan every row in order to find what you want.
If you don't need fast query times, this could work great! You could also use SQL (maybe an AWS Aurora Postgres/MySQL table?), stuff each fact and its embedding into a table, and get declarative relational queries ("find me the 10 statements from users A-J closest to embedding [0.1, 0.2, -0.1, ...] within the past day"). Lots of SQL databases are getting embedding search (Postgres, SQLite, and more), which lets that search happen in a few milliseconds instead of a few seconds.
It could be worth sketching out how to use SQLite for your application instead of files on disk: SQLite was designed to be a better alternative to opening a file (what happens if power goes out while you are writing a file? what happens if you want to update two people's records without getting caught mid-update by another web app process?) and is very well supported by many language ecosystems.
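A rough sketch of what that looks like with Python's standard-library sqlite3 module (the schema and fact strings are made up for illustration):

```python
# Sketch of the SQLite route: transactional writes and declarative reads, no file juggling.
import sqlite3

con = sqlite3.connect("memories.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS facts (
        id       INTEGER PRIMARY KEY,
        user     TEXT NOT NULL,
        fact     TEXT NOT NULL,
        noted_at TEXT NOT NULL DEFAULT (datetime('now'))
    )
""")

# The `with` block is a transaction: a crash mid-update can't leave half-written records behind.
with con:
    con.execute("INSERT INTO facts (user, fact) VALUES (?, ?)",
                ("alice", "Prefers BM25 over embeddings"))
    con.execute("INSERT INTO facts (user, fact) VALUES (?, ?)",
                ("bob", "Asked how the project memory evolves"))

# Declarative query: everything noted about a user in the last day, newest first.
rows = con.execute(
    """SELECT fact, noted_at FROM facts
       WHERE user = ? AND noted_at >= datetime('now', '-1 day')
       ORDER BY noted_at DESC""",
    ("alice",),
).fetchall()
print(rows)
```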
Then mention she is 10; a few years later she is 12, but now I call her by her name.
I have struggled to get any of the RAG approaches to handle this effectively. It is also three entries, but two of them are no longer useful; they are nothing but noise in the system.
In your case, you do not want to store the age as a fact without context. Better is to transform the relative fact (age) into an absolute one (year of birth), or to contextualize it enough that it becomes absolute ("age 10 in 2025").
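A minimal sketch of that normalization at write time (the helper name is made up, not something from DiffMem):

```python
# Sketch: turn a relative fact ("she is 10") into an absolute one before storing it.
from datetime import date

def normalize_age_fact(name: str, age: int, observed: date | None = None) -> str:
    observed = observed or date.today()
    birth_year = observed.year - age      # approximate; off by one if the birthday hasn't passed yet
    return f"{name} was born around {birth_year} (age {age} as of {observed.isoformat()})."

print(normalize_age_fact("my niece", 10, date(2025, 3, 1)))
# -> my niece was born around 2015 (age 10 as of 2025-03-01).
```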
You need to annotate your text chunks. For example, you can use an LLM to look over the chunks and their dates and generate metadata like a summary or the entities mentioned. When you run the embedding, the combination of data + metadata will work better than the data alone.
The problem with RAG is that it only sees the surface level: for example, "10+10" will not embed close to "20", because embedding does not execute the meaning of the text; it only represents its surface form. So using an LLM to extract that meaning prior to embedding is a good move.
Make the implicit explicit. Circulate information across chunks prior to embedding. Treat text like code: embed <text inputs + LLM outputs>, not the text alone. The LLM is how you "execute" text to get at its implicit meaning.
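A rough sketch of that enrichment step, assuming an OpenAI-compatible client pointed at OpenRouter (the model name, prompt, and the embed() call are placeholders, not anything from the project):

```python
# Sketch of "embed <text inputs + LLM outputs>": enrich each chunk with LLM metadata first.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="...")  # OpenRouter speaks the OpenAI API

def enrich_chunk(chunk: str, chunk_date: str) -> str:
    """Ask an LLM to make the chunk's implicit meaning explicit, then keep both."""
    resp = client.chat.completions.create(
        model="openai/gpt-4o-mini",   # any capable model
        messages=[{
            "role": "user",
            "content": f"Date: {chunk_date}\nText: {chunk}\n\n"
                       "Summarize this in one sentence, list the entities it mentions, "
                       "and state any implied facts (e.g. compute results, resolve relative dates).",
        }],
    )
    metadata = resp.choices[0].message.content
    return f"{chunk}\n\n[metadata]\n{metadata}"   # embed this combined string, not the raw chunk

# embedding = embed(enrich_chunk("She turned 10 today.", "2025-03-01"))   # embed() left abstract
```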
Separating RAG from a memory system: it is important for a memory system to be able to consolidate facts, and any decent memory system will have this feature. In our brains we even have an eight-hour sleep window where memory consolidation can happen, based on simulated queries via dreams.
If I need to know someone's current age, I don't need to know their past ages.
Because BM25 ostensibly relies on word matching, there is no way it will extend to concept matching.
> Why are we building complex vector stores
Because we want to use embeddings.
Also, there are tradeoffs associated with using BM25 instead of embedding similarity. You're essentially trading semantic understanding for computational speed and keyword matching.
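A tiny illustration of that tradeoff using rank-bm25 (the library the project already uses; the corpus is made up):

```python
# Keyword matching vs. concept matching: BM25 only scores literal token overlap.
from rank_bm25 import BM25Okapi

corpus = [
    "I bought a new automobile last week",
    "My car broke down on the highway",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

print(bm25.get_scores("car repair".lower().split()))
# Only the document that literally contains "car" scores above zero;
# "automobile" never matches, which is exactly where embeddings would help.
```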
For code search only, BM25 might be a bit overkill and not exactly what you want. FM indexes would be a simpler and faster way to implement pure substring search.
Maybe having both kinds of search at the same time could work better than either in isolation. You could frame them as "semantic" and "exact" search from the perspective of the LLM tool calls. The prompt could then say things like "for searching the codebase use FunctionA, for searching requirements or issues, use FunctionB."
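One way to frame that for the model is to register two separate tools in the OpenAI-style schema that OpenRouter accepts (the names and descriptions below are illustrative, not from the repo):

```python
# Sketch: expose "exact" and "semantic" search as two distinct tools the LLM can call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "exact_search",
            "description": "Substring/keyword search. Use for code identifiers and literal strings.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "semantic_search",
            "description": "Meaning-based search. Use for requirements, issues, and fuzzy questions.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
]
# Pass `tools=tools` with the chat request; the system prompt can still spell out
# which tool to prefer for code vs. prose, as described above.
```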
No shade on your project, this is an emerging space and we can all use novel approaches.
Keep it up!
The use of commit-hooks is also very clever (mentioned here in the replies)
And you can _choose_ to explore the history, which in the most common case is not even needed.
I could envision a bunch of use cases where this works well. I've personally encountered scenarios where the AI gets hung up on an irrelevant, outdated fact; here it stays out of the way but could still be looked up if specifically needed.
I could even see an automated short summary of all the outdated history being kept up to date in the vector DB from this, so not all context is lost.
Keep up the great work!