Build real-time knowledge graph for documents with LLM

165 badmonster 35 5/13/2025, 7:48:04 PM cocoindex.io ↗

Comments (35)

dvrp · 21h ago
I feel like you can do the same using a single markdown file and an LLM (e.g. Claude Code).

I do it that way, and I've hooked it up to the Telegram API. I’m able to ask things like “What’s my passport number?” and it just works.

Combine it with git and you have a Datomic-esque way of seeing facts getting added and retracted simply by traversing the commits.

I arrived at this solution after trying a more complex triplet-based approach and seeing that plain text files + HTTP calls work just as well and are human (and AI) friendly.

The main disadvantage is having unstructured data, but for content that fits inside the LLM context window, it doesn’t matter practically speaking. And even then, when context starts being the limiting factor, you can start segmenting by categories or start using embeddings.

yard2010 · 15h ago
I'm curious, how do you find your passport number in Telegram? Do you embed every message and then do cosine similarity to find the message that is relevant to the question? Please write about your system more :)
sgt101 · 11h ago
Maybe ask the LLM to extract facts from the documents as datalog assertions and then use a reasoner/llm tool to answer the question?
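That pipeline might look something like this sketch. The predicate names and example facts are invented, and the matcher is a stand-in for a real Datalog engine (e.g. Soufflé), not one:

```python
# Facts an LLM might emit from documents, as (predicate, subject, object)
# assertions. Both facts below are invented examples.
facts = {
    ("supports", "CocoIndex", "Incremental Processing"),
    ("written_in", "CocoIndex", "Rust"),
}

def query(predicate, subject=None, obj=None):
    """Tiny pattern matcher standing in for a real Datalog reasoner.

    None acts as a wildcard, like an unbound Datalog variable.
    """
    return [
        f for f in facts
        if f[0] == predicate
        and subject in (None, f[1])
        and obj in (None, f[2])
    ]
```

An LLM (or a proper reasoner) would then answer questions by running queries like `query("supports", subject="CocoIndex")` over the extracted assertions.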
barrenko · 13h ago
Not OP, but I think he literally supplies the "vanilla" .md file to the LLM and prompts.
dvrp · 11h ago
Yes! This in essence.

Specifically, it’s a file that contains a list of Entity-Attribute-Value assertions in triplets.

It’s called “FACTS.md” and each line represents a fact. Such as “<OP>, PASSPORT_NUMBER, <VALUE>”

Then I put it in context, ask a question, and use the Telegram API, and suddenly I have a “Private ChatGPT” that’s aware of my filesystem, can run my own binaries/tools, and has access to a private document store.
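A minimal sketch of that flow, assuming the FACTS.md format described above; the prompt wording and helper name are my guesses, not OP's actual setup:

```python
def build_prompt(question: str, facts: str) -> str:
    """Pack the whole fact file into one prompt.

    Fine while FACTS.md fits in the context window, as OP notes.
    """
    return (
        "Each line below is an <Entity>, <Attribute>, <Value> fact.\n\n"
        f"{facts}\n\n"
        f"Question: {question}\n"
        "Answer using only the facts above."
    )

# facts = open("FACTS.md").read_text()
# prompt = build_prompt("What's my passport number?", facts)
# ...send `prompt` to any chat-completion API, reply via the Telegram Bot API
```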

It gets cool once you add function calling to open images on demand (or any type of file) with vision capabilities/OCR and you start running shell commands and combining that with many media types from Telegram.

Funny enough, I called the project “COO” initially. Been thinking of writing up something about it.

I think it’s a no brainer and I’m confident OpenAI, Claude, and Notion will go there.

In the meantime, I have good-ol’ vi, .md/.txt, and HTTP/SMTP!

hailruda · 1h ago
I’d appreciate a writeup! I’d like to implement this myself, maybe add reminders.
badmonster · 3h ago
This is such a cool idea, would love to hack on a project sometime with multiple media types, map them to knowledge graphs, and feed that to agents :)
unshavedyak · 6h ago
I really miss the ease of Telegram bots. It's so fun to write stuff like this.
Xmd5a · 11h ago

    You have no graphs, no concepts, no nothing
    [...]
    You never understood the meaning of concept
    Lyrics are full of depth and ideas connect
    [...]
    You can only dream to write like I write, I might
    Ignite, confuse and leave you blinded by the light
    'Cause I been working on graphs, concepts and all of that
    Making it difficult for those who might try to follow that

Mark B & Blade – The Way It Has To Be

https://www.youtube.com/watch?v=l-rbtCM0g6c

krallistic · 12h ago
It's quite funny to see that LLMs revived interest in knowledge graphs/reasoning/triple stores etc., since (at a high level) both are often pitched to solve the same goal (e.g. ask an AI about a topic).
StopDisinfo910 · 11h ago
If you think about it, it makes a lot of sense. The main impediment to the usefulness of knowledge graphs was always how to build them, as turning unstructured data into structured data at scale is difficult. That's exactly something LLMs are now pretty good at.
ianbicking · 18h ago
I feel like I should understand the purpose of knowledge graphs, but I just... don't.

Like the example "CocoIndex supports Incremental Processing" becomes the subject/predicate/object triple (CocoIndex, supports, Incremental Processing)... so what? Are you going to look up "Incremental Processing" and get a list of related entities? That's not a term that is well enough defined to be meaningful across a variety of subjects. I can incrementally process my sandwich by taking small bites.

I guess you could actually expand "Incremental Processing" to some full definition. But then it's not really a knowledge graph, because the only entity ever associated with that new definition will be CocoIndex, and you are back to a single sentence that contains the information; you've just pretended it's structured. ("Supports" is hardly a well-defined term either!)

I can _kind of_ see how knowledge graphs can be used for limited relationships. If you want to map companies to board members, and board members to family members, etc. Very clearly and formally defined entities (like a person or company), with clearly defined relationships (board member, brother, etc). I still don't know how _useful_ the result is, but at least I can understand the validity of the model. But for everything else... am I missing something?

visarga · 6h ago
> I feel like I should understand the purpose of knowledge graphs, but I just... don't.

Like RAG, it decouples KG size from context size, but unlike RAG, a KG offers deduplication and relational traversal. Some searches based on just similarity or keywords fail when the relation is functional. Both KG and RAG work better when the LLM is planning the search process, doing multiple searches, basing each one off the previous one. In the last few months LLMs have gotten great at exploration with search tools.

I implemented my own KG recently and I put both search and node generation in the hands of the LLM, as MCP tools. The cool trick is that when I instruct the LLM to generate a node it links to previous nodes using inline references (like @45). So I get the graph structure for free. I think coupling RAG with a KG allows for both breadth and precise control. The RAG is assimilating unstructured chunks, the KG is mapping the corpus. All done with human in the loop to guide the process.
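The inline-reference trick could be sketched like this; the `@45` format comes from the comment above, but the function and its name are my invention:

```python
import re

def extract_edges(node_id: int, node_text: str) -> list[tuple[int, int]]:
    """Turn inline references like '@45' in LLM-generated node text
    into (source, target) graph edges — the 'graph structure for free'.
    """
    return [(node_id, int(ref)) for ref in re.findall(r"@(\d+)", node_text)]
```

So a generated node like "Builds on @45 and refines @12" yields two edges without any separate edge-extraction step.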

alexchantavy · 18h ago
IMO knowledge graphs are a must have for security use-cases because of how well they handle many-to-many relationships. Who has access to read each storage bucket? Via which IAM policies? Who owns each bucket? What is the shortest possible role-assumption path available from internet-exposed compute instances to read this bucket? What is the effective blast radius from a vulnerability that allows remote code execution on an internet exposed compute instance?

Or, I have a docker container image that is built from multiple base images owned by different teams in my organization. Who is responsible for fixing security vulnerabilities introduced by each layer?

We really could model these as tables but getting into all those joins makes things so cumbersome. Plus visualizing these things in a graph map is very compelling for presentation and persuading stakeholders to make security decisions.
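The access-path questions above map directly onto graph search. A toy sketch with a plain BFS; every node and edge name is invented for illustration:

```python
from collections import deque

# Toy security graph: an edge means "can assume role" or "grants read on".
# All names are invented.
edges = {
    "internet:web-vm": ["role:app"],
    "role:app": ["role:ops"],        # app role can assume ops role
    "role:ops": ["bucket:payroll"],  # ops role can read the bucket
}

def shortest_path(graph, start, goal):
    """BFS: shortest chain of role assumptions from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Expressing the same query as SQL self-joins over tables is possible, but each extra hop means another join, which is exactly the cumbersomeness the comment describes.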

nightfly · 11h ago
Are there existing tools that model security stuff like this? For a few years I've wanted to build a model like this and search for vulnerabilities using something like GOAP (Goal-Oriented Action Planning)
alexchantavy · 5h ago
I built an open source one (https://github.com/cartography-cncf/cartography) and am building commercial support around it (https://subimage.io)
badmonster · 17h ago
In my understanding, there are two kinds of use cases that can potentially be explored with knowledge graphs:

- Structured data - this is probably closer to the use case you mention

- Unstructured data - extract relationships and build a KG with natural language understanding, which is what this article tries to explore. Here is a paper discussing this: https://arxiv.org/abs/2409.13731

In general it is an alternative way to easily establish connections between entities. And these relationships could help with discovery, recommendation and retrieval. Thanks @alexchantavy for sharing use-cases in security.

Would love to learn more from the community :)

vintermann · 15h ago
Well, in my hobby of genealogy it's all about building a knowledge graph. Not just the obvious facts of which children belong to which parents etc, but also where great-grandpa was in 1920.

People reach for a database, and of course you need that, but for one thing the data certainly doesn't always come in a nice tabular format, and moreover you often don't know which piece of knowledge will become relevant for a question you care about - maybe two people worked together at the Kings Bay Mining Company, and then there was the accident in 1962, but uncle Hans was an inspector at Wilhelmsen, etc. Often you make progress because you remember niche geographical or historical information.

th0ma5 · 21h ago
People probably don't discuss the problems with open-world knowledge graphs enough. It's essentially the same class of problems as spam filters. Using an open language model to produce a graph doesn't, by definition, create a closed-world graph. This confusion, along with a general avoidance of measuring actual productivity outcomes, seems like an insurmountable problem in the knowledge world right now, and I feel language itself is failing at times to educate on these issues.
lyu07282 · 19h ago
They don't even do any entity disambiguation, so the resulting graph won't be very useful indeed. I also saw people then use a different prompt to generate a Cypher query from user input for RAG; I can't imagine that actually works well. It would make a little more sense if they then used knowledge graph embeddings, but I'm not sure if Neo4j supports that.
badmonster · 3h ago
btw, Neo4j supports vector properties and building a vector index on them. CocoIndex supports configuring such a vector index: https://cocoindex.io/docs/ops/storages#neo4j

Neo4j also supports building embeddings that leverage more information in the graph beyond a single node's properties: https://neo4j.com/docs/graph-data-science/current/machine-le... (It's hard to compute them incrementally, but users can still compute them after the graph is built)

looking forward to learning your thoughts :)

badmonster · 4h ago
Entity resolution is a great topic and there’s lots of research in this area. This is something I’m looking into next. I’m thinking about

- Metadata-based match (I’ve done that with a search system in the past)

- Embedding-based match (false positives are a definite consideration)

- Using the knowledge graph itself to do entity resolution before feeding the entities into the graph

- Adding a human in the loop to guide entity resolution

What do you think? Would love to learn your thoughts :)
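The embedding-based option could look something like the sketch below. The vectors, names, and threshold are all made up; a real system would embed entity mentions with a sentence-embedding model, and the threshold is exactly where the false positives mentioned above creep in:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def resolve(entity_vecs, threshold=0.9):
    """Greedy merge: map each entity to the first earlier entity whose
    embedding clears the similarity threshold, else to itself.
    """
    canonical = {}
    names = list(entity_vecs)
    for i, name in enumerate(names):
        match = next(
            (other for other in names[:i]
             if cosine(entity_vecs[name], entity_vecs[other]) >= threshold),
            None,
        )
        canonical[name] = canonical[match] if match else name
    return canonical
```

A human-in-the-loop pass would then review the proposed merges before they reach the graph, catching the threshold's false positives.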

gitroom · 6h ago
honestly seeing folks mash up markdown plus llms and telegram hits home - got me fired up thinking of new stuff i could build. you think there's a point where all these little tools we hack together start to beat the big platforms at their own game?
gorpy7 · 23h ago
idk if it’s precisely the same, but o3 recently offered to create one for me (in markdown, was it?), suggesting it was something it was willing to maintain for me.
cipehr · 22h ago
sorry, what is `o3`? I am not familiar with it... unless you're talking about the OpenAI ChatGPT model?

If so that's crazy, and I would love pointers on how to prompt it to suggest this?

Onawa · 18h ago
o3 is one of the myriad models offered by OpenAI. You can see some metrics and comparisons with other models here: https://artificialanalysis.ai/models/o3/providers
gorpy7 · 23h ago
i think it offered a few formats, but i specifically remember it would do it in obsidian, to use the concept-map ability within.
marviel · 22h ago
mermaid probably.
8thcross · 20h ago
building knowledge graphs (GraphRAG) is obsolete from an academic and technical point of view. LLMs are getting better with built-in graph-capable algorithms like SONAR and knowledge embeddings. like someone said - just use NotebookLM instead. But they are useful in a corporate setup where the infrastructure, teams, and skills are lagging by years.
phren0logy · 19h ago
My use case is for documents related to a legal issue, where a foundation model has no knowledge of any of the participants or particular issues. There are many, many such situations. Your statement is ignorant and overly broad.
esafak · 7h ago
That's only true if you can train or fine tune the LLM. If you are merely a user augmenting it with RAG then GraphRAG is perfectly viable.
timfrazer · 20h ago
Could you provide some academic proof? From what I’ve read this isn’t true, so I’d be interested to see what you’re referring to.
manishsharan · 20h ago
Why not merely upload all relevant documents into Gemini? Split the knowledge into smaller knowledge domains and have agents ( backed by Gemini) for each domain?
badmonster · 2h ago
could you clarify a bit how to split the knowledge into smaller knowledge domains in this case? does that involve some semantic extraction, or is it more manual?
Frummy · 20h ago
Now imagine it with theorems as entities and lean proofs as relationships