So if I understand this correctly, this works on a single large document whose size exceeds what you can or want to put into a single context frame for answering a question? It first "indexes" the document by feeding successive "proto-chunks" to an LLM, along with an accumulator, which is like a running table of contents into the document with "sections" that the indexer LLM decides on and summarizes, until the table of contents is complete. (What we're calling "sections" here - these are still "chunks", they're just not a fixed size and are decided on by the indexer at build time?)
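If I had to sketch that indexing loop in code, I'd imagine something like this (just my reading of it; `call_llm` is a hypothetical stand-in for whatever model call the library actually makes):

    import json

    def call_llm(prompt: str) -> str:
        # Stub: wire this to an actual LLM provider.
        raise NotImplementedError

    def build_toc(document: str, window: int = 4000) -> list[dict]:
        # Feed successive proto-chunks to the indexer LLM, carrying the
        # accumulator (the running table of contents) between calls.
        toc: list[dict] = []  # [{"title", "summary", "start", "end"}, ...]
        pos = 0
        while pos < len(document):
            proto_chunk = document[pos : pos + window]
            prompt = (
                f"Table of contents so far:\n{json.dumps(toc)}\n\n"
                f"Next span of the document:\n{proto_chunk}\n\n"
                "Extend or revise the table of contents: choose section "
                "boundaries and write a one-line summary per section. Return "
                'a JSON list of {"title", "summary", "start", "end"} objects '
                "with character offsets."
            )
            toc = json.loads(call_llm(prompt))  # indexer may merge or split sections
            pos += window
        return toc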
Then for the retrieval stage, it presents the table of contents to a "retriever" LLM, which decides which sections are relevant to the question based on the summaries the indexer LLM created. Then for the answer generation stage, it just presents those relevant sections along with the question.
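Continuing the same sketch, the retrieval and answer stages would then just be two more LLM calls (reusing `call_llm` and the ToC from above):

    def retrieve(toc: list[dict], question: str) -> list[dict]:
        # Show the retriever LLM the section summaries; it picks relevant ones.
        listing = "\n".join(
            f"[{i}] {s['title']}: {s['summary']}" for i, s in enumerate(toc)
        )
        prompt = (
            f"Question: {question}\n\nTable of contents:\n{listing}\n\n"
            "Return a JSON list of indices of sections relevant to the question."
        )
        return [toc[i] for i in json.loads(call_llm(prompt))]

    def answer(document: str, sections: list[dict], question: str) -> str:
        # Answer generation: just the selected section texts plus the question.
        context = "\n\n".join(document[s["start"] : s["end"]] for s in sections)
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}")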
That's pretty clever - does it work with a corpus of documents as well, or just a single large document? Does the "indexer" know the question ahead of time, or is the creation of sections and section summarization supposed to be question-agnostic? What if your table of contents gets too big? Seems like then it just becomes normal RAG, where you have to store the summaries and document-chunk pointers in some vector or lexical database?
mingtianzhang · 4h ago
Exactly — thanks for the insightful comments! The goal is to generate an “LLM-friendly table of contents” for retrieval, rather than relying on vector-based semantic search. We think it’s closer to how humans approach information retrieval. The table of contents also naturally produces semantically coherent sections instead of arbitrary fixed-size chunks.
- Corpus of documents: Yes, this approach can generalize. For multiple documents, you can first filter by metadata or document-level summaries, and then build indexes per document. The key is that the metadata (or doc-level summaries) helps distinguish and route queries across documents. We have some examples here: https://docs.pageindex.ai/doc-search
- Question-agnostic indexing: The indexer does not know the question in advance. It builds the tree index once, and that structure can then be stored in a standard SQL database and reused at query time. In practice, we store the tree structure as JSON and keep (node_id, node_text) pairs in a separate table; when the LLM returns a node_id, we look up the corresponding node_text to form the context (see the storage sketch after this list). There is no need for a vector DB.
- Handling large tables of contents: If the ToC gets too large, you can traverse the tree hierarchically — starting from the top level and drilling down only into relevant branches (see the traversal sketch below). That's why we use a tree structure rather than just a flat list of sections, and it's what makes this different from traditional RAG with flat chunking. In spirit, it's closer to a search-over-tree approach, somewhat like how AlphaGo handled large search spaces.
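To make the storage point concrete, here is a rough sketch of that lookup (a simplified schema for illustration, not our exact one):

    import json, sqlite3

    conn = sqlite3.connect("pageindex.db")
    conn.execute("CREATE TABLE IF NOT EXISTS trees (doc_id TEXT PRIMARY KEY, tree_json TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS nodes (node_id TEXT PRIMARY KEY, node_text TEXT)")

    def save_tree(doc_id: str, tree: dict) -> None:
        # The tree index itself is just JSON; node texts live in a flat table.
        conn.execute("INSERT OR REPLACE INTO trees VALUES (?, ?)",
                     (doc_id, json.dumps(tree)))

    def node_text(node_id: str) -> str:
        # Once the LLM returns a node_id, look up its text to form the context.
        row = conn.execute("SELECT node_text FROM nodes WHERE node_id = ?",
                           (node_id,)).fetchone()
        return row[0]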
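And a sketch of the hierarchical traversal (again simplified; `call_llm` is a stub for your model provider):

    import json

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # stub: wire to an LLM provider

    def search_tree(node: dict, question: str) -> list[dict]:
        # Only the current level's summaries go to the LLM, so the prompt
        # stays small no matter how large the full tree is.
        if not node.get("children"):  # leaf node: a retrievable section
            return [node]
        listing = "\n".join(
            f"[{i}] {c['title']}: {c['summary']}" for i, c in enumerate(node["children"])
        )
        prompt = (
            f"Question: {question}\n\nSubsections:\n{listing}\n\n"
            "Return a JSON list of indices of subsections worth exploring."
        )
        hits: list[dict] = []
        for i in json.loads(call_llm(prompt)):
            hits.extend(search_tree(node["children"][i], question))
        return hits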
Really appreciate the thoughtful questions again! We're actually preparing some upcoming notebooks that will address them in more detail — stay tuned!
nikishuyi · 1h ago
The idea sounds very natural. I remember that some wiki pages and AI agents also use this idea: they look at the ToC first and then decide which page to visit next. It makes retrieval feel like function calling. I'm curious how good the generated ToC is for generic documents.
mingtianzhang · 1h ago
Thanks, that’s a good point. Yeah, it makes retrieval look like function calling or tool selection, which I guess makes the idea more generic and better suited to current AI systems like MCP.
For the ToC generation quality, you can try our API: https://docs.pageindex.ai/ or the open-source version: https://github.com/VectifyAI/PageIndex. I didn't realize other people were working on similar ideas or had similar packages. It would be great if you could share links to the AI agents you mentioned. Thanks!