Is there a way to run an LLM as a better local search engine?
oblio | 6/18/2025, 7:29:49 AM
Basically, I was thinking that a way I could actually use LLMs would be to point them at my hard drive, with hundreds of images, PDFs, XLS files and other random files, and start asking questions to easily find things in there. Can a local LLM run OCR software on its own?
I'm on Windows, if it matters. Is there anything like that out there, already (mostly) built?
In this case LLMs, with their ability to find semantic equivalence, might be a great help. With the current state of affairs I even think that an LLM with a sufficiently large context window could absorb some kind of file system dump (directory paths and file names) and answer questions about some obscure file from the past.
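Roughly, something like this (a minimal sketch, assuming a local Ollama instance with some model pulled; the root directory, model name and question are placeholders):

```python
# Sketch: feed a file-system listing to a local LLM and ask about a file.
# Assumes Ollama is running on http://localhost:11434 with a model like "llama3".
import os
import requests

ROOT = r"C:\Users\me\Documents"   # hypothetical starting directory

paths = []
for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        paths.append(os.path.join(dirpath, name))

listing = "\n".join(paths)  # may need truncation to fit the context window

question = "Which file is most likely the 2019 car insurance invoice?"
prompt = f"Here is a list of file paths:\n{listing}\n\n{question}"

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```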
For pure search you're almost certainly better off building an index of CLIP embeddings and then doing cosine similarity with a query embedding to find things. I have gigabytes of reaction images and memes I've been thinking about doing this with.
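Something along these lines would be the gist (a sketch using sentence-transformers' CLIP model; the meme folder path and query are made up):

```python
# Embed images once with CLIP, then search them with a text query via cosine similarity.
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # CLIP model that embeds both images and text

folder = Path("C:/memes")  # hypothetical image dump
image_paths = list(folder.glob("*.png")) + list(folder.glob("*.jpg"))
image_embs = model.encode([Image.open(p) for p in image_paths], convert_to_tensor=True)

query_emb = model.encode("surprised pikachu reaction image", convert_to_tensor=True)
scores = util.cos_sim(query_emb, image_embs)[0]

# Top 5 matches by cosine similarity
for score, path in sorted(zip(scores.tolist(), image_paths), reverse=True)[:5]:
    print(f"{score:.3f}  {path}")
```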
Is your data organized, or is it just a dump of unrelated content?
- If you have a bag of files without any metadata, the best option is to build something like a RAG pipeline, with a pre-OCR step for image files (or even a multimodal model call); see the sketch below.
- If the content is well organized with a logical structure, an agent could extract the information just by looking around a little.
Is it static, or does it vary day by day?
- If it's static you could index everything at once; if not, an agent that picks what to reindex would be a better call.
I'm not aware of an existing solution like this, but it seems doable as an MCP server. The cost will scale quickly, though.
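A rough sketch of the "RAG with a pre-OCR step" idea, assuming pytesseract (with a Tesseract install), pypdf and sentence-transformers; the folder path and query are placeholders, and the chunking is deliberately crude:

```python
# OCR images, extract PDF text, embed the chunks, then retrieve the closest ones for a query.
from pathlib import Path
from PIL import Image
import pytesseract
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def extract_text(path: Path) -> str:
    if path.suffix.lower() in {".png", ".jpg", ".jpeg", ".tif"}:
        return pytesseract.image_to_string(Image.open(path))      # OCR step for images
    if path.suffix.lower() == ".pdf":
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return ""

docs = []
for path in Path("C:/Users/me/Documents").rglob("*"):   # hypothetical root
    if not path.is_file():
        continue
    text = extract_text(path)
    if text.strip():
        docs.append((path, text[:2000]))   # crude chunking: first 2000 chars per file

doc_embs = model.encode([t for _, t in docs], convert_to_tensor=True)

def search(query: str, k: int = 3):
    scores = util.cos_sim(model.encode(query, convert_to_tensor=True), doc_embs)[0]
    ranked = sorted(zip(scores.tolist(), docs), reverse=True)[:k]
    return [(score, path) for score, (path, _text) in ranked]

print(search("invoice for the 2021 car repair"))
```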
I have zounds of old invoices, spreadsheets created to quickly figure something out, etc.
I'd also want the tool to run in the background to update the index.
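For the background part, a minimal sketch with the watchdog library, watching a folder and marking changed files for re-indexing (reindex() is a hypothetical hook into whatever index gets built):

```python
# Watch a directory tree and flag created/modified files for re-indexing.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def reindex(path: str):
    print(f"would re-index: {path}")   # placeholder for the real indexing call

class IndexUpdater(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            reindex(event.src_path)

    def on_modified(self, event):
        if not event.is_directory:
            reindex(event.src_path)

observer = Observer()
observer.schedule(IndexUpdater(), r"C:\Users\me\Documents", recursive=True)  # hypothetical root
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```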
I've found something potentially interesting:
https://anythingllm.com/
https://github.com/icereed/paperless-gpt
https://docs.paperless-ngx.com/#features
These options seem far from... user friendly. Another concern is resource usage; I wonder how low LLMs can go (especially as far as RAM and GPU requirements are concerned).
I'd basically want Everything as an LLM: https://www.voidtools.com/support/everything/, but also with file content indexing.
Don't mean to be snarky, apologies if it comes across like that. I'm genuinely curious