Is there a way to run an LLM as a better local search engine?

oblio · 2 points · 12 comments · 6/18/2025, 7:29:49 AM
Basically, I was thinking that a way I could actually use LLMs would be to point one at my hard drive, with hundreds of images, PDFs, XLS files and other random stuff, and start asking it questions to easily find things in there. Can a local LLM run OCR software on its own?

I'm on Windows, if it matters. Is there anything like that out there, already (mostly) built?

Comments (12)

Agraillo · 5h ago
What you described would be a great solution for plenty of tasks, but maybe tackling the underlying fallacies one at a time would also help. For example, we're sure that by placing a properly named file at a directory location we'll find it later by recalling the folder name or the file name itself, while in reality we're often surprised that after months or years this doesn't work: the expected path either no longer exists or doesn't contain what we're looking for. The same fallacy applies to the various hierarchical note organizers.

In this case LLMs, with their ability to find semantic equivalence, might be a great help. And with the current state of affairs I even think that an LLM with a sufficiently large context window could absorb a dump of the file system (directory paths and file names) and answer a question about some obscure file from the past.
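A minimal sketch of that "dump the file tree into the context" idea, assuming a local Ollama server on its default port; the model name, root path and question are only placeholders:

    # Build a file listing and ask a local model about it via Ollama's HTTP API.
    import os
    import requests

    def file_listing(root, limit=20000):
        lines = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                lines.append(os.path.join(dirpath, name))
                if len(lines) >= limit:   # keep the dump inside the context window
                    return "\n".join(lines)
        return "\n".join(lines)

    listing = file_listing(os.path.expanduser("~/Documents"))
    question = "Which file is most likely the 2019 car insurance invoice?"

    resp = requests.post("http://localhost:11434/api/generate", json={
        "model": "qwen2.5:7b",            # any model you have pulled locally
        "prompt": "Here is a file listing:\n" + listing + "\n\nQuestion: " + question,
        "stream": False,
    })
    print(resp.json()["response"])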

msgodel · 4h ago
Multimodal Qwen is pretty good at OCR, although it's quite slow without a GPU.

For pure search you're almost certainly better off building an index of CLIP embeddings and then doing cosine similarity with a query embedding to find things. I have gigabytes of reaction images and memes I've been thinking about doing this with.
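Rough sketch of that kind of index, assuming the sentence-transformers CLIP checkpoint; the folder path and query are just examples:

    # Index images with CLIP embeddings, then search them with a text query.
    from pathlib import Path
    import numpy as np
    from PIL import Image
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("clip-ViT-B-32")   # shared image/text embedding space

    paths = sorted(Path("~/memes").expanduser().rglob("*.jpg"))
    images = [Image.open(p).convert("RGB") for p in paths]
    embs = model.encode(images, normalize_embeddings=True)   # unit-length vectors

    def search(query, k=5):
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = embs @ q                 # cosine similarity, since vectors are normalized
        top = np.argsort(-scores)[:k]
        return [(str(paths[i]), float(scores[i])) for i in top]

    print(search("surprised cat reaction image"))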

DHRicoF · 8h ago
You need to provide more information.

Is your data organized, or is it just a dump of unrelated content?

- If you have a bag of files without any metadata, the best option is to build something like a RAG pipeline, with a pre-OCR step for image files (or even a multimodal model call); see the sketch at the end of this comment.

- If the content is well organized with a logical structure, an agent could extract the information with a little looking around.

Is it static, or does it vary day by day?

- If it's static you could index everything at once; if not, an agent that picks what to reindex would be a better call.

I'm not aware of an existing solution like this, but it seems doable as an MCP server. The cost will scale quickly, though.
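For the first case (a bag of files with no metadata), a minimal sketch of such a pipeline, using pytesseract and pypdf for extraction and sentence-transformers for embeddings; the paths, model name and crude chunking are only examples:

    # OCR images, extract PDF text, embed one chunk per file, search by cosine similarity.
    from pathlib import Path
    import numpy as np
    from PIL import Image
    import pytesseract                       # needs the Tesseract binary installed
    from pypdf import PdfReader
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def extract_text(path):
        if path.suffix.lower() in {".png", ".jpg", ".jpeg"}:
            return pytesseract.image_to_string(Image.open(path))   # pre-OCR step
        if path.suffix.lower() == ".pdf":
            return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
        return path.read_text(errors="ignore")

    docs, texts = [], []
    for p in Path("~/Documents").expanduser().rglob("*"):
        if p.is_file():
            text = extract_text(p).strip()
            if text:
                docs.append(p)
                texts.append(text[:2000])    # crude chunking: first 2000 chars per file

    embs = model.encode(texts, normalize_embeddings=True)

    def find(query, k=5):
        q = model.encode([query], normalize_embeddings=True)[0]
        top = np.argsort(-(embs @ q))[:k]
        return [str(docs[i]) for i in top]

    print(find("invoice for the car insurance"))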

oblio · 3h ago
I want the LLM to search my hard drives, including for file contents.

I have zounds of old invoices, spreadsheets created to quickly figure something out, etc.

I'd also want the tool to run in the background to update the index.

I've found something potentially interesting:

https://anythingllm.com/
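For the background-update part, one possible sketch using the watchdog package to trigger reindexing when files change; reindex() is a hypothetical hook into whatever indexer you end up with:

    # Watch a folder and reindex changed files in the background.
    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    def reindex(path):
        print("would re-embed:", path)       # plug your OCR/embedding step in here

    class Handler(FileSystemEventHandler):
        def on_modified(self, event):
            if not event.is_directory:
                reindex(event.src_path)
        on_created = on_modified

    observer = Observer()
    observer.schedule(Handler(), r"C:\Users\me\Documents", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()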

aliasmaya · 7h ago
Seems like you're looking for a RAG system; you might give RAGFlow a try.
oblio · 4h ago
Interesting, I had found Paperless:

https://github.com/icereed/paperless-gpt

https://docs.paperless-ngx.com/#features

These options seem far from... user friendly. Another concern is resource usage; I wonder how low LLMs can go (especially as far as RAM and GPU requirements are concerned).

Iolaum · 8h ago
Have you tried asking an LLM this question? :p
oblio · 6h ago
I did and there isn't anything out of the box out there.

I'd basically want Everything (https://www.voidtools.com/support/everything/) as an LLM, but with file content indexing as well.

cranberryturkey · 8h ago
local as in on your filesystem?
yen223 · 7h ago
What other forms of "local" are there?

Don't mean to be snarky, apologies if it comes across like that. I'm genuinely curious

cranberryturkey · 6h ago
like my neighbor's server lol
oblio · 3h ago
Filesystem, yes.