Show HN: Use local LLMs to organize your files
Just wanted to share a use case where local LLMs are genuinely helpful for daily workflows: file organization.
I've been working on a C++ desktop app called AI File Sorter. It uses local LLMs via `llama.cpp` to help organize messy folders like `Downloads` or `Desktop`. It doesn't sort files into folders solely by extension or filename patterns, but by what each file actually is or is supposed to do. Basically: what used to take me a great deal of dragging and sorting can now be done in a few minutes.
It's cross-platform (Windows/macOS/Linux) and fully open-source.
[GitHub repo](https://github.com/hyperfield/ai-file-sorter)
[Screenshot 1](https://i.imgur.com/HlEer13.png) - LLM selection and download
[Screenshot 2](https://i.imgur.com/KCxk6Io.png) - Select a folder to scan
[Screenshot 3](https://i.imgur.com/QTUG5KB.png) - Review, edit and confirm or continue later
You can download the installer for Windows in [Releases](https://github.com/hyperfield/ai-file-sorter/releases) or the Standalone ZIP from the [app's website](https://filesorter.app/download/).
Installers for Linux and macOS are coming soon. In the meantime, you can easily [build the app from source](https://github.com/hyperfield/ai-file-sorter/blob/main/READM...) on Linux or macOS.
---
### How it works
1. You choose which model you want the app to interface with. The app will download the model for you. You can switch models later on.
2. You point the app at a folder, and it feeds a prompt to the model.
3. It then suggests folder categories like `Operating Systems / Linux distributions`, `Programming / Scripts`, `Images / Logos`, etc.
You can review, edit, and approve the suggestions before anything is moved, and you can continue the same sorting session later from where you left off. A rough sketch of the flow is below.
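To make steps 2 and 3 concrete, here's a minimal C++ sketch of that prompt/response loop. It is not the app's actual code: `run_llm()` is a hypothetical stand-in for the llama.cpp inference call, and the prompt wording and the `name -> Category / Subcategory` output format are assumptions for illustration only.

```cpp
#include <filesystem>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical stand-in for the llama.cpp inference call. The real app
// loads a local GGUF model and runs the prompt through it; here it just
// returns a canned answer so the sketch compiles and runs on its own.
std::string run_llm(const std::string& prompt)
{
    (void)prompt;
    return "ubuntu-24.04-desktop-amd64.iso -> Operating Systems / Linux distributions\n"
           "resize_logo.py -> Programming / Scripts\n";
}

// Build a categorization prompt from the file names found in a folder.
std::string build_prompt(const std::vector<std::string>& names)
{
    std::ostringstream p;
    p << "Assign each file below a category and a subcategory.\n"
         "Answer with one \"name -> Category / Subcategory\" line per file.\n\n";
    for (const auto& name : names)
        p << name << '\n';
    return p.str();
}

int main(int argc, char** argv)
{
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <folder>\n";
        return 1;
    }

    // Step 2: collect the file names the model should categorize.
    std::vector<std::string> names;
    for (const auto& entry : fs::directory_iterator(argv[1]))
        if (entry.is_regular_file())
            names.push_back(entry.path().filename().string());

    // Step 3: ask the model for category suggestions and print them for
    // review; nothing gets moved until the user approves.
    std::cout << run_llm(build_prompt(names));
    return 0;
}
```

In the real app the reviewed suggestions drive the actual moves; that part is left out of the sketch.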
Models tested:

- LLaMa 3 (3B)
- Mistral (7B)
- With CUDA / OpenCL / OpenBLAS support
- Other GPU back-ends can also be enabled when compiling `llama.cpp`
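As a side note, if you build `llama.cpp` yourself and want to confirm which back-end actually got compiled in, a quick check like this works (assuming a reasonably recent llama.cpp and linking against its library; this is not part of AI File Sorter):

```cpp
#include <cstdio>
#include "llama.h"

int main()
{
    // Prints the compile-time features of this llama.cpp build
    // (CUDA, Metal, BLAS, AVX variants, ...).
    std::printf("%s\n", llama_print_system_info());
    std::printf("GPU offload supported: %s\n",
                llama_supports_gpu_offload() ? "yes" : "no");
    return 0;
}
```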
---

### Try it out
* Windows: [SourceForge](https://sourceforge.net/projects/ai-file-sorter/) or [GitHub Releases](https://github.com/hyperfield/ai-file-sorter/releases)
* Linux/macOS: build from source (instructions in the [README](https://github.com/hyperfield/ai-file-sorter/blob/main/READM...))
---
I'd love feedback from others using local models, especially around:

- Speed and accuracy in categorizing files
- Model suggestions that might be more efficient than Mistral/LLaMa
- Any totally different way to approach this problem?
- Is this local LLM use case actually useful to you or people like you, or should the app shift its focus?
Thanks for reading!