Show HN: WhiteLightning – ultra-lightweight ONNX text classifiers trained w LLMs

8 points by v_kyba | 2 comments | 8/1/2025, 1:57:07 PM | whitelightning.ai ↗
Hey HN,

We’re Volodymyr and Volodymyr—two developers from Ukraine building WhiteLightning. It’s a tool that distills large models (Claude 4, Grok 4, GPT-4o via OpenRouter) into tiny ONNX text classifiers that run anywhere—even on drones at the edge.

Why we built this: Many developers want custom models (spam filters, sentiment analysis, PII detection, moderation tools), but don’t want to deal with constant API calls or deploy heavy models in production.

How it works: WhiteLightning uses LLMs to generate training data and distills it into KB-sized ONNX models you can run on any device and in any language. Just describe your task in a sentence, grab the ONNX model, and run it locally—Python, JS, Rust, Java, Swift, C++, you name it.
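To make the "grab the ONNX model, run it locally" step concrete, here is a minimal Python sketch of the inference pattern. It is self-contained so it runs without any model file: the `StubSession` class stands in for `onnxruntime.InferenceSession(model_path)`, and the toy bag-of-words featurizer is a hypothetical stand-in for whatever preprocessing the real exported model uses—none of this is WhiteLightning's actual API.

```python
# Sketch of running a tiny exported text classifier locally, assuming a
# spam-vs-ham task. In real use you would load the ONNX file with
# onnxruntime.InferenceSession; StubSession below is a stand-in so the
# example is runnable on its own.

VOCAB = ["free", "winner", "click", "meeting", "invoice"]  # toy vocabulary

def featurize(text):
    """Toy bag-of-words vector; a real model ships its own preprocessing."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

class StubSession:
    """Stand-in for onnxruntime.InferenceSession("spam.onnx")."""
    def run(self, output_names, feeds):
        x = feeds["input"]
        # Pretend the distilled model learned: spammy words -> high score.
        score = min(1.0, 0.4 * sum(x[:3]))
        return [[score]]

def classify(session, text, threshold=0.5):
    """One local inference call: featurize, run the session, threshold."""
    score = session.run(None, {"input": featurize(text)})[0][0]
    return ("spam" if score >= threshold else "ham", score)

session = StubSession()  # real code: onnxruntime.InferenceSession("spam.onnx")
label, score = classify(session, "click here free winner")
```

The same pattern—load session once, call `run` per input—translates directly to the other runtimes (JS, Rust, Java, Swift, C++), since ONNX Runtime exposes an equivalent session/run API in each.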

Try it instantly in your browser: https://whitelightning.ai/playground.html

Code & docs: https://github.com/Inoxoft/whitelightning

Community model library: https://github.com/Inoxoft/whitelightning-model-library

We’d love your feedback—what works, what doesn’t, and what to improve.

Comments (2)

ajar8087 · 4h ago
Tiny models that you can just run locally sound pretty sweet. I can see a lot of privacy‑minded folks liking this since you don’t have to phone home to an API for every request. Curious how big the trade‑off is between size and accuracy once you get beyond simple classification tasks. I see you can "bring your own data" too instead of just throwing a bunch of synthetic stuff at it—I wonder how well that works.
v_kyba · 2h ago
Retraining & Data Generation: You can retrain your own models any time (even after deployment), generate more data, or swap in a different LLM for data synthesis. This lets you tune performance for your use case, whether you want more accuracy or just a smaller model.
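The retrain loop described above can be sketched in a few lines of pure Python. This is not WhiteLightning's actual code or API: the LLM call is faked with a fixed list of labeled examples, and the distilled model is a minimal logistic regression (the real tool trains a classifier and exports it to ONNX).

```python
# Hedged sketch of the loop: LLM synthesizes labeled data -> fit a tiny
# classifier on it -> the result replaces the old model. All names here
# (llm_generate_examples, VOCAB, train) are illustrative, not the real API.
import math

def llm_generate_examples(task_description):
    """Stand-in for a real OpenRouter call that synthesizes labeled data."""
    return [
        ("free prize click now", 1),
        ("you are a winner claim cash", 1),
        ("agenda for the weekly meeting", 0),
        ("please review the attached invoice", 0),
    ]

VOCAB = ["free", "prize", "click", "winner", "cash", "meeting", "invoice"]

def vec(text):
    words = text.lower().split()
    return [float(w in words) for w in VOCAB]

def train(examples, epochs=200, lr=0.5):
    """Fit a tiny logistic regression by SGD; returns (weights, bias)."""
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for text, y in examples:
            x = vec(text)
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y  # gradient of log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, text):
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, vec(text))) + b
    return 1 if z > 0 else 0

# Re-running train() with fresh LLM output (or your own data) is the
# "retrain any time" step; swapping llm_generate_examples swaps the LLM.
model = train(llm_generate_examples("classify spam vs ham"))
```

Mixing your own labeled examples into the synthetic ones at the `train()` step is the "bring your own data" path the commenter above asks about.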