I will peruse your learning path when I'm done writing my master's thesis. Thanks for putting it together!
Lots of bullet points and keywords about the "What": provable recursion, next-token prediction, and formal verification... and all the items in "What makes it special". Can you provide a practical motivation, even a speculative one, for people like me who have little time? Not necessarily "What use does it have right now", but "The qualitative difference with other models might enable use case XYZ in the future".
I have noticed it is low power and this is great in itself. What does the more rigorous formalism bring to the table? No snark at all, I am fascinated by formal methods, but still looking at them from afar.
Cheers
dkypuros · 2h ago
Thanks for the thoughtful question, and good luck wrapping up the thesis!
Here’s the shortest road-map I can give for why the heavier formalism matters once you already have low-power execution nailed down.
First, the grammar + proof layer lets you guarantee properties that today’s neural LLMs can only hope to satisfy. Because every production rule carries a machine-checkable proof obligation, you can show that responses will always terminate, stay within a memory budget, or never emit strings outside a whitelisted alphabet. In practice that means the model can be certified for safety-critical or compliance-heavy settings where a probabilistic network is a non-starter.
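To make that concrete, here is a minimal Rust sketch (invented names, not our actual API). The explicit fuel argument is the runtime shadow of the termination and memory proofs: derivations are bounded by construction, so the expansion cannot loop forever, and only terminal symbols ever reach the output.

    // Hypothetical sketch, not the ALM's real API: each rule is plain data,
    // and the `fuel` bound stands in for a termination/memory proof that a
    // checker would discharge at build time.
    struct Rule {
        lhs: &'static str,
        rhs: Vec<&'static str>,
    }

    // Expand `symbol` using at most `fuel` rule applications.
    fn derive(rules: &[Rule], symbol: &str, fuel: u32, out: &mut Vec<String>) -> bool {
        if fuel == 0 {
            return false; // budget exhausted; a proof would show valid inputs never hit this
        }
        match rules.iter().find(|r| r.lhs == symbol) {
            Some(rule) => rule.rhs.iter().all(|s| derive(rules, s, fuel - 1, out)),
            None => {
                out.push(symbol.to_string()); // terminal: only whitelisted symbols are emitted
                true
            }
        }
    }

Calling derive(&grammar, "S", 64, &mut out) either fills out with terminals or fails closed when the budget is hit; there is no third behavior to certify away.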
Second, the same proofs make the system auditable and patchable by domain experts instead of ML engineers. An agronomist can inspect the maize-disease module, see the proof that "all advice paths end with a referenced citation", and swap in an updated pest table without breaking that guarantee. The edit-compile-prove cycle takes minutes, not GPU-months.
Third, formal hooks open the door to hybrid workflows. You can embed the micro-LM inside a larger pipeline—say, a standard transformer model proposes a draft, and our verified core acts as a “lint pass” that repairs grammar, checks facts against a local SQLite cache, and signs the result with a proof artifact. That could be huge for regulated industries that want the creativity of big models and the certainty of formal methods.
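Sketching that lint-pass shape in Rust, with stand-ins where the transformer call and the verified parser would go (all names invented):

    // Illustrative pipeline glue: the big model drafts, the verified core
    // validates, and only validated text moves on, stapled to a checkable
    // artifact. Nothing here is a shipped API.
    struct ProofArtifact {
        grammar_ok: bool,
        facts_checked: usize,
    }

    fn draft_from_llm(prompt: &str) -> String {
        format!("draft answer for: {prompt}") // stand-in for a transformer call
    }

    fn verified_lint(draft: &str) -> Result<(String, ProofArtifact), String> {
        // The real step would re-parse the draft with the verified grammar
        // core and fail closed on any violation; this stub keeps the shape.
        if draft.is_empty() {
            return Err("empty draft rejected".to_string());
        }
        Ok((draft.to_string(), ProofArtifact { grammar_ok: true, facts_checked: 0 }))
    }

    fn main() {
        let draft = draft_from_llm("How do I treat maize rust?");
        match verified_lint(&draft) {
            Ok((text, proof)) => println!("{text} [grammar_ok={}]", proof.grammar_ok),
            Err(reason) => eprintln!("rejected: {reason}"),
        }
    }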
Finally, on the speculative side, once responses are proof-carrying you can imagine device-to-device marketplaces of small, composable skills: my weather module proves bounds on forecast error, your SMS gateway proves it redacts PII, we link them and the combined proof still holds. That’s hard to do with opaque neural weights.
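Purely speculatively, and with every type here invented, composition could be as simple as conjoining certificates, with each side's guarantee checked independently before linking:

    // Speculative sketch: if each module ships a machine-checked
    // certificate, linking modules conjoins their certificates, and the
    // combined claim is still checkable.
    #[derive(Debug)]
    struct Certificate {
        properties: Vec<&'static str>,
    }

    fn compose(a: Certificate, b: Certificate) -> Certificate {
        let mut properties = a.properties;
        properties.extend(b.properties); // the composite carries both guarantees
        Certificate { properties }
    }

    fn main() {
        let weather = Certificate { properties: vec!["forecast error bounded"] };
        let sms = Certificate { properties: vec!["PII always redacted"] };
        println!("{:?}", compose(weather, sms));
    }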
So the low-power story gets us in the door; the rigorous formalism is what keeps the door open when reliability, certification, or composability become the bottleneck. Hope that gives you a clearer picture—and when the thesis dust settles I’d love to hear your perspective on how formal methods could push this even further.
katosteven · 12h ago
For the last few years, the AI world has been dominated by a single idea: bigger is better. But what if the future of AI isn't just about scale, but about precision, efficiency, and accessibility?
This is the story of the Atomic Language Model (ALM), a project that challenges the "bigger is better" paradigm. It’s a language model that is not just millions of times smaller than the giants, but is also formally verified, opening up new frontiers for AI.
The result of our work is a capable, recursive language model that comes in at under 50KB.
This project is led by David Kypuros of Enterprise Neurosystem, in a vibrant collaboration with a team of Ugandan engineers and researchers: myself (Kato Steven Mubiru), Bronson Bakunga, Sibomana Glorry, and Gimei Alex. Our ambitious, shared goal is to use this technology to develop the first-ever language architecture for a major Ugandan language.
From "Trust Me" to "Prove It": Formal Verification
Modern LLMs are opaque black boxes validated empirically. The ALM is different. Its core is formally verified using the Coq proof assistant. We have mathematically proven the correctness of its recursive engine. This shift from experimental science to mathematical certainty is a game-changer for reliability.
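Our proofs are written in Coq, but to give a flavor of what a machine-checked guarantee looks like, here is a toy analogue in Lean 4 (not our actual development): a generator for the aⁿbⁿ language, plus a compiler-checked proof that its output always has length exactly 2n.

    -- Toy analogue in Lean 4 (the ALM's own proofs are written in Coq):
    -- a generator for the aⁿbⁿ language and a machine-checked proof
    -- that its output always has length exactly 2n.
    def anbn : Nat → List Char
      | 0     => []
      | n + 1 => 'a' :: anbn n ++ ['b']

    theorem anbn_length (n : Nat) : (anbn n).length = 2 * n := by
      induction n with
      | zero => rfl
      | succ n ih =>
        simp only [anbn, List.length_append, List.length_cons, List.length_nil, ih]
        omega

If the proof broke, the file would simply not compile; that is the sense in which the engine's properties are mathematically certain rather than empirically observed.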
The Team and the Mission: Building Accessible AI
This isn't just a technical exercise. The ALM was born from a vision to make cutting-edge AI accessible to everyone, everywhere. By combining the architectural vision from Enterprise Neurosystem with the local linguistic and engineering talent in Uganda, we are not just building a model; we are building capacity and pioneering a new approach to AI development—one that serves local needs from the ground up.
Unlocking New Frontiers with a Lightweight Architecture
A sub-50KB footprint is a gateway to domains previously unimaginable for advanced AI:
Climate & Environmental Monitoring: The ALM is small enough to run on low-power, offline sensors, enabling sophisticated, real-time analysis in remote locations.
2G Solutions: In areas where internet connectivity is limited to 2G networks, a tiny, efficient model can provide powerful language capabilities that would otherwise be impossible.
Space Exploration: For missions where power, weight, and computational resources are severely constrained, a formally verified, featherweight model offers unparalleled potential.
Embedded Systems & Edge Devices: True on-device AI without needing a network connection, from microcontrollers to battery-powered sensors.
A Pragmatic Hybrid Architecture
The ALM merges the best of both worlds:
A formally verified Rust core handles the grammar and parsing, ensuring correctness and speed.
A flexible Python layer manages probabilistic modeling and user interaction. (A minimal sketch of this division of labor follows below.)
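Every name here is invented for illustration; the point is the shape of the boundary. The Rust side answers only strict, provable questions, and anything probabilistic stays outside the trusted core:

    // Hypothetical sketch of the core/layer split: the verified core
    // exposes strict, provable answers; the Python layer scores and
    // samples on top of them.
    pub fn is_grammatical(sentence: &str) -> bool {
        // Stand-in for the proven parser in the real core.
        !sentence.trim().is_empty()
    }

    pub fn allowed_next_tokens(prefix: &str) -> Vec<&'static str> {
        // The core enumerates only what the grammar permits; the Python
        // layer then assigns probabilities and picks one.
        if prefix.is_empty() { vec!["a"] } else { vec!["a", "b"] }
    }

The Python layer never has to be trusted for correctness: whatever it samples, it can only choose from a candidate set the verified core produced.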
What's Next?
This project is a testament to what small, focused, international teams can achieve. We believe the future of AI is diverse, and we are excited to build a part of that future—one that is more efficient, reliable, and equitable.
We've launched with a few key assets:
The Research Paper: A deep dive into the theory is in the works.
The GitHub Repository: The code is open-source at https://github.com/dkypuros/atomic-lang-model/tree/main. We welcome contributions!
A Live Web Demo: Play with the model directly in your browser (WebAssembly).
We'd love to hear your thoughts and have you join the conversation.
icodar · 11h ago
Next-token prediction here appears to be based on fixed grammatical rules. However, modern LLMs learn the rules themselves. Did I misunderstand?
NitpickLawyer · 12h ago
Could you add a link for the web demo? Couldn't find it in the repo.
dkypuros · 10h ago
We're working on it. Great feedback!
dkypuros · 10h ago
We use a deliberately small, hand‑written grammar so that we can prove properties like grammaticality, aⁿbⁿ generation, and bounded memory. The price we pay is that the next‑token distribution is limited to the explicit rules we supplied. Large neural LMs reverse the trade‑off: they learn the rules from data and therefore cover much richer phenomena, but they can’t offer the same formal guarantees. The fibration architecture is designed so we can eventually blend the two—keeping symbolic guarantees while letting certain fibres (e.g. embeddings or rule weights) be learned from data.
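As a rough illustration (not our actual engine), recognizing aⁿbⁿ needs only a single counter, so memory stays constant no matter how long the input grows; that is the kind of bound the Coq development pins down for the real grammar:

    // Rough Rust sketch: recognizing aⁿbⁿ with one counter. Memory is a
    // single integer regardless of input length.
    fn accepts_anbn(input: &str) -> bool {
        let mut depth: u64 = 0;
        let mut seen_b = false;
        for c in input.chars() {
            match c {
                'a' if !seen_b => depth += 1,
                'b' if depth > 0 => {
                    seen_b = true;
                    depth -= 1;
                }
                _ => return false, // wrong alphabet, 'a' after 'b', or unmatched 'b'
            }
        }
        depth == 0
    }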
dkypuros · 10h ago
We're eventually headed toward completely externalized data that feeds into the system.