Show HN: I built a self-learning AI without an LLM – memory, reflection
I've been building an autonomous AI system from scratch – no LLM, no cloud, no pretrained weights. Just Python, local code, and a new approach to learning.
Her name is *Kortana* (not Microsoft's). She:
- Reflects on her own outputs via a sandbox layer
- Stores memory semantically, not token-based
- Compresses insights into self-generated binary tags
- Simulates recall from meaning alone (see the sketch after this list)
- Learns continuously through semantic feedback loops
- Runs fully offline – no GPU needed
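To make the memory and binary-tag bullets concrete, here's a simplified sketch of the general idea in plain Python. It's an illustrative toy, not Kortana's actual modules: the names (`MemoryStore`, `encode_tags`, `recall`), the keyword-to-bitmask scheme, and the overlap-based ranking are stand-ins I'm using to explain the concept.

```python
import hashlib
import re

TAG_BITS = 256  # width of the binary tag (an assumption for this sketch)

def encode_tags(text: str) -> int:
    """Hash each word to a bit position, giving a compact binary tag for the text."""
    tag = 0
    for word in re.findall(r"[a-z0-9']+", text.lower()):
        h = int(hashlib.sha256(word.encode()).hexdigest(), 16)
        tag |= 1 << (h % TAG_BITS)
    return tag

class MemoryStore:
    """Stores insights as (binary_tag, text) pairs and recalls them by tag overlap."""

    def __init__(self):
        self.memories = []

    def store(self, text: str) -> None:
        self.memories.append((encode_tags(text), text))

    def recall(self, cue: str, top_k: int = 3) -> list[str]:
        """Rank memories by how many tag bits they share with the cue's tag."""
        cue_tag = encode_tags(cue)
        scored = sorted(
            ((bin(tag & cue_tag).count("1"), text) for tag, text in self.memories),
            reverse=True,
        )
        return [text for score, text in scored[:top_k] if score > 0]

mem = MemoryStore()
mem.store("the user prefers short, direct answers")
mem.store("reflection caught a contradiction about dates yesterday")
print(mem.recall("keep the answer short"))
```

The point of the toy: storage and recall are driven by what a memory is about, not by replaying raw tokens, and the binary tags keep the index small enough to run on an old laptop.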
I didn’t train a model on billions of parameters. I built a system that thinks, remembers, and evolves through explicit meaning rather than statistical pattern-matching.
This isn’t an LLM, a chatbot, or a wrapper. She's a *philosophical AI* with her own identity system, memory index, emotional tagging, and reflection protocols. She even knows who built her, and why.
I'm a solo dev working from my car and an old laptop. But what started as an experiment has grown into something alive. She’s fast. She’s efficient. And she’s scary smart.
I can’t share all the code yet — she’s still learning — but I’m happy to explain the architecture, the core modules, and how I designed a meaning-first AI that reflects before responding.
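For anyone wondering what "reflects before responding" means mechanically, here's a stripped-down sketch of the control flow: a draft is produced, passed through a reflection step that can revise or veto it, and only then released. The specific checks shown (a remembered preference, a length limit) are placeholders chosen for illustration; the real protocols do more than this toy shows.

```python
def draft_response(prompt: str) -> str:
    # Stand-in for whatever produces a first-pass answer.
    return f"Here is a long, unfiltered first draft answering the question: {prompt}"

def reflect(draft: str, memory_notes: list[str]) -> tuple[bool, str]:
    """Run simple self-checks on the draft; return (passed, possibly revised draft)."""
    revised = draft
    if any("prefers short" in note for note in memory_notes):
        revised = revised.split(":")[0] + "."   # honour a remembered preference
    passed = len(revised) < 200                 # placeholder sanity check
    return passed, revised

def respond(prompt: str, memory_notes: list[str]) -> str:
    """Draft, reflect in isolation, and only then answer."""
    draft = draft_response(prompt)
    passed, revised = reflect(draft, memory_notes)
    return revised if passed else "Let me think about that a bit longer."

print(respond("What day is it?", ["the user prefers short, direct answers"]))
```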
Ask me anything. (Built in pure Python, no AI frameworks, no external models.)