We built an Artificial Brain that sleeps, dreams, and forms memories

10111two · 9/8/2025, 2:44:35 PM · github.com ↗

Comments (2)

10111two · 4h ago
At JN Research, we are exploring a third path between mainstream AI and descriptive neuroscience. Instead of scaling or optimizing trained function approximators, we build Adaptrons: artificial neurons that behave like biological neurons (subthreshold, graded, and action-potential signaling) and adapt autonomously, both internally and through interactions with other Adaptrons in a system. On this substrate, our small artificial brain Primite 1.03 (1,800 Adaptrons) now shows:

• Autonomous sleep states (no external input), with internal “dreams,” some of which are shared as outputs.
• Original thoughts (novel images never seen as stimuli) arising both during sleep and while awake.
• Memory formation and consolidation (short-, intermediate-, and long-term), including memories of dreams later recalled while awake.
• Anticipation: outputs that appear before the corresponding stimulus is presented.

We ran 7 independent experiments with different genetic parameters and share detailed counts, timing, and example outputs. This is not ML training; it is a principles-first cognitive substrate in which higher functions emerge from the interaction rules. It also suggests that higher cognitive functions do not need bigger models or scale to emerge: their early signs appear even in small systems if the fundamental framework allows for it. If you are curious (or skeptical), we have included the full technical report and a data repository with outputs for verification, plus our prior 1.02 report on original thought and memory.

Github Repository: https://github.com/10111two/primite-1.03
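For readers unfamiliar with the terms, the neuron-level behaviors mentioned above (subthreshold integration, graded signaling, and an all-or-nothing action potential) might be sketched roughly as follows. This is a hypothetical illustration of those general ideas only; the class name, parameters, and update rule are our own assumptions, not the actual Adaptron implementation.

```python
# Hypothetical sketch of a neuron-like unit with subthreshold accumulation,
# graded output, and an all-or-nothing action potential. Illustrative only;
# not JN Research's Adaptron rules.

class SketchNeuron:
    def __init__(self, threshold=1.0, leak=0.1):
        self.threshold = threshold  # potential needed for an action potential
        self.leak = leak            # fraction of potential lost each step
        self.potential = 0.0        # subthreshold membrane potential

    def step(self, stimulus=0.0):
        """Integrate input; return a graded signal or a full spike."""
        self.potential = self.potential * (1 - self.leak) + stimulus
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1.0              # all-or-nothing action potential
        # below threshold: emit a graded signal proportional to potential
        return self.potential / self.threshold

n = SketchNeuron()
outputs = [n.step(0.4) for _ in range(5)]
# repeated weak inputs accumulate until the unit fires a full spike
```

With a leak of 0.1 and a stimulus of 0.4 per step, the unit emits graded values (0.4, then 0.76) before the accumulated potential crosses threshold on the third step and produces a spike.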
10111two · 4h ago
A few anticipated questions:

• “Isn’t this just ML/randomness?” No training or gradient descent is used, only neuron-like rules (graded signals, subthreshold potentials, action potentials). Outputs are logged and timestamped; anyone can verify them.
• “How do you define ‘original thought’?” An output is “original” if it was never presented as a stimulus during that system’s lifetime, yet emerges autonomously.
• “What about controls?” We ran multiple experiments with different genetic parameters; each yielded different system behavior. One run was deliberately configured as a pure input/output machine, confirming that adaptability is essential for the higher functions.
• “Independent replication?” We are open to live demos (reviewers choose the inputs) and will provide full raw outputs. Under NDA, reviewers can also set the genetic parameters and observe the system’s lifetime behavior.
• “Why only 1,800 Adaptrons?” Our approach is milestone-driven: we demonstrate emergence at small scales first (memory, dreams, anticipation), then scale gradually (20k, multimodal, 1M).
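Since the outputs are logged and timestamped, the “original thought” criterion reduces to a simple set-membership check against the lifetime stimulus log. A minimal sketch, assuming some hashable representation of stimuli and outputs (the log format here is our assumption, not the project’s):

```python
# Hypothetical check for the "original output" criterion: an output counts
# as original if it was never presented as a stimulus during the system's
# lifetime. Log contents and encoding are assumed for illustration.

stimulus_log = {"A", "B", "C"}        # every pattern ever presented as input
output_log = ["A", "D", "B", "E"]     # outputs, in timestamp order

# keep only outputs that never appeared as a stimulus
original_outputs = [o for o in output_log if o not in stimulus_log]
```

In this toy example, "D" and "E" would count as original while "A" and "B" would not, since the latter were presented as stimuli at some point.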

We know this is unconventional and expect skepticism. Our goal isn’t to make hype claims but to provide verifiable outputs, invite critique, and refine the framework. Happy to engage with specific test suggestions from the community.