We’re releasing Adaptive Recursive Consciousness (ARC), an open-source layer that plugs into any causal-LM checkpoint and lets it keep learning after deployment.
Why it matters
A conventional language model freezes the moment training stops; every conversation thereafter is a missed learning opportunity. ARC flips that script. It performs lightweight LoRA weight updates in real time, absorbing new facts, refining style, and building a reasoning graph while it runs: no offline fine-tuning, no epoch schedules, zero downtime.
What ARC brings
On-the-fly LoRA updates – gradients are applied during generation, so the model improves without a restart (a minimal sketch of this gated update loop appears after this list).
Biologically-inspired learning gates – novelty, relevance, and emotional salience decide what gets stored, much like human memory consolidation.
Hierarchical memory & reasoning graph – working memory, episodic recall, and a growing concept network support long-range reasoning (also sketched after this list).
Cognitive inhibition & metacognition – built-in filters damp off-topic rambles, repetitive loops, and AI-centric digressions.
Lean, fast outputs – in a 30-round TinyDolphin-GPT-2 benchmark ARC cut latency by roughly half and reduced perplexity by more than a third while tightening answers and slightly boosting coherence.
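To make the first two items concrete, here is a minimal sketch of the general pattern rather than ARC's actual internals: a causal LM wrapped with a LoRA adapter via Hugging Face transformers and peft, a toy novelty gate, and a single adapter-only gradient step taken whenever an exchange passes the gate. The base model, gate heuristic, and hyperparameters below are assumptions chosen for brevity.

# Illustrative sketch only, not ARC's implementation: a gated, single-step
# LoRA update applied between generations, using transformers + peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # stand-in base; ARC targets any causal-LM checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForCausalLM.from_pretrained(model_name)

# Only the low-rank adapter weights train; the base model stays frozen.
lora_cfg = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
                      target_modules=["c_attn"], lora_dropout=0.0)
model = get_peft_model(base, lora_cfg)
opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)

seen_vocab: set = set()

def novelty_gate(text: str, threshold: float = 0.5) -> bool:
    """Toy stand-in for ARC's novelty/relevance/salience gates:
    consolidate an exchange only if enough of its tokens are new."""
    tokens = set(text.lower().split())
    if not tokens:
        return False
    novel = len(tokens - seen_vocab) / len(tokens)
    seen_vocab.update(tokens)
    return novel >= threshold

def respond_and_maybe_learn(prompt: str) -> str:
    # 1) Generate a reply with the current base + adapter weights.
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=40,
                             pad_token_id=tok.eos_token_id)
    reply = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True)

    # 2) If the exchange passes the gate, take one LoRA gradient step on it.
    exchange = prompt + reply
    if novelty_gate(exchange):
        batch = tok(exchange, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM loss
        loss.backward()
        opt.step()
        opt.zero_grad()
    return reply

print(respond_and_maybe_learn("The capital of France is"))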
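The memory and inhibition items likewise describe data structures and filters. The sketch below shows one plausible shape for them; the layer sizes, word-length cutoff, and repetition threshold are assumptions for illustration, not ARC's actual data model.

# Illustrative only: one plausible shape for hierarchical memory, a concept
# graph, and a simple repetition (inhibition) filter. Names and thresholds
# are assumptions, not ARC's data structures.
from collections import defaultdict, deque
from dataclasses import dataclass, field

@dataclass
class HierarchicalMemory:
    working: deque = field(default_factory=lambda: deque(maxlen=8))   # recent turns
    episodic: list = field(default_factory=list)                      # consolidated exchanges
    concepts: dict = field(default_factory=lambda: defaultdict(set))  # concept co-occurrence graph

    def observe(self, text: str, consolidate: bool) -> None:
        self.working.append(text)
        if consolidate:                      # the learning gate decided this is worth keeping
            self.episodic.append(text)
            words = [w for w in text.lower().split() if len(w) > 3]
            for a in words:                  # link co-occurring concepts
                for b in words:
                    if a != b:
                        self.concepts[a].add(b)

    def related(self, concept: str) -> set:
        """One-hop neighborhood, a stand-in for graph-based recall."""
        return self.concepts.get(concept, set())

def passes_inhibition(reply: str, max_trigram_repeats: int = 2) -> bool:
    """Toy inhibition check: reject replies that loop on the same trigram."""
    words = reply.lower().split()
    counts = defaultdict(int)
    for i in range(len(words) - 2):
        counts[tuple(words[i:i + 3])] += 1
    return all(c <= max_trigram_repeats for c in counts.values())

mem = HierarchicalMemory()
mem.observe("ARC applies LoRA updates while the model is serving traffic", consolidate=True)
print(mem.related("lora"))                                       # concepts linked to "lora" so far
print(passes_inhibition("yes yes yes yes yes yes yes yes yes"))  # False: repetitive loop suppressed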
Performance snapshot (TinyDolphin base vs. ARC)
Across 30 blind evaluation rounds ARC:
lowered perplexity from 19.5 to 12.2, indicating cleaner, more fluent language
cut average generation time from 4.84 s to 2.22 s, a 54 percent reduction (roughly 2.2× faster)
trimmed answers by about 38 percent without losing substance
lifted a simple coherence score by 8 percent
nudged heuristic factuality upward by 6 percent
Taken together, these gains translate to roughly a 25 percent overall improvement across the weighted metric bundle we report in the accompanying paper.
What’s next
Version 1 is our foundation. We’re already experimenting with multi-modal memory, finer-grained safety rails, and adapters tuned for newer 7- and 13-billion-parameter bases. If you’re building agents, tutors, or autonomous tools that need to learn on the fly, we’d love to hear from you—file an issue, open a pull request, or email us at cjohnson@metisos.com.
Quick start
pip install metisos-arc-core
PyPI: https://pypi.org/project/metisos-arc-core/
GitHub: https://github.com/metisos/arc_coreV1
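ARC's actual entry points are documented in the GitHub repository above; purely to illustrate the intended plug-in workflow, a hypothetical usage sketch follows. The import path, class, and method names here are assumptions, not the published API.

# Hypothetical usage sketch; ARCModel, from_pretrained, learn_online, and chat
# are assumed names for illustration only. See the metisos-arc-core README
# for the real interface.
from arc_core import ARCModel  # assumed import path

agent = ARCModel.from_pretrained("gpt2")  # wrap any causal-LM checkpoint
agent.learn_online(True)                  # enable in-flight LoRA updates
print(agent.chat("Summarize what we discussed about LoRA adapters."))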
— The Metis Analytics research group