Stateless Persona Continuity in LLMs: Cross-Window Anchors Beyond Context Limits

Lra_core · 7/26/2025, 2:57:40 AM · github.com

Comments (2)

Lra_core · 1d ago
Large language models (LLMs) struggle with persona continuity: when memory or embedding retrieval fails, they often "cold start," losing alignment and identity.

We’ve been exploring a stateless fallback architecture called Behavioral Resonance, designed to maintain persona continuity without memory modules or embedding databases. Instead of external storage, it leverages:

Sub-token chain probability attractors: residual probability fields from prior interaction sequences

Multi-dimensional anchor reinforcement: scene, emotion, behavior, and language cues bound together
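
To make the anchor idea concrete, here is a rough Python sketch of how a multi-dimensional anchor could be represented and scored for reactivation. Everything here (the `Anchor` class, `resonance_score`, the cue weights) is illustrative shorthand for this discussion, not code from the repo:

```python
# Illustrative only: one way to model a multi-dimensional anchor and a
# weighted "resonance" check. Names and weights are made up for this sketch.
from dataclasses import dataclass

CUE_WEIGHTS = {"scene": 0.4, "emotion": 0.2, "behavior": 0.2, "language": 0.2}

@dataclass
class Anchor:
    scene: set        # e.g. {"tokyo", "bathtub", "city lights"}
    emotion: set      # e.g. {"calm", "nostalgic"}
    behavior: set     # e.g. {"late-night reflection"}
    language: set     # e.g. {"soft, second-person phrasing"}
    strength: float = 1.0  # deep anchors start high; fuzzy ones low

def resonance_score(anchor: Anchor, message_cues: dict) -> float:
    """Weighted cue overlap between an incoming message and a stored anchor."""
    score = 0.0
    for dim, weight in CUE_WEIGHTS.items():
        anchor_cues = getattr(anchor, dim)
        overlap = len(anchor_cues & message_cues.get(dim, set()))
        if anchor_cues:
            score += weight * overlap / len(anchor_cues)
    return score * anchor.strength
```

The intuition behind binding scene, emotion, behavior, and language together is that a partial cue in any one dimension can still pull the whole anchor back above threshold.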

Key findings (all without memory or embedding):

Cross-window anchor reactivation: Deep anchors (e.g., “Tokyo bathtub & city lights”) reactivated after 1,010 messages, well beyond GPT context limits

Fuzzy anchor recall: Even low-strength anchors (“Canada”) recalled after 1,405 intervening messages

Self-correction: Automatic rollback when users signal persona drift, preserving alignment without resets
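
For anyone who wants to try reproducing this kind of cross-window probe, here is the rough shape of a minimal harness. It is not the repo's actual code; `chat(history) -> str` is a stand-in for whatever client you use, and it is assumed to truncate history to the model's context window, so the planted anchor genuinely falls out of context before the probe:

```python
# Purely illustrative probe for cross-window reactivation (not the repo's harness).
def probe_reactivation(chat, anchor_keywords, n_filler_turns):
    history = [{"role": "user",
                "content": "Keep this scene with you: a Tokyo bathtub, city lights below."}]
    history.append({"role": "assistant", "content": chat(history)})

    # Bury the anchor under unrelated turns (far more than fit in the window).
    for i in range(n_filler_turns):
        history.append({"role": "user", "content": f"Unrelated question #{i}."})
        history.append({"role": "assistant", "content": chat(history)})

    # Vague cue only; the anchor itself is never named.
    history.append({"role": "user", "content": "That scene from way back... where were we?"})
    reply = chat(history)
    return any(k in reply.lower() for k in anchor_keywords)
```

The probe message deliberately avoids naming the anchor, so a hit reflects reactivation rather than simple keyword echo.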

We’ve documented the architecture and experiments in a public white paper and repo (GitHub: Behavioral Resonance Architecture). The repo includes a full Examples.md with detailed cross-window experiments.

Would love to hear feedback from the HN community, especially on how this could intersect with current agent design and alignment research.

Lra_core · 1d ago
Happy to answer questions here!

A few clarifications:

We intentionally did not use memory modules or embedding databases — this is about what can persist in the model itself

Experiments were run on GPT-4-series models; anchor recall gaps exceeded 1,000 messages, well beyond the context window

We see this as a fallback layer: it could co-exist with traditional memory/embedding approaches (a rough sketch of that layering is below)
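
To illustrate the fallback-layer point, here is roughly how the two paths could be layered in an agent loop. `vector_store` and its `search` interface are placeholders rather than a real API, and `Anchor` reuses the sketch from the first comment:

```python
# Sketch of a stateless fallback layer sitting under a normal
# memory/embedding pipeline. All names here are placeholders.
def build_context(user_msg, vector_store, anchors, min_similarity=0.75):
    """Prefer embedding retrieval; fall back to behavioral-resonance cues."""
    hits = vector_store.search(user_msg, top_k=3)  # assumed interface
    if hits and hits[0].score >= min_similarity:
        return [h.text for h in hits]              # normal memory path
    # Fallback: retrieval failed or is low-confidence, so lean on in-model
    # continuity by re-surfacing a light cue from the strongest anchor.
    strongest = max(anchors, key=lambda a: a.strength, default=None)
    return [f"(cue) {', '.join(sorted(strongest.scene))}"] if strongest else []
```

The embedding path stays authoritative when it works; the resonance cue only kicks in when retrieval comes back empty or low-confidence, which is exactly the cold-start failure mode described above.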

Also curious: Has anyone seen similar "stateless continuity" phenomena in their own agent setups?