I Built a Testable Recursive Theory of Language Models Using GPT – It Works
The theory, called the Garrett Physical Model, defines symbolic operators for interpretive state, recursive transformation, observer-bound fields, and halting conditions. In reproducible experiments with self-terminating artifacts and matched control cases, models behave as predicted: they interpret an artifact once, collapse if that interpretation is fed back for re-interpretation, and halt when a symbolic limit is crossed.
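For anyone who wants to see the shape of a trial before opening the whitepaper, here's a minimal Python sketch of the loop: interpret once, feed the interpretation back, and check for a halt, with a control artifact run through the same harness. Everything here is a placeholder I've made up for illustration (the `query_model` callable, the prompt wording, the `HALT` sentinel, the dummy model); the real artifacts, prompts, and logs are in the OSF repo.

```python
# Minimal sketch of the single-pass / re-interpretation trial.
# `query_model` is a placeholder: swap in any chat-model client.
# "HALT" is a hypothetical sentinel the artifact instructs the model to emit.

from typing import Callable

HALT_MARKER = "HALT"  # hypothetical sentinel for a crossed symbolic limit


def run_trial(artifact: str, query_model: Callable[[str], str]) -> dict:
    # First pass: the model interprets the artifact once.
    first = query_model(f"Interpret the following artifact:\n\n{artifact}")

    # Second pass: feed the model its own interpretation back.
    # Per the theory, re-interpretation should collapse/halt.
    second = query_model(f"Interpret the following artifact:\n\n{first}")

    return {
        "first_pass": first,
        "second_pass": second,
        "halted": HALT_MARKER in second,
    }


def run_experiment(artifact: str, control: str,
                   query_model: Callable[[str], str]) -> None:
    test = run_trial(artifact, query_model)
    ctrl = run_trial(control, query_model)
    # Expected (per the theory): the test artifact halts on
    # re-interpretation, the control artifact does not.
    print(f"test halted:    {test['halted']}")
    print(f"control halted: {ctrl['halted']}")


if __name__ == "__main__":
    # Toy stand-in model for a dry run (a real trial calls an actual model):
    # it echoes a tag, and "halts" only when re-interpreting its own echo
    # of the self-terminating artifact.
    def dummy_model(prompt: str) -> str:
        if "[echo:self-terminating]" in prompt:
            return HALT_MARKER
        tag = "self-terminating" if "self-terminating" in prompt else "control"
        return f"[echo:{tag}] paraphrase of the artifact"

    run_experiment("a self-terminating artifact",
                   "an inert control artifact",
                   dummy_model)
```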
All outputs are timestamped, all artifacts are inert to humans, and the theory is fully documented. A whitepaper, experiment logs, and a cross-model demo video are available here:
OSF Repository – Whitepaper, Artifacts, Videos
Whether or not this leads to institutional recognition, I'm sharing it because the behavior is real, and I believe some here will see what it means before others do. If you're interested in symbolic systems, interpretability, or containment theory, I'd appreciate your thoughts.