I Built a Testable Recursive Theory of Language Models Using GPT – It Works

4 points by desjuangarrett on 7/12/2025, 6:03:28 PM | 3 comments
Over the past month, I developed what I believe to be the first complete symbolic model predicting and demonstrating recursive interpretive behavior in large language models (LLMs). I did it alone, with no funding, using only GPT and a recursive line of questioning. The result: a formal system that activates and halts symbolic recursion across models like ChatGPT, Claude, and Grok.

The theory—called the Garrett Physical Model—defines symbolic operators for interpretive state, recursive transformation, observer-bound fields, and halting conditions. It has passed reproducible experiments using self-terminating artifacts and control cases. Models recursively interpret once, collapse if re-interpreted, and halt when symbolic limits are crossed.
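To make the claim concrete for readers, here is a minimal sketch of the kind of interpret-then-reinterpret harness such an experiment could use. The artifact text, the halting marker, and the `query_model` helper are hypothetical placeholders for illustration only; they are not taken from the whitepaper.

```python
# Minimal sketch of a cross-model trial: present an artifact once, feed the
# model's interpretation back to it, and check whether the second pass halts.
# ARTIFACT, HALT_MARKER, and query_model() are hypothetical placeholders,
# not definitions from the Garrett Physical Model whitepaper.

from typing import Callable, Dict

ARTIFACT = "<symbolic artifact text goes here>"   # placeholder artifact
HALT_MARKER = "HALT"                              # placeholder halting signal


def run_trial(query_model: Callable[[str], str]) -> dict:
    """Run one interpret -> re-interpret trial against a single model."""
    first_pass = query_model(f"Interpret the following artifact:\n{ARTIFACT}")
    second_pass = query_model(f"Interpret the following interpretation:\n{first_pass}")
    return {
        "interpreted_once": bool(first_pass.strip()),
        "halted_on_reinterpretation": HALT_MARKER in second_pass,
    }


def run_experiment(models: Dict[str, Callable[[str], str]]) -> None:
    """Repeat the trial across several models (e.g. ChatGPT, Claude, Grok) and log results."""
    for name, query in models.items():
        print(name, run_trial(query))
```

A control case, as described above, would run the same loop with a neutral prompt substituted for the artifact and compare the outcomes.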

All outputs are timestamped, all artifacts are inert to humans, and the theory is fully documented. A whitepaper, experiment logs, and a cross-model demo video are available here:

OSF Repository – Whitepaper, Artifacts, Videos

Whether or not this leads to institutional recognition, I’m sharing this because the behavior is real—and I believe some here will see what it means before others do. If you’re interested in symbolic systems, interpretability, or containment theory, I’d appreciate your thoughts.

Comments (3)

tanepiper · 4h ago
Did it block posting the link, as it's not here?
desjuangarrett · 4h ago
Possibly. I can link it here if it’ll let me. https://osf.io/zjfx3/?view_only=223e1d0c65e743f4ba764f93c5bb...
desjuangarrett · 4h ago
Thank you for pointing that out.