Show HN: WFGY – A reasoning engine that repairs LLM logic without retraining
WFGY | 6/18/2025, 9:43:18 AM | github.com
WFGY introduces a PDF-based semantic protocol designed to correct projection collapse, contradiction loops, and ambiguous inference chains in LLMs.
No retraining. No system calls. Once the PDF is parsed into the model's context, its logic patterns alter reasoning trajectories directly.
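In practice the usage pattern is just "put the parsed protocol text in front of the question." The sketch below is illustrative only; the file name and the call_llm stub are placeholders, not the repo's actual interface:

```python
# Illustrative only: a minimal sketch of in-context use. The file name,
# fallback text, and call_llm stub are assumptions, not the repo's API.

from pathlib import Path

PROTOCOL_PATH = Path("wfgy_protocol.txt")  # hypothetical: text pre-extracted from the PDF


def call_llm(prompt: str) -> str:
    """Stub standing in for whatever chat-completion client you actually use."""
    return f"[model response to a {len(prompt)}-char prompt]"


def ask_with_wfgy(question: str) -> str:
    # Prepend the parsed protocol so its logic patterns sit in the context
    # window ahead of the question; no retraining, no tool or system calls.
    if PROTOCOL_PATH.exists():
        protocol = PROTOCOL_PATH.read_text(encoding="utf-8")
    else:
        protocol = "<WFGY protocol text goes here>"
    return call_llm(f"{protocol}\n\n---\n\nQuestion: {question}")


if __name__ == "__main__":
    print(ask_with_wfgy("Resolve: all A are B, yet some A are claimed not to be B."))
```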
Prompt evaluation benchmarks show:
‣ +42.1% reasoning success
‣ +22.4% semantic alignment
‣ 3.6× stability on interpretive tasks
The repo contains formal theory, prompt suites, and reproducible results. Zero dependencies. Fully open-source.
Feedback from those working in alignment, interpretability, and logic-based scaffolding would be especially valuable.
Can you explain a bit more about how WFGY actually achieves such improvements in reasoning and stability? Specifically, what makes it different from just engineering better prompts or using more advanced LLMs?