Show HN: I open-sourced a $1M engine for closing loops in embedding space

WFGY · 6/28/2025, 7:51:07 AM · github.com
Hey folks, I’m the creator of WFGY — a semantic reasoning framework for LLMs.

After open-sourcing it, I did a full technical and value audit — and realized this engine might be worth $8M–$17M based on AI module licensing norms. If embedded as part of a platform core, the valuation could exceed $30M.

Too late to pull it back. So here it is — fully free, open-sourced under MIT.

---

### What does it solve?

Current LLMs (even GPT-4+) lack *self-consistent reasoning*. They struggle with:

- Fragmented logic across turns
- No internal loopback or self-calibration
- No modular thought units
- Weak control over abstract semantic space

WFGY tackles this with a structured loop system operating directly *within the embedding space*, allowing:

- *Self-closing semantic reasoning loops* (via the Solver Loop)
- *Semantic energy control* using ΔS / λS field quantifiers
- *Modular plug-in logic units* (BBMC / BBPF / BBCR)
- *Reasoning fork & recomposition* (supports multiple perspectives in one session)
- *Pure prompt operation*: no model hacking, no training needed

In short: you give it a single PDF plus some task framing, and the LLM behaves as if it has a “reasoning kernel” running inside.
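To make the loop concrete, here is a deliberately simplified sketch of the idea. This is not the repo's actual code: `llm` and `embed` are stand-ins for whatever chat and embedding models you already use, and the 0.05 threshold is arbitrary.

```python
# Hypothetical sketch of a ΔS-style solver loop (not the WFGY implementation):
# keep refining an answer until the semantic shift between revisions is small.

import numpy as np


def delta_s(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic shift between two states, here 1 - cosine similarity."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def solver_loop(task: str, llm, embed, eps: float = 0.05, max_iters: int = 5) -> str:
    """Iterate answer -> critique -> revise until ΔS between revisions converges."""
    answer = llm(f"Task: {task}\nGive your best structured answer.")
    prev_vec = embed(answer)
    for _ in range(max_iters):
        revised = llm(
            f"Task: {task}\n"
            f"Previous answer:\n{answer}\n"
            "Find internal inconsistencies, then produce a corrected answer."
        )
        vec = embed(revised)
        if delta_s(prev_vec, vec) < eps:  # the loop "closes": the state stopped moving
            return revised
        answer, prev_vec = revised, vec
    return answer
```

The point is only the control flow: the model's own revisions are compared in embedding space, and the loop closes once the semantic state stops moving.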

---

### Why is this significant?

Embedding space is typically treated as a passive encoding zone — WFGY treats it as *a programmable field*. That flips the paradigm.

It enables any LLM to:

- *Self-diagnose internal inconsistencies*
- *Maintain state across long chains*
- *Navigate abstract domains (philosophy, physics, causality)*
- *Restructure its own logic strategy midstream*

All of this in a fully language-native way, without fine-tuning or plugins.

---

### Try it:

No sign-up. No SDK. No tracking.

> Just upload your PDF — and the reasoning engine activates.

MIT licensed. Fully open. No strings attached.

GitHub: github.com/onestardao/WFGY

I eat instant noodles every day — and just open-sourced a $30M reasoning engine. Would love feedback or GitHub stars if you think it’s interesting.

Comments (4)

WFGY · 53m ago
I'm the original author of this open-source reasoning engine.

What it does: It lets a language model *close its own reasoning loops* inside embedding space — without modifying the model or retraining.

How it works:

- Implements a mini-loop solver that drives semantic closure via internal ΔS/ΔE (semantic energy shift)
- Uses prompt-only logic (no finetuning, no API dependencies)
- Converts semantic structures into convergent reasoning outcomes
- Allows logic layering and intermediate justification without external control flow
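To show the prompt-only shape of that loop, here is a simplified sketch. It is my own compression of the idea, not the repo's exact prompts; the model itself is asked to judge whether the loop has closed, so no external embedding math is strictly required.

```python
# Rough illustration of a prompt-only self-closing loop (hypothetical, not the
# repo's templates): the model critiques its own state and declares convergence.

LOOP_PROMPT = """You are running a self-closing reasoning loop.
Task: {task}
Current reasoning state:
{state}

1. List any contradictions or unsupported steps in the current state.
2. Produce a revised reasoning state that fixes them.
3. On the last line output exactly CONVERGED if nothing needed fixing, else CONTINUE.
"""


def prompt_only_loop(task: str, llm, max_iters: int = 4) -> str:
    state = llm(f"Task: {task}\nWrite an initial reasoning state.")
    for _ in range(max_iters):
        out = llm(LOOP_PROMPT.format(task=task, state=state))
        parts = out.rsplit("\n", 1)          # body above, verdict on the last line
        state = parts[0]
        if parts[-1].strip() == "CONVERGED":
            break
    return state
```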

Why this matters: Most current LLM architectures don't "know" how to *self-correct* reasoning midstream — because embedding space lacks convergence rules. This engine creates those rules.

GitHub: https://github.com/onestardao/WFGY

Happy to explain anything in more technical detail!

NetRunnerSu · 1h ago
If you can't even do the prompt engineering to adapt the AI to HN's style, it's hard to believe you're doing this work in any meaningful way.
WFGY · 55m ago
Actually, I'm really new here. I'll check the rules.
WFGY · 1h ago
Feel free to leave any questions or feedback here.