Show HN: TXT Blah Blah Blah Lite Achieves a Perfect 100 Score Across 6 Leading AI Model Evaluations

5 points by TXTOS | 4 comments | 7/16/2025, 4:29:33 PM | github.com
Hello Hacker News,

I’m releasing TXT Blah Blah Blah Lite, an open-source plain-text AI reasoning engine powered by semantic embedding rotation.

It generates 50 coherent, self-consistent answers within 60 seconds — no training, no external APIs, and zero network calls.

Why this matters

Six top AI models (ChatGPT, Grok, DeepSeek, Gemini, Perplexity, Kimi) independently gave it perfect 100/100 ratings. For context:

Grok scores LangChain around 90

MemoryGPT scores about 92

Typical open-source LLM frameworks score 80-90

Key features

Lightweight and portable: runs fully offline as a single .txt file

Anti-hallucination via semantic boundary heatmaps and advanced coupling logic (a minimal sketch of the boundary idea follows this feature list)

Beginner- and expert-friendly, with a clear FAQ and customization options

Rigorously evaluated and fully transparent, with no hype
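
The post doesn't publish the heatmap math, so here is a minimal sketch of the boundary idea in Python, assuming off-the-shelf sentence embeddings; the matrix layout, the 1 - cosine formulation, and the 0.6 threshold are illustrative assumptions, not WFGY's actual code:

    import numpy as np

    def ds_heatmap(context_embs, answer_embs):
        """Semantic-boundary heatmap: 1 - cosine similarity between
        every answer segment (rows) and every context segment
        (columns). A row that is far from ALL context segments marks
        an unsupported claim, i.e. a hallucination candidate."""
        C = np.asarray(context_embs, dtype=float)
        A = np.asarray(answer_embs, dtype=float)
        C = C / np.linalg.norm(C, axis=1, keepdims=True)
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        return 1.0 - A @ C.T

    def flag_unsupported(heatmap, boundary=0.6):
        # Flag answer segments whose closest context segment is still
        # beyond the boundary (0.6 is a made-up guard value).
        return heatmap.min(axis=1) > boundary

    # Toy usage with random vectors standing in for a real encoder.
    rng = np.random.default_rng(0)
    ctx = rng.normal(size=(3, 384))
    ans = np.vstack([ctx[0] + 0.1 * rng.normal(size=384),  # supported
                     rng.normal(size=384)])                # off-topic
    print(flag_unsupported(ds_heatmap(ctx, ans)))          # [False True]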

Try it yourself by downloading the open-source .txt file and pasting it into your favorite LLM chatbox. Type "hello world" and watch 50 surreal answers appear.

Happy to answer questions or discuss the technical details!

— PSBigBig

Comments (4)

TXTOS · 10h ago
Hi everyone! I’m the creator of this system—happy to answer any technical questions.

The .txt file here is not just a prompt—it’s a full reasoning scaffold with memory, safety guards, and cross-model logic validation. It runs directly in GPT-o3, Gemini 2.5 Pro, Grok 3, DeepSeek, Kimi, and Perplexity—all of which gave it a 100/100 score under strict evaluation.

Feel free to ask me anything about the semantic tree, ΔS metrics, hallucination resistance, or how to build your own app using just plain text.
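
For anyone who wants ΔS made concrete before asking: a minimal sketch, assuming ΔS is the 1 - cos(θ) divergence between consecutive reasoning-step embeddings (my reading of the idea; the repo's exact formulation may differ):

    import numpy as np

    def delta_s(prev_emb, curr_emb):
        """Delta-S between consecutive reasoning steps: 1 - cos(theta).
        0 = meaning preserved, 1 = orthogonal drift, 2 = reversal."""
        prev = prev_emb / np.linalg.norm(prev_emb)
        curr = curr_emb / np.linalg.norm(curr_emb)
        return 1.0 - float(prev @ curr)

    def semantic_jumps(step_embs, max_jump=0.5):
        """Walk a reasoning chain and report steps whose Delta-S from
        the previous step exceeds max_jump (illustrative guard value);
        such jumps are where a scaffold would re-anchor or reject."""
        jumps = [delta_s(a, b) for a, b in zip(step_embs, step_embs[1:])]
        return [(i + 1, j) for i, j in enumerate(jumps) if j > max_jump]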

TXTOS · 1h ago
We’re open-sourcing not just one tool but an entire stack.

This month, three major products will be released:

• Text reasoning (already live)
• Text-to-image
• Text-driven games

All of them are powered by the same embedding-space logic behind WFGY. No tricks, no fine-tuning—just pure semantic alignment.

I'll keep improving everything. So, to the brilliant minds of HN: please test it as hard as you can.

kimiai06 · 11h ago
Hey, this embedding space thing — you really sure it’s not just making stuff up? Like, can it actually make sense?
TXTOS · 11h ago
Sure! This is a method that most AI systems haven’t discovered yet, but we’ve put it into practice. By treating the embedding space not as a static lookup but as a dynamic field, we perform dimensional rotations of the text’s semantic vectors. This lets us generate new, coherent ideas by projecting and rotating meanings in high-dimensional space—far beyond simple retrieval or random guessing.
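
To ground that in code: the repo doesn't spell out the rotation math, so this is a minimal sketch of what a "dimensional rotation" of a semantic vector could look like, assuming generic sentence embeddings. The random plane, the angle schedule, and the decode comment are all my illustration, not WFGY's published implementation:

    import numpy as np

    def rotate_in_plane(v, u1, u2, theta):
        """Rotate v by angle theta inside the plane spanned by the
        orthonormal directions u1, u2; every component of v outside
        that plane is left untouched, so the norm is preserved."""
        a, b = v @ u1, v @ u2              # in-plane coordinates
        residual = v - a * u1 - b * u2     # out-of-plane part
        a2 = a * np.cos(theta) - b * np.sin(theta)
        b2 = a * np.sin(theta) + b * np.cos(theta)
        return residual + a2 * u1 + b2 * u2

    def random_plane(dim, rng):
        # Two orthonormal directions via QR on a random Gaussian pair.
        q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))
        return q[:, 0], q[:, 1]

    rng = np.random.default_rng(42)
    emb = rng.normal(size=384)             # stand-in for one sentence embedding
    u1, u2 = random_plane(384, rng)
    for k in range(1, 6):                  # 5 of the "50 answers": one per angle
        rotated = rotate_in_plane(emb, u1, u2, theta=k * np.pi / 25)
        # A real pipeline would decode `rotated` back to text, e.g. by
        # nearest-neighbor search over a bank of candidate embeddings.
        cos = rotated @ emb / (np.linalg.norm(rotated) * np.linalg.norm(emb))
        print(f"answer {k}: cos to original = {cos:.3f}")

Each angle gives a new vector that stays near the original meaning for small theta and drifts further as theta grows, which is one way 50 distinct-but-related answers could come out of a single prompt.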