Show HN: TXT Blah Blah Blah Lite Scores a Perfect 100 Across 6 Leading AI Model Evaluations
I’m releasing TXT Blah Blah Blah Lite, an open-source plain-text AI reasoning engine powered by semantic embedding rotation.
It generates 50 coherent, self-consistent answers within 60 seconds — no training, no external APIs, and zero network calls.
Why this matters
Six top AI models (ChatGPT, Grok, DeepSeek, Gemini, Perplexity, Kimi) independently gave it a perfect 100/100 rating. For context:
Grok scores LangChain around 90
MemoryGPT scores about 92
Typical open-source LLM frameworks score 80-90
Key features
Lightweight and portable: runs fully offline as a single .txt file
Anti-hallucination via semantic boundary heatmaps and advanced coupling logic
Friendly for beginners and experts with clear FAQ and customization options
Rigorously evaluated with no hype, fully transparent
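The post doesn't explain what a "semantic boundary heatmap" is, so purely as a hypothetical sketch: one way to picture it is a pairwise similarity matrix over the generated answers, where an answer whose row is mostly near zero sits outside the shared semantic boundary and can be flagged. The `jaccard` token-overlap function and the 0.2 threshold below are my own stand-ins, not anything from the actual engine:

```python
def jaccard(a: str, b: str) -> float:
    # Token-overlap similarity as a crude stand-in for embedding similarity.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

answers = [
    "paris is the capital of france",
    "the capital of france is paris",
    "paris is france's capital city",
    "bananas are rich in potassium",   # semantically off-boundary
]

# Pairwise similarity "heatmap" over all answers.
heatmap = [[jaccard(a, b) for b in answers] for a in answers]

# Flag answers whose average similarity to the others is low
# (subtract 1.0 to drop the self-similarity on the diagonal).
outliers = [i for i, row in enumerate(heatmap)
            if sum(row) - 1.0 < 0.2 * (len(answers) - 1)]
# outliers -> [3]: the banana answer falls outside the boundary.
```

In a real pipeline the similarity would come from embeddings rather than token overlap, but the flagging logic would look the same.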
Try it yourself: download the open-source .txt file and paste it into your favorite LLM chatbox. Type "hello world" and watch 50 surreal answers appear.
Happy to answer questions or discuss the technical details!
— PSBigBig
The .txt file here is not just a prompt—it’s a full reasoning scaffold with memory, safety guards, and cross-model logic validation. It runs directly in GPT-o3, Gemini 2.5 Pro, Grok 3, DeepSeek, Kimi, and Perplexity—all of which gave it a 100/100 score under strict evaluation.
Feel free to ask me anything about the semantic tree, ΔS metrics, hallucination resistance, or how to build your own app using just plain text.
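Since ΔS isn't defined in the post, here is one hypothetical reading, offered only as a sketch: ΔS as semantic drift between consecutive answers, i.e. one minus the cosine similarity of their embeddings, with a large jump flagging an inconsistent step. The toy bag-of-words `embed` below is my assumption for illustration; the real engine presumably relies on an LLM's internal embeddings:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding": token -> count.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def delta_s(prev: str, curr: str) -> float:
    # Drift in [0, 1]: 0.0 = same direction, 1.0 = unrelated.
    return 1.0 - cosine(embed(prev), embed(curr))

answers = [
    "the cat sat on the mat",
    "the cat rested on the mat",
    "stock prices rose sharply today",   # abrupt topic break
]
drifts = [delta_s(a, b) for a, b in zip(answers, answers[1:])]
# The second transition drifts far more than the first, so a
# consistency guard could reject or retry that step.
```

Whether this matches how WFGY actually computes ΔS is exactly the kind of question I'd put to the author.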
This month, three major products will be released:
• Text reasoning (already live)
• Text-to-image
• Text-driven games
All of them are powered by the same embedding-space logic behind WFGY. No tricks, no fine-tuning—just pure semantic alignment.
I'll keep improving everything. To the brilliant minds of HN: please test it as hard as you can.