Show HN: CompareGPT – Spotting Hallucinations by Comparing Multiple LLMs
2 points by tinatina_AI · 1 comment · 9/5/2025, 11:26:19 PM
Hi HN, I’m Tina.
One frustration I keep running into with LLMs is hallucinations: answers that sound confident but are fabricated. Fake citations, wrong numbers, even entire “system reports.”
So I’ve been building CompareGPT, which tries to make AI outputs more trustworthy by:
- Putting multiple LLMs side by side for the same query
- Making it easy to see consistency, or the lack of it, across their answers (a rough sketch of the idea follows this list)
- Helping catch hallucinations before they waste time or cause harm
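For anyone curious what the comparison step might look like, here’s a minimal Python sketch of the core idea. It assumes the per-model answers have already been collected as strings; the scoring heuristic (mean pairwise similarity via the standard library’s difflib) is just my illustration, not CompareGPT’s actual method.

```python
import difflib
import itertools

def consistency_score(answers: list[str]) -> float:
    """Mean pairwise string similarity across model answers, in [0, 1].

    Low scores mean the models disagree, which is one signal that
    at least one answer may be hallucinated.
    """
    if len(answers) < 2:
        return 1.0  # a single answer has nothing to disagree with
    ratios = [
        difflib.SequenceMatcher(None, a, b).ratio()
        for a, b in itertools.combinations(answers, 2)
    ]
    return sum(ratios) / len(ratios)

# Hypothetical answers to the same query from three different models.
answers = [
    "The Eiffel Tower is 330 metres tall.",
    "The Eiffel Tower stands about 330 metres high.",
    "The Eiffel Tower is 512 metres tall.",  # the odd one out
]
print(f"consistency: {consistency_score(answers):.2f}")
```

Low agreement doesn’t prove a hallucination, but it’s a cheap first signal that an answer deserves a second look.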
Link here: https://comparegpt.io/home. We’ve opened a waitlist and would love feedback, especially from folks working with LLMs in research, finance, or law.
Thanks!
Comments (1)
manishfoodtechs · 2h ago
Email OTP not arriving