Show HN: CompareGPT – Trustworthy AI Answers with Confidence and Sources

3 points by tinatina_AI on 9/4/2025, 11:42:03 PM | 0 comments
Hi HN, I’m Tina. I’ve been exploring how to make large language models more reliable. One persistent issue is hallucinations: models can produce confident answers that are factually wrong or based on non-existent sources. This is especially risky in fields like finance, law, or research, where accuracy matters.

To address this, I’ve been building CompareGPT, which focuses on making AI outputs more trustworthy. Key updates we’ve been working on:

- Confidence scoring: every answer shows how reliable it is.
- Source validation: highlights whether the data can be backed by references.
- Multi-model comparison: ask one question and see how different models respond side by side (a rough sketch of the idea is below).

Try it here: https://comparegpt.io/home

It currently works best with knowledge-based queries (finance, law, science). We’re still ironing out limitations; for example, image input isn’t supported yet. I’d love to hear what you think, especially where it fails or where it could be most useful. Brutal feedback welcome. Thanks!
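To make the multi-model comparison idea concrete, here is a minimal Python sketch of the general pattern: fan one question out to several models and derive a rough agreement-based confidence score. The model callables, the compare_answers helper, and the scoring heuristic are simplified placeholders for illustration, not our production pipeline.

    # Minimal sketch: ask several models the same question and compare answers.
    # The "models" here are placeholder callables; real API clients would go in
    # their place, and real confidence scoring would be more than string agreement.
    from collections import Counter

    def compare_answers(question, models):
        """Query every model and attach a crude agreement-based confidence score
        (the fraction of models that gave the same normalized answer)."""
        answers = {name: ask(question) for name, ask in models.items()}
        # Light normalization so trivially different strings still count as agreement.
        normalized = {name: a.strip().lower() for name, a in answers.items()}
        counts = Counter(normalized.values())
        total = len(models)
        return [
            {
                "model": name,
                "answer": answers[name],
                "confidence": counts[normalized[name]] / total,
            }
            for name in models
        ]

    if __name__ == "__main__":
        # Placeholder models for the demo; each maps a question to an answer string.
        demo_models = {
            "model-a": lambda q: "Paris",
            "model-b": lambda q: "paris",
            "model-c": lambda q: "Lyon",
        }
        for row in compare_answers("What is the capital of France?", demo_models):
            print(f'{row["model"]}: {row["answer"]} (confidence {row["confidence"]:.2f})')

The agreement heuristic is deliberately naive; it is only meant to show where the per-model answers and a confidence figure would plug into a side-by-side view.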

Comments (0)
