Show HN: Veritas – Detecting Hidden Bias in Everyday Writing

1 point by axisai · 9/5/2025, 2:06:47 PM
We’re building Veritas, an AI model designed to uncover bias in written content — from academic papers and policies to workplace communications. The goal is to make hidden assumptions and barriers visible, so decisions can be made with more clarity and fairness.

We just launched on Kickstarter to fund the next phase as we move into BETA testing: https://www.kickstarter.com/projects/axis-veritas/veritas-th...

Would love the HN community’s perspective: Do you see a need for this kind of model? Where do you think it could be most useful, and what pitfalls should we be careful to avoid?

Comments (1)

JoshTriplett · 5h ago
The most obvious pitfall: people are very quick to equate "bias" with "factually correct thing I don't like". You need to train your model to distinguish "correct information people don't like" from "bias", and you'll need to educate people about the difference.

Effectively, if you're going to attempt to detect bias, you have to handle the Paradox of Tolerance. Otherwise, for instance, efforts to detect intolerance will be accused of being biased against intolerance, and people who wish to remain intolerant will push you to "fix" it.

Another test case: ensure your detector does not flag factual information on evolution or climate change as "biased" just because there's a "side" that denies it. Not all "sides" are valid.