Show HN: I Built a Prompt That Makes LLMs Think Like Heinlein's Fair Witness

4 points by 9wzYQbTYsAIc | 5/19/2025, 2:38:58 PM | fairwitness.bot ↗

Comments (4)

9wzYQbTYsAIc · 5h ago
I'm sharing Fair Witness Bot with HN first because this community understands both the technical and philosophical dimensions of AI. The framework needs people who can critique its assumptions and help evolve the implementation. If you've been thinking about epistemology in AI or are just tired of LLM hallucinations, I'd appreciate your perspectives on whether this approach could become a community standard.

The framework idea and YAML prompt were developed with the assistance of Kagi Assistant and Claude 3.7 Sonnet (Thinking).

The site was vibe coded with Windsurf Cascade and Claude 3.7 Sonnet (Thinking).

PaulHoule · 5h ago
I've been interested in the idea of E-Prime (e.g. writing a classifier that can tell if a text is in E-Prime, something that rewrites text into E-Prime, etc.). Eventually I lost interest, because you can write E-Prime just as badly as you can write English.

For instance, sci-fi writer Charlie Stross wrote "Keir Starmer is a fascist" which is a clear abuse of "to be" but you can stuff adjectives just fine in E-Prime: "Fascist Keir Starmer never stops pushing fascist policies with his fascist attitudes and fascist friends." You could make the case that E-Prime frequently improves on English but some constructions become terribly tortured.
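(For the classifier idea, the first-pass version barely needs ML at all. A minimal sketch, assuming naive regex tokenization; contractions like "it's"/"they're" and passive-voice edge cases would need real POS tagging, so treat this as a filter, not a verdict:)

```python
import re

# Inflected forms of "to be" that E-Prime prohibits.
# Contractions ('s, 're, I'm) often hide "is/are/am" too, but
# detecting those reliably requires POS tagging, so this rule-based
# check only catches the bare forms.
BE_FORMS = re.compile(
    r"\b(?:am|is|are|was|were|be|been|being)\b",
    re.IGNORECASE,
)

def violates_e_prime(text: str) -> list[str]:
    """Return the 'to be' tokens found (empty list = plausibly E-Prime)."""
    return BE_FORMS.findall(text)

print(violates_e_prime("Keir Starmer is a fascist"))   # ['is']
print(violates_e_prime(
    "Fascist Keir Starmer never stops pushing fascist policies "
    "with his fascist attitudes and fascist friends."
))                                                     # []
```

Which neatly demonstrates your point: the second sentence passes the check while saying the same thing five times louder.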

9wzYQbTYsAIc · 4h ago
I understand where you are coming from - in the past I went so far as to try to write regularly in E-Prime at work. I definitely found that it forced me to think hard about what I was trying to convey, which ultimately improved the vocabulary I was using. I also found it a lot of trouble to maintain consistently.

The thing is, though, that LLMs don’t appear to trouble themselves at all when following E-Prime!

After a lot of conceptual refinement for the overall idea I had (minimizing hallucinations by prompt alone), it was almost trivial to make the LLM consistently use E-Prime everywhere.
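To give a flavor of what I mean (this YAML fragment only sketches the idea; it does not reproduce the actual Fair Witness prompt, and the key names are made up for illustration):

```yaml
# Hypothetical sketch only -- not the actual Fair Witness prompt.
style:
  register: e_prime
  rules:
    - "Never use any form of the verb 'to be': am, is, are, was, were, be, been, being."
    - "Describe observations, not essences: prefer 'the sky looks blue' over 'the sky is blue'."
    - "Attribute claims to sources rather than asserting them as fact."
```

An instruction block along those lines held up across long conversations with essentially no drift.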

You raise an interesting thought, though: how to tweak the prompt so that the LLM avoids E-Prime where it would significantly reduce readability or dramatically increase cognitive load.

A classifier for “bullshit” detection has been on my mind.

PaulHoule · 3h ago
Truth is the most problematic problem in philosophy: introducing the idea of truth erodes the truth, as seen in Gödel's theorem or in the reaction you get when you hear "9/11 truther." In many cases you can only determine the truth by physical observation; in other cases it is inaccessible. An A.I. that can determine the truth of things is a god.

An emotional tone or hostility detector, on the other hand, is ModernBERT + BiLSTM for the win. I'd argue the problem with fake news is not that it is fake but that it works on people's emotions, and that people prefer it to the real thing.

You can detect common established bullshit patterns, and probably new ones that resemble the old ones. Thirty years from now there will be new bullshit patterns your model doesn't see.