Facts, Arguments, Theses: Building AI Knowledge Retrieval on Meaning, Not Slices

1 point · nsavage · 2 comments · 8/23/2025, 11:33:41 AM · nsavage.substack.com

Comments (2)

dtagames · 2h ago
Telling an LLM that something is a fact or the thesis doesn't make it one. We can't get around the predictive nature of how models and transformers operate by using different tokens. It's still just tokens, all the way down.

In fact, your complicated prompt will probably lead to summaries that have incorrect "facts" in them and arguments that don't fit your "thesis." That's because that text exists in the training data, and you can't hand-wave it away with prompting.

nsavage · 1h ago
I see what you’re saying, but this works a little differently: it’s asking the LLM what it thinks the writing is trying to say and what the writing uses to support it. Agreed that hallucinations are an issue, though.
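
For readers curious what that kind of extraction could look like in practice, here is a minimal sketch. It assumes the OpenAI Python SDK (openai>=1.0); the model name, prompt wording, and three-field breakdown are illustrative assumptions, not the article's actual implementation.

```python
# Minimal sketch: ask an LLM what a passage is trying to say (thesis),
# what reasons it gives (arguments), and what evidence it cites (facts),
# instead of storing raw text slices. Assumes the OpenAI Python SDK;
# the model name and prompt are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = """Read the passage below and return JSON with three keys:
"thesis": the single claim the writing is trying to make,
"arguments": the reasons it gives for that claim,
"facts": the concrete evidence it cites in support.

Passage:
{passage}"""

def extract_meaning(passage: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model
        messages=[{"role": "user",
                   "content": EXTRACTION_PROMPT.format(passage=passage)}],
        response_format={"type": "json_object"},  # request parseable JSON
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    sample = ("Local-first software keeps data on the user's device, "
              "so apps stay usable offline and keep data private.")
    print(extract_meaning(sample))
```

The idea is that the returned thesis/arguments/facts, rather than fixed-size chunks, would be what gets indexed for retrieval; the hallucination concern raised above still applies to whatever the model returns.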