Why is AI search still bad at trust and context?

2 points by zyruh on 8/27/2025, 6:10:59 PM | 1 comment
I’ve been testing a lot of AI search tools lately, and I keep running into the same three issues:

Accuracy with sources — either the responses don’t cite sources at all, or when they do, the citations don’t hold up.

Bias — answers are often skewed toward a certain narrative instead of presenting a balanced perspective.

Context memory — tools forget what you asked a few prompts ago, which makes complex queries tedious.

Individually, some products do one or two of these well. But I haven’t found anything that consistently delivers all three at once.

Why is this such a hard problem to solve? Is it a technical limitation, a product decision, or something else? Curious to hear what others here think.

Comments (1)

PaulHoule · 2h ago
The sources bother me. When I get an answer related to programming and I'm not so sure about it, I check the docs; a good link to the docs would be a great help.

Microsoft Copilot should do better because it is attached to a search engine, but it sucks at giving citations. Often it gives the right answer and a totally wrong citation, which is frustrating.

Google's AI results in search are so bad that I try not to look at them at all. If I ask a simple question like "Can I use characters in my base to do operations in Arknights?" I get a wrong answer, citation or not.

As far as context goes, my take with agentic coding assistants is that if you let the context get longer, the assistant will eventually get confused and start going in circles. Often it seems to code brilliantly at the beginning, but pretty soon it loses the thread. The answer to that is to just start a new session; if there is something you want to carry over from the old session, cut and paste it into the documentation and tell it to look at it.
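
Roughly what I mean in practice (the file name and helper here are just placeholders, not any particular tool's convention): keep a running notes file that you append the keep-worthy decisions to, then tell the assistant to read it at the start of every fresh session.

    # sketch: append session notes to a file the assistant is told to read
    from datetime import date
    from pathlib import Path

    NOTES = Path("docs/SESSION_NOTES.md")  # hypothetical location

    def carry_over(summary: str) -> None:
        """Append the decisions worth keeping from the old session."""
        NOTES.parent.mkdir(parents=True, exist_ok=True)
        with NOTES.open("a", encoding="utf-8") as f:
            f.write(f"\n## {date.today()}\n{summary}\n")

    if __name__ == "__main__":
        carry_over("Chose SQLite for the prototype; schema lives in db/schema.sql.")
        # First prompt of the new session is then just:
        # "Read docs/SESSION_NOTES.md before doing anything else."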

As far as bias goes, I'd say that truth is the most problematic concept in philosophy; simply introducing the idea of "the Truth" (worse than just "the truth") impairs the truth. See 9/11 Truthers.

Look at Musk's misadventures with Grok. I'd love to see an AI trained with a viewpoint like "principled conservative," but that's not what Musk wants. One moment he's BFFs with Donald Trump; the next minute Trump is one of Epstein's pedophiles. To satisfy Musk it would have to always know whether we are at war with Eurasia or Eastasia today.