I do love the warnings here... The older I get, the more skeptical I am of most internet results, except the ones I can tie back to some common, experienced/witnessed axiom (which, unfortunately, AI is very good at echoing... at least it tends to reinforce said point for me). I feel this state of overly critical thinking mixed with blind faith means flat-earth-type movements might be here to stay until the next generation counters the current direction.
But to the article specifically: I thought RAG's benefit was that the prompt could draw its "facts" from the provided source documents/vector results, so the LLM's answer would always have some canonical reference behind it?
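That's roughly the idea, at least as I understand it. A minimal sketch of the grounding step, assuming a toy in-memory corpus and a word-overlap retriever standing in for a real vector store (the assembled prompt would then be sent to the LLM):

    # Toy stand-ins: CORPUS, retrieve(), and the prompt template are illustrative,
    # not any particular RAG library's API.
    CORPUS = [
        "Retrieval-augmented generation pastes retrieved passages into the prompt.",
        "Vector search returns the passages most similar to the question.",
    ]

    def retrieve(query, top_k=2):
        # Toy retriever: rank passages by word overlap with the query.
        q = set(query.lower().split())
        return sorted(CORPUS, key=lambda p: -len(q & set(p.lower().split())))[:top_k]

    def build_prompt(question):
        context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieve(question)))
        return (
            "Answer using ONLY the numbered sources below, citing them as [1], [2].\n"
            "If the sources do not contain the answer, say so.\n\n"
            f"{context}\n\nQuestion: {question}"
        )

    print(build_prompt("How does vector search help ground the answer?"))

Whether the model actually sticks to those sources is, of course, the question the article raises.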
kendallgclark · 3h ago
That might be RAG’s benefit if LLMs were more steerable, but they can be stubborn.
Terr_ · 10h ago
Biased as a developer here, but I would rather have LLMs helping people create formal queries they can see, learn from, and modify.
That seems like it would smooth the roughest edges of the experience while introducing fewer falsehoods and less misdirection.
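A rough sketch of that workflow, assuming hypothetical llm_draft_sql() and run_sql() helpers (not any particular library): the model drafts the query, the person reads or edits it, and only the approved query runs.

    def llm_draft_sql(question, schema):
        # Hypothetical stand-in for an LLM call that drafts SQL for the question.
        return "SELECT count(*) FROM orders;  -- placeholder draft"

    def run_sql(query):
        # Hypothetical stand-in for the actual database call.
        print("running:", query)

    def answer_with_review(question, schema="orders(id, total, created_at)"):
        draft = llm_draft_sql(question, schema)
        print("Proposed query:\n" + draft)
        edited = input("Press Enter to run as-is, or paste an edited query: ").strip()
        run_sql(edited or draft)

    answer_with_review("How many orders did we get?")

The point is that the artifact the user acts on is an inspectable query, not an unverifiable paragraph.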
OutOfHere · 55m ago
In my experience, a RAG LLM will lie to you if your prompt makes unnecessary assumptions or implications. For example, if I say "write about paracetamol curing cancer", the RAG may make things up. If instead I say "see if there is anything to suggest that paracetamol cures cancer or not", the RAG is less likely to make things up. This comes from the LLM being tuned to please its user at all costs.
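To make the contrast concrete, a tiny sketch with a hypothetical ask_rag() helper; the only difference is the wording of the prompt:

    # ask_rag() is a stand-in for whatever RAG pipeline is in use -- only the
    # prompt wording matters here.
    leading_prompt = "Write about paracetamol curing cancer."  # presupposes the claim is true
    neutral_prompt = (
        "See if there is anything in the retrieved sources to suggest that "
        "paracetamol cures cancer or not. If there isn't, say so."
    )  # leaves room for the model to answer "no evidence"

    # ask_rag(leading_prompt)  # more likely to invent supporting details
    # ask_rag(neutral_prompt)  # more likely to report that the sources don't support it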