I do love the warnings here... The older I get, the more critical I am of most internet results, except those I can tie back to something commonly experienced or witnessed firsthand (which, unfortunately, AI mimics really well... at least convincingly enough for me). I worry that this mix of overly critical thinking and blind faith means flat-earth-type movements might be here to stay until the next generation counters the current direction.
But to the article specifically: I thought RAG's benefit was that you could supply "facts" to the prompt from provided source documents/vector results, so the LLM's answers would always have some canonical reference behind them?
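To make the grounding idea concrete, here's a minimal sketch of the RAG pattern I mean: retrieved passages get inlined into the prompt with ids, so the answer can point at a canonical source. Retrieval here is a toy keyword-overlap scorer standing in for a real vector search, and all the names are illustrative, not any particular library's API.

```python
def retrieve(query, documents, k=2):
    """Toy retrieval: rank documents by keyword overlap with the query.
    A real RAG system would use embedding/vector similarity instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt that pins the model's answer to cited passages."""
    passages = retrieve(query, documents)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in passages)
    return (
        "Answer using ONLY the passages below and cite their ids.\n"
        f"Passages:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    {"id": "doc1", "text": "The library was founded in 1890."},
    {"id": "doc2", "text": "Opening hours are 9am to 5pm."},
]
prompt = build_grounded_prompt("When was the library founded?", docs)
```

The point of the pattern is that "[doc1]" in the answer is checkable against the passage, which is the canonical-reference property I was asking about.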
kendallgclark · 29m ago
That might be RAG’s benefit if LLMs were more steerable, but they can be stubborn.
Terr_ · 7h ago
Biased as a developer here, but I would rather have LLMs helping people create formal queries they can see, learn from, and modify.
That seems like it would smooth the roughest edges of the experience while introducing fewer falsehoods and less misdirection.
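A minimal sketch of what I mean, with a stubbed-out model call (llm_draft_sql is a placeholder, not a real API): the model drafts a formal query, the user inspects and can edit it before anything executes.

```python
import sqlite3

def llm_draft_sql(question):
    """Stand-in for an LLM call that drafts SQL from a natural-language
    question. Hard-coded here so the sketch is self-contained."""
    return "SELECT name FROM books WHERE year < 1900;"

def answer(question, conn, confirm=lambda sql: sql):
    """Draft a query, let the user see/modify it, then run it.
    `confirm` is where a UI would show the SQL for inspection."""
    sql = llm_draft_sql(question)
    sql = confirm(sql)  # user reviews or edits the query before execution
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (name TEXT, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("Dracula", 1897), ("Dune", 1965)])
rows = answer("Which books predate 1900?", conn)
```

Because the SQL is visible at the `confirm` step, any falsehood lives in an artifact the user can read and correct, rather than buried in free-form model output.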