Did somebody expect otherwise?
Isn't virtually everything an LLM produces really just an "opinion" that is kinda/sorta based on its training data?
As far as I know, there is no automated mechanism for verifying "truth" in chatbot output.