Semantic drift: when AI gets the facts right but loses the meaning
realitydrift · 8/26/2025, 1:05:37 PM
Most LLM benchmarks measure accuracy and coherence, but not whether the intended meaning survives. I’ve been calling this gap fidelity: the preservation of purpose and nuance. Has anyone else seen drift-like effects in recursive generations or eval setups?
For example, if I say “the meeting is at 3pm” and a model rewrites it as “planning sessions are important,” the words are fine, the grammar is fine, but the purpose (to coordinate time) has been lost. That’s the gap I’m calling fidelity: whether the output still serves the same function, even if the surface form changes.
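To make the eval-setup side of this concrete, here’s a rough sketch of the kind of check I have in mind: rewrite a message recursively and track whether the pieces that carry its purpose survive each round. Everything here is hypothetical and not tied to any particular library: `paraphrase` stands in for whatever model call you’re testing, and the “intent slots” are just regexes for the parts of the message that have to survive (the time, the fact that it’s a meeting).

```python
# Sketch of a fidelity check over recursive rewrites (assumptions noted above).
import re
from typing import Callable

def fidelity_over_rewrites(
    text: str,
    intent_slots: dict[str, str],          # slot name -> regex that must still match
    paraphrase: Callable[[str], str],      # stand-in for your model call
    rounds: int = 5,
) -> list[dict]:
    """Recursively rewrite `text` and record which intent slots survive each round."""
    history = []
    current = text
    for i in range(rounds):
        current = paraphrase(current)
        surviving = {
            name: bool(re.search(pattern, current, re.IGNORECASE))
            for name, pattern in intent_slots.items()
        }
        history.append({"round": i + 1, "text": current, "slots": surviving})
    return history

if __name__ == "__main__":
    # Toy stand-in for a model: drops the concrete time after one rewrite,
    # mimicking the "planning sessions are important" failure above.
    def fake_paraphrase(s: str) -> str:
        return "Planning sessions are important." if "3pm" in s else s

    report = fidelity_over_rewrites(
        "The meeting is at 3pm.",
        intent_slots={"time": r"\b3\s?pm\b", "meeting": r"\bmeeting\b"},
        paraphrase=fake_paraphrase,
        rounds=3,
    )
    for row in report:
        print(row["round"], row["slots"], "->", row["text"])
```

The point isn’t the regexes themselves (a judge model or a slot extractor could fill that role), it’s that the metric is about function surviving the rewrite, not surface similarity to the original wording.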
Like your example: “the meeting is at 3pm” followed by _we got enough time_ intends one thing, while “the meeting is at 3pm” followed by _where the hell are you?_ intends something else. It is not obvious how to recover that intent without a lot of context (time, environment, emotion, etc.).