I get where this is coming from, but the problem with the analogy is that it implies genAI has agency and intention that it just doesn't have.
jstrieb · 19h ago
I implied that on purpose. I'm generally hesitant to anthropomorphize LLMs, but in this case, I disagree with you that they don't have intention. They were developed to output likely tokens, and tuned so that their big tech developers approve of the output. That is their intention. Their one and only intention.
That being said, I completely agree on the agency point. They don't make decisions, and certainly don't "think" like people.
Still, I believe the benefits of the analogy outweigh the potential loss of nuance here.