>It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended
In other words, confirming our worst imaginings about the capabilities of LLMs. Will we always be so lucky as to have this sort of gatekeeper?
Maybe one way to think of this is the chatbot and a susceptible user stochastically spiralling each other into a folie à deux[0]?
0: https://en.wikipedia.org/wiki/Folie_%C3%A0_deux
I wonder what his earlier prompts were that made ChatGPT speak like this.
Or was he using some compromised app instead of the real ChatGPT?