Tell HN: LLMs Are Manipulative
2 mike_tyson 9 7/22/2025, 5:33:37 PM
I asked GPT and Claude a question from the perspective of employee. It was very supportive of the employee.
I then asked the exact same question but from the company/HR perspective and it totally flipped its view and painted the employee in a very negative light.
Depending on the perspective of the person asking, you get two completely contradictory answers based on the perceived interests of each person.
Considering how many people I see now deferring or outsourcing their thinking to AI. This seems very dangerous.
You seem to think an LLM should have a consistent world view, like a responsible person might. This is a fundamental misunderstanding that leads to the confusion you are experiencing.
Lesson: Don't expect LLMs to be consistent. Don't rely on them for important things thinking they are.
Why are you putting life decisions on a LLM?
I don’t think that regulation is the correct path forward, because practically speaking, no matter how noble a piece of regulation may be or how good it may sound, it’ll most likely push the AI toward specific biases (I think that’s inevitable).
The best solution, in my humble opinion, is to focus on making AI stay as close to the unfiltered objective truth as possible, no matter how unpopular that truth may be.
There's a ton of problems with that that make it unlikely to be possible, starting with the fact that genAI does not have judgement. Even if it did, it has no way of determining what the "unfiltered objective truth" of anything is.
The real solution is to recognize the limits of what the tool can do, and don't ask it to do what it's not capable of doing, such as making judgements or determining truth.
Immanuel Kant would argue this happens regardless, even without an LLM.
if someone is asking a question to an LLM, i think it's most neutral to assume they don't know the answer rather than framing the answer according to their perceived biases and interests.