Tell HN: LLMs Are Manipulative

2 mike_tyson 9 7/22/2025, 5:33:37 PM
I asked GPT and Claude a question from the perspective of employee. It was very supportive of the employee.

I then asked the exact same question but from the company/HR perspective and it totally flipped its view and painted the employee in a very negative light.

Depending on the perspective of the person asking, you get two completely contradictory answers based on the perceived interests of each person.

Considering how many people I see now deferring or outsourcing their thinking to AI. This seems very dangerous.

Comments (9)

labrador · 21h ago
This is not surprising. The training data likely contains many instances of employees defending themselves and getting supportive comments. From Reddit for example. The training data also likely contains many instances of employees behaving badly and being criticized by people. Your prompts are steering the LLM to those different parts of the training.

You seem to think an LLM should have a consistent world view, like a responsible person might. This is a fundamental misunderstanding that leads to the confusion you are experiencing.

Lesson: Don't expect LLMs to be consistent. Don't rely on them for important things thinking they are.

theothertimcook · 22h ago
It gave you the answer it thought you wanted?
toomuchtodo · 21h ago
This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.
motorest · 19h ago
> This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.

Why are you putting life decisions on a LLM?

toomuchtodo · 19h ago
I refer to OP who is putting performance management inquires into the robot. How you treat your employees is a life decision for them.
jay-barronville · 21h ago
> This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.

I don’t think that regulation is the correct path forward, because practically speaking, no matter how noble a piece of regulation may be or how good it may sound, it’ll most likely push the AI toward specific biases (I think that’s inevitable).

The best solution, in my humble opinion, is to focus on making AI stay as close to the unfiltered objective truth as possible, no matter how unpopular that truth may be.

JohnFen · 21h ago
> The best solution, in my humble opinion, is to focus on making AI stay as close to the unfiltered objective truth as possible

There's a ton of problems with that that make it unlikely to be possible, starting with the fact that genAI does not have judgement. Even if it did, it has no way of determining what the "unfiltered objective truth" of anything is.

The real solution is to recognize the limits of what the tool can do, and don't ask it to do what it's not capable of doing, such as making judgements or determining truth.

bigyabai · 22h ago
> Depending on the perspective of the person asking, you get two completely contradictory answers

Immanuel Kant would argue this happens regardless, even without an LLM.

mike_tyson · 22h ago
well the risk is in feeding these natural biases i guess.

if someone is asking a question to an LLM, i think it's most neutral to assume they don't know the answer rather than framing the answer according to their perceived biases and interests.