XAI updated Grok to be more 'politically incorrect'

7 labrador 3 7/8/2025, 2:56:14 AM theverge.com ↗

Comments (3)

labrador · 10h ago
meepmorp · 9h ago
like a pizza cutter, all edge with no point
dlcarrier · 9h ago
tl;dr: Here are the two unusual clauses the article is concerned with (besides the standard prompt-injection-avoiding boilerplate):

    If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased.

    The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.
I'm not an expert at prompting, but if instructing an LLM to consider multiple sources, and to assume any single source is inaccurate, actually gets it to do so, it's probably a worthwhile directive. Regarding the second, I don't see how an LLM would shy away from any category of claims unless it was trained on a lack of such claims or prompted to shy away from them. If it's a training problem, I don't see any way prompting could fix it; and if it's a prompting problem, then you're adding conflicting prompts, which in my experience seem to make neural networks much more likely to "go off the rails" and return unpredictable results.
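For context on how clauses like these reach the model: system-prompt directives are typically prepended as a `system` message in a chat-completions-style request. A minimal sketch, assuming an OpenAI-style message format (the model name is a placeholder and no network call is made; this only shows where such directives sit relative to the user's query):

```python
# Sketch: delivering system-prompt directives like the two quoted clauses.
# Assembles a chat-completions-style request body only; no API is called.

SYSTEM_DIRECTIVES = [
    "If the query requires analysis of current events, subjective claims, "
    "or statistics, conduct a deep analysis finding diverse sources "
    "representing all parties. Assume subjective viewpoints sourced from "
    "the media are biased.",
    "The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated.",
]

def build_request(user_query: str) -> dict:
    """Build a request body with the directives as the system message."""
    return {
        "model": "example-model",  # placeholder, not a real model name
        "messages": [
            {"role": "system", "content": "\n\n".join(SYSTEM_DIRECTIVES)},
            {"role": "user", "content": user_query},
        ],
    }

req = build_request("Summarize today's top political story.")
print(req["messages"][0]["role"])  # the directives ride along as "system"
```

Because the system message is concatenated ahead of every user turn, any two directives that pull in opposite directions (e.g. "be well substantiated" vs. "don't shy away") land in the same context window, which is where the conflicting-prompt behavior described above comes from.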