xAI issues apology for Grok's antisemitic posts

20 geox 14 7/12/2025, 11:07:32 PM nbcnews.com ↗

Comments (14)

thatguymike · 4h ago
Oh I see, they set ‘is_mechahitler = True’, easy mistake, anyone could do it, probably one of those rapscallion ex-OpenAI employees who hadn’t fully absorbed the culture.
DoesntMatter22 · 57m ago
Reddit has now fully leaked into hacker news
bcraven · 10m ago
Please check the HN guidelines, particularly the final one.
queenkjuul · 46m ago
Now?
ashoeafoot · 1h ago
xAi write an apology for whatever posts offend if NrOfOffendis in Graph > 2
hendersoon · 6h ago
Cool, cool.

Now will they apologize for Grok 4 (the new one, not the MechaHitler Grok 3 referenced in this article) using Musk's tweets as primary sources for every request, explain how that managed to occur, and commit to not doing that in the future?

freedomben · 6h ago
> “We have removed that deprecated code and refactored the entire system to prevent further abuse. The new system prompt for the @grok bot will be published to our public github repo,” the statement said.

Love them or hate them, or somewhere in between, I do appreciate this transparency.

mingus88 · 5h ago
It’s a kinda meaningless statement, tbh.

Pull requests to delete dead code or refactor are super common. It’s maintenance. Bravo.

What was actually changed, I wonder?

And the system prompt is important, and publishing it is good, but clearly the issue is the training data and the compliance with user prompts that made it a troll bot.

So should we expect anything different moving forward? I'm not expecting it. Musk's character has not changed and he remains the driving force behind both companies.

loloquwowndueo · 6h ago
If they don't publish it, the prompt will just get leaked by someone manipulating grok itself within hours of release, and then picked apart and criticized. It's not about transparency but about claiming to be transparent to save face.
queenkjuul · 45m ago
It's not transparency, it's ass-covering techno babble.
harimau777 · 6h ago
Is there any legal obligation for them not to lie about the prompt?
JumpCrisscross · 6h ago
If they lie and any harm comes from it, yes, that increases liability.
mingus88 · 4h ago
Every LLM seems to have a prominent disclaimer that results can be wrong, hallucinations exist, verify the output, etc

I'd wager it's pretty much impossible to prove in court that whatever harm occurred was due to intent by xAI, or even establishes liability, given all the disclaimers.

MangoToupe · 5h ago
Liability for what? Have they been hit with a defamation suit or something?