xAI issues apology for Grok's antisemitic posts

24 points by geox | 7/12/2025, 11:07:32 PM | nbcnews.com ↗

Comments (14)

hendersoon · 13h ago
Cool, cool.

Now will they apologize for Grok 4 (the new one, not the MechaHitler Grok 3 referenced in this article) using Musk's tweets as primary sources for every request, explain how that managed to occur, and commit to not doing that in the future?

ashoeafoot · 8h ago
xAI: write an apology for whatever posts offend if NrOfOffended in Graph > 2
thatguymike · 11h ago
Oh I see, they set ‘is_mechahitler = True’, easy mistake, anyone could do it, probably one of those rapscallion ex-OpenAI employees who hadn’t fully absorbed the culture.
DoesntMatter22 · 8h ago
Reddit has now fully leaked into hacker news
bcraven · 7h ago
Please check the HN guidelines, particularly the final one.
queenkjuul · 8h ago
Now?
freedomben · 14h ago
> “We have removed that deprecated code and refactored the entire system to prevent further abuse. The new system prompt for the @grok bot will be published to our public github repo,” the statement said.

Love them or hate them, or somewhere in between, I do appreciate this transparency.

mingus88 · 12h ago
It’s a kinda meaningless statement, tbh.

Pull requests to delete dead code or refactor are super common. It’s maintenance. Bravo.

What was actually changed, I wonder?

And the system prompt is important, and publishing it is good, but clearly the issue is the training data and the compliance with user prompts that made it a troll bot.

So should we expect anything different moving forward? I'm not expecting it. Musk's character has not changed, and he remains the driving force behind both companies.

loloquwowndueo · 14h ago
If they don’t, the prompt will just get leaked by someone manipulating Grok itself within hours of release, and then picked apart and criticized. It’s not about transparency but about claiming to be transparent to save face.
harimau777 · 13h ago
Is there any legal obligation for them not to lie about the prompt?
JumpCrisscross · 13h ago
If they lie and any harm comes from it, yes, that increases liability.
mingus88 · 12h ago
Every LLM seems to have a prominent disclaimer that results can be wrong, hallucinations exist, verify the output, etc

I’d wager it’s pretty much impossible to prove in court that whatever harm occurred was intended by xAI, or even that xAI is liable at all, given all the disclaimers.

MangoToupe · 12h ago
Liability for what? Have they been hit with a defamation suit or something?
queenkjuul · 8h ago
It's not transparency, it's ass-covering techno babble.