Only juvenile prompting and reporting finds value in "what nasty thing did the LLM say about Musk."
What's more interesting is the censorship on the input side. I wanted Grok to analyse a wood engraving from 1861, Gustave Doré's "Harpies in the Forest of Suicides". Wouldn't do it. Grok's content policy filter refused to accept the upload because..."boobies". So I pixelated the wood-engraved breasts and re-uploaded. This time it worked. [1]
What about Michelangelo's statue of David? Denied. Instead of pixelating, I cut out David's genitals and placed them on his leg, leaving a genital-shaped hole in the groin area. [1] Bingo. Image accepted. Genital location matters. Now wasn't this more interesting than "Musk-Bad-Man-Says-Grok"?
[1] https://imgur.com/a/pIdBJXm
I mean, no? That's just general LLM safety nonsense, and very old news at this point.
The thing in the linked article is significantly more unusual.