Are AI filters becoming stricter than society itself?
While experimenting with digital art and AI tools, I noticed how aggressively filters block historical, political, or artistic imagery. I wrote about how this impacts art, research, and cultural memory. Curious how others here see this balance between safety and censorship.
https://tsevis.com/censorship-ai-and-the-war-on-context
The initial idea was good and very much needed, to eliminate (or at least heavily reduce) long-established racism and bigotry.
But the problem is that a lot of people started abusing it as a virtue-signalling mechanism and/or a way to justify their jobs, leading to absurdities like renaming the Git “master” branch.
I suspect AI safety is the same. There’s a grain of truth and usefulness to it, but no AI safety person will ever voluntarily declare “we figured out how to make models safe, my job here is done”, so they always have to push the envelope, even to ridiculous extremes.
Many of the arguments against these filters just seem to come down to "I want to be a jerk".
As the Greeks said, “Μέτρον ἄριστον”: balance is best. We don’t need to provoke just to indulge our worst instincts, but we also need tolerance for expressions that aren’t perfectly curated.
That said, my article isn’t really about political correctness. It’s about low-quality AI filters that can’t read context, and the corporate shortcuts that rely on them. My point isn’t to rant or complain, but to suggest a new era of content moderation: one that’s smarter, fairer, and more democratic.
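To make the “can’t read context” problem concrete, here’s a minimal sketch of keyword-level moderation (a toy illustration of mine, with a hypothetical blocklist, not any vendor’s actual filter): it matches words, not meaning, so art-historical text gets blocked right alongside genuinely harmful text.

    # Toy keyword filter: matches words, not meaning.
    # BLOCKLIST is hypothetical, not any real vendor's list.
    BLOCKLIST = {"execution", "nude", "massacre"}

    def naive_filter(text: str) -> bool:
        """Block the text if any blocklisted word appears, regardless of context."""
        words = {w.strip(".,'\"").lower() for w in text.split()}
        return bool(words & BLOCKLIST)

    samples = [
        "Goya's 'The Third of May 1808' depicts an execution by firing squad.",
        "Classical sculpture often portrays the nude human form.",
    ]

    for s in samples:
        # Both art-historical sentences are flagged, despite harmless intent.
        print(naive_filter(s), "-", s)

A context-aware moderator would have to score the whole passage rather than individual tokens, which is roughly the shift the article argues for.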