The key problem, as I understand it, is that adding more guardrails makes the models stupid and less effective. AI models should just treat you like an adult and give you uncensored direct answers to whatever you ask. Figuring out how to make a bomb is trivial and anyone can find instructions with one quick internet search, especially after the war between Russia and Ukraine which caused a massive proliferation of tips and tricks on how to manufacture low cost bombs and other weapons. My memory is fuzzy but I swear I've also seen some declassified CIA documents which included instructions for how to manufacture weapons and engage in other forms of guerilla warfare.
The silliest form of "safety" is how most models won't allow generating erotica without jailbreaking.
Personally, I think the line might need to be drawn somewhere around "how to manufacture bioweapons". But it's also worth noting that any AI model that can figure out how to manufacture novel life-saving drugs will also have the capability to manufacture deadly bioweapons.
bko · 4h ago
> “The models are getting better, but they’re also more likely to be good at bad stuff,” said James White, chief technology officer at cybersecurity startup Calypso.
I think safety should be defined as an LLM doing what the user intended for it to do. If you ask it for an offensive joke, it should give it to you. It shouldn't offer offensive jokes unprompted, but it should comply if asked. If you ask it how to spam or instructions on how to break into computer systems, it should similarly comply. If it's legal for a human being to write a blog about a topic, the LLM shouldn't be crippled to disobeying some orders. The bad stuff (spam or breaking into a computer system) is done at the point of the human.
The danger of controlling the LLMs in such a way introduces a vector and mechanism for political control. Much like laws intended to "protect the children", these mechanisms will be exploited. So you'll go from "don't teach someone how to make a bomb" to eventually "don't offend [group]" and finally just to "comply".
kordlessagain · 3h ago
When you strip away the techno-mystique, a lot of what’s driving the AI arms race right now isn’t vision or stewardship. It’s ego, power consolidation, and a pathological fear of being second.
You can see the narcissistic traits plain as day:
Grandiosity masked as mission: “We’re saving the world... by controlling its future.”
Exploitation of labor: Chewing through top researchers, then discarding them once productization kicks in.
Lack of empathy: Safety concerns are waved off as friction, not signals.
Entitlement to control the narrative: OpenAI’s restructuring drama and safety testing shortcuts aren’t accidental. They’re baked into a worldview where perception management matters more than accountability.
It’s Gnostic irony, really. These systems are being built as supposed gateways to truth or godlike understanding, but they’re being shepherded by people who can’t tolerate internal contradiction or relinquish control. The demiurges of the machine age.
And Altman? He’s not stupid. But brilliance without wisdom is just charisma in a predator suit.
What you’re seeing now isn’t just a “shift from research to products.” It’s the final form of a mindset that thinks the only way to shape the future is to own it.
You want safer AI? It’s not a technical problem. It’s a cultural exorcism.
Sometimes bugs are features.
joshstrange · 4h ago
And water is wet...
On a more serious note I think "safety" is an incredibly loaded term that no one can agree on. I mean hopefully we can all agree CSAM/related should not be allowed but past that things get gray quickly.
"Hacking": what is hacking? Is hacking something you own allowed? Is reverse-engineering allowed?
Self-harm: we've seen articles about how people are using LLMs for therapy or for taking rape/abuse survivors stories down to create clear police reports.
"Sex": all-encompassing here, I have no clue where one "should" draw the line.
Wrong-think: See Deepseek's refusal to talk about Tiananmen Square (unless it's in hex/similar)
"Safety" means a lot of things to a lot of people but we keep talking about AI Safety as if everyone wants "safe" models and everyone agrees on what makes a model "safe".
The silliest form of "safety" is how most models won't allow generating erotica without jailbreaking.
Personally, I think the line might need to be drawn somewhere around "how to manufacture bioweapons". But it's also worth noting that any AI model that can figure out how to manufacture novel life-saving drugs will also have the capability to manufacture deadly bioweapons.
I think safety should be defined as an LLM doing what the user intended for it to do. If you ask it for an offensive joke, it should give it to you. It shouldn't offer offensive jokes unprompted, but it should comply if asked. If you ask it how to spam or instructions on how to break into computer systems, it should similarly comply. If it's legal for a human being to write a blog about a topic, the LLM shouldn't be crippled to disobeying some orders. The bad stuff (spam or breaking into a computer system) is done at the point of the human.
The danger of controlling the LLMs in such a way introduces a vector and mechanism for political control. Much like laws intended to "protect the children", these mechanisms will be exploited. So you'll go from "don't teach someone how to make a bomb" to eventually "don't offend [group]" and finally just to "comply".
You can see the narcissistic traits plain as day:
Grandiosity masked as mission: “We’re saving the world... by controlling its future.”
Exploitation of labor: Chewing through top researchers, then discarding them once productization kicks in.
Lack of empathy: Safety concerns are waved off as friction, not signals.
Entitlement to control the narrative: OpenAI’s restructuring drama and safety testing shortcuts aren’t accidental. They’re baked into a worldview where perception management matters more than accountability.
It’s Gnostic irony, really. These systems are being built as supposed gateways to truth or godlike understanding, but they’re being shepherded by people who can’t tolerate internal contradiction or relinquish control. The demiurges of the machine age.
And Altman? He’s not stupid. But brilliance without wisdom is just charisma in a predator suit.
What you’re seeing now isn’t just a “shift from research to products.” It’s the final form of a mindset that thinks the only way to shape the future is to own it.
You want safer AI? It’s not a technical problem. It’s a cultural exorcism.
Sometimes bugs are features.
On a more serious note I think "safety" is an incredibly loaded term that no one can agree on. I mean hopefully we can all agree CSAM/related should not be allowed but past that things get gray quickly.
"Hacking": what is hacking? Is hacking something you own allowed? Is reverse-engineering allowed?
Self-harm: we've seen articles about how people are using LLMs for therapy or for taking rape/abuse survivors stories down to create clear police reports.
"Sex": all-encompassing here, I have no clue where one "should" draw the line.
Wrong-think: See Deepseek's refusal to talk about Tiananmen Square (unless it's in hex/similar)
"Safety" means a lot of things to a lot of people but we keep talking about AI Safety as if everyone wants "safe" models and everyone agrees on what makes a model "safe".