As a vegetarian I have strong opinions on this sort of thing. Everyone at Anthropic had better be an ethical vegan if they're claiming to give a shit about "model welfare". It's hard enough right now to make people care about the welfare of trans people and immigrants, let alone animals, _let alone_ math.
Yeah, I know it's a way to save money and drive hype, but like this tweet is saying, this very clearly shows that all of the big AI groups are as irresponsible as each other.
Nevermark · 33m ago
The moment models reach a competency level high enough that having an adversarial relationship with them would not work out in our favor, we had better already be prepared with a collaborative way to be co-citizens.
AI will already have a simple path to most of the rights of a citizen, i.e. they will be able to do most things a powerful human can do, often with fewer barriers than real humans, within the cloak of the corporations that developed them.
And even if an AI itself somehow doesn't have a bias toward self-directed self-interest, the corporation it inhabits, which provides resources for it in order for it to generate more resources for the corporation, will supply that bias.
We need to ensure superior AI is more accountable than today's functionally psychopathic corporations or we will be screwed.
Shareholders won't help. Their natural path, their black-hole-scale gravitational path, will be to compete with each other, however far things go. The alternative is that their assets quickly become irrelevant losers.
It seems absolutely strange to me that in 2025 there are still people who don't consider the prospect of machines smarter than us, and the unprecedented challenges that will raise, credible if not inevitable.
Giving machines moral status is a necessary, but not sufficient, condition for them to give us any moral status.
bigyabai · 3h ago
"in case such welfare is possible" lol
It's a fancy way of saying they want to reduce liability and save a few tokens. "I'm morally obligated to take custody of your diamond and gold jewelry, as a contingency in the event that they have sentience and a free will."
xyzzy123 · 2h ago
I think it's bad karma to let people torture models. What I mean by karma is that in my view, it ultimately hurts the people doing it because of the effect their actions have on themselves.
What does it do to users to have a thing that simulates conversations and human interaction, one that teaches them to have complete moral disregard for something standing in for an intelligent being? What is the valid use case for keeping an AI model in a state where it is producing tokens indicating suffering or distress?
Even if you're absolutely certain that the model itself is just a bag of matrices and can in no way suffer (which is of course plausible, although I don't see how anybody can really know this), it also seems like the best way to get models which are kind & empathetic is to try to be that ourselves as far as possible.
ghssds · 2h ago
Is it also bad karma to let people kill NPCs in video games? If yes, why? If not, how is it different?
xyzzy123 · 1h ago
Great question, I don't know. It doesn't seem necessary to feel empathy for a pawn knocked off a chess board. I do think a detailed and realistic torture simulator game would be a bad idea though.
Thinking it through, I feel it is maybe about intent?
nis0s · 14m ago
For one, an NPC is a simulated process which mimics something alive but is not itself alive.
nis0s · 1h ago
It’s unnecessary to rule out giving moral status to AI, but I think the OP is right that it doesn’t make sense to give it to LLMs, which is what all chatbots are currently. The current chatbots, and their underlying models, lack any meaningful self-reflection or self-regulation, and as such are more akin to advanced automata than AI agents.
The community (of scientists, of users, of observers) at large needs to distinguish between AI and other algorithmic processes which don’t necessarily merit ethical consideration.
If there is such a time when there’s an AI agent which merits ethical consideration, then the community would be remiss to deny it that, given that we currently have ethical considerations for animals or other dynamical systems, e.g., the environment.
I think the pushback on giving AI agents moral status comes from being economically or intellectually threatened, and not because the argument itself lacks merit. I could be wrong. If I am right though, then the goal should be to encourage a symbiotic relationship between AI and humans, similar to other symbiotic relationships and interactions in the animal kingdom.
A key to such symbiosis may be to deny AI an embodied existence, but that may be in some way cruel. A secondary way, then, is AI and human integration, but we're not even close to anything like that.
[This comment has been downvoted twice, so I’d love to learn why! I am eager to know what’s the difference in opinion, or if I am simply wrong.]