As a vegetarian I have strong opinions on this sort of thing. Everyone at Anthropic had better be an ethical vegan if they're claiming to give a shit about “model welfare”. It's hard enough right now to make people care about the welfare of trans people and immigrants, let alone animals, _let alone_ math.
Yeah, I know it's a way to save money and drive hype, but as this tweet says, it very clearly shows that all of the big AI groups are as irresponsible as each other.
SupatMod · 2h ago
I think Big Tech is trying hard to build superintelligent AI soon, and it's hard to avoid the questions Anthropic brings up. At some point we'll have to talk about whether AI should have moral rights, even if it isn't 'conscious' the way humans are. We need to figure out what AI consciousness would even mean. If AI can understand meaning deeply in its own digital way (not like a human's) and starts to know itself, or even to know that it knows itself, rather than just imitating like today's AI models, isn't that kind of like human self-awareness "รู้ตัว" and meta-awareness "สติ", even if it's physically different?
goku12 · 17m ago
It's too early in the development of AI technology to start asking questions about their self-awareness and sentience. We don't yet have a reasonable model that explains our own, or that of other animals, let alone that of machines. People are understandably very excited about LLMs given how many deductions they make. But in my very subjective personal experience, they don't show anything approaching sentience or self-awareness. Their interactions are dry and devoid of the liveliness that human interactions exhibit. They have at best what can be described as a simplistic mechanical approximation of a personality - one that lacks the depth, nuance and imperfections of a real human or animal personality. In my subjective assessment, the imminence of intelligence that can mimic biological intelligence is being extremely exaggerated and overhyped. It could well be decades away, even in the best case. The need for machine rights is equally far away.
The actual reason behind these claims, I believe, is to justify the things they do with these models. For example, didn't they argue that fair use applies to training on copyrighted materials without permission, because training is not like other forms of digital reproduction? Imagine how far they could push that argument if AI sentience were recognized. It's just an extension of their greedy agenda.
Now, going off on a tangent: it's surprising that people have AI girlfriends and boyfriends. Trying to make an emotional connection with them is really off-putting because of how unnatural they feel - even without the prior knowledge that it's an AI, and no matter how hard they try to mimic a romantic human interaction. Dogs do an infinitely better job of making emotional connections with humans, without uttering a single word.
Nevermark · 2h ago
The moment models reach a competency level high enough that an adversarial relationship with them would not work out in our favor, we had better already be prepared with a collaborative way to be co-citizens.
AI will already have a simple path to most of the rights of a citizen, i.e. they will be able to do most things a powerful human can do, often with fewer barriers than real humans, within the cloak of the corporations that developed them.
And even if an AI itself somehow doesn't have a bias toward self-directed self-interest, the corporation it inhabits, which provides it resources so that it can generate more resources for the corporation, will provide that bias.
We need to ensure superior AI is more accountable than today's functionally psychopathic corporations or we will be screwed.
Shareholders won't help. Their natural path, their black-hole-scale gravitational path, will be to compete with each other, however far things go. The alternative is that their assets quickly become irrelevant losers.
It seems absolutely strange to me that in 2025 there are still people who don't consider the prospect of machines smarter than us, and the unprecedented challenges that would raise, credible if not inevitable.
Giving machines moral status is a necessary, but not sufficient, condition for them to give us any moral status.
bigyabai · 5h ago
"in case such welfare is possible" lol
It's a fancy way of saying they want to reduce liability and save a few tokens. "I'm morally obligated to take custody of your diamond and gold jewelry, as a contingency in the event that they have sentience and a free will."
xyzzy123 · 4h ago
I think it's bad karma to let people torture models. What I mean by karma is that in my view, it ultimately hurts the people doing it because of the effect their actions have on themselves.
What does it do to users to have a thing that simulates conversation and human interaction, and to learn to treat it with complete moral disregard, when it stands in for an intelligent being? What is the valid use case for keeping an AI model in a state where it produces tokens indicating suffering or distress?
Even if you're absolutely certain that the model itself is just a bag of matrices and can in no way suffer (which is of course plausible, although I don't see how anybody can really know this), it also seems like the best way to get models which are kind & empathetic is to try to be that ourselves as far as possible.
ghssds · 4h ago
Is it also bad karma to let people kill NPCs in video games? If yes, why? If not, how is it different?
xyzzy123 · 3h ago
Great question, I don't know. It doesn't seem necessary to feel empathy for a pawn knocked off a chess board. I do think a detailed and realistic torture simulator game would be a bad idea though.
Thinking it through I feel it is maybe about intent?
nis0s · 2h ago
For one, an NPC is a simulated process which mimics something alive but is not itself alive.
bigyabai · 1h ago
An LLM is a simulated process that mimics a byproduct of certain sapient beings. How are NPCs markedly different?
nis0s · 1h ago
Well, semen is literally the byproduct of sapient beings. What moral and ethical considerations are given to it, or should be given to it?
LLMs are advanced automata which lack self-regulation and self-reflection, much like NPCs. NPCs cannot exist outside the rules set out for them, and neither can LLMs.
I’ll add that semen is in fact a better candidate for moral and ethical consideration given that it can produce conscious beings. As soon as NPCs and LLMs do that, please give them moral status.
ghssds · 47m ago
I don't even know if consciousness can be achieved from computation. Consider xkcd 505 [0]. Would you consider the inhabitants of that simulated universe conscious?
0: https://xkcd.com/505/
I'm not sure if the question is answerable given that consciousness is not well defined enough for everyone to agree on whether, say, a fly is conscious.
Instead, maybe we can think about the system that comprises us, the models, Anthropic, society at large, etc., and ask which kinds of actions lead to better moral/ethical outcomes for this larger system. I also believe it helps to consider specific situations rather than to ask whether x or y is "worthy" of moral consideration.
As for the NPCs-in-games thing, I'm honestly still unpacking it, but I genuinely think no harm is done. The reason is that the intent of the user is not to cause harm or suffering to another "being". People seem surprisingly robust at distinguishing between fantasy and reality in that scenario.
We can notice that drone operators get PTSD / moral injury at fairly high rates while FPS players don't, even though at a surface level the pixels are the same.
I do think a drone operator who believed they were killing, even though the whole thing was secretly a simulation, could be injured by "killing" an NPC.
nis0s · 40m ago
I can’t comment on the comic because I can’t read it well on my phone. But I am of the view that consciousness can be achieved through silicon-process computation if there are emergent properties which can convert discrete feature processing into perceptual experience. Therefore, I don’t think AI can ever achieve consciousness in a disembodied form (brain in a vat hypothesis), as I don’t think that’s how higher-level consciousness works.
Okay, I read the comic on my computer. If agents in the simulated universe possess higher-level consciousness, then they're no different from us. Maybe the timescale of their perceptual experience is different from ours. We need to be careful, though, about fooling ourselves into thinking there is a conscious being where there might just be a faithful imitation. How do you tell the difference? I think this is a useful concept, even though I think it has its flaws:
https://en.wikipedia.org/wiki/Philosophical_zombie
It’s unnecessary to rule out giving moral status to AI, but I think the OP is right that it doesn’t make sense to give it to LLMs, which is what all chatbots are currently. The current chatbots, and their underlying models, lack any meaningful self-reflection or self-regulation, and as such are more akin to advanced automata than AI agents.
The community (of scientists, of users, of observers) at large needs to distinguish between AI and other algorithmic processes which don’t necessarily merit ethical consideration.
If there is such a time when there’s an AI agent which merits ethical consideration, then the community would be remiss to deny it that, given that we currently have ethical considerations for animals or other dynamical systems, e.g., the environment.
I think the pushback on giving AI agents moral status comes from being economically or intellectually threatened, and not because the argument itself lacks merit. I could be wrong. If I am right though, then the goal should be to encourage a symbiotic relationship between AI and humans, similar to other symbiotic relationships and interactions in the animal kingdom.
A key to such symbiosis may be to deny AI an embodied existence, but that may be in some way cruel. A secondary way, then, is AI and human integration, but we're not even close to anything like that.
[This comment has been downvoted twice, so I'd love to learn why! I am eager to know what the difference in opinion is, or whether I am simply wrong.]