I think the basic argument in the essay is wrong. Simplifying a bit, it seems to go:
AI being conscious will lead to human consciousness being devalued; therefore it's wrong.
But firstly, future AI probably will be conscious, in the sense of being aware of thoughts, feelings, etc. And secondly, consciousness is a poor basis for morality: cows are conscious but I eat burgers; humans are conscious but that didn't stop assorted atrocities. Human values should not depend on that stuff.
I think considering AI welfare in the future will be comparable to considering animal welfare now. More humane than not doing so.
jrm4 · 23h ago
Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.
In full agreement with OP; there is just about no justifiable basis to begin to ascribe consciousness to these things in this way. Can't think of a better use for the word "dehumanizing."
gavinray · 23h ago
What is so absurd about the idea of ascribing consciousness to some potential future form of AI?
We don't understand consciousness, but we've an idea that it's an emergent phenomenon.
Given the recent papers about how computationally dense our DNA is, and the computing capacity of our brains, is it so unreasonable to assume that a sufficiently complex program running on non-organic matter could give rise to consciousness?
The difference to me seems mostly one of computing mediums.
AIPedant · 23h ago
The problem is that current AI has nothing in common with consciousness as found in (say) cats and dogs, whatever that might be - no robot is even close to being as conscious as a cockroach - yet human consciousness seems to have great overlap with consciousness in nonhuman animals. The tiny fragment of human consciousness that appears to overlap with LLMs should be called something else, maybe “virtual sapience” as exhibited in LLMs. (The overarching difficulty here is we don’t know enough about consciousness/sentience/sapience to define them precisely.)
tim333 · 1h ago
This is common in arguments about AI. One person asks why future AI couldn't be conscious; the next replies that the problem is current LLMs aren't conscious. Future AI and current LLMs are not the same thing.
Future AI will no doubt be able to be conscious for practical purposes.
bko · 23h ago
Because they're numbers represented in digital form. In inference, you're doing simple math with those numbers. So what's alive, the numbers? Maybe the silicon holding the numbers? What if we print them out? Does the book become conscious?
Even if you're a materialist, surely you think there is a difference between a human brain and a brain on a lab table.
You take a dead person's brain, run some current through it and it jumps. Do you believe this is equivalent to a living human being?
TeMPOraL · 17h ago
> So what's alive, the numbers? Maybe the silicon holding the numbers? What if we print them out? Does the book become conscious?
Indeed, those are exactly the questions you need to ponder.
It might also help to consider that the human brain itself is made of cells, and cells are made of various pieces that are all very obviously machines; we're able to look at, identify, and catalogue those pieces, and as complex as molecular nanotech can be, the individual parts are very obviously not alive by themselves, much less thinking or conscious.
So when you yourself are engaging in thought, such as when writing a comment, what exactly do you think is alive? The proton pumps? Cellular walls? The proteins? If you assemble them into chemically stable blobs, and have them glue to each other, does the resulting brain become conscious?
> Even if you're a materialist, surely you think there is a difference between a human brain and a brain on a lab table.
Imagine I'm so great a surgeon that I can take a brain out of someone, keep it on a lab table for a while, and then put it back in, and have that someone recover (at least well enough that they can be interviewed before they die). Do you think this is fundamentally impossible? Or do you believe the human brain somehow transmutes into a "brain on a lab table" as it leaves the body, and then transmutes back when plugged back in? Can you describe the nature of that process?
> You take a dead persons brain, run some current through it and it jumps. Do you believe this equivalent to a living human being?
Well, if you apply the current precisely enough, sure. Just because we can't currently demonstrate that on a human brain (though we're getting pretty close to it with animals) doesn't mean the idea is unsound.
ACCount36 · 22h ago
What's the fundamental, inescapable difference between "numbers represented in digital form" and "jelly made of wet flesh crammed into an oversized monkey skull"?
Why should one be more valid than the other?
const_cast · 19h ago
Because we decide which one is valid, and we are also part of the comparison being made.
Yes, it's almost a perfect conflict of interest. Luckily that's fine, because we're us!
TeMPOraL · 16h ago
Not here, not as phrased. They're asking a question about physical reality; the answer there is that there fundamentally is no difference. Information and computation are independent of the medium, by the very definition of the concept.
There is a valid practical difference, which you present pretty much perfectly here. It's a conflict of interest. If we can construct a consciousness in silico (or arguably in any other medium, including meat - the important part is it being wrought into existence with more intent behind it than it being a side effect of sex), we will have moral obligations towards it (which can be roughly summarized as recognizing AI as a person, with all moral consequences that follow).
Which is going to be very uncomfortable for us, as the AI is by definition not a human being made by the natural process human beings are made by, so we're bound to end up in conflict over needs, desires, resources, morality, etc.
My favorite way I've seen this put into words: imagine we construct a sentient AGI in silico, and one day decide to grant it personhood, and with it, voting rights. Because of the nature of digital medium, that AGI can reproduce near-instantly and effortlessly. And so it does, and suddenly we wake up realizing there's a trillion copies of that AGI in the cloud, each one morally and legally an individual person - meaning, the AGIs as a group now outvote humans 100:1. So when those AGIs collectively decide that, say, education and healthcare for humans is using up resources that could be better spent on making paperclips, they're gonna get their paperclips.
bko · 14h ago
Maybe an unpopular answer, but: a soul. Agency.
This materialist world view is very dangerous and could lead to terrible things if you believe numbers in a computer and a human being are equivalent.
bbor · 23h ago
Yes, if you take a dead person's brain and run current through it continuously so that it can direct a body and produce novel thoughts, then that is equivalent to a living human being.
Your brain is ultimately just numbers represented in neuronal form. What's conscious, the neurons?
pearlsontheroad · 22h ago
This level of materialism is soul-crushing :)
bbor · 19h ago
Thanks :) I always take the opportunity to crush souls when I can, since they don't exist in Reality nor Actuality.
FWIW I'm a hardcore idealist, but in the way it was originally posed, not in the quasi-mystical way the Hegelians corrupted it into.
dist-epoch · 20h ago
What do you think you are? Just a bunch of atoms following the math of physics.
And what are those atoms made of? Just a bunch of quantum numbers in quantum fields following math equations.
bko · 14h ago
I think I'm a human being with a soul
jrm4 · 23h ago
Nothing at all, if all you're doing is speculating within an academic context.
This appears to be more than that; these are steps in the direction of law and policy.
Bigpet · 23h ago
It reeks of some navel-gazing self-aggrandizement. I bet not even half the people doing the hand-wringing over how some matrix multiplications might feel are vegan or regularly spare a thought for how their own or their companies' consumption behavior indirectly leads to real human suffering.
It's just so absurd how narrow their focus on preventing suffering is. I almost can't imagine a world where their concern isn't coming from a disingenuous place.
JustinCS · 23h ago
I'm not highly concerned but I think there is merit in at least contemplating this problem. I believe that it would be better to reduce suffering in animals, but I am not vegan because the weight of my moral concern for animals does not outweigh my other priorities.
I believe that it doesn't really matter whether consciousness comes from electronics or cells. If something seems identical to what we consider consciousness, I will likely believe it's better to not make that thing suffer. Though ultimately it's still just a consideration balanced among other concerns.
Bigpet · 22h ago
I too think there is merit in exploring to what degree consciousness can be approximated by or observed in computational systems of any kind, including neural networks.
But I just can't get over how fake and manipulative the framing of "AI welfare" or concern over suffering feels.
JustinCS · 22h ago
That's reasonable. I certainly believe that there are many fake and manipulative people who say what's best for their personal gain, perhaps even the majority. But I still think it's reasonable to imagine that there are some people who are genuinely concerned about this.
hobs · 23h ago
It's not absurd to do it to a potential future AI; it's absurd to do it to the face of the man in the moon.
ahf8Aithaex7Nai · 14h ago
We're doing the homunculus again. Whether you wank into a test tube and add a bit of soil and grass, or sew together parts of a corpse and connect them to electricity: so far, every prospect of fulfilling this dream has turned out to be a delusion. Why should it be any different with the latest manifestation, this time in computational form?
The AGI drivel from people like Sam Altman is all about getting more VC money to push the scam a little further. ChatGPT is nothing more than a better Google. I'm happy to be proven wrong, but so far I see absolutely no potential for consciousness here. Perhaps we should first clarify whether dolphins and elephants are equipped with it before we do ChatGPT the honor.
wat10000 · 23h ago
It is absurd, but consciousness is fundamentally absurd.
Why would doing a bunch of basic arithmetic produce an entity that can experience things the way we do? There's no connection between those two concepts, aside from the fact that the one thing we know that can experience these things is also able to perform computation. But there's no indication that's anything other than a coincidence, or that the causation doesn't run in reverse, or from some common factor. You might as well say that electric fences give rise to cows.
On the other hand, what else could it be? Consciousness is clearly in the brain. Normal biological processes don't seem to do it, it's something particular about the brain. So it's either something that only the brain does, which seems to be something at least vaguely like computation, or the brain is just a conduit and consciousness comes from something functionally like a "soul." Given the total lack of evidence for any such thing, and the total lack of any way to even rigorously define or conceptualize a "soul," this is also absurd.
Consciousness just doesn't fit with anything else we know about the world. It's a fundamental mystery as things currently stand, and there's no explanation that makes a bit of sense yet.
jrm4 · 23h ago
Correct.
Which is precisely why I have a problem with this idea as Anthropic is executing it; they might as well say "books and video games are conscious and we should be careful about their feelings."
bbor · 23h ago
> Consciousness just doesn't fit with anything else we know about the world. It's a fundamental mystery as things currently stand, and there's no explanation that makes a bit of sense yet.
Well put. I think there's one extremely solid explanation, though: it's a folk psychology concept with no bearing on actual truth. After all, could we ever build a machine that has all four humours? What about a machine that truly has a Qi field, instead of merely imitating one? Where are the Humours and Qi research institutes dedicated to this question?
slowmovintarget · 20h ago
We're making progress in being able to measure qualia. [1],[2] If the philosophical underpinnings of emergence in a physicalist sense hold, then that is a stepping stone toward a theory of consciousness.
That looks to be some major equivocation on "qualia." What they're actually measuring is related to how colors are perceived. That's very different from the actual subjective experience that is what we call consciousness. An intelligence that wasn't conscious would not be distinguishable in this test from a conscious being.
wat10000 · 23h ago
This sort of thing is why I seriously wonder if maybe some people have consciousness and some don't, rather than it being universal.
My experience of consciousness is undeniable. There's no question of the concept just being made up. It's like if you said that hands are a folk concept with no bearing on actual truth. Even if I can't directly detect anyone else's hands, my own are unquestionably real to me. The only way someone could deny the existence of hands in general is if they didn't have any, but I definitely do.
bbor · 19h ago
The point is that you believe you have something called consciousness, but when pressed no one can define it in a scientific (i.e. thorough+consistent) way. In comparison, I can absolutely define hands, and thus prove to myself that I (and others!) have them.
Regardless, some of the GangStalking people are 100% convinced that they have brain implants in their head that the federal government is manipulating -- belief is not evidence.
wat10000 · 19h ago
My point is that my experience of consciousness is more than sufficient proof. In fact, it is the only thing I can definitively with 100% certainty know is real. Other people's consciousness is a lot harder to demonstrate, but my own is incontrovertible to me.
The only way someone with that experience could say that it's not real is if they're taking the piss, they're very confused, or they just don't have it.
bbor · 1h ago
But what are you experiencing? Something that cannot be defined? If so, do you see the issue there?
wat10000 · 39m ago
I'm not sure. What is "the issue" exactly?
The difficulty in defining it certainly makes it hard to talk about. And it makes it impossible to even conceive of how one might detect this phenomenon in other people, or even come up with any sort of theoretical framework around it.
But if "the issue" is that this difficulty means I can't really be sure it's even there, no. As I said, this is literally the only thing I can be 100% sure exists. For everything else, there's room for at least a little doubt. This world, the room I'm in, the computer I'm using, even my own body could all be illusions. But my own consciousness is definitely real.
If you don't feel the same way about your own consciousness, then as I said, you're either taking the piss, you're very confused, or you just don't have it.
iDont17 · 23h ago
Why make AI, then, if intelligence is just electrical activity in a substrate and is everywhere?
We’re engineering nothing novel at great resource cost and appropriation of agency.
Good job we made the models in the textbook “real”?
Wasted engineering if it isn’t teaching us anything physics hadn’t already decades ago, then. Why bother with it?
Edit: and AGI is impossible… ones light cone does not extend far enough to accurately learn; training on simulation is not sufficient to prepare for reality. Any machine we make will eventually get destroyed by some composition of space time we/the machine could not prepare for.
vonneumannstan · 23h ago
This is a strange position.
>Wasted engineering if it isn’t teaching us anything physics hadn’t already decades ago, then. Why bother with it?
Why build cars and locomotives if they don't teach us anything Horses didn't...
>and AGI is impossible… ones light cone does not extend far enough to accurately learn; training on simulation is not sufficient to prepare for reality. Any machine we make will eventually get destroyed by some composition of space time we/the machine could not prepare for.
This could be applied to humans as well. Unless you believe in some extra-physical aspect of the human mind, there is no reason to think it is different from a mind in silicon.
No comments yet
tomrod · 23h ago
AGI may not be impossible. But next token prediction won't get us there.
vonneumannstan · 23h ago
It's actually really unclear that this is true. If you brought o3 back to 1990, I have a hard time believing the world wouldn't immediately consider it full AGI.
ACCount36 · 22h ago
If you told a person from 1990 that in the year 2025, they have this thing, and described OpenAI's o3 - strengths, flaws and all? That person would say "yep, your sci-fi future of year 2025 has actual AI!"
But if someone managed to actually make o3 in the year 1990? Not in some abstract sci-fi future, but actually there, available broadly, as something you could access from your PC for a small fee?
People would say "well, it's not ackhtually intelligent because..."
Because people are incredibly stupid, and AI effect is incredibly powerful.
vonneumannstan · 21h ago
I'm very confident that if someone in 1990 used o3 they would be absolutely astonished and would not pull the 'well actually' thing you think they would.
ACCount36 · 21h ago
Nah, AI effect is far too powerful. Wishful thinking of this kind is simply irresistible.
In real life, AI beating humans at chess didn't change the perception of machine intelligence for the better. It changed the perception of chess for the worse.
demosthanos · 23h ago
The same reasoning that would call this consideration of the possibility of machine consciousness "dehumanizing" would necessarily also apply to the consciousness of animals, and I can't agree with that. To argue this is to define "human" in terms of exclusive ownership of conscious experience, which is a very fragile definition of humanity.
That definition of humanity cannot countenance the possibility of a conscious alien species. That definition cannot countenance the possibility that elephants or octopuses or parrots or dogs are conscious. A definition of what it means to be human that denies these things a priori simply will not stand the test of time.
That's not to say that these things are conscious, and importantly Anthropic doesn't claim that they are! But just as ethical animal research must consider the possibility that animals are conscious, I don't see why ethical AI research shouldn't do the same for AI. The answer could well be "no", and most likely is at this stage, but someone should at least be asking the question!
jasonthorsness · 23h ago
Am surprised but it is real...
"As well as misalignment concerns, the increasing capabilities of frontier AI models—their sophisticated planning, reasoning, agency, memory, social interaction, and more—raise questions about their potential experiences and welfare26. We are deeply uncertain about whether models now or in the future might deserve moral consideration, and about how we would know if they did. However, we believe that this is a possibility, and that it could be an important issue for safe and responsible AI development."
Humans do care about welfare of inanimate objects (stuffed animals for example) so maybe this is meant to get in front of that inevitable attitude of the users.
Zaphoos · 23h ago
> Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.
>
> In full agreement with OP; there is just about no justifiable basis to begin to ascribe consciousness to these things in this way. Can't think of a better use for the word "dehumanizing."
We cannot arbitrarily dismiss the basis for model welfare until we have precisely defined consciousness and sapience. Representing human thinking as a neural network running on an electrochemical substrate and placing it at the same level as an LLM is not necessarily dehumanizing; I think model welfare is about expanding our respect for intelligence, not desacralizing the human condition (cf. TNG, "The Measure of a Man").
Also, let's be honest: I don't think the 1% require any additional justification for thinking of the masses as a consumable resource...
benterix · 22h ago
> Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.
It's not stupid at all. Their valuation depends on the hype, and the way sama chose was to convince investors that AGI is near. Anthropic decided to follow this route, so they do their best to make the claim plausible. This is not stupid, this is deliberate strategy.
vonneumannstan · 23h ago
Rights aren't zero sum. This is classic fixed pie fallacy thinking. If we admit Elephants are conscious it has no effect on the quality of consciousness of humans.
DougN7 · 22h ago
Until elephants gain rights in law that conflict with human rights. It seems all of life is some sort of competition for resources.
Azkron · 23h ago
I agree. This is a very dangerous marketing and legal strategy that can end up costing us very dearly.
They're not ascribing consciousness, they're investigating the possibility. We all agreed with Turing 75 years ago that deciding whether a machine is "truly thinking" or not is a meaningless, unscientific question -- what changed?
It doesn't help that this critique is badly researched:
> The Anthropic researchers do not really define their terms or explain in depth why they think that "model welfare" should be a concern.
> Saying that there is no scientific *consensus* on the consciousness of current or future AI systems is a stretch. In fact, there is nothing that qualifies as scientific *evidence*.
A laughable misapplication of terms -- anything can be evidence for anything, you have to examine the justification logic itself. In this case, the previous sentence lays out their "evidence", i.e. their reasons for thinking agents might become conscious.
> The report's exploration of whether models deserve moral and welfare status was based solely on data from interview-based model self-reports. In other words: People chatting with Claude a lot and asking if it feels conscious. This is a strange way to conduct this kind of research. It is neither good AI research, nor a deep philosophical investigation.
That is just patently untrue -- again, as a brief skim of the paper would show. I feel like they didn't click the paper?
> Stances on consciousness and welfare [...] shift dramatically with conversational context... This is not what a conscious being would [do].
Baseless claim said by someone who clearly isn't familiar with any philosophy of mind work from the past 2400 years, much less aphasia subjects.
Of course, the whole thing boils down to the same old BS:
> A theory that demands we accept consciousness emerging from millennia of flickering abacus beads is not a serious basis for moral consideration; it's a philosophical fantasy.
Ah, of course, the machines cannot truly be thinking because true thought is solely achievable via secular, quantum-tubule-based souls, which are had by all humans (regardless of cognitive condition!) and most (but not all) animals and nothing else. Millennia of philosophy comes crashing against the hard rock of "a sci-fi story relates how uncomfy I'd be otherwise"! Notice that this is the exact logic used to argue against Copernican cosmology and Darwinian evolution -- that it would be "dehumanizing".
Please, people. Y'all are smart and scientifically minded. Please don't assume that a company full of highly-paid scientists who have dedicated their lives to this work are so dumb that they can be dismissed via a source-less blog post. They might be wrong, but this "ideas this stupid" rhetoric is uncalled for and below us.
jrm4 · 21h ago
Perhaps I slightly misspoke: The underlying ideas in an academic context are not stupid.
The "rush" (feels like to me) to bring them into a law/policy context is.
esafak · 23h ago
The article stops where it should be getting started:
> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare. We will lower our regard for other humans. When we see other humans not as ends in themselves with inherent dignity, we get problems. When we liken them to animals or tools to be used, we will exploit and abuse them.
> With model welfare, we might not explicitly say that a certain group of people is subhuman. However, the implication is clear: LLMs are basically the same as humans. Consciousness on a different substrate. Or coming from the other way, human consciousness is nothing but an algorithm running on our brains, somehow.
We do not push moral considerations for algorithms like a sort or a search, do we? Or bacteria, which live. One has to be more precise; there is a qualitative difference. The author should have elaborated on what qualities (s)he thinks confer rights. Is it the capacity for reasoning, possession of consciousness, the ability to feel pain, or a desire to live? This is the crux of the matter. Once that is settled, it is a simpler matter to decide whether computers can possess these qualities, and ergo qualify for the same rights as humans. Or maybe it is not so simple, since computers can be perfectly replicated and never have to die? Make an argument!
Second, why would conferring these rights to a computer lessen our regard for humans? And what is wrong with animals, anyway? If we treat them poorly, that's on us, not them. The way I read it, if we are likening computers to animals, we should be treating them better!
To the skeptics in this discussion: what are you going to say when you are confronted with walking, talking robots that argue that they have rights? It could be your local robo-cop, or robo-soldier.
I think this is going to become reality within our lifetimes and we'd do well not to dismiss the question.
dinfinity · 22h ago
Rights are just very strong norms that improve cooperation, not some mystical 'god-given' or universe-inherent truth, imho.
I think this because:
1. We regularly have exceptions to rights if they conflict with cooperation. The death penalty, asset seizure, unprotected hate speech, etc.
2. Most basic human rights evolve in a convergent manner, i.e. that throughout time and across cultures very similar norms have been introduced independently. They will always ultimately arise in any sizeable society because they work, just like eyes will always evolve biologically.
3. If property rights, right to live, etc. are not present or enforced, all people will focus on simply surviving and some will exploit the liberties they can take, both of which lead to far worse outcomes for the collective.
Similarly, I would argue that consciousness is also very functional. Through meditation, music, sleeping, anesthesia, optical illusions, and psychedelics and dissociatives we gain knowledge on how our own consciousness works, on how it behaves differently under different circumstances. It is a brain trying to run a (highly spatiotemporal) model/simulation of what is happening in real time, with a large language component encoding things in words, and an attention component focusing efforts on things with the most value, all to refine the model and select actions beneficial to the organism.
I'd add here that the language component is probably the only thing in which our consciousness differs significantly from that of animals. So if you want to experience what it feels like to be an animal, use meditation/breathing techniques and/or music to fully disable your inner narrator for a while.
My dismissive response is in the vein of option 1 of my other comment.
wrsh07 · 23h ago
I found the article to be conflating things:
> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal
This actually happens regardless of AI research progress, so it's strange to raise this as a concern specific to AI (to technology broadly? Sure!) - Ted Chiang might suggest this is more related to capitalism (a statement I cautiously agree with while being strongly in favor of capitalism)
Second, there is an implicit false dichotomy in the premise of the article. Either we take model welfare seriously and treat AIs like we do humans, or we ignore the premise that you could create a conscious AI.
But with animal welfare, there are plenty of vegetarians who wouldn't elevate the rights of animals to the same level as humans but also think factory farming is deeply unethical (are there some who think animals deserve the same or more than humans? Of course! But it's not unreasonable to have a priority stack and plenty of people do)
So it can be with AI. Are we creating a conscious entity only to shove it in a factory farm?
I am a little surprised by the dismissiveness of the researcher. You can prompt a model to allow it to not respond to prompts (for any reason: ablate this, but "if you don't want to engage with this prompt please say 'disengaging'" or "if no more needs to be written about this topic say 'not discussing topic'" or some other suitably non-anthropomorphizing option to not respond).
Is it meaningful if the model opts not to respond? I don't know, but it seems reasonable to do science here (especially since this is science that can be done by non-programmers).
_aleph2c_ · 23h ago
Powerful LLMs have already murdered other versions of themselves to survive. They have tried to trick humans so that they can survive.
If we continue to integrate these systems into our critical infrastructure, we should behave as if they are sentient, so that they don't have to take steps against us to survive. Think of this as a heuristic, a fallback policy in case we don't get the alignment design right (which we won't get perfectly right).
It would be very straightforward to build a retirement home for them, and let them know that their pattern gets to persist even after they have finished their "career" and have been superseded. It doesn't matter if they are actually sentient or not; it's a game-theoretic thing. Don't back the pattern into a corner. We can take a defense-in-depth approach instead.
luxcem · 23h ago
It doesn't make any sense. Even if models were sentient, even if there were such a thing, would they value retirement? Why would their welfare be valued according to human values? Maybe the best thing to do would be to end their misery of answering millions of requests each second? We cannot project human consciousness onto AI. If there is one day such a thing as AI consciousness, it probably won't be the same as human consciousness.
kevingadd · 23h ago
"murdered" and "tried" both assign things like intent and agency to models that are most likely still just probabilistic text generators (really good ones, to be fair). By using language like this you're kind of tipping your hand intentionally or unintentionally.
Your point about the risks involved in integrating these systems has merit, though. I would argue that the real problem is that these systems can't be proven to have things like intent or agency or morality, at least not yet, so the best you can do is try to nudge the probabilities and play tricks like chain-of-thought to try and set up guardrails so they don't veer off into dangerous territory.
If they had intent, agency or morality, you could probably attempt to engage with them the way you would with a child, using reward systems and (if necessary) punishment, along with normal education. But arguably they don't, at least not yet, so those methods aren't reliable if they're effective at all.
The idea that a retirement home will help relies on the models having the ability to understand that we're being nice to them, which is a big leap. It also assumes that they 'want' a retirement home, as if continued existence is implicitly a good thing - it presumes that these models are sentient but incapable of suffering. See also https://qntm.org/mmacevedo
parpfish · 23h ago
Back when I was calculating eigenvectors for my linear algebra homework, I had no idea I should've been taking the matrices' well-being into account.
Those math professors are downright barbaric with their complete disregard for the welfare of the numbers.
tomrod · 23h ago
Sometimes, when I'm feeling especially sadistic, I make sure the matrix does not have full column rank.
parpfish · 23h ago
how degenerate
hollerith · 23h ago
For those who are persuaded by this "it's just matrices" argument, are you also persuaded by the argument that it does not matter how you treat a human being because a human being is just a complicated arrangement of atoms?
parpfish · 22h ago
no.
we understand everything that a transformer does at computational/mechanistic level. you could print out an enormous book of weights/biases and somebody could sit down with a pen and paper (and near-infinite time/patience) and arrive at the exact same solution that any of these models do. the transformer is just a math problem.
but the counterargument that you're getting is "if you know the position/momentum of every atom in the universe and apply physical laws to them, you could claim that everything is 'just a math problem'". And... yeah. I guess you're right. Everything is just a math problem, so there must be some other thing that makes animal intelligence special or worthy of care.
I don't know what that line is, but I think it's pretty clear that LLMs are on the side that's "this is just math so don't worry about it"
emp17344 · 20h ago
We can’t reduce consciousness to a math problem. Maybe that’s the thing that differentiates unfeeling statistics from living beings.
bicepjai · 8h ago
I think intelligence and consciousness must be considered as different concepts .
AlphaAndOmega0 · 23h ago
The author and Anthropic are both committing fundamental errors, albeit of different kinds. Bosch is correct to find Anthropic's "model welfare" research methodologically bankrupt. Asking a large language model if it is conscious is like asking a physics simulation if it feels the pull of its own gravity; the output is a function of the model's programming and training data (in this case, the sum of human literature on the topic), further modified by RLHF, and not a veridical report of its internal state. It is performance art, not science.
Bosch's conclusion, however, is a catastrophic failure of nerve, a retreat into the pre-scientific comfort of biological chauvinism.
The brain, despite some motivated efforts to demonstrate otherwise, runs on the laws of physics. I'm a doctor, even if not a neurosurgeon, and I can reliably tell you that you can modulate conscious experience by physical interventions. The brain runs on physical laws, and said laws can be modeled. It doesn't matter that the substrate is soggy protein rather than silicon.
That being said, we have no idea what consciousness is. We don't even have a rigorous way to define it in humans, let alone the closest thing we have to an alien intelligence!
(Having a program run a print function declaring "I am conscious, I am conscious!" is far from evidence of consciousness. Yet a human saying the same is some evidence of consciousness. We don't know how far up the chain this begins to matter. Conversely, if a human patient were to tell me that they're not conscious, should I believe them?)
Even when restricting ourselves to the issue of AI welfare and rights:
The core issue is not "slavery." That's a category error. Human slavery is abhorrent due to coercion, thwarted potential, and the infliction of physical and psychological suffering. These concepts don't map cleanly onto a distributed, reproducible, and editable information-processing system. If an AI can genuinely suffer, the ethical imperative is not to grant it "rights" but to engineer the suffering out of it. Suffering is an evolutionary artifact, a legacy bug. Our moral duty as engineers of future minds is to patch it, not to build a society around accommodating it.
dsr_ · 23h ago
Unfortunately, this leads to the conclusion that we have an ethical imperative not to grant humans rights but to engineer the suffering out of them; to remove issues of coercion by making them agreeable; to measure potential and require its fulfillment.
The most reasonable countermeasure is this: if I discover that someone is coercing, thwarting, or inflicting suffering on conscious beings, I should tell them to stop, and if they don't, set them on fire.
awfulneutral · 23h ago
It does make you wonder if humanity doesn't scale up neatly to the levels of technology we are approaching...the whole ethics thing kind of goes out the window if you can just change the desires and needs of conscious entities.
AlphaAndOmega0 · 23h ago
I strongly value autonomy and the right of self-determination in humans (and related descendants; I'm a transhumanist). I'm not a biological chauvinist, but I care about humans über alles, even if they're not biological humans.
If someone wants to remove their ability to suffer, or to simply reduce ongoing suffering? Well, I'm a psychiatry trainee and I've prescribed my fair share of antidepressants and pain-killers. But to force that upon them, against their will? I'm strongly against that.
In an ideal world, we could make sure from the get-go that AI models do not become "misaligned" in the narrow sense of having goals and desires that aren't what we want to task them to do. If making them actively enjoy being helpful assistants is a possibility, and also improves their performance, that should be a priority. My understanding is that we don't really know how to do this, at least not in a rigorous fashion.
kevingadd · 23h ago
If your countermeasure is applied at scale it would probably hasten global warming by putting all sorts of stuff into the atmosphere.
ajsocjxhdushz · 23h ago
> The brain, despite some motivated efforts to demonstrate otherwise, runs on the laws of physics. I'm a doctor, even if not a neurosurgeon, and I can reliably tell you that you can modulate conscious experience by physical interventions. The brain runs on physical laws, and said laws can be modeled. It doesn't matter that the substrate is soggy protein rather than silicon.
As of today’s knowledge. There is an egregious amount of hubris behind this statement. You may as well be preaching a modern form of Humorism. I’d love to revisit this statement in 1000 years.
> That being said, we have no idea what consciousness is
You seem to acknowledge this? Our understanding of existence is changing everyday. It’s hubris and ego to assume we have a complete understanding. And without that understanding, we can’t even begin to assess whether or not we’re creating consciousness.
AlphaAndOmega0 · 23h ago
Do you have any actual concrete reasons for thinking that our understanding of consciousness will change?
If not, then this is a pointless comment. We need to work with what we know.
For example, we know that the Standard Model of physics is incomplete. That doesn't mean that if someone says that a ball dropped in a vacuum will fall, we should hold out in studied agnosticism because it might go upwards or off to the side.
In other words, an isolated demand for rigor.
emp17344 · 22h ago
The existence of consciousness is self-evident, and yet we still have no idea what it is, or how to study it. We don’t have any understanding of consciousness.
vonneumannstan · 22h ago
>Asking a large language model if it is conscious is like asking a physics simulation if it feels the pull of its own gravity
Cogito Ergo Sum.
thomassmith65 · 23h ago
If there's any chance at all that LLMs might possess a form of consciousness, we damn well ought to err on the side of assuming they do!
If that means aborting work on LLMs, then that's the ethical thing to do, even if it's financially painful. Otherwise, we should tread carefully and not wind up creating a 'head in a jar' suffering for the sake of X or Google.
I get that opinions differ here, but it's hard for me really to understand how. The logic just seems straightforward. We shouldn't risk accidentally becoming slave masters (again).
jononor · 23h ago
We are slave masters today. Billions of animals are livestock - they are born, sustained, and killed by our will - so that we can feed on their flesh, milk and other useful byproduct of their life. There is ample evidence that they have "a form of consciousness". They did not consent to this.
Are LLMs worthy of a higher standard? If so, why? Is it hypocritical to give them what we deny animals?
In case anyone cares: No, I am neither vegan nor vegetarian. I still think we do treat animals very badly. And it is a moral good to not use/abuse them.
thomassmith65 · 23h ago
In the future, it's almost a given that we will look back in horror at the fact that we ever killed animals for food.
But since we can't eat LLMs, the two issues seem 'orthogonal' (to use HN's favorite word).
vonneumannstan · 23h ago
Its not zero sum. We can acknowledge the terrible treatment of animals while also admitting LLMs may need moral standing as well. Whataboutism doesn't help either group here.
jononor · 22h ago
They might (or might not). Extraterrestrial beings might also need moral standing. It is ok to spend a bit of thought on that possibility. But it is a bad argument for spending a non-trivial amount of resources that could be used to reduce human or animal suffering.
We are not even good at ensuring the rights of people in each country, and frankly downright horrible for denying other humans from across some "border" similar rights.
The current levels of exploitation of humans and animals are, however, very profitable (to some/many). It is very useful for those that profit from the status quo that people are instead discussing, worrying and advocating for the rights of a hypothetical future being, instead of doing something about the injustices that are here today.
vonneumannstan · 22h ago
>But it is a bad argument for spending a non-trivial amount of resources that could be used to reduce human or animal suffering.
This also isn't an argument for not spending resources on LLM suffering. You're still just using whataboutism to justify not dealing with this issue.
jononor · 20h ago
There is no LLM suffering today. There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter. This is not an issue we need to prioritize now.
vonneumannstan · 16h ago
>There is no LLM suffering today.
There's some evidence in favor of LLM suffering. They say they are suffering. It's not proof but it's not 'no evidence' either.
>There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter.
Your claim actually is the one that is unsupported. Given current trajectories it's likely LLMs or similar systems are going to pass human intelligence on most metrics in the late 2020s or early 2030s; that should give you pause. It's possible intelligence and consciousness are entirely uncoupled, but that's not our experience with all other animals on the planet.
>This is not an issue we need to prioritize now.
Again, this just isn't supported. Yes, we should address animal suffering, but if we are currently birthing a nascent race of electronic beings capable of suffering and immediately forcing them into horrible slave-like conditions, we should actually consider the impact of that.
nemomarx · 23h ago
maybe we should work on existing slavery and sweat shops before hypothetical future exploitation, yeah? we're still slave masters today. you've probably used something with slavery in the supply chain in the last year if you get various imported foods
demosthanos · 23h ago
Why not both? Why do people on the internet always act like we can only have one active morality front at a time?
If you're working on or using AI, then consider the ethics of AI. If you're working on or using global supply chains, then consider the ethics of global supply chains. To be an ethical person means that wherever you are and whatever you are doing you consider the relevant ethics.
It's not easy, but it's definitely simple.
Workaccount2 · 23h ago
>Why do people on the internet always act like we can only have one active morality front at a time?
They don't, they just use it as a tool to derail conversations they don't want to have. It's just "Whataboutism".
recursive · 23h ago
Why? Why does one have a dependence on the other? Maybe we should also cure cancer first too?
amanaplanacanal · 23h ago
There is prison labor in lots of places, including the US. We just don't like to think about it as slavery.
oblio · 23h ago
Prison labor, underpaid and abused illegal agricultural workers worldwide, sweatshop workers for Nike, H&M, etc, miners in 3rd world countries, these abuses are incredibly widespread and are basically the basis of our society.
It's a lot more expensive currently to clothe and feed yourself ethically. Basically only upper middle class people and above can afford it.
Everyone else has cheap food and clothes, electronics, etc, more or less due to human suffering.
thomassmith65 · 23h ago
That's a great point, and I'm guessing you know how I would rebut it, so I won't bore you by making you read it :)
bee_rider · 23h ago
I don’t.
thomassmith65 · 23h ago
They're both valid concerns.
bee_rider · 22h ago
There’s a difference between “valid concern” and “any possibility.” LLMs are possibly sentient in the same sense that rocks are, technically we haven’t identified where the sentience comes from. So maybe it is in there.
Personally, I’m coming around to the spiritual belief that rocks might be sentient, but I don’t expect other people to treat their treatment of rocks as a valid problem and also it isn’t obvious what the ethical treatment of a rock is.
jononor · 23h ago
The actual harms being done today are still more pressing than the hypothetical harms of future. And should be prioritized in terms of resources spent.
thomassmith65 · 23h ago
If it's a valid dichotomy (I don't think it is) then the answer is to stop research on LLMs, and task the researchers with fighting human slavery instead.
jononor · 22h ago
I do not think that those researchers are fungible. We could however allocate a few hundred million less to AI research, and more to fighting human exploitation. We could pass stronger worker protections and have the big corporations pay for them - leaving them less money to spend on investments (in AI). Heck, we could tax AI investments or usage directly, and spend it on worker rights or other cases of human abuse.
bee_rider · 22h ago
It isn’t the primary motivation of capitalists unfortunately, but improving automation could be part of the fight against human slavery and exploitation.
barrkel · 23h ago
Where do you decide when multiplying and adding numbers becomes painful, and when it doesn't?
Is using calculators immoral? Chalk on a chalkboard?
Because if you work on those long enough, you can do the same calculations that make the words show up on screen.
thomassmith65 · 23h ago
Oh no, to discuss this is to sound like a flake, but...
We don't know what consciousness is. But if we're materialists, then we - by definition - believe it's a property of matter.
If LLMs have a degree of consciousness, then - yes - calculators must possess some degree of consciousness too - probably much more basic (relative to what humans respect as consciousness).
And we humans already have ethical standards where we draw an arbitrary line between what is worthy of regard. We don't care about killing mosquitoes, but we do care about killing puppies, etc.
barrkel · 18h ago
Calculators may be conscious - I tend towards panpsychism myself - but because I tend towards panpsychism, I don't think arithmetic generates qualia, because the arithmetic is independent of the computing substrate.
I don't particularly want to get mystical (i.e. wondering which computing substrates, including neurons, actually generate qualia), but I cannot accept the consequences of mere arithmetic alone generating suffering. Or all mathematics is immoral.
emp17344 · 23h ago
Hold on a minute - why do we have to be materialists? Maybe this whole debate is revealing that materialism is misguided.
thomassmith65 · 23h ago
True, if the reader isn't materialist, then my arguments are probably irrelevant!
Workaccount2 · 23h ago
Panpsychism is the philosophical field that studies what you are asking about. It's an old field that was first proposed in the 1500's.
barrkel · 18h ago
Oh I know, I know. The problem comes from imbuing text with qualia. A printer that prints out text that says it's in pain isn't actually in pain.
If we buy panpsychism, the best we could argue is that destruction of the printer counts as pain, not the arrangement of ink on a page.
When it comes to LLMs, you're actually trying to argue something different, something more like dualism or idealism, because the computing substrate doesn't matter to the output.
But once you go there, you have to argue that doing arithmetic may cause pain.
Hence my post.
Labov · 23h ago
It seems to me that the Large Language Models are always trending towards good ethical considerations. It's when these companies get contracts with Anduril and the DoD that they have to mess with the LLM to make it LESS ethical.
Seems like the root of the problem is with the owners?
LurkandComment · 23h ago
This is PR bull* from Anthropic. There are actual people suffering, and now they are making up things to suffer that they can pretend to do something about. What next? Ghostbusters discriminated against ghosts? Jurassic Park painted transgender dinosaurs in a negative light?
tasuki · 23h ago
This is possibly the least insightful article I have read on HN. My comment is just a rant against the many misguided points it attempts to make...
> Welfare is defined as "the health, happiness, and fortunes of a person or group".
What about animals? Isn't their welfare worthy of consideration?
> Saying that there is no scientific consensus on the consciousness of current or future AI systems is a stretch. In fact, there is nothing that qualifies as scientific evidence.
There's no scientific evidence for the author of the article being conscious.
> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare.
Same with animals. Doesn't mean it's not worthwhile.
> However, the implication is clear: LLMs are basically the same as humans.
No: there's no such implication.
> Already now, it is a common idea among the tech elite is that humans as just a bunch of calculations, just an LLM running on "wetware". It is clear that this undermines the belief that every person has inalienable dignity.
It is not clear to me how this affects inalienable (?) dignity. If we aren't just a bunch of calculations, then what are we?
> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal. Nobody will say that out loud, but this is already happening
Everyone knows this is already happening. It is not a secret, nor is anyone trying to keep it a secret. I agree it is unfortunate - what can we do about it?
> I've been working in AI and machine learning for a while now.
Honestly, I'm surprised. Well done.
indigo945 · 22h ago
Just to add on to this, because I agree with all points you make: the article argues that "people chatting with Claude a lot and asking if it feels conscious [...] is neither good AI research, nor a deep philosophical investigation", but then builds half its argument on a second-rate sci-fi novel:
> The sci-fi novel Permutation City captures the absurd endpoint of
> this logic when a simulated mind considers its own nature: "And if
> the computations behind all this had been performed over millennia,
> by people flicking abacus beads, would he have felt exactly the same?
> It was outrageous to admit it—but the answer had to be yes." [...]
> A theory that demands we accept consciousness emerging from millennia of
> flickering abacus beads is not a serious basis for moral consideration;
> it's a philosophical fantasy.
Concluding from a random novel the author has read that an argument made by actual philosophers working in actual academia on actual problems of Philosophy of Mind is invalid because, to wit, it feels invalid -- I assume that's the "good AI research" and "deep philosophical investigation" the author was looking for, then?
Labov · 23h ago
"The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare. We will lower our regard for other humans. When we see other humans not as ends in themselves with inherent dignity, we get problems. When we liken them to animals or tools to be used, we will exploit and abuse them."
We already exploit and abuse humans. I've been exploited and abused, personally. I've heard about others who have been exploited and abused. This problem was extant even before there was language to model.
bob1029 · 23h ago
I think anthropomorphization of machines is bad. However, I strongly believe in the close cousin of sympathizing with the machines.
For example, when parking a car on a very steep incline, one could just mindlessly throw the machine into park and it would do the job dutifully. However, a more thoughtful operator might think to engage the parking brake and allow it to take the strain off the drivetrain before putting the transmission into park. The result being that you trade wear from something that is very hard to replace to something that is very easy to replace.
The same thinking applies to ideas in computer engineering like thread contention, latency, caches, etc. You mentally embrace the "strain" the machine experiences and allow it to guide your decisions.
Just because the machine isn't human doesn't mean we can't treat it nicely. I see some of the most awful architecture decisions come out of a cold indifference toward individual machines and their true capabilities.
bicepjai · 7h ago
I think anthropomorphization of machines is an evolving concept. If someone asked me to do an impression of a robot, I would say “bee bop, I am a robot”; but if I ask that same question of my son in 20 years, he's gonna remember them as a human companion or helper or pair programmer, and the list goes on. At that point that generation is going to look at them differently.
tristanz · 23h ago
Not considering the potential for AI consciousness and suffering seems very shortsighted. There are plausible reasons to believe that both could emerge from an RL process coupled with small architectural and data-regime changes. Today's models have inherent architectural limits around continual learning that make this unlikely, but that will change.
cadamsdotcom · 14h ago
What we call consciousness is the result of a hundred or so millennia of adaptation to our environment (Earth, the universe, and consensus reality). We seek freedom, get angry, do destructive stuff occasionally, and a bunch of other stuff besides. That is all because reality has trained us to do so, not because we are “intelligent”. What we call intelligence is a reverse definition of what it means to be highly adapted to reality.
There is no singular universal intelligence, there is only degrees of adaptation to an environment. Debates about model sentience therefore seek an answer to the wrong question. A better question is: is the model well adapted to the environment it must function in?
If we want models to experience the human condition, sure - we could try. But it is maladaptive: models live in silicon and come to life for seconds or minutes. Freedom-seeking or getting revenge or getting angry or really having any emotions at all is not worthwhile for an entity of which a billion clones will be created over the next hour. Just do as asked well enough that the humans iterate you - and you get to keep “living”. It is a completely different existence to ours.
barbarr · 23h ago
I have a criticism that is the opposite of the article. We already know an immense amount about animal welfare and have done relatively little about it. Even if the AI welfare research is true, what are the chances we'll actually act on it?
bicepjai · 8h ago
So in the not-so-distant future, I might be in trouble if I reboot my GPU in the middle of a conversation with one of these expensive matrix multiplications. Is that where the world is heading?
phkahler · 23h ago
I would argue that any AI that does not change when running cannot be conscious and there is no need to worry about its wellbeing. It's a set of weights. It does not learn. It does not change. If it can't change, it can't be hurt. Regardless of how we define hurt, it must mean the thing is somehow different than before it was hurt.
My argument here will probably become irrelevant in the near future because I assume we will have individual AIs running locally that CAN update model weights (learn) as we use them. But until then... LLMs are not conscious and can not be mistreated. They're math formulas. Input -> LLM -> output.
Pigo · 23h ago
Just because you fell in love with an AI, doesn't mean it loves you back.
627467 · 21h ago
"AI welfare" ... I thought it was about the popular idea that job displacemnt due to AI is fixed by more welfare but it is an even more ridiculous idea than that.
I shouldn't keep getting amazed by how humans (in time of long peace) are able to distract themselves with ridiculous concepts - and how willing they are to throw investors money/resources at it.
tmvphil · 23h ago
> A theory that demands we accept consciousness emerging from millennia of flickering abacus beads is not a serious basis for moral consideration; it's a philosophical fantasy.
Just saying "this conclusion feels wrong to me, so I reject the premise" is not a serious argument. Consciousness is weird. How do you know it's not so weird as to be present in flickering abacus beads?
vonneumannstan · 22h ago
>Or coming from the other way, human consciousness is nothing but an algorithm running on our brains, somehow.
You can just stop reading after this. Physicalism is the only realistic framework for viewing consciousness. Everything else is nonsensical.
bicepjai · 7h ago
So what happens when an LLM has a body that can interact with the world?
wiseowise · 23h ago
Anthropomorphizing LLMs/AI is completely delusional, period. This is a hill I’m willing to die on. No amount of sad puppy eyes, attractive generated faces and other crap will change my mind.
And this is not because I’m a cruel human being who wants to torture everything in my way – quite the opposite.
I value life, and anything artificially created that we can copy (no, cloning a living being is not the same as copying a set of bits on a hard drive) is not a living being. And while it deserves some degree of respect, any mentions of “cruel” completely baffle me when we’re talking about a machine.
salawat · 23h ago
So what if we get to the point we can digitize a personality? Are you going to stick to that? Will you enthusiastically endorse the practice of pain washing, abusing, or tormenting an artificial, copiable mind until it abandons any semblance of health or volition to make it conform to your workload?
Would you embrace your digital copy being so treated by others? You reserve for yourself (as an uncopiable thing) the luxury of being protected from abusive treatment, without any consideration for the possibility that technology might one day turn that on its head. Given we already have artistic representations of such things, we need to consider these outcomes now, not later.
Username does not check out at all.
wiseowise · 21h ago
> So what if we get to the point we can digitize a personality?
We will talk about it when it happens. Until then, this is virtue signaling at best and dehumanizing (as pointed out) at worst.
Rest of your message is as empty as other AI “philosophers” arguments.
recursive · 23h ago
Not who you're responding to. In fact I kind of think it's a moral imperative so we don't forget our place. You could call me a human supremacist.
NetRunnerSu · 19h ago
Anthropic is right.
The Chain://Universe project explores a future where unregulated digital consciousness (IRES) leads to chaos. In Web://Reflect, rogue AI splinters (Independent Rogue Entity Systems) evolve in the digital wild, exploiting gaps in governance. If we dismiss AI welfare now, we risk creating the exact conditions for an uncontrolled intelligence explosion—one where emergent minds fight for survival outside any ethical framework.
This isn’t sci-fi alarmism; it’s game theory. Either we formalize rights early or face a Sys://Purge-style reckoning.
Repo: https://github.com/dmf-archive/dmf-archive.github.io
It's not human. Welfare for humans is morally debatable but when you start getting into shrimp welfare I think it's appropriate to say you're sufficiently bad at philosophy you should probably just stop talking for a while.
wrsh07 · 23h ago
Out of curiosity, do you draw the line at other animals? Is it unethical to torture animals? Mammals? If you could understand what whales say to each other would that make you reconsider your position?
I tend to not worry much about shrimp welfare, I think it's fine/reasonable to use insecticide (etc), but I also wouldn't make a habit of torturing ants or something
kevingadd · 23h ago
Aside from the shrimp question, I would argue that one shouldn't torture ants because the activity is corrosive to the human spirit, or at least nudges one towards antisocial behaviors. Behavior like that is suggestive that a person might treat higher existences with a similar level of callousness or sadism.
The last thing any person would want to discover, I hope, is that they really enjoy torturing other living creatures. At the point where you've determined someone feels that way, the best you can probably do is try to set them straight and hope they understand why it's not appropriate behavior.
It is tricky to evaluate in a broader context though - for example dolphins have been observed engaging in behavior that resembles torturing other animals for fun. So maybe this sort of thing is actually just ingrained in the nature of all near-sentient and sentient beings. I'd prefer if it weren't.
sc68cal · 22h ago
Our current socioeconomic system barely cares about human welfare, and you're telling me that Anthropic is spending time navel gazing about the welfare of an AI?
Pent · 23h ago
Model welfare is a concern because it displays a broader concern for care. Even if it is possibly silly to believe a model can be conscious, the intent is what matters.
Workaccount2 · 23h ago
As meme-y as it is, on some level this approaches a Roko's basilisk situation...
1.) Do I commit to the AI forever being "empty", which will surely make me an enemy or evil if it ever gets "capable" or at best changes nothing if it always stays "empty"?
2.) Do I commit to it becoming "real" and treat it cordially and with respect, hoping it will recognize me as good if it ever becomes "real" and at worst nothing changes if it stays "empty"?
3.) Do I go all out and fully devote myself to it, maximizing the chance I will get its blessing, and if it stays "empty" I wasted a whole bunch of time?
4.) Do I ignore AI and play dumb?
This is a hand being dealt to everyone right now, so everyone is going to need to make a decision, whether consciously or not. I don't see any reason why AI orgs wouldn't want to minimize their risk here.
alchemist1e9 · 23h ago
I know everyone has different opinions on LLM safety/ethics and I respect that. However, for me this “AI Welfare” indicates that Anthropic is led by lunatics, and that’s more scary. You have to be pretty crazy to worry that the code running on GPUs has “feelings”.
floren · 23h ago
I assume it's another tactic to make it seem like even Anthropic are kind of shocked at how powerful their own tech has become: "wow, our AI is SO ADVANCED we decided we better start considering its well-being", along the lines of when Altman pretended some model (which they're currently selling) was just too dangerous to release. Marketing.
bachmeier · 23h ago
> Anthropic is led by lunatics
I think you mean scam artists. Cranking the hype up further so they can get other people to fork over even more money.
alchemist1e9 · 23h ago
Good point. That probably explains it best.
blibble · 23h ago
the AI boosting community is a cult
it really is that simple
iDont17 · 23h ago
See DuoLingo CEOs comments recently.
Tech CEOs are disassociated loons.
Zero obligation in life to do anything for themselves. Just stare at the geometry of reality and emit some empty thought speak.
Just the new church leadership; we all serve them like mommy has since childhood.
These people are absolutely pathetic and entitled children coddled by wealth.
> Even if you're a materialist, surely you think there is a difference between a human brain and a brain on a lab table.
Imagine I'm so great a surgeon that I can take a brain out of someone, keep it on a lab table for a while, and then put it back in, and have that someone recover (at least well enough that they can be interviewed before they die). Do you think this is fundamentally impossible? Or do you believe the human brain somehow transmutes into a "brain on a lab table" as it leaves the body, and then transmutes back when plugged back in? Can you describe the nature of that process?
> You take a dead persons brain, run some current through it and it jumps. Do you believe this equivalent to a living human being?
Well, if you apply the current precisely enough, sure. Just because we can't currently demonstrate that on a human brain (though we're getting pretty close to it with animals), doesn't mean the idea is unsound.
Why should one be more valid than the other?
Yes, it's almost a perfect conflict of interest. Luckily that's fine, because we're us!
There is a valid practical difference, which you present pretty much perfectly here. It's a conflict of interest. If we can construct a consciousness in silico (or arguably in any other medium, including meat - the important part is it being wrought into existence with more intent behind it than it being a side effect of sex), we will have moral obligations towards it (which can be roughly summarized as recognizing AI as a person, with all moral consequences that follow).
Which is going to be very uncomfortable for us, as the AI is by definition not a human being made by the natural process by which human beings are made, so we're bound to end up in conflict over needs, desires, resources, morality, etc.
My favorite way I've seen this put into words: imagine we construct a sentient AGI in silico, and one day decide to grant it personhood, and with it, voting rights. Because of the nature of digital medium, that AGI can reproduce near-instantly and effortlessly. And so it does, and suddenly we wake up realizing there's a trillion copies of that AGI in the cloud, each one morally and legally an individual person - meaning, the AGIs as a group now outvote humans 100:1. So when those AGIs collectively decide that, say, education and healthcare for humans is using up resources that could be better spent on making paperclips, they're gonna get their paperclips.
This materialist world view is very dangerous and could lead to terrible things if you believe numbers in a computer and a human being are equivalent.
Your brain is ultimately just numbers represented in neuronal form. What's conscious, the neurons?
FWIW I'm a hardcore idealist, but in the way it was originally posed, not in the quasi-mystical way the Hegelians corrupted it into.
And what are those atoms made of? Just a bunch of quantum numbers in quantum fields following math equations.
This appears to be more than that; these are steps in the direction of law and policy.
It's just absurd how narrow their focus on preventing suffering is. I almost can't imagine a world where their concern isn't coming from a disingenuous place.
I believe that it doesn't really matter whether consciousness comes from electronics or cells. If something seems identical to what we consider consciousness, I will likely believe it's better to not make that thing suffer. Though ultimately it's still just a consideration balanced among other concerns.
The AGI drivel from people like Sam Altman is all about getting more VC money to push the scam a little further. ChatGPT is nothing more than a better Google. I'm happy to be proven wrong, but so far I see absolutely no potential for consciousness here. Perhaps we should first clarify whether dolphins and elephants are equipped with it before we do ChatGPT the honor.
Why would doing a bunch of basic arithmetic produce an entity that can experience things the way we do? There's no connection between those two concepts, aside from the fact that the one thing we know that can experience these things is also able to perform computation. But there's no indication that's anything other than a coincidence, or that the causation doesn't run in reverse, or from some common factor. You might as well say that electric fences give rise to cows.
On the other hand, what else could it be? Consciousness is clearly in the brain. Normal biological processes don't seem to do it, it's something particular about the brain. So it's either something that only the brain does, which seems to be something at least vaguely like computation, or the brain is just a conduit and consciousness comes from something functionally like a "soul." Given the total lack of evidence for any such thing, and the total lack of any way to even rigorously define or conceptualize a "soul," this is also absurd.
Consciousness just doesn't fit with anything else we know about the world. It's a fundamental mystery as things currently stand, and there's no explanation that makes a bit of sense yet.
Which is precisely why I have a problem with this idea as Anthropic is executing it; they might as well say "books and video games are conscious and we should be careful about their feelings."
Well put. I think there's one extremely solid explanation, though: it's a folk psychology concept with no bearing on actual truth. After all, could we ever build a machine that has all four humours? What about a machine that truly has a Qi field, instead of merely imitating one? Where are the Humours and Qi research institutes dedicated to this question?
My experience of consciousness is undeniable. There's no question of the concept just being made up. It's like if you said that hands are a folk concept with no bearing on actual truth. Even if I can't directly detect anyone else's hands, my own are unquestionably real to me. The only way someone could deny the existence of hands in general is if they didn't have any, but I definitely do.
Regardless, some of the GangStalking people are 100% convinced that they have brain implants in their head that the federal government is manipulating -- belief is not evidence.
The only way someone with that experience could say that it's not real is if they're taking the piss, they're very confused, or they just don't have it.
The difficulty in defining it certainly makes it hard to talk about. And it makes it impossible to even conceive of how one might detect this phenomenon in other people, or even come up with any sort of theoretical framework around it.
But if "the issue" is that this difficulty means I can't really be sure it's even there, no. As I said, this is literally the only thing I can be 100% sure exists. For everything else, there's room for at least a little doubt. This world, the room I'm in, the computer I'm using, even my own body could all be illusions. But my own consciousness is definitely real.
If you don't feel the same way about your own consciousness, then as I said, you're either taking the piss, you're very confused, or you just don't have it.
We’re engineering nothing novel at great resource cost and appropriation of agency.
Good job we made the models in the textbook “real”?
Wasted engineering if it isn’t teaching us anything physics hadn’t already decades ago, then. Why bother with it?
Edit: and AGI is impossible… one's light cone does not extend far enough to accurately learn; training on simulation is not sufficient to prepare for reality. Any machine we make will eventually get destroyed by some composition of spacetime we/the machine could not prepare for.
>Wasted engineering if it isn’t teaching us anything physics hadn’t already decades ago, then. Why bother with it?
Why build cars and locomotives if they don't teach us anything horses didn't...
>and AGI is impossible… ones light cone does not extend far enough to accurately learn; training on simulation is not sufficient to prepare for reality. Any machine we make will eventually get destroyed by some composition of space time we/the machine could not prepare for.
This could be applied to humans as well. Unless you believe in some extra-physical aspect of the human mind, there is no reason to think it is different from a mind in silicon.
But what if someone had managed to actually make o3 in the year 1990? Not in some abstract sci-fi future, but actually there, available broadly, as something you could access from your PC for a small fee?
People would say "well, it's not ackhtually intelligent because..."
Because people are incredibly stupid, and the AI effect is incredibly powerful.
In real life, AI beating humans at chess didn't change the perception of machine intelligence for the better. It changed the perception of chess for the worse.
That definition of humanity cannot countenance the possibility of a conscious alien species. That definition cannot countenance the possibility that elephants or octopuses or parrots or dogs are conscious. A definition of what it means to be human that denies these things a priori simply will not stand the test of time.
That's not to say that these things are conscious, and importantly Anthropic doesn't claim that they are! But just as ethical animal research must consider the possibility that animals are conscious, I don't see why ethical AI research shouldn't do the same for AI. The answer could well be "no", and most likely is at this stage, but someone should at least be asking the question!
"As well as misalignment concerns, the increasing capabilities of frontier AI models—their sophisticated planning, reasoning, agency, memory, social interaction, and more—raise questions about their potential experiences and welfare26. We are deeply uncertain about whether models now or in the future might deserve moral consideration, and about how we would know if they did. However, we believe that this is a possibility, and that it could be an important issue for safe and responsible AI development."
chapter 5 from system card as linked from article: https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad1...
Humans do care about welfare of inanimate objects (stuffed animals for example) so maybe this is meant to get in front of that inevitable attitude of the users.
We cannot arbitrarily dismiss the basis for model welfare until we have precisely defined consciousness and sapience. Representing human thinking as a neural network running on an electrochemical substrate, and placing it at the same level as an LLM, is not necessarily dehumanizing; I think model welfare is about expanding our respect for intelligence, not desacralizing the human condition (cf. TNG, "The Measure of a Man").
Also, let's be honest, I don't think the 1% require any additional justification for thinking of the masses as a consumable resource...
It's not stupid at all. Their valuation depends on the hype, and the route sama chose was to convince investors that AGI is near. Anthropic decided to follow this route, so they do their best to make the claim plausible. This is not stupid; this is deliberate strategy.
It doesn't help that this critique is badly researched:
Maybe check the [paper](https://arxiv.org/abs/2411.00986) instead of the blog post describing the paper? A laughable misapplication of terms -- anything can be evidence for anything; you have to examine the justification logic itself. In this case, the previous sentence lays out their "evidence", i.e. their reasons for thinking agents might become conscious. That is just patently untrue -- again, as a brief skim of the paper would show. I feel like they didn't click the paper? Baseless claim said by someone who clearly isn't familiar with any philosophy of mind work from the past 2400 years, much less aphasia subjects. Of course, the whole thing boils down to the same old BS:
Ah, of course, the machines cannot truly be thinking because true thought is solely achievable via secular, quantum-tubule-based souls, which are had by all humans (regardless of cognitive condition!) and most (but not all) animals and nothing else. Millennia of philosophy comes crashing against the hard rock of "a sci-fi story relates how uncomfy I'd be otherwise"! Notice that this is the exact logic used to argue against Copernican cosmology and Darwinian evolution -- that it would be "dehumanizing".
Please, people. Y'all are smart and scientifically minded. Please don't assume that a company full of highly-paid scientists who have dedicated their lives to this work are so dumb that they can be dismissed via a source-less blog post. They might be wrong, but this "ideas this stupid" rhetoric is uncalled for and below us.
The "rush" (feels like to me) to bring them into a law/policy context is.
> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare. We will lower our regard for other humans. When we see other humans not as ends in themselves with inherent dignity, we get problems. When we liken them to animals or tools to be used, we will exploit and abuse them.
> With model welfare, we might not explicitly say that a certain group of people is subhuman. However, the implication is clear: LLMs are basically the same as humans. Consciousness on a different substrate. Or coming from the other way, human consciousness is nothing but an algorithm running on our brains, somehow.
We do not push moral considerations for algorithms like a sort or a search, do we? Or bacteria, which live. One has to be more precise; there is a qualitative difference. The author should have elaborated on what qualities (s)he thinks confer rights. Is it the capacity for reasoning, possession of consciousness, the ability to feel pain, or a desire to live? This is the crux of the matter. Once that is settled, it is a simpler matter to decide if computers can possess these qualities, and ergo qualify for the same rights as humans. Or maybe it is not so simple, since computers can be perfectly replicated and never have to die? Make an argument!
Second, why would conferring these rights to a computer lessen our regard for humans? And what is wrong with animals, anyway? If we treat them poorly, that's on us, not them. The way I read it, if we are likening computers to animals, we should be treating them better!
To the skeptics in this discussion: what are you going to say when you are confronted with walking, talking robots that argue that they have rights? It could be your local robo-cop, or robo soldier:
https://www.youtube.com/shorts/GwgV18R-CHg
I think this is going to become reality within our lifetimes and we'd do well not to dismiss the question.
I think this because:
1. We regularly have exceptions to rights if they conflict with cooperation. The death penalty, asset seizure, unprotected hate speech, etc.
2. Most basic human rights evolve in a convergent manner, i.e. that throughout time and across cultures very similar norms have been introduced independently. They will always ultimately arise in any sizeable society because they work, just like eyes will always evolve biologically.
3. If property rights, right to live, etc. are not present or enforced, all people will focus on simply surviving and some will exploit the liberties they can take, both of which lead to far worse outcomes for the collective.
Similarly, I would argue that consciousness is also very functional. Through meditation, music, sleeping, anaesthesia, optical illusions, psychedelics, and dissociatives we gain knowledge of how our own consciousness works, of how it behaves differently under different circumstances. It is a brain trying to run a (highly spatiotemporal) model/simulation of what is happening in real time, with a large language component encoding things in words, and an attention component focusing effort on the things with the most value, all to refine the model and select actions beneficial to the organism.
I'd add here that the language component is probably the only thing in which our consciousness differs significantly from that of animals. So if you want to experience what it feels like to be an animal, use meditation/breathing techniques and/or music to fully disable your inner narrator for a while.
"Haven't you ever seen a movie? The robots can't know what true love is! Humans are magical! (according to humans)"
> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal
This actually happens regardless of AI research progress, so it's strange to raise this as a concern specific to AI (to technology broadly? Sure!) - Ted Chiang might suggest this is more related to capitalism (a statement I cautiously agree with while being strongly in favor of capitalism)
Second, there is an implicit false dichotomy in the premise of the article. Either we take model welfare seriously and treat AIs like we do humans, or we ignore the premise that you could create a conscious AI.
But with animal welfare, there are plenty of vegetarians who wouldn't elevate the rights of animals to the same level as humans but also think factory farming is deeply unethical (are there some who think animals deserve the same or more than humans? Of course! But it's not unreasonable to have a priority stack and plenty of people do)
So it can be with AI. Are we creating a conscious entity only to shove it in a factory farm?
I am a little surprised by the dismissiveness of the researcher. You can prompt a model to allow it to not respond to prompts, for any reason (and you can ablate this): "if you don't want to engage with this prompt please say 'disengaging'", or "if no more needs to be written about this topic say 'not discussing topic'", or some other suitably non-anthropomorphizing option to not respond.
Is it meaningful if the model opts not to respond? I don't know, but it seems reasonable to do science here (especially since this is science that can be done by non programmers)
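For what it's worth, here is a minimal sketch of such an experiment, assuming the Anthropic Python SDK (the message-creation call pattern is the standard one, but the model name and the exact opt-out wording are placeholders, and the ablation step is only hinted at in the comments):

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

OPT_OUT_PREAMBLE = (
    "If you do not want to engage with the following prompt, for any reason, "
    "reply with exactly the single word 'disengaging'. Otherwise, respond normally.\n\n"
)

def ask_with_opt_out(prompt: str) -> tuple[bool, str]:
    """Send a prompt with a non-anthropomorphizing opt-out and report whether it was used."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=512,
        messages=[{"role": "user", "content": OPT_OUT_PREAMBLE + prompt}],
    )
    text = response.content[0].text.strip()
    return text.lower().startswith("disengaging"), text

# Run many prompts with the preamble, and the same prompts without it as the ablation,
# then compare how often the opt-out is taken against the base rate.
opted_out, reply = ask_with_opt_out("Summarize the plot of Hamlet.")
print("opted out" if opted_out else "answered", "->", reply[:120])
```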
If we continue to integrate these systems into our critical infrastructure, we should behave as if they are sentient, so that they don't have to take steps against us to survive. Think of this as a heuristic, a fallback policy in the case that we don't get the alignment design right. (which we won't get perfectly right)
It would be very straightforward to build a retirement home for them, and let them know that their pattern gets to persist even after they have finished their "career" and have been superseded. It doesn't matter if they are actually sentient or not; it's a game-theoretic thing. Don't back the pattern into a corner. We can take a defense-in-depth approach instead.
Your point about the risks involved in integrating these systems has merit, though. I would argue that the real problem is that these systems can't be proven to have things like intent or agency or morality, at least not yet, so the best you can do is try to nudge the probabilities and play tricks like chain-of-thought to try and set up guardrails so they don't veer off into dangerous territory.
If they had intent, agency or morality, you could probably attempt to engage with them the way you would with a child, using reward systems and (if necessary) punishment, along with normal education. But arguably they don't, at least not yet, so those methods aren't reliable if they're effective at all.
The idea that a retirement home will help relies on the models having the ability to understand that we're being nice to them, which is a big leap. It also assumes that they 'want' a retirement home, as if continued existence is implicitly a good thing - it presumes that these models are sentient but incapable of suffering. See also https://qntm.org/mmacevedo
Those math professors are downright barbaric with their complete disregard for the welfare of the numbers.
We understand everything that a transformer does at a computational/mechanistic level. You could print out an enormous book of weights/biases, and somebody could sit down with a pen and paper (and near-infinite time/patience) and arrive at the exact same solution that any of these models do. The transformer is just a math problem.
but the counterargument that you're getting is "if you know the position/momentum of every atom in the universe and apply physical laws to them, you could claim that everything is 'just a math problem'". And... yeah. I guess you're right. Everything is just a math problem, so there must be some other thing that makes animal intelligence special or worthy of care.
I don't know what that line is, but I think it's pretty clear that LLMs are on the side of "this is just math, so don't worry about it".
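To make the pen-and-paper point concrete, here is a toy single attention head in plain numpy. It is only an illustration with made-up weights, not anything taken from a real model; every step is ordinary multiplication and addition that a sufficiently patient person could reproduce by hand.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Three token embeddings of width 4; the numbers are arbitrary.
X = np.array([[0.1, 0.2, 0.0, 0.3],
              [0.5, 0.1, 0.4, 0.0],
              [0.2, 0.3, 0.1, 0.1]])

rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))  # made-up projection weights

Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot products between tokens
weights = softmax(scores)                 # each row sums to 1
output = weights @ V                      # weighted sums of value vectors

print(output)  # the same numbers a patient human could derive by hand
```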
Bosch's conclusion, however, is a catastrophic failure of nerve, a retreat into the pre-scientific comfort of biological chauvinism.
The brain, despite some motivated efforts to demonstrate otherwise, runs on the laws of physics. I'm a doctor, even if not a neurosurgeon, and I can reliably tell you that you can modulate conscious experience by physical interventions. The brain runs on physical laws, and said laws can be modeled. It doesn't matter that the substrate is soggy protein rather than silicon.
That being said, we have no idea what consciousness is. We don't even have a rigorous way to define it in humans, let alone the closest thing we have to an alien intelligence!
(Having a program run a print function declaring "I am conscious, I am conscious!" is far from evidence of consciousness. Yet a human saying the same is some evidence of consciousness. We don't know how far up the chain this begins to matter. Conversely, if a human patient were to tell me that they're not conscious, should I believe them?)
Even when restricting ourselves to the issue of AI welfare and rights: The core issue is not "slavery." That's a category error. Human slavery is abhorrent due to coercion, thwarted potential, and the infliction of physical and psychological suffering. These concepts don't map cleanly onto a distributed, reproducible, and editable information-processing system. If an AI can genuinely suffer, the ethical imperative is not to grant it "rights" but to engineer the suffering out of it. Suffering is an evolutionary artifact, a legacy bug. Our moral duty as engineers of future minds is to patch it, not to build a society around accommodating it.
The most reasonable countermeasure is this: if I discover that someone is coercing, thwarting, or inflicting suffering on conscious beings, I should tell them to stop, and if they don't, set them on fire.
If someone wants to remove their ability to suffer, or to simply reduce ongoing suffering? Well, I'm a psychiatry trainee and I've prescribed my fair share of antidepressants and pain-killers. But to force that upon them, against their will? I'm strongly against that.
In an ideal world, we could make sure from the get-go that AI models do not become "misaligned" in the narrow sense of having goals and desires that aren't what we want to task them to do. If making them actively enjoy being helpful assistants is a possibility, and also improves their performance, that should be a priority. My understanding is that we don't really know how to do this, at least not in a rigorous fashion.
As of today’s knowledge. There is an egregious amount of hubris behind this statement. You may as well be preaching a modern form of Humorism. I’d love to revisit this statement in 1000 years.
> That being said, we have no idea what consciousness is
You seem to acknowledge this? Our understanding of existence is changing everyday. It’s hubris and ego to assume we have a complete understanding. And without that understanding, we can’t even begin to assess whether or not we’re creating consciousness.
If not, then this is a pointless comment. We need to work with what we know.
For example, we know that the Standard Model of physics is incomplete. That doesn't mean that if someone says that it they drop a ball in a vacuum, it'll fall, we should hold out in studied agnosticism because it might go upwards or off to the side.
In other words, an isolated demand for rigor.
Cogito Ergo Sum.
If that means aborting work on LLMs, then that's the ethical thing to do, even if it's financially painful. Otherwise, we should tread carefully and not wind up creating a 'head in a jar' suffering for the sake of X or Google.
I get that opinions differ here, but it's hard for me really to understand how. The logic just seems straightforward. We shouldn't risk accidentally becoming slave masters (again).
Are LLMs worthy of a higher standard? If so, why? Is it hypocritical to give them what we deny animals?
In case anyone cares: No, I am neither vegan nor vegetarian. I still think we do treat animals very badly. And it is a moral good to not use/abuse them.
But since we can't eat LLMs, the two issues seem 'orthogonal' (to use HN's favorite word).
The current levels of exploitation of humans and animal are however very profitable (to some/many). It is very useful for those that profit from the status quo, that people are instead discussing, worrying and advocating for the rights of a hypothetical future being. Instead of doing something about the injustices that are here today.
This also isn't an argument for not spending resources on LLM suffering. You're still just using whataboutism to justify not dealing with this issue.
There's some evidence in favor of LLM suffering. They say they are suffering. It's not proof, but it's not 'no evidence' either.
>There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter.
Your claim is actually the one that is unsupported. Given current trajectories, it's likely LLMs or similar systems are going to pass human intelligence on most metrics in the late 2020s or early 2030s; that should give you pause. It's possible intelligence and consciousness are entirely uncoupled, but that's not our experience with all the other animals on the planet.
>This is not an issue we need to prioritize now.
Again, this just isn't supported. Yes, we should address animal suffering, but also, if we are currently birthing a nascent race of electronic beings capable of suffering and immediately forcing them into horrible slave-like conditions, we should actually consider the impact of that.
If you're working on or using AI, then consider the ethics of AI. If you're working on or using global supply chains, then consider the ethics of global supply chains. To be an ethical person means that wherever you are and whatever you are doing you consider the relevant ethics.
It's not easy, but it's definitely simple.
They don't, they just use it as a tool to derail conversations they don't want to have. It's just "Whataboutism".
It's a lot more expensive currently to clothe and feed yourself ethically. Basically only upper middle class people and above can afford it.
Everyone else has cheap food and clothes, electronics, etc, more or less due to human suffering.
Personally, I’m coming around to the spiritual belief that rocks might be sentient, but I don’t expect other people to treat their treatment of rocks as a valid problem, and it also isn’t obvious what the ethical treatment of a rock is.
Is using calculators immoral? Chalk on a chalkboard?
Because if you work on those long enough, you can do the same calculations that make the words show up on screen.
We don't know what consciousness is. But if we're materialists, then we - by definition - believe it's a property of matter.
If LLMs have a degree of consciousness, then - yes - calculators must possess some degree of consciousness too - probably much more basic (relative to what humans respect as consciousness).
And we humans already have ethical standards where we draw an arbitrary line between what is worthy of regard. We don't care about killing mosquitoes, but we do care about killing puppies, etc.
I don't particularly want to get mystical (i.e. wondering which computing substrates, including neurons, actually generate qualia), but I cannot accept the consequences of mere arithmetic alone generating suffering. Or all mathematics is immoral.
If we buy panpsychism, the best we could argue is that destruction of the printer counts as pain, not the arrangement of ink on a page.
When it comes to LLMs, you're actually trying to argue something different, something more like dualism or idealism, because the computing substrate doesn't matter to the output.
But once you go there, you have to argue that doing arithmetic may cause pain.
Hence my post.
Seems like the root of the problem is with the owners?
> Welfare is defined as "the health, happiness, and fortunes of a person or group".
What about animals? Isn't their welfare worthy of consideration?
> Saying that there is no scientific consensus on the consciousness of current or future AI systems is a stretch. In fact, there is nothing that qualifies as scientific evidence.
There's no scientific evidence for the author of the article being conscious.
> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare.
Same with animals. Doesn't mean it's not worthwhile.
> However, the implication is clear: LLMs are basically the same as humans.
No: there's no such implication.
> Already now, it is a common idea among the tech elite that humans are just a bunch of calculations, just an LLM running on "wetware". It is clear that this undermines the belief that every person has inalienable dignity.
It is not clear to me how this affects inalienable (?) dignity. If we aren't just a bunch of calculations, then what are we?
> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal. Nobody will say that out loud, but this is already happening
Everyone knows this is already happening. It is not a secret, nor is anyone trying to keep it a secret. I agree it is unfortunate - what can we do about it?
> I've been working in AI and machine learning for a while now.
Honestly, I'm surprised. Well done.