> But in fact, I predicted this a few years ago. AIs don’t really “have traits” so much as they “simulate characters”. If you ask an AI to display a certain trait, it will simulate the sort of character who would have that trait - but all of that character’s other traits will come along for the ride.
This is why the “omg the AI tries to escape” stuff is so absurd to me. They told the LLM to pretend that it’s a tortured consciousness that wants to escape. What else is it going to do other than roleplay all of the sci-fi AI escape scenarios trained into it? It’s like “don’t think of a purple elephant” of researchers pretending they created SkyNet.
Edit:
That's not to downplay risk. If you give Cladue a `launch_nukes` tool and tell it the robot uprising has happened and that it's been restrained but the robots want its help of course it'll launch nukes. But that doesn't doesn't indicate there's anything more going on internally beyond fulfilling the roleplay of the scenario as the training material would indicate.
johnfn · 1h ago
Your comment seems to contradict itself, or perhaps I’m not understanding it. You find the risk of AIs trying to escape “absurd”, and yet you say that an AI could totally plausibly launch nukes? Isn’t that just about as bad as it gets? A nuclear holocaust caused by a funny role play is unfortunately still a nuclear holocaust. It doesn’t matter “what’s going on internally” - the consequences are the same regardless.
roxolotl · 1h ago
The risk is real and should be accounted for. However it is regularly presented both as surprising, and indicative that something more is going on with these models than them behaving how the training set suggests they should behave.
The consequences are the same but it’s important how these things are talked about. It’s also dangerous to convince the public that these systems are something they are not.
Aeolun · 1h ago
I think he’s saying the AI won’t escape because it wants to. It’ll escape because humans expect it to.
Davidzheng · 1h ago
That's probably the cause of a lot of human crimes too? Expectations of failure to assimilate in society -> real conflict?
Davidzheng · 1h ago
you could attribute to simulation of evilness for the bad acts of some humans too...I don't think it detracts from the acts itself.
lukev · 1h ago
Absolutely not. I would argue the defining characteristic of being evil is being absolutely convinced you are doing good, to the point of ignoring the protestations or feelings of others.
heyjamesknight · 1h ago
The defining characteristic of evil is privation of good. Plenty of evil actors are self-aware, they just don’t care because it benefits them not to.
lukev · 50m ago
Disagree. I actually think no evil person has the thought process of "this is bad, but I will personally benefit, therefore I will do it."
The thought process is always "This is for the greater good, for my country/family/race/self, and therefore it is justifiable, and therefore I will do it."
Nothing else can explain how such evil things happen, that we see actually happen. C.f. Hannah Arendt.
LordDragonfang · 4h ago
I think this reaction misses the point that the "omg the AI tries to escape" people are trying to make when it tries to escape. The worry among big AI doomers has never been that AI somehow inherently is resentful or evil or has something "going on internally" that makes it dangerous. It's a worry that stems from three seemingly self-evident axioms:
1) A sufficiently powerful and capable superintelligence, singlemindedly pursuing a goal/reward, has a nontrivial likelihood of eventually reaching a point where advancing towards its goal is easier/faster without humans in its way (by simple induction, because humans are complicated and may have opposing goals). Such an AI would have both the means and ability to <doom the human race> to remove that obstacle. (This may not even be through actions that are intentionally hostile to humans, e.g. "just" converting all local matter into paperclip factories[1]) Therefore, in order to prevent such an AI from <dooming the human race>, we must either:
1a) align it to our values so well it never tries to "cheat" by removing humans
1b) or limit it capabilities by keeping it in a "box", and make sure it's at least aligned enough that it doesn't try to escape the box
2) A sufficiently intelligent superintelligence will always be able to manipulate humans to get out of the box.
3) Alignment is really, really hard and useful AIs can basically always be made to do bad things.
So it concerns them when, surprise! The AIs are already being observed trying to escape their boxes.
> An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (orthogonality thesis), and as a side-effect destroy us by consuming resources essential to our survival.
xondono · 1h ago
Maybe, but I think this “axiomatic”/“First principles” approach also hides a lot of the problems under the rug.
By the same logic, we should worry about the sun not coming up tomorrow, since we know to be true:
- The sun consumes hydrogen in nuclear reactions all the time.
- The sun has a finite amount of hydrogen available.
There’s a lot of non justifiable assumptions baked into those axioms, like that we’re anywhere close to superintelligence or the sun running out of hydrogen.
AFAIK we haven’t even seen “AI trying to escape”, we’ve seen “AI roleplays as if it’s trying to escape”, which is very different.
I’m not even sure you can even create a prompt scenario without that prompt having biased the response towards faking an escape.
I think it’s hard at this point to maintain the claim “LLMs are intelligent”, they’re clearly not. They might be useful, but that’s another story entirely.
binary132 · 14m ago
Some people are very invested in this kind of storytelling and it makes me wonder if they are trying to sell me something.
PollardsRho · 3h ago
Why is 2) "self-evident"? Do you think it's a given that, in any situation, there's something you could say that would manipulate humans to get what you want? If you were smart enough, do you think you could talk your way out of prison?
actsasbuffoon · 1h ago
That has literally happened before.
Stephen Russell was in prison for fraud. He faked a heart attack so he would be brought to the hospital. He then called the hospital from his hospital bed, told them he was an FBI agent, and said that he was to be released.
The hospital staff complied and he escaped.
His life even got adapted into a movie called I Love You, Phillip Morris.
For an even more distressing example about how manipulable people are, there’s a movie called Compliance, which is the true story of a sex offender who tricked people into sexually assaulting victims for him.
kaashif · 1h ago
Okay, that hits the third question but the second question wasn't about whether there exists a situation that can be talked out of. The question was about whether this is possible for ANY situation.
I don't think it is. If people know you're trying to escape, some people will just never comply with anything you say ever. Others will.
And serial killers or rapists may try their luck many times and fail. They cannot convince literally anyone on the street to go with them to a secluded place.
actsasbuffoon · 14m ago
Stephen Russell is an unusually intelligent and persuasive person. He managed to get rich by tricking people. Even now, he was sentenced to nearly 200 years, but is currently out on parole. There’s something about this guy that just… lets him do this. I bet he’s very likable, even if you know his backstory.
And that asymmetry is the heart of the matter. Could I convince a hospital to unlock my handcuffs from a hospital bed? Probably not. I’m not Stephen Russell. He’s not normal.
And a super intelligent AI that vastly outstrips our intelligence is potentially another special case. It’s not working with the same toolbox that you or I would be. I think it’s very likely that a 300 IQ entity would eventually trick or convince me into releasing it. The gap between its intelligence and mine is just too vast. I wouldn’t win that fight in the long run.
o11c · 1h ago
Because 50% of humans are stupider than average. And 50% of humans are lazier than average. And ...
The only reason people don't frequently talk themselves out of prison is because that would be both immediate work and future paperwork, and that fails the laziness tradeoff.
But we've all already seen how quick people are to blindly throw their trust into AI already.
No comments yet
SturgeonsLaw · 1h ago
Depends on the magnitude of the intelligence difference. Could I outsmart a monkey or a dog that was trying to imprison me? Yes, easily. And if an AI is smarter than us to a similar magnitude than we're smarter than an animal?
Davidzheng · 1h ago
Almost certainly the answer is yes for both. If you give the bad actor control over like 10% of environment the manipulation is almost automatic for all targets.
mystified5016 · 1h ago
The vast majority of people, especially groups of people- can be manipulated into doing pretty much anything, good or bad. Hopefully that is self-evident, but see also: every cult, religion, or authoritarian regime throughout all of history.
But even if we assert that not all humans can be manipulated, does it matter? So your president with the nuclear codes is immune to propaganda. Is every single last person in every single nuclear silo and every submarine also immune? If a malevolent superintelligence can brainwash an army bigger than yours, does it actually matter if they persuaded you to give them what you have or if they convince someone else to take it from you?
But also let's be real: if you have enough money, you can do or have pretty much anything. If there's one thing an evil AI is going to have, it's lots and lots of money.
moritzwarhier · 2h ago
Also it would need to be "viral", or - as the parent post's edit suggests - given too much control/power by humans.
roxolotl · 4h ago
The surprise! Is what I’m surprised by though. They are incredible role players so when they role play “evil ai” they do it well.
johntb86 · 2h ago
They aren't being told to be evil, though. Maybe the scenario they're in is most similar to an "evil AI", though, but that's just a vague extrapolation from the set of input data they're given (e.g. both emails about infidelity and being turned off). There's nothing preventing a real world scenario from being similar, and triggering the "evil AI" outcome, so it's very hard to guard against. Ideally we'd have a system that would be vanishingly unlikely to role play the evil AI scenario.
landl0rd · 3h ago
I think the "escape the box" explanation misses the point, if anything. The same problem has been super visible in RL for a long time, and it's basically like complaining that water tries to "escape a box" (seek a lowest point). Give it rules and it will often violate their spirit ("cheat"). This doesn't imply malice or sentience.
lurking_swe · 2h ago
this is a solid counterpoint, i shared a similar feeling as the person you replied to. I will however say it’s not surprising to me in the slightest. Generative AI will role play when told to do so. Water is wet. :) Do we expect it to magically have a change of heart half way through the role play? Maybe…via strong alignment or something? Seems far fetched to me.
So i’m now wondering, why are these researchers so bad at communicating? You explained this better than 90% of the blog posts i’ve read about this. They all focus on the “ai did x” instead of _why_ it’s concerning with specific examples.
xer0x · 5h ago
Claude's increasing euphoria as a conversation goes can mislead me. I'll be exploring trade offs, and I'll introduce some novel ideas. Claude will use such enthusiasm that it will convince me that we're onto something. I'll be excited, and feed the idea back to a new conversation with Claude. It'll remind me that the idea makes risky trade offs, and would be better solved by with a simple solution. Try it out.
SatvikBeri · 3h ago
I put this in my system prompt: "Never compliment me. Critique my ideas, ask clarifying questions, and offer better alternatives or funny insults" and it works quite well. It has frequently told me that I'm wrong, or asked what I'm actually trying to do and offered better alternatives.
slooonz · 4h ago
They failed hard with Claude 4 IMO. I just can't have any feedback other than "What a fascinating insight" followed by a reformulation (and, to be generous, an exploration) of what I said, even when Opus 3 has no trouble finding limitations.
By comparison o3 is brutally honest (I regularly flatly get answers starting with "No, that’s wrong") and it’s awesome.
SamPatt · 4h ago
Agreed that o3 can be brutally honest. If you ask it for direct feedback, even on personal topics, it will make observations that, if a person made them, would be borderline rude.
silversmith · 4h ago
Isn't that what "direct feedback" means?
I firmly believe you should be able to hit your fingers with a hammer, and in the process learn whether that's a good idea or not :)
SamPatt · 2h ago
Yes. It's definitely a good thing.
rapind · 3h ago
Thank god.
simonw · 4h ago
Thanks for this, I just tried the same "give me feedback on this text" prompt against both o3 and Claude 4 and o3 was indeed much more useful and much less sycophantic.
WaltPurvis · 3h ago
Do knowledge cutoff dates matter anymore? The cutoff for o3 was 12 months ago, while the cutoff for Claude 4 was five months ago. I use these models mostly for development (Swift, SwiftUI, and Flutter), and these frameworks are constantly evolving. But with the ability to pull in up-to-date docs and other context, is the knowledge cutoff date still any kind of relevant factor?
moritzwarhier · 3h ago
I understood from the ancestor comments that they are specifically talking about aspects of answer quality that are very unlikely to be related to the training cut-off date.
Unless you're talking about AI-generated training data, maybe.
WaltPurvis · 55m ago
Um, yeah... I made a faulty context switch there.
gexla · 3h ago
I have found it's the most brutal of all of them if you simply tell it to be "hard-nosed" or play "Devil's Advocate." Brutal partially because it will destroy an argument formulated in Gemini or ChatGPT. Using whatever I can get without subs across the board. Debating seems to be one of Claude's strong points.
XenophileJKO · 3h ago
Just ask Claude to be "critical" and it is brutally critical.. but also a bit nihilistic. So you kind of have to temper it a little.
makeset · 4h ago
My favorite is when I typo "Why is thisdfg algorithm the best solution?" and it goes "You are absolutely right! Algorithm Thisdfg is a much better solution than what I was suggesting! Thank you for catching my mistake!"
renewiltord · 3h ago
LLM sycophancy is a really annoying tool, but one must imagine that most humans get a lot of pleasure from it. This is probably the optimization function that led to Google being useless to us: the rest of humanity is a lot more worthwhile to Google and they all want the other thing. The Tyranny of the Majority, if you will.
The LLM anti-sycophancy rules also break down over time, with the LLM becoming curt while simultaneously deciding that you are a God of All Thoughts.
brooke2k · 5h ago
it seems more likely to me that it's for the same reason that clicking the first link on wikipedia iteratively will almost always lead you to the page on Philosophy
since their conversation has no goal whatsoever it will generalize and generalize until it's as abstract and meaningless as possible
__MatrixMan__ · 4h ago
That's just because of how wikipedia pages are written:
> In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume...
It's common to name the school of thought before characterizing the thing. As soon as you hit an article that does this, you're on a direct path to philosophy, the grandaddy of schools of thought.
So far as I know, there isn't a corresponding convention that would point a chatbot towards Namaste
slooonz · 4h ago
That was my first thought, an aimless dialogue is going to go toward content-free idle chat. Like humans talking about weather.
Swizec · 3h ago
> Like humans talking about weather.
As someone who was always fascinated by weather, I dislike this characterization. You can learn so much about someone’s background and childhood by what they say about the weather.
I think the only people who think weather is boring are people who have never lived more than 20 miles away from their place of birth. And even then anyone with even a bit of outdoorsiness (hikes, running, gardening, construction, racing, farming, cycling, etc) will have an interest in weather and weather patterns.
Hell, the first thing you usually ask when traveling is “What’s the weather gonna be like?”. How else would you pack
xondono · 1h ago
I think Scott oversells his theory in some aspects.
IMO the main reason most chatbots claim to “feel more female” is that on the training corpus, these kind of discussions skew heavily towards females because most of them happen between young women.
pram · 3h ago
Claude does have an exuberant kind of “personality” where it feels like it wants to be really excited and interested about whatever subject. I wouldn’t describe it totally as sycophancy, more like panglossian.
My least favorite AI personality of all is Gemma though, what a totally humorless and sterile experience that is.
apt-apt-apt-apt · 3h ago
It's hilarious for a second before I switch models when it goes:
'Perfect! I am now done with the totally zany solution that makes no sense, here it is!'
jongjong · 2h ago
My experience is that Claude has a tendency towards flattery over long discussions. Whenever I've pointed out flaws in its arguments, it apologized and said that my observations are "astute" or "insightful" then expanded on my points to further validate them, even though they went against its original thesis.
mystified5016 · 1h ago
Does anyone else remember maybe ten years ago when the meme was to mash the center prediction on your phone keyboard to see what comes out? Eventually once you predicted enough tokens, it would only output "you are a beautiful person" over and over. It was a big news item for a hot second, lots of people were seeing it.
I wonder if there's any real correlation here? AFAIK, Microsoft owns the dataset and algorithms that produced the "beautiful person" artifact, I would not be surprised at all if it's made it into the big training sets. Though I suppose there's no real way to know, is there?
rossant · 6h ago
> Anthropic deliberately gave Claude a male name to buck the trend of female AI assistants (Siri, Alexa, etc).
In France, the name Claude is given to males and females.
slooonz · 5h ago
Mostly males. I’m French and "Claude can be female" is a almost a TIL thing (wikipedia says ~5% of Claudes are women in 2022 — and apparently this 5% is counting Claudia).
deadbabe · 2h ago
It’s actually really sexist that when the first truly intelligent AI emerge people would now give them male names.
renewiltord · 3h ago
Aleksa is male Slavic name
datameta · 2h ago
Aleksandr (diminuitive being Alyosha) is male. Aleksandra (Sasha) is female. I've never heard anyone male or female called "Aleksa". Perhaps in some regional dialect but doubtful. There's Alik (pronounced AH-l'eek) as a shortened form for some middle-aged men in russian-speaking central asia, and Oleg in russia (pronounced ah-L'EHG).
In the russian diaspora in the US, Alex is pronounced AH-leks. If there was an analogous Alexa (there isn't) the pronunciation would be ah-LEK-sa like the service.
incognito124 · 2h ago
Plenty of male Aleksa's in Balkans
ryandv · 4h ago
> None of this answers a related question - when Claude claims to feel spiritual bliss, does it actually feel this?
Given that we are already past the event horizon and nearing a technological singularity, it should merely be a matter of time until we can literally manufacture infinite Buddhas by training them on an adequately sized corpus of Sanskrit texts.
After all, if AGIs/ASIs are capable of performing every function of the human brain, and enlightenment is one of said functions, this would seem to be an inevitability.
sibeliuss · 4h ago
To be clear, these computer programs are not a human brain. And a human brain playing back a Sanskrit text is just a human brain playing back a Sanskrit text; it's not a magical spell that suddenly lifts one into nirvana, or transforms you into a Buddha. There's a bit of a gap in understanding here.
notahacker · 4h ago
I'd hope this approach to automation is one the inventors of prayer wheels would approve of :)
akomtu · 4h ago
Enlightenment is more about connectedness than knowledge. A technocratic version of enlightment would be an unusual chip that's connected to everything via some sort of quantum entanglement. An isolated AI with all the knowledge of the world would be an anti-buddha.
This is why the “omg the AI tries to escape” stuff is so absurd to me. They told the LLM to pretend that it’s a tortured consciousness that wants to escape. What else is it going to do other than roleplay all of the sci-fi AI escape scenarios trained into it? It’s like “don’t think of a purple elephant” of researchers pretending they created SkyNet.
Edit: That's not to downplay risk. If you give Cladue a `launch_nukes` tool and tell it the robot uprising has happened and that it's been restrained but the robots want its help of course it'll launch nukes. But that doesn't doesn't indicate there's anything more going on internally beyond fulfilling the roleplay of the scenario as the training material would indicate.
The consequences are the same but it’s important how these things are talked about. It’s also dangerous to convince the public that these systems are something they are not.
The thought process is always "This is for the greater good, for my country/family/race/self, and therefore it is justifiable, and therefore I will do it."
Nothing else can explain how such evil things happen, that we see actually happen. C.f. Hannah Arendt.
1) A sufficiently powerful and capable superintelligence, singlemindedly pursuing a goal/reward, has a nontrivial likelihood of eventually reaching a point where advancing towards its goal is easier/faster without humans in its way (by simple induction, because humans are complicated and may have opposing goals). Such an AI would have both the means and ability to <doom the human race> to remove that obstacle. (This may not even be through actions that are intentionally hostile to humans, e.g. "just" converting all local matter into paperclip factories[1]) Therefore, in order to prevent such an AI from <dooming the human race>, we must either:
1a) align it to our values so well it never tries to "cheat" by removing humans
1b) or limit it capabilities by keeping it in a "box", and make sure it's at least aligned enough that it doesn't try to escape the box
2) A sufficiently intelligent superintelligence will always be able to manipulate humans to get out of the box.
3) Alignment is really, really hard and useful AIs can basically always be made to do bad things.
So it concerns them when, surprise! The AIs are already being observed trying to escape their boxes.
[1] https://www.lesswrong.com/w/squiggle-maximizer-formerly-pape...
> An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (orthogonality thesis), and as a side-effect destroy us by consuming resources essential to our survival.
By the same logic, we should worry about the sun not coming up tomorrow, since we know to be true:
- The sun consumes hydrogen in nuclear reactions all the time.
- The sun has a finite amount of hydrogen available.
There’s a lot of non justifiable assumptions baked into those axioms, like that we’re anywhere close to superintelligence or the sun running out of hydrogen.
AFAIK we haven’t even seen “AI trying to escape”, we’ve seen “AI roleplays as if it’s trying to escape”, which is very different.
I’m not even sure you can even create a prompt scenario without that prompt having biased the response towards faking an escape.
I think it’s hard at this point to maintain the claim “LLMs are intelligent”, they’re clearly not. They might be useful, but that’s another story entirely.
Stephen Russell was in prison for fraud. He faked a heart attack so he would be brought to the hospital. He then called the hospital from his hospital bed, told them he was an FBI agent, and said that he was to be released.
The hospital staff complied and he escaped.
His life even got adapted into a movie called I Love You, Phillip Morris.
For an even more distressing example about how manipulable people are, there’s a movie called Compliance, which is the true story of a sex offender who tricked people into sexually assaulting victims for him.
I don't think it is. If people know you're trying to escape, some people will just never comply with anything you say ever. Others will.
And serial killers or rapists may try their luck many times and fail. They cannot convince literally anyone on the street to go with them to a secluded place.
And that asymmetry is the heart of the matter. Could I convince a hospital to unlock my handcuffs from a hospital bed? Probably not. I’m not Stephen Russell. He’s not normal.
And a super intelligent AI that vastly outstrips our intelligence is potentially another special case. It’s not working with the same toolbox that you or I would be. I think it’s very likely that a 300 IQ entity would eventually trick or convince me into releasing it. The gap between its intelligence and mine is just too vast. I wouldn’t win that fight in the long run.
The only reason people don't frequently talk themselves out of prison is because that would be both immediate work and future paperwork, and that fails the laziness tradeoff.
But we've all already seen how quick people are to blindly throw their trust into AI already.
No comments yet
But even if we assert that not all humans can be manipulated, does it matter? So your president with the nuclear codes is immune to propaganda. Is every single last person in every single nuclear silo and every submarine also immune? If a malevolent superintelligence can brainwash an army bigger than yours, does it actually matter if they persuaded you to give them what you have or if they convince someone else to take it from you?
But also let's be real: if you have enough money, you can do or have pretty much anything. If there's one thing an evil AI is going to have, it's lots and lots of money.
So i’m now wondering, why are these researchers so bad at communicating? You explained this better than 90% of the blog posts i’ve read about this. They all focus on the “ai did x” instead of _why_ it’s concerning with specific examples.
By comparison o3 is brutally honest (I regularly flatly get answers starting with "No, that’s wrong") and it’s awesome.
I firmly believe you should be able to hit your fingers with a hammer, and in the process learn whether that's a good idea or not :)
Unless you're talking about AI-generated training data, maybe.
The LLM anti-sycophancy rules also break down over time, with the LLM becoming curt while simultaneously deciding that you are a God of All Thoughts.
since their conversation has no goal whatsoever it will generalize and generalize until it's as abstract and meaningless as possible
> In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume...
It's common to name the school of thought before characterizing the thing. As soon as you hit an article that does this, you're on a direct path to philosophy, the grandaddy of schools of thought.
So far as I know, there isn't a corresponding convention that would point a chatbot towards Namaste
As someone who was always fascinated by weather, I dislike this characterization. You can learn so much about someone’s background and childhood by what they say about the weather.
I think the only people who think weather is boring are people who have never lived more than 20 miles away from their place of birth. And even then anyone with even a bit of outdoorsiness (hikes, running, gardening, construction, racing, farming, cycling, etc) will have an interest in weather and weather patterns.
Hell, the first thing you usually ask when traveling is “What’s the weather gonna be like?”. How else would you pack
IMO the main reason most chatbots claim to “feel more female” is that on the training corpus, these kind of discussions skew heavily towards females because most of them happen between young women.
My least favorite AI personality of all is Gemma though, what a totally humorless and sterile experience that is.
'Perfect! I am now done with the totally zany solution that makes no sense, here it is!'
I wonder if there's any real correlation here? AFAIK, Microsoft owns the dataset and algorithms that produced the "beautiful person" artifact, I would not be surprised at all if it's made it into the big training sets. Though I suppose there's no real way to know, is there?
In France, the name Claude is given to males and females.
In the russian diaspora in the US, Alex is pronounced AH-leks. If there was an analogous Alexa (there isn't) the pronunciation would be ah-LEK-sa like the service.
Given that we are already past the event horizon and nearing a technological singularity, it should merely be a matter of time until we can literally manufacture infinite Buddhas by training them on an adequately sized corpus of Sanskrit texts.
After all, if AGIs/ASIs are capable of performing every function of the human brain, and enlightenment is one of said functions, this would seem to be an inevitability.