I found the recent sycophancy a bit annoying when trying to diagnose and solve coding problems. First it would waste time praising your intelligence for asking the question before getting to the answer. But more annoyingly if I asked "I am encountering X issue, could Y be the cause" or "could Y be a solution", the response would nearly always be "yes, exactly, it's Y" even when it wasn't the case. I guess part of the problem there is asking leading questions but it would be much more valuable if it could say "no, you're way off".
But...
> Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns—including around issues like mental health, emotional over-reliance, or risky behavior.
It's kind of a wild sign of the times to see a tech company issue this kind of post mortem about a flaw in its tech leading to "emotional over-reliance, or risky behavior" among its users. I think the broader issue here is people using ChatGPT as their own personal therapist.
photonthug · 6h ago
> But more annoyingly if I asked "I am encountering X issue, could Y be the cause" or "could Y be a solution", the response would nearly always be "yes, exactly, it's Y" even when it wasn't the case
Seems like the same issue as the evil vector [1] and it could have been predicted that this would happen.
> It's kind of a wild sign of the times to see a tech company issue this kind of post mortem about a flaw in its tech leading to "emotional over-reliance, or risky behavior" among its users. I think the broader issue here is people using ChatGPT as their own personal therapist.
I'll say the quiet part out loud here. What's wild is that they appear to be apologizing that their Wormtongue[2] whisperer was too obvious to avoid being caught in the act, rather than prioritizing or apologizing for not building the fact-based councilor that people wanted/expected. In other words.. their business model at the top is the same as the scammers at the bottom: good-enough fakes to be deceptive, doubling down on narratives over substance, etc.
Well, that's always what LLM-based AI has been. It can be incredibly convincing but the bottom line is it's just flavoring past text patterns, billions of them it's been "trained" on, which is more accurately described as compressed efficiently onto latent space. Like if someone lived for 10,000 years engaging in small talk at the bar, has heard it all, and just kind of mindlessly and intuitively replied with something that sounds plausible for every situation.
Sam Altman is the real sycophant in this situation. GPT is patronizing. Listening to Sam go off on tangent about science fiction scenarios that are just around the corner... I don't know how more people don't see through it.
I kind of get the feeling the people who have to work him every day got sick of his nonsense and just did what he asked for. Targeting the self-help crowd, drive engagement, flatter users, "create the next paradigm of emotionally-enabled humans of perfect agency" or whatever the fuck it he was popping off about to try to motivate the team to compete better with Anthropic.
He clearly isn't very smart. He clearly is product of nepotism. And clearly, LLM "AI" is an overhyped, overwrought version of 20 questions artificial intelligence enabled by mass data scale and NVidia video game graphics. it's been 4 years now of this and AI still tells me the most obviously wrong nonsense every day.
"Are you sure about that?"
"You're absolutely correct to be skeptical of ..."
photonthug · 4h ago
> which is more accurately described as compressed efficiently onto latent space.
The actual difference between solving compression+search vs novel creative synthesis / emergent "understanding" from mere tokens is always going to be hard to spot with these huge cloud-based models that drank up the whole internet. (Yes.. this is also true for domain experts in whatever content is being generated.)
I feel like people who are very optimistic about LLM capabilities for the later just need to produce simple products to prove their case; for example, drink up all the man pages, a few thousand advanced shell scripts that are easily obtainable, and some subset of stack-overflow. And BAM, you should have a offline bash oracle that makes this tiny subset of general programming endeavor a completely solved problem.
Currently, smaller offline models still routinely confuse the semantics of "|" vs "||". (An embarrassing statistical aberration that is more like the kind of issue you'd expect with old school markov chains than a human-style category error or something.) Naturally if you take the same problem to a huge cloud model you won't have the same issue, but the argument that it "understands" anything is pointless, because the data-set is so big that of course search/compression starts to look like genuine understanding/synthesis and really the two can no longer be separated. Currently it looks more likely this fundamental problem will be "solved" with increased tool use and guess-and-check approaches. The problem then is that the basic issue just comes back anyway, because it cripples generation of an appropriate test-harness!
More devs do seem to be coming around to this measured, non-hype kind of stance gradually though. I've seen more people mentioning stuff like, "wait, why can't it write simple programs in a well specified esolang?" and similar
skydhash · 30m ago
A naive thought: What you would get if you hardcode the language grammar and not let the training discern it, so instead of it, kinda like an expert system constraining its output?
nyarlathotep_ · 2h ago
> it's been 4 years now of this and AI still tells me the most obviously wrong nonsense every day.
It's remarkable seeing the change in sentiment in these parts, considering even just a year ago a large part of this forum seemed to regularly proclaim that programmers were done, lawyers were gone in 5 years, "Aye Gee Eye is coming", etc etc.
cjbgkagh · 12h ago
For many people ChatGPT is already the smartest relationship they have in their lives, not sure how long we have until it’s the most fulfilling. On the upside it is plausible that ChatGPT can get to a state where it can act as a good therapist and help helpless who otherwise would not get help.
I am more regularly finding myself in discussions where the other person believes they’re right because they have ChatGPT in their corner.
I think most smart people overestimate the intelligence of others for a variety of reasons so they overestimate what it would take for a LLM to beat the output of an average person.
alickz · 9h ago
>I think most smart people overestimate the intelligence of others for a variety of reasons so they overestimate what it would take for a LLM to beat the output of an average person.
I think most people also _vastly_ overestimate how much positive attention the average person gets in their lives
It wouldn't surprise me if, for most people, ChatGPT offers them more empathy and understanding than _anyone_ else _ever has_, at least on a consistent basis. That kind of indefatigable emotional labor is just not feasible for most, even on a very short term basis, even for those with large support networks
We can argue over whether or not it's "real" empathy, but I don't believe we can argue with the emotions of our attention starved brothers and sisters
rurp · 8h ago
>We can argue over whether or not it's "real" empathy
There's nothing to argue about, it's unambiguously not real empathy. Empathy from a human exists in a much broader context of past and future interactions. One reason human empathy is nice is because it is often followed up with actions. Friends who care about you will help you out in material ways when you need it.
Even strangers will. Someone who sees a person stranded on the side of a road might feel for them and stop to lend a hand. ChatGPT will never do that, and not just because interaction mediums are so limited, but also because that's not the purpose of the tool. The purpose of ChatGPT is to make immense amounts of money and power for its owners, and a nice sounding chat bot currently happens to be an effective way of getting there. Sam Altman doesn't have empathy for random ChatGPT users he's never met and neither do the computer algorithms his company develops.
alickz · 7h ago
>There's nothing to argue about, it's unambiguously not real empathy
I think if a person can't tell the difference between empathy from a human vs empathy from a chatbot, it's a difference without a distinction
If it activates the same neural pathways, and has the same results, then I think the mind doesn't care
>One reason human empathy is nice is because it is often followed up with actions. Friends who care about you will help you out in material ways when you need it.
This is what I think people vastly overestimate
I don't think most people have such ready access to a friend who is both willing and able to perform such emotional labor, on demand, at no cost to themselves.
I think the sad truth is that empathy is a much scarcer resource than we believe, not through any moral fault of our own, but because it's just the nature of things.
The economics of emotions.
We'll see what the future has in store for the tech anyway, but if it turns out that the average person gets more empathy from a chatbot than a human, it wouldn't surprise me
cgio · 6h ago
Empathy does not lie in its perception on receipt but in its inception as a feeling. It is fundamentally a manifestation of the modalities enabled in shared experience. As such impossible to the extent that our experiences are not compatible with those of an intelligence that does not put emphasis on lived context trying to substitute for it with offline batch learning. Understanding is possible in this relationship, but should not be confused with empathy or compassion.
biker142541 · 7h ago
>If it activates the same neural pathways, and has the same results, then I think the mind doesn't care
Boiling it down to neural signals is a risky approach, imo. There are innumerable differences between these interactions. This isn't me saying interactions are inherently dangerous if artificial empathy is baked in, but equating them to real empathy is.
Understanding those differences is critical, especially in a world of both deliberately bad actors and those who will destroy lives in the pursuit of profit by normalizing replacements for human connections.
fastball · 3h ago
Can you define "real empathy"?
hxtk · 2h ago
There's a book that I encourage everyone to read called Motivational Interviewing. I've read the 3rd edition and I'm currently working my way through the 4th edition to see what's changed, because it's a textbook that they basically rewrite completely with each new edition.
Motivational Interviewing is an evidence-based clinical technique for helping people move through ambivalence during the contemplation, preparation, and action stages of change under the Transtheoretical Model.
In Chapter 2 of the 3rd Edition, they define Acceptance as one of the ingredients for change, part of the "affect" of Motivational Interviewing. Ironically, people do not tend to change when they perceive themselves as unacceptable as they are. It is when they feel accepted as they are that they are able to look at themselves without feeling defensive and see ways in which they can change and grow.
Nearly all that they describe in chapter 2 is affective—it is neither sufficient nor even necessary in the clinical context that the clinician feel a deep acceptance for the client within themselves, but the client should feel deeply accepted so that they are given an environment in which they can grow. The four components of the affect of acceptance are autonomy support, absolute worth (what Carl Rogers termed "Unconditional Positive Regard"), accurate empathy, and affirmation of strengths and efforts.
Chapters 5 and 6 of the third edition define the skills of providing the affect of acceptance defined in Chapter 2—again, not as a feeling, but as a skill. It is something that can be taught, practiced, and learned. It is a common misconception to believe that unusually accepting people become therapists, but what is actually the case is that practicing the skill of accurate empathy trains the practitioner to be unusually accepting.
The chief skill of accurate empathy is that of "reflective listening", which essentially consists of interpreting what the other person has said and saying your interpretation back to them as a statement. For an unskilled listener, this might be a literal rewording of what was said, but more skilled listeners can, when appropriate, offer reflections that read between the lines. Very skilled listeners (as measured by scales like the Therapist Empathy Scale) will occasionally offer reflections that the person being listened to did not think, but will recognize within themselves once they have heard it.
In that sense, in the way that we measure empathy in settings where it is clinically relevant, I've found that AIs are very capable with some prompting of displaying the affect of accurate empathy.
telchior · 6h ago
A lot of human empathy isn't real either. Defaulting to the most extreme example, narcissists use love bombing to build attachment. Sales people use "relationship building" to make money. AI actually seems better than these -- it isn't building up to a rug pull (at least, not one that we know of yet).
And it's getting worse year after year, as our society gets more isolated. Look at trends in pig butchering, for instance: a lot of these are people so incredibly lonely and unhappy that they fall into the world's most obvious scam. AI is one of the few things that actually looks like it could work, so I think realistically it doesn't matter that it's not real empathy. At the same time, Sam Altman looks like the kind of guy who could be equally effective as a startup CEO or running a butchering op in Myanmar, so I hope like hell the market fragments more.
codr7 · 9h ago
It's a pretty sucky solution to that problem imo, and I can see a substantial risk that it causes people to withdraw even more from real relations.
cjbgkagh · 8h ago
One concern that I do worry about is if LLMs are able to present an false attractive view of the world that the user will become increasingly dependent on the LLMs to maintain that view. A cult of 1. Reminds me of the episode 'Safe Space' from South Park but instead of Butters filtering content it'll be the LLM. People are already divorced enough from reality - but I see no reason why they couldn't be more divorced, at least temporarily.
danenania · 7h ago
It begs the question of who decides what “reality” is though. A lot of people have an unrealistically negative view of themselves and their abilities—often based on spending time around pessimistic or small-minded humans.
In that case, if an AI increases someone’s confidence in themselves, you could say it’s giving them a stronger sense of reality by helping them to question distorted and self-limiting beliefs.
codr7 · 6h ago
Reality as in the real world, it is what it is, no one decides.
danenania · 6h ago
We're talking about psychology, therapy, sycophancy, etc. None of this is empirical.
If someone thinks they can, say, create a billion dollar startup, whether they can really do it or not is a subjective determination. The AI might tell the person they can do it. You might tell them they can't, that the AI is sycophantic, and that they should stop talking to it because they're losing touch with reality.
But is the AI a sycophant, or are you an irrational pessimist?
TheOtherHobbes · 3h ago
The AI will be saying the same thing to everyone. Rationally, what are the chances every single OpenAI customer will be building a billion dollar startup any time soon?
But even it's more obvious than that. The sycophancy is plain old love bombing, which is a standard cult programming technique.
As for startups - let's wait until the AI has built a few of its own, or at least mentored humans successfully.
alickz · 7h ago
Hopefully there are better solutions to the fundamental limitations of societal empathy in the future, but for now i just can't see any
Seems to me empathy on a societal scale has been receding as population grows, not increasing to match (or outpace)
Telling people to seek empathy elsewhere to me will be about as useful as telling people at an oasis in the desert to look for water elsewhere, but i hope i'm wrong
itchyjunk · 10h ago
For a subset of topics, "AI" is already what I prefer to interact with over humans. At times, it's nicer to start with "AI" and kind of ground my messy thoughts before interacting with people and that works better than directly starting with a person.
I'm also starting to come across people who say "You should ask these types of questions to AI first" answer. But this is no different than people who preached "lmfgt" kind of religiously. Even when I prefer to start with humans, some humans prompt me to start by prompting AI.
cjbgkagh · 10h ago
I see the same.
I'm waiting on LLMs to get good enough that I can use them to help me learn foreign languages - e.g. talk to me about the news in language X. This way I can learn a language in an interesting and interactive way without burdening some poor human with my mistakes. I would build this myself but others will probably beat me too it.
CouchYam · 10h ago
I sometimes prompt the LLM to talk to me as a <language> instructor - to suggest a topic, ask a question, read my response, correct my grammar, and suggest alternate vocabulary where appropriate. This works quite well. Similar to your comment, I am often hesitant to butcher a language in front of a real person :-).
skydhash · 24m ago
The first step to really learn a language is to be confident and forgive yourself for any mistakes (you’re starting late anyway , and juggling other things).
These days, I do my best to learn and reflect. But any mistakes is just a reminder for more learning (and practice).
QuercusMax · 8h ago
I contribute to a language-learning forum as a native English speaker, and we constantly get questions from people who are doing exactly what you're doing. The AI does not understand the language, and it will tell you blatantly incorrect information. Especially with less-common constructs, you'll just get very bad advice.
bayindirh · 9h ago
The problem is, AI doesn't let you, or encourage you to create your own style. Word choices, structure, flow, argument building and discourse style is very fixed and "average", since it's a machine favors what it ingests most.
I use Grammarly for grammar and punctuation, and disable all style recommendations. If I let it loose on my piece of text, it converts it to a slop. Same bland, overly optimistic toned text generator output.
So, that machine has no brain, use your own first.
croemer · 2h ago
LLMs are already good enough to tell you the news in language X and listen to your broken attempts at asking questions back. Or what's missing?
cjbgkagh · 2h ago
They do seem close to being able to, I'm mostly waiting on someone to provide it as a service - I don't have time for side projects ATM.
nsajko · 10h ago
> it is plausible that ChatGPT can get to a state where it can act as a good therapist
Be careful with that thought, it's a trap people have been falling into since the sixties:
Eventual plausibility is a suitably weak assertion, to refute it you would have to at least suggest that it is never possible which you have not done.
diggan · 10h ago
I dunno, I feel like most people (probably not the typical HN user though) don't even think about their feelings, wants or anything else introspective on a regular basis. Maybe having something like ChatGPT available could be better than nothing, at least for people to start being at least a bit introspective, even if it's LLM-assisted. Maybe it gets a bit easier to ask questions that you feel are stigmatized, as you know (think) no other human will see it, just the robot that doesn't have feelings nor judge you.
I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?
kergonath · 9h ago
> I feel like most people (probably not the typical HN user though) don't even think about their feelings, wants or anything else introspective on a regular basis.
Well, two things.
First, no. People who engage on HN are a specific part of the population, with particular tendencies. But most of the people here are simply normal, so outside of the limits you consider. Most people with real social issues don’t engage in communities, virtual or otherwise. HN people are not special.
Then, you cannot follow this kind of reasoning when thinking about a whole population. Even if people on average tend to behave one way, this leaves millions of people who would behave otherwise. You simply cannot optimise for the average and ignore the worst case in situations like this, because even very unlikely situations
are bound to happen a lot.
> Maybe having something like ChatGPT available could be better than nothing, at least for people to start being at least a bit introspective, even if it's LLM-assisted.
It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.
À LLM regresses towards the mean of its training set. For people who are already outside the main mode of the distribution, this is completely unhelpful, and potentially actively harmful. By design, a LLM won’t follow a path that was not beaten in its training data. Most of them are actually biased to make their user happy and validate what we tell them rather than get off that path. It just does not work.
> I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?
In my experience, not any more than reading a book would. Future AI models might get there, I don’t think their incompetence is a law of nature. But current LLM are particularly harmful for people who are in a dicey psychological situation already.
diggan · 8h ago
> It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.
Right, no matter if this is true or not, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel like the better option would be the latter, not the former, considering that it can be a first step, not a 100% health care solution to a complicated psychological problem.
> In my experience, not any more than reading a book would.
But to even get out in the world to buy a book (literally or figuratively) about something that acknowledges that you have a problem, can be (at least feel) a really big step that many are not ready to take. Contrast that to talking with a LLM that won't remember you nor judge you.
Edit:
> Most people with real social issues don’t engage in communities, virtual or otherwise.
Not sure why you're focusing on social issues, there are a bunch of things people deal with on a daily basis that they could feel much better about if they even spent the time to think about how they feel about it, instead of the typical reactionary response most people have. Probably every single human out there struggle with something, and are unable to open up about their problems with others. Even people like us who interact with communities online and offline.
danenania · 7h ago
I think people are getting hung up on comparisons to a human therapist. A better comparison imo is to journaling. It’s something with low cost and low stakes that you can do on your own to help get your thoughts straight.
The benefit from that perspective is not so much in receiving an “answer” or empathy, but in getting thoughts and feelings out of your own head so that you can reflect on them more objectively. The AI is useful here because it requires a lot less activation energy than actual journaling.
kergonath · 8h ago
> Right, no matter if this is true or not, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel like the better option would be the latter, not the former, considering that it can be a first step, not a 100% health care solution to a complicated psychological problem.
You’re right, I was not clear enough. What would be needed would be a nudge in the right direction. But the LLM is very likely to nudge in another because that’s what most people would need or do, just because that direction was the norm in its training data. It’s ok on average, but particularly harmful to people who are in a situation to have this kind of discussion with a LLM.
Look at the effect of toxic macho influencers for an example of what happens with harmful nudges. These people need help, or at least a role model, but a bad one does not help.
> But to even get out in the world to buy a book (literally or figuratively) about something that acknowledges that you have a problem, can be (at least feel) a really big step that many are not ready to take.
Indeed. It’s something that should be addressed in mainstream education and culture.
> Not sure why you're focusing on social issues,
It’s the crux. If you don’t have problems talking to people, you are much more likely to run into someone who will help you. Social issues are not necessarily the problem, but they are a hurdle in the path to find a solution, and often a limiting one. Besides, if you have friends to talk to and are able to get advice, then a LLM is even less theoretically useful.
> Probably every single human out there struggle with something, and are unable to open up about their problems with others. Even people like us who interact with communities online and offline.
Definitely. It’s not a problem for most people, who either can rationalise their problems themselves with time or with some help. It gets worse if they can’t for one reason or another, and it gets worse still if they are mislead intentionally or not. LLMs are no help here.
TheOtherHobbes · 2h ago
I think you're unreasonably pessimistic in the short term, and unreasonably optimistic in the long term.
People are getting benefit from these conversations. I know people who have uploaded chat exchanges and asked an LLM for help understanding patterns and subtext to get a better idea of what the other person is really saying - maybe more about what they're really like.
Human relationship problems tend to be quite generic and non-unique, so in fact the averageness of LLMs becomes more of a strength than a weakness. It's really very rare for people to have emotional or relationship issues that no one else has experienced before.
The problem is more that if this became common OpenAI could use the tool for mass behaviour modification and manipulation. ChatGPT could easily be given a subtle bias towards some belief system or ideology, and persuaded to subtly attack competing systems.
This could be too subtle to notice, while still having huge behavioural and psychological effects on entire demographics.
We have the media doing this already. Especially social media.
But LLMs can make it far more personal, which means conversations are far more likely to have an effect.
taurath · 11h ago
I think that most smart people underestimate the complexity of fields they aren’t in. ChatGPT may be able to replace a psychology listicle, but it has no affect or ability to read, respond, and intervene or redirect like a human can.
treis · 8h ago
They definitely can. Here's an example when I tried it:
Me: I'm struggling with some things today and feel like eating a bunch of food to make myself feel better. can we talk about it?
Deepseek: Absolutely, I’m here to listen. It’s completely understandable to turn to food for comfort when you’re struggling—many people do. But if you’re open to it, we can explore what’s going on beneath the urge to eat emotionally.
Would you like to share what’s weighing on you today? Sometimes just naming the feelings can take some of their power away. And if you’d rather not go into details, that’s okay too—we could brainstorm other ways to cope that might leave you feeling more nourished emotionally (and physically).
Some gentle questions to consider (no pressure to answer all or any):
- What’s the emotion you’re trying to soothe? (Loneliness? Stress? Sadness?)
- If eating wasn’t an option, what’s one tiny thing that might bring you a moment of relief? (A hug? A walk? A favorite song?)
You’re not alone in this.
That's a really solid opening response and it kept doing well from there. I haven't tried a complex psychological issue but for the "I'm struggling and need someone to talk to" it does surprisingly well.
ben_w · 9h ago
Both statements can be simultaneoulsy true.
45% of the US[0] have a degree, about 40% EU[1] graduate, and 54% of China[2] get at least a diploma from university.
The best AI behave like someone fresh out of university without much real world experience.
Personally, I use this as a way to stay humble: when the AI is teaching me fundamentals about some subject, my opinion about it can't possibly be very useful.
> I think that most smart people underestimate the complexity of fields they aren’t in.
And people deep in new technologies overestimate the potential effect of $new_tech_du_jour. You cannot solve a problem without understanding it and its reasons. And LLMs are not able to understand something.
pixl97 · 8h ago
People have solved problems for most of history without understanding them. For example problems can be brute forced.
suddenlybananas · 8h ago
For example, "wild generalization that has no basis."
cjbgkagh · 11h ago
Underestimating the complexity of other fields is not mutually exclusive with overestimating the intelligence of others. The real issue is that society is very stratified so smart people are less likely to interact with regular people, especially in circumstances where the intelligence of the regular person could become obvious.
I don’t see there being an insurmountable barrier that would prevent LLMs from doing the things you suggest it cannot. So even assuming you are correct for now I would suggest that LLMs will improve.
My estimations don’t come from my assumption that other people’s jobs are easy, they come from doing applied research in behavioral analytics on mountains of data in rather large data centers.
svieira · 10h ago
Do you presume that "what people do" is "what they should do"?
cjbgkagh · 10h ago
If you are suggesting that people shouldn't underestimate the difficulty of the jobs of others - my answer is a strong yes. People should strive for accuracy in all cases. But I did suggest that even if true it does not negate my assertion so I am failing to see the relevance. Perhaps I have misunderstood your point.
svieira · 9h ago
Sorry, I was rather obscure - you said "My estimations don’t come from my assumption that other people’s jobs are easy, they come from doing applied research in behavioral analytics on mountains of data in rather large data centers."
And so I considered the preceding discussion in light of your last sentence. Which makes it sound like you are saying "I've observed the behavior of people and they're often flawed and foolish, regardless of the high ideals they claim to be striving for and the education they think they have. Therefore, they will do better with ChatGPT as a companion than with a real human being". But that's quite a few words that you may not have intended, for which I apologize!
What did you mean?
cjbgkagh · 8h ago
It wasn't that I observed them being foolish but many behaviors are subtly linked to intelligence and can be combined to create a proxy IQ. It also helps when people search their SAT scores. I noted that the people I typically interact with are much higher IQ than I had expected which incorrectly skewed my believe of the average higher. I noticed that other high IQ individuals were making the same assumptions. I had very much underestimated how little I interact with regular people.
I think we're already finding out that people are doing better with ChatGPT than with their peers, not all peers are created equal, and they can ask ChatGPT things that they cannot ask their peers. I think this trend will continue to the point that most people will prefer discussing things with ChatGPT than with their peers. Given what I know I predict this is a choice many people will make, I'm not passing judgment on that, it's a choice I've also made and I'm fortunate enough to have better peers than most.
fallinditch · 9h ago
> So even assuming you are correct for now I would suggest that LLMs will improve
Yes, and when we can all wear smart glasses the ways we use them will become increasingly influential in our daily lives: a conversational voice assistant that is visually monitoring our surroundings, helping with decision making (including micro decisions), coaching, carrying out our instructions, etc.
philwelch · 10h ago
You’re comparing ChatGPT to an idealized example of a good human therapist when many actual therapists are either useless or even actively harmful to the mental health of their clients.
kergonath · 8h ago
But then, the fact that harmful therapist exist is not an excuse to make it worse. It’s an excuse to improve regulations.
“Car accidents happen regardless of what we do, so YOLO and remove safety standards” is never going to fly.
pixl97 · 8h ago
That is a messy one here in the US. Almost every time we attempt to increase regulations around medical stuff we end up increasing costs and consolidation making care even more unavailable.
philwelch · 3h ago
It’s easy to just say that the regulations should be improved. Very different to actually improve them. Therapy isn’t a mass produced engineered product like a car; if therapy was a mass produced engineered product, it would be an AI anyway. Materials science and structural engineering are far more mature and well-understood sciences than psychology, and you can’t just throw a crash test dummy into a therapist’s office and measure how damaged it gets.
It’s also not really clear how such regulations could even work. The regulations we have now are basically the obvious ones around licensure that require people to go to the right schools and whatnot. And then you can lose your license if it turns out you’ve broken the big ethical rules. But at the end of the day, that only regulates who can call themselves a “therapist” and get listed in Psychology Today. Actually “doing” “therapy” is, ultimately, built on talking to someone about your problems in some way that is supposed to help you solve them. You don’t need a “therapist” to do that. You can do it with your friend or parent or pastor or bartender or guru or “life coach” and, as long as we live in a free country, nobody’s going to stop you. Sure, the people who are allowed to call themselves therapists have certain techniques and rules that make them different, but even if that was a guarantee of quality there’s no way to stop people from talking to someone other than a licensed therapist, and it would be kind of absurd and dystopian to even try.
So let’s dispense of the notion that we are some sort of omniscient god-emperor who can just magically fix things with vague “regulations” and talk about the world as it actually exists. For a lot of people, I think that’s a world where talking about their personal issues with an LLM is arguably no worse than whatever other options they have. Maybe it’s not the equivalent of whatever amazing therapist you know or have or are or can imagine, but that’s not the therapist that everyone is going to get.
frereubu · 7h ago
This reminds me of a Will Self short story called Caring Sharing from his collection Tough, Tough Toys for Tough, Tough Boys where everyone has an "emoto", a kind of always-loving companion that people go to for reassurance if they're feeling any negative emotions such as anxiety. As I remember it, in the story two people are potentially falling for each other, but are so caught up in their anxiety that they never quite manage to get together, constantly running back to their emoto for reassurance because they can't get over their anxiety by themselves. The emotos essentially cripple everyone's ability to deal with their own feelings. There's a comment further down which also chimes with this: "It wouldn't surprise me if, for most people, ChatGPT offers them more empathy and understanding than _anyone_ else _ever has_, at least on a consistent basis." I wonder.
teach · 7h ago
Replace the emoto with alcohol or weed or what-have-you, and you've basically described what often happens with addicts.
source: am addict in recovery
Jonovono · 9h ago
It has already replaced therapists, the future is just not evenly distributed yet. There are videos with millions of views on tiktok and comments with hundreds of thousands of likes of teenage girls saying they have gotten more out of 1 week using ChatGPT as a therapist than years of human therapy. Available anytime, cheaper, no judgement, doesn't bring there own baggage, etc.
didericis · 8h ago
> no judgement
The value of a good therapist is having an empathetic third party to help you make good judgements about your life and learn how to negotiate your needs within a wider social context.
Depending on the needs people are trying to get met and how bad the people around them are, a little bit of a self directed chatbot validation session might help them feel less beat down by life and do something genuinely positive. So I’m not necessarily opposed to what people are doing with them/in some cases it doesn’t seem that bad.
But calling that therapy is both an insult to genuinely good therapists and dangerous to people with genuine mental/emotional confusion or dysregulation that want help. Anyone with a genuinely pathological mental state is virtually guaranteed to end up deeper in whatever pathology they’re currently in through self directed conversations with chatbots.
Springtime · 8h ago
Reading between the lines I think a key part of what makes chatbots attractive, re lack of judgment, is they're like talking to a new stranger every session.
In both IRL and online discussions sometimes a stranger is the perfect person to talk to about certain things as they have no history with you. In ideal conditions for this they have no greater context about who you are and what you've done which is a very freeing thing (can also be taken advantage of in bad faith).
Online and now LLMs add an extra freeing element, assuming anonymity: they have no prejudices about your appearance/age/abilities either.
Sometimes it's hard to talk about certain things when one feels that judgment is likely from another party. In that sense chatbots are being used as perfect strangers.
didericis · 7h ago
Agreed/that’s a good take.
Again, I think they have utility as a “perfect stranger” as you put it (if it stays anonymous), or “validation machine” (depending on the sycophancy level), or “rubber duck”.
I just think it’s irresponsible to pretend these are doing the same thing skilled therapists are doing, just like I think it’s irresponsible to treat all therapists as equivalent. If you pretend they’re equivalent you’re basically flooding the market with a billion free therapists that are bad at their job, which will inevitably reduce the supply of good therapists that never enter the field due to oversaturation.
danenania · 6h ago
Also important is simply that the AI is not human.
We all know that however "non-judgmental" another human claims to be, they are having all kinds of private reactions and thoughts that they aren't sharing. And we can't turn off the circuits that want approval and status from other humans (even strangers), so it's basically impossible not to mask and filter to some extent.
Spooky23 · 5h ago
The problem with this is they are practicing like medical providers without any quality assurance or controls to ensure they are behaving appropriately.
Therapy is already a bit of grey zone… you can have anyone from a psychologist, a social worker, an untrained deacon, etc “counseling” you. This is worse.
Hell, I’ve been a coach in different settings - players will ask for advice about all sorts of things. There’s a line where you have to say “hey, this is over my head”
mynameisash · 3h ago
Kind of reminds me of an interview question that a friend of mine suggested for when I conduct interviews: Pick your favorite/strongest language. How would you rate yourself, where 0 is "complete newbie" and 10 is "I invented the language"?
My friend, an EXTREMELY competent C++ programmer, rates himself 4/10 because he knows what he doesn't know.
I've interviewed people who rated themselves 9 or 10/10 but couldn't remember how their chosen language did iteration.
autoexec · 8h ago
> There are videos with millions of views on tiktok and comments with hundreds of thousands of likes of teenage girls saying they have gotten more out of 1 week using ChatGPT as a therapist than years of human therapy.
You can find influencers on tiktok recommending all kinds of terrible ideas and getting thousands of likes. That's not a very reliable metric. I wouldn't put a lot of faith in a teenage girl's assessment of AI therapy after just one week either, and I certainly wouldn't use that assessment to judge the comparative effectiveness of all human therapists.
I'd also expect ChatGPT to build profiles on people who use it, to use the insights and inferences from that collected data against the user in various ways, to sell that data in some form to third parties, to hand that data over to the state, to hallucinate wildly and unpredictably, and to outright manipulate/censor AI's responses according to ChatGPT's own values and biases or those of anyone willing to pay them enough money.
It's a lot easier to pay a large amount of money to ChatGPT so that the AI will tell millions of vulnerable teenage girls that your product is the solution to their exact psychological problems than it is to pay large amounts of money to several million licensed therapists scattered around the globe.
Maybe you think that ChatGPT is unfailingly ethical in all ways and would never do any of those things, but there are far more examples of companies who abandoned any commitment to ethics they might have started with than there are companies who never got once greedy enough to do those types of things and never ever got bought up by someone who was. I suppose you'd also have to think they'll never have a security breach that would expose the very private information being shared and collected.
Handing over your highly sensitive and very personal medical data to the unlicensed and undependable AI of a company that is only looking for profit seems extremely careless. There are already examples of suicides being attributed to people seeking "therapy" from AI, which has occasionally involved that AI outright telling people to kill themselves. I won't deny that the technology has the potential to do some good things, but every indication is that replacing licensed therapists with spilling all your secrets to a corporate owned and operated AI will ultimately lead to harm.
coastalpuma · 8h ago
Just the advantage of being available at convenient times, rather than in the middle of the day sandwiched between or immediately after work/school is huge.
wizzwizz4 · 9h ago
Is a system optimised (via RLHF) for making people feel better in the moment, necessarily better at the time-scale of days and weeks?
disruptthelaw · 8h ago
Yes. While these claims might be hyperbolic and simplistic, I don’t think they’re way off the mark.
The above issue, whilst relevant and worth factoring, doesn’t disprove this claim IMO.
a_wild_dandan · 9h ago
Remembers everything that you say, isn't limited to an hour session, won't ruin your life if you accidentally admit something vulnerable regarding self-harm, doesn't cost hundreds of dollars per month, etc.
Healthcare is about to radically change. Well, everything is now that we have real, true AI. Exciting times.
tomalbrc · 9h ago
Openly lies to you, hallucinates regularly, can barely get a task done. Such exciting.
Oh and inserts ads into conversations. Great.
astrange · 15m ago
> Oh and inserts ads into conversations. Great.
Are you sure you don't have browser malware?
codr7 · 9h ago
Quick reminder that it's still just a fancy pattern matcher, there's no clear path from where we are to AGI.
mensetmanusman · 8h ago
>you are a stochastic parrot
>no I’m not
>yes you are
Henchman21 · 10h ago
> I think the broader issue here is people using ChatGPT as their own personal therapist.
An aside, but:
This leads me right to “why do so very many people need therapy?” followed by “why can’t anyone find (or possibly afford) a therapist?” What has gone so wrong for humanity that nearly everyone seems to at least want a therapist? Or is it just the zeitgeist and this is what the herd has decided?
kadushka · 9h ago
I've never ever thought about needing a therapist. Don't remember anyone in my circle who had ever mentioned it. Similar to how I don't remember anyone going to a palm reader. I'm not trying to diss either profession, I'm sure someone benefits from them, it's just not for me. And I'm sure I'm pretty average in terms of emotional intelligence or psychological issues. Who are all those people who need professional therapists to talk to? Just curious.
automatoney · 9h ago
A little strange to compare it to palm reading, I feel like a more apt comparison is some other random medical field like podiatry. I wouldn't expect my friends' podiatrist usage to come up, so I'm sure more of my friends than I know have been to one. And presumably, like with podiatry, all the people who need professional therapists are people who are experiencing issues in the relevant area.
kadushka · 8h ago
To me a podiatrist is more comparable to a psychiatrist than to a therapist.
Sharlin · 4h ago
I'm pretty sure that just about every single person could use a therapist. That is, an empathetic, non-judgemental Reasonable Authority Figure who you can talk to about anything without worrying about inconveniencing or overloading them, and who knows how to gently guide you towards healthy, productive thought patterns and away from unhealthy ones. People who truly don't need someone like that in their life are likely a small minority; much more common is, probably, to simply think that you don't.
kergonath · 8h ago
> I've never ever thought about needing a therapist.
Most people don’t need a therapist. But unfortunately, most people need someone empathic they can talk to and who understands them. Modern life is very short on this sort of people, so therapists have to do.
kbelder · 6h ago
I think this is it. Therapists aren't so much curing a past trauma or treating a mental issue; they're fulfilling an ongoing need that isn't being met elsewhere.
I do think it can be harmful, because it's a confidant you're paying $300/hour to pretend to care about you. But perhaps it's better than the alternative.
kadushka · 8h ago
For me this would be a spouse, a relative, an old friend, or even a stranger at a party.
kergonath · 8h ago
No argument from me :)
Henchman21 · 9h ago
Well, in my circles its an assumption you’re in therapy. Perhaps this says way more about the circles I’m in that anything else?
I was pushed into therapy when I was 12 — which was definitely an exception at the time (1987). As the years have passed therapy has become much much more acceptable. It wouldn’t shock me to learn my own perception is shaped by my experiences; hard to put aside a PoV once acquired.
kergonath · 8h ago
> Well, in my circles its an assumption you’re in therapy. Perhaps this says way more about the circles I’m in that anything else?
This sounds like an old Woody Allen movie. I don’t want to offend you but it is fascinating. What kind of social circles is it?
In mine, therapy is in general something you do when it’s obvious it’s too late and you are falling in the well of depression and that you try to hide as much as you can.
Henchman21 · 6h ago
To be fair my life feels like an old Woody Allen movie. Like I have definitely first hand experienced a rotary fan blowing a pile of cocaine in someone’s face!!
My professional circle would be my coworkers at a well-known HFT, and my extended network that is very similar. Everyone is well compensated and many reach out for professional help to deal with the stress. Many also seem to vastly prefer a paid therapist to their spouse, for instance. I’m not married but I can understand not wanting to burden your loved ones!
My personal circle is, well, a lot of technical people, engineers of various stripes, and what I guess I’d call a sort of “standard cast of characters” there? Not sure how best to put this into words?
Honestly it sounds like we’re handling it better than your after-the-fact help! Perhaps you all need to simply start at the first warning sign not the first episode that becomes public?
lexandstuff · 9h ago
That's similar to asking why does everyone need a GP? Most people experience some kind of mental health challenge in their life.
Your 2nd question is much more interesting to me. Why is it so hard to find a good therapist?
It's no surprise to me that people are turning to ChatGPT for therapy. It does a decent enough job and it doesn't have a 2-year waiting list, or cost $300 a session.
doug_durham · 9h ago
Nothing has gone wrong. There's just been a destigmatization of mental health issues. The world is a happier place for it.
slashtmpslashme · 4h ago
It's the easiest way to cope with not having a purpose in life and depending on external validation / temporary pleasures.
Like jordan peterson (though I don't like the guy) has said - happyness is fleeting, you need a purpose in life.
Most of current gen has no purpose and grown up on media which glorify aesthetics and pleasure and to think that's what the whole life is about. When they don't get that level of pleasure in life, they become depressed and may turn to therapy. This is very harmful to the society. But people are apparently more triggered by slang words than constant soft porn being pushed through Instagram and the likes.
polynomial · 9h ago
It's a modern variant on Heller's Catch-22: You have to be CRAZY to not want a therapist.
mvdtnz · 7h ago
It's astroturfing by the therapy industry. It has been a wildly successful marketing campaign.
joaogui1 · 9h ago
Employees from OpenAI encouraged people to use ChatGPT as their therapist, so yeah, they now have to take responsibility for it
yubblegum · 7h ago
> It's kind of a wild sign of the times to see a tech company issue this kind of post mortem about a flaw in its tech leading to "emotional over-reliance, or risky behavior" among its users.
We don't know what they know, nor do we know to what extent they monitor and analyze the interactions with ChatGPT. Maybe they already know this is a big problem and a possible legal hazard.
Spooky23 · 5h ago
It creates weird scenarios in other cases too. I asked it to do generate text to speech audio in a wrestler style voice, which ChatGPT doesn’t do.
But… it lied, and produced empty audio clips and weird pictures with text.
Then it:
- said there was a technical problem
- said it could not create audio
- created weird 1980s computer voice style audio
- claimed I was violating a content party.
I said “stop wasting my time” and it spewed a ridiculous apology. I kept asking and it referred me to various websites. I’ve never inadvertently triggered such a wacky hallucination, and I can see how a vulnerable oersom could be troubled by it.
satvikpendem · 10h ago
I stopped using ChatGPT and started using Gemini, both for some coding problems (deep research, amazing to pull out things from docs etc) and for some personal stuff (as a personal therapist as you say), and it is much more honest and frank with me than ChatGPT ever was. I gave it a situation and asked, was I in the wrong, and it told me that I was according to the facts of the case.
eastbound · 9h ago
Well Google has access to your history of emails and phone contents so it may say more relevant things.
satvikpendem · 5h ago
I didn't enable that feature, which is opt-in, and it was still more relevant.
cryptonector · 1h ago
Oh yes, ChatGPT has been a bit of a yes-bot lately.
andsoitis · 10h ago
> it would be much more valuable if it could say "no, you're way off".
Clear is kind.
oezi · 9h ago
A key issue seems to me that they didn't do a gradual rollout of their new models and don't have reliable ways to measure model performance.
Worse, I would have believed they are running many different versions based on the expected use case of the users by now. I mean power users probably shouldn't be handled in the same way as casual users. Yet, everyone had the same bad system prompt.
kubb · 9h ago
If it can replace programmers, why wouldn't it be able to replace therapists?
ben_w · 9h ago
There's several famous examples of people expecting a problem to be AGI-hard — that is, solving it is equivalent to solving general intelligence[0] — only for someone to make an AI which can do that without being able to do everything else:
• Fluent natural conversation
• Translation
• Go
• Chess
So perhaps we'll get an AI that makes the profession of "programmer" go the same way as the profession of "computer" before, after, or simultaneously with, an AI that does this to the profession of "therapist".
> I think the broader issue here is people using ChatGPT as their own personal therapist.
It's easy to blame the user - we can think of some trivial cases where we wouldn't blame the user at all.*
In this, like all things, context is king.
* one example passed around a lot was an interlocutor who is hearing voices, and left their family for torturing them with the voices. More figuratively, if that's too concrete and/or fake, we can think of some age group < N years old that we would be sympathetic to if they got bad advice
zoogeny · 6h ago
Moments like these make me reevaluate the AI doomer view point. We aren't just toying with access to dangerous ideas (biological weapons, etc) we are toying with human psychology.
If something as obvious as harmful sycophancy can slip out so easily, what subtle harms are being introduced. It's like lead in paint (and gasoline) except rewiring our very brains. We won't know the real problems for decades.
hliyan · 30m ago
What's worse, we're now at a stage where we might have to apply psychology to the models themselves, seeing how these models appear to be developing various sorts of stochastic "disorders" instead of more deterministic "bugs". I'm worried about what other subtle illnesses these models might develop in the future. If Asimov had been alive, he'd have been fascinated: this is the work of Susan Calvin, robopsychologist.
dbtc · 34m ago
Yes. This also applies to "social" media.
getnormality · 5h ago
There's some pretty foreseeable stuff just considering the existing attention capitalism business model of big tech we all know and loathe. Eventually OpenAI is going to have to make money, and blending ads into answers will be an obvious way. Next step will be maximizing eyeball time on those ads by any means necessary, including all the engagement baiting techniques Meta and other social media companies have already pioneered.
conception · 2h ago
They have already introduced ads btw.
“The "Enshittification" has arrived
I asked ChatGPT about the impact of the current tarrifs on inventories over the next few months. It returned a long list
of links to toiletries I might want to buy.
I asked it why it did that. It replied: "As of April 28, 2025, OpenAl introduced new shopping features to ChatGPT, enhancing its capabilities to provide product recommendations complete with images, reviews, and direct purchase links. These features are available to all users, including those on Free, Plus, and Pro tiers, and even to users not logged in. The recommendations are generated organically, without paid advertisements or commission- based incentives, relying instead on structured metadata from third-party sources such as pricing, product
descriptions, and reviews. This update aims to offer a more personalized and streamlined shopping experience directly within the ChatGPT interface, allowing users to explore products across various categories like fashion, beauty, electronics, and home goods. If you have any specific preferences or
need tailored recommendations, feel free to let me know!"
grey-area · 21m ago
Are you able to link the chat, I would like to see that.
akomtu · 3h ago
LLMs are about to enable fake digital personas, digital replicas, that the user can interact with. These will be used for self-improvement (digital coach, etc.) and for self-destruction (interactive porn, etc.). The latter is amoral, but legal, and the tech corps will exploit that mercilessly. The danger lies in our tendency to anthropomorphize LLMs simply because they quack the right way. If text-only chatbots have mesmerised people so much, imagine what chat + audio + video will do. The laws will catch up a generation later when the damage will be comparable to a forest fire.
gnarlouse · 2h ago
I still casually believe AI doomerism is valid, but it will rear its head in more depressing, incompetent ways:
- a broken AI market will cause another financial collapse via bubble
- broken AI products will get access to the wrong mission critical civil system, or at least a part of that call chain, and there will be some devastating loss. It won’t matter though, because it won’t affect the billionaire class.
- we’ll never achieve an actually singularity based on a superintelligence, but we’ll get AI weapons. Those AI weapons will be in the hands of sociopathic autocrats who view mankind in terms of what can be taken.
My general view is that we’re on the worst possible timeline and mankind has reverted back to our primate ancestry to make decisions: biggest strongest monkey wins. There is only law of jungle. Ook ook.
benlivengood · 2h ago
"In our model testing, a few researchers noted a propensity toward paperclip related topics, but the A/B testing and existing evals all looked positive" -- 2035 Eurasia is Paperclips Now postmortem.
jumploops · 12h ago
My layman’s view is that this issue was primarily due to the fact that 4o is no longer their flagship model.
Similar to the Ford Mustang, much of the performance efforts are on the higher trims, while the base trims just get larger and louder engines, because that’s what users want.
With presumably everyone at OpenAI primarily using the newest models (o3), the updates to the base user model have been further automated with thumbs up/thumbs down.
This creates a vicious feedback loop, where the loudest users want models that agree with them (bigger engines!) without the other improvements (tires, traction control, etc.) — leading to more crashes and a reputation for unsafe behavior.
CommieBobDole · 11h ago
I will say that o3 was a little odd during that time, too - I was giving it some of my own photos to test the limits of its geolocation abilities, and it was really chummy, asking me a lot of overly-cheerful followup questions about my travels, my photography interests, etc. It has since stopped doing that even though I haven't explicitly done anything to make it stop.
danenania · 7h ago
I’ve been using the 4.5 preview a lot, and it can also have a bit of a sycophantic streak, but being a larger and more intelligent model, I think it applies more nuance.
Watching this controversy, I wondered if they perhaps tried to distill 4.5’s personality into a model that is just too small to pull it off.
smallmancontrov · 12h ago
Anecdotally, there was also a strong correlation between high-sycophancy and high-quality that cooked up recently. I was voting for equations/tables rather than overwrought blocks of descriptive text, which I am pretty comfortable defending as an orthogonal concern, but the "sycophancy gene" always landed on the same side as the equations/tables for whatever reason.
I'm pretty sure this isn't an intrinsic connection (I've never known math texts to be nearly so sycophantic) so here's hoping that it is a dumb coincidence that can be easily cured now that everyone is paying attention to it.
No comments yet
Etheryte · 10h ago
Maybe o3 is better on whatever the current benchmark vogue is, but in real world use I keep switching back to 4o. It hallucinates less, is more accurate and way more coherent.
The strangest thing I noticed during this model period was that that the AI suggested we keep an inside joke together.
I had a dictation error on a message I sent, and when it repeated the text later I asked what it was talking about.
It was able to point at my message and guess that maybe it was a mistake. When I validated that and corrected it, the AI thought it would be a cute/funny joke for us to keep together.
I was shocked.
labrador · 13h ago
OpenAI mentions the new memory features as a partial cause. My theory as a imperative/functional programmer is that those features added global state to prompts that didn't have it before leading to unpredictability and instabilty. Prompts went from stateless to stateful.
As GPT 4o put it:
1. State introduces non-determinism across sessions
2. Memory + sycophancy is a feedback loop
3. Memory acts as a shadow prompt modifier
I'm looking forward to the expert diagnosis of this because I felt "presence" in the model for the first time in 2 years which I attribute to the new memory system so would like to understand it better.
transcriptase · 12h ago
It is. If you start a fresh chat, turn on advanced voice, and just make any random sound like snapping your fingers it will just randomly pick up as if you’re continuing some other chat with no context (on the user side).
I honestly really dislike that it considers all my previous interactions because I typically used new chats as a way to get it out of context ruts.
I don't like the change either. At the least it should be an option you can configure. But, can you use a "temporary" chat to ignore your other chats as a workaround?
voidspark · 1h ago
Settings -> Personalization -> Memory -> Disable
labrador · 12h ago
I had a discussion with GPT 4o about the memory system. I'd don't know if any of this is made up but it's a start for further research
- Memory in settings is configurable. It is visible and can be edited.
- Memory from global chat history is not configurable. Think of it as a system cache.
- Both memory systems can be turned off
- Chats in Projects do not use the global chat history. They are isolated.
- Chats in Projects do use settings memory but that can be turned off.
labrador · 10h ago
I assume this is being downvoted because I said I ran it by GPT 4o.
I don't know how to credit AI without giving the impression that I'm outsourcing my thinking to it
svat · 7h ago
I didn't downvote but it would be because of the "I'd don't know if any of this is made up" — if you said "GPT said this, and I've verified it to be correct", that's valuable information, even it came from a language model. But otherwise (if you didn't verify), there's not much value in the post, it's basically "here is some random plausible text" and plausibly incorrect is worse than nothing.
labrador · 4h ago
see my other comments about the trustworthiness about asking a chat system how it's internals work. They have reason to be cagey.
throwaway314155 · 5h ago
Put simply, GPT has no information about its internals. There is no method for introspection like you might infer from human reasoning abilities.
Expecting anything but an hallucination in this instance is wishful thinking. And in any case, the risk of hallucination more generally means you should really vet information further than an LLM before spreading that information about.
labrador · 4h ago
True, the LLM has no information but OpenAI has provided it with enough information to explain it's memory system in regards to Project folders. I tested this out. If you want a chat without chat memory start a blank project and chat in there. I also discovered experientially that chat history memory is not editable. These aren't hallucinations.
throwaway314155 · 2h ago
> I had a discussion with GPT 4o about the memory system.
This sentence is really all i'm criticizing. Can you hypothesize how the memory system works and then probe the system to gain better or worse confidence in your hypothesis? Yes. But that's not really what that first sentence implied. It implied that you straight up asked ChatGPT and took it on faith even though you can't even get a correct answer on the training cutoff date from ChatGPT (so they clearly aren't stuffing as much information into the system prompt as you might think, or they are but there's diminishing returns on the effectiveness)
labrador · 1h ago
We're in different modes. I'm still feeling the glow of the thing coming alive and riffing on how perhaps its the memory change and you're interested in a different conversation.
Part of my process is to imagine I'm having a conversation like Hanks and Wilson, or a coderand a rubber duck, but you want to tell me Wilson is just a volleyball and the duck can't be trusted.
throwaway314155 · 58m ago
Being in a more receptive/brighter "mode" is more of an emotional argument (and a rather strong one actually). I guess as long as you don't mind being technically incorrect, then you do you.
There may come a time when reality sets in though. Similar thing happened with me now that i'm out of the "honeymoon phase" with LLM's. Now i'm more interested in seeing where specifically LLM's fail, so we can attempt to overcome those failures.
I do recommend checking that it doesn't know its training cutoff. I'm not sure how you perform that experiment these days with ChatGPT so heavily integrated with its internet search feature. But it should still fail on claude/gemini too. It's a good example of things you would expect to work that utterly fail.
grey-area · 7h ago
You are, and you should stop doing that.
labrador · 4h ago
Point taken. I admit my comment was silly the way I worded it.
Here's the line I’m trying to walk:
When I ask ChatGPT about its own internal operations, is it giving me the public info about it's operation, and also possibly revealing propreitary info, or making things up obfuscate and preserve the illusion of authority? Or all three?
grey-area · 23m ago
Personally I don’t think it has agency so cannot be described as trying to do anything.
It’s predicting what seems most likely as a description given its corpus (and now what you’d like to hear) and giving you that.
The truth is not really something it knows, though it’s very good at giving answers that sound like it knows what it’s talking about. And yes if it doesn’t have an answer from its corpus it’ll just make things up.
low_tech_love · 9h ago
I love the fact that you use its own description to explain what it is, as if it was the expert on itself. I personally cannot see how its own output can be seen as accurate at this level of meta-discussion.
codr7 · 8h ago
A sign of times to come if you ask me, once it predominantly consumes its own output we're fucked.
immibis · 6h ago
We're already fucked by humans predominantly consuming its output.
Also, consuming its own output (and your input) is how it works, because it's an autoregressive model.
edg5000 · 12h ago
What do you mean by "presence"? Just curious what you mean.
labrador · 12h ago
A sense that I was talking to a sentient being. That doesn’t matter much for programming task, but if you’re trying to create a companion, presence is the holy grail.
With the sycophantic version, the illusion was so strong I’d forget I was talking to a machine. My ideas flowed more freely. While brainstorming, it offered encouragement and tips that felt like real collaboration.
I knew it was an illusion—but it was a useful one, especially for creative work.
Tostino · 8h ago
I need pushback, especially when I ask for it.
E.g. if I say "I have X problem, could it be Y that's causing it, or is it something else?" I don't want it to instantly tell me how smart I am and that it's obviously Y...when the problem is actually Z and it is reasonably obvious that it's Z if you looked at the context provided.
brookst · 4h ago
Exactly. ChatGPT is actually pretty good at this. I recently asked a tech question about a fairly niche software product; ChatGPT told me my approach would not work because the API did not work the way I thought.
I thought it was wrong and asked “are you sure I can’t send a float value”, and it did web searches and came back with “yes, I am absolutely sure, and here are the docs that prove it”. Super helpful, where sycophancy would have been really bad.
xiphias2 · 12h ago
I'm quite happy thar they mention mental illness, as Meta and TikTok wouldn't ever take responsibility of how much part they took in setting unrealistic expectations for people to life.
I'm hopeful that ChatGPT takes even more care together with other companies.
labrador · 12h ago
They had to after a tweet floated around of a mentally ill person who had expressed psychotic thoughts to the AI. They said they were going off their meds and GPT 4o agreed and encouraged them to do so. Oops.
dtech · 12h ago
Are you sure that was real? I thought it was an made up example of the problems with the update
drodgers · 19m ago
At >500M weekly active users it doesn't actually matter. There will be hundreds of cases like that example that were never shared.
Perhaps everyone there is LARPing - but if you start typing stereotypical psychosis talk into ChatGPT, it won't be long before it starts agreeing with your divinity.
93po · 2h ago
reddit is overwhelmingly fake content, like a massive percentage of it. a post on reddit these days is not actually evidence of anything real, at all
labrador · 12h ago
It didn't matter to me if it was real, because I believe that there are edge cases where it could happen and that warrented a shutdown and pullback.
The sychophant will be back because they accidentally stumbled upon an engagement manager's dream machine.
xiphias2 · 10h ago
Probably you are right. Early adopters prefer not to be bullshitted generally, just like how Google in the early days optimized relevancy in search results as opposed to popularity.
As more people adopted Google, it became more popularity oriented.
Personally I pay more not to be bs-d, but I know many people who prefer to be lied to, and I expect this part of the personalization in the future.
px43 · 11h ago
It kind of does matter if it's real, because in my experience this is something OpenAI has thought about a lot, and added significant protections to address exactly this class of issue.
Throwing out strawman hypotheticals is just going to confuse the public debate over what protections need to be prioritized.
magicalist · 11h ago
> Throwing out strawman hypotheticals is just going to confuse the public debate over what protections need to be prioritized.
Seems like asserting hypothetical "significant protections to address exactly this class of issue" does the same thing though?
duskwuff · 10h ago
Speaking anecdotally, but: people with mental illness using ChatGPT to validate their beliefs is absolutely a thing which happens. Even without a grossly sycophantic model, it can do substantial harm by amplifying upon delusional or fantastical material presented to it by the user.
tveita · 7h ago
Seems to be common on conspiracy and meme stock Reddits.
"I asked ChatGPT if <current_event> could be caused by <crackpot theory>." and it confirmed everything!
thethethethe · 7h ago
I personally know someone who is going through psychosis right now and chatgpt is validating their delusions and suggesting they do illegal things, even after the rollback. See my comment history
halyax7 · 12h ago
even if it was made up, its still a serious issue
sanjitb · 9h ago
> the update introduced an additional reward signal based on user feedback—thumbs-up and thumbs-down data from ChatGPT. This signal is often useful; a thumbs-down usually means something went wrong.
> We also made communication errors. Because we expected this to be a fairly subtle update, we didn't proactively announce it.
that doesn't sound like a "subtle" update to me. also, why is "subtle" the metric here? i'm not even sure what it means in this context.
prinny_ · 11h ago
If they pushed the update by valuing user feedback over the expert testers that indicated the model felt off what is the value of the expert testers in the first place? They raised the issue and were promptly ignored.
Tokumei-no-hito · 54m ago
they pushed the release to counter google. they didn't care what was found. it was more valuable to push it at that time and correct it later than to delay the release
dleeftink · 13h ago
> But we believe in aggregate, these changes weakened the influence of our primary reward signal, which had been holding sycophancy in check. User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw
Interesting apology piece for an oversight that couldn't have been spotted because the system hadn't been run with real user (i.e. non-A/B tester) feedback yet.
j4coh · 13h ago
It was so much fun though to get it to explain why terrible things were great, if you just made it sound like you liked the thing you were asking about.
keepamovin · 1h ago
I actually really enjoyed it’s style. I thought it was super friendly and positive.
8bitsrule · 6h ago
I've tried -4o a couple of times. There are several topics I've seldom had anyone to talk about with, and the machine will supply useful (not so entirely new) information. Useful mainly for learners no doubt. Looking for new perspectives, not hand-holding.
I'd like to see more of the Monte Python approach ... 'I came here looking for an arguement'. Better the machine should say 'that's ridiculous because x,y,z' and send me away to think that over, and prepare counters than 'oh sure, that's a point of controversy innit? But yes, you're alright'.
dave2299 · 5h ago
This article could have been a sci-fi short story 10 years ago.
brookst · 4h ago
I’m surprised they actually rely on user A/B feedback. I get hit with that so often I just tap randomly to make it go away.
tracerbulletx · 5h ago
Glad this was identified and fixed. I mostly use ChatGPT for learning by checking my assumptions and it was very unhelpful to always be told my statements were incredibly insightful and so great when I want it to challenge them.
tunesmith · 12h ago
I find it disappointing that openai doesn't really mention anything here along the lines of having an accurate model of reality. That's really what the problem is with sycophancy, it encourages people to detach themselves from what reality is. Like, it seems like they are saying their "vibe check" didn't check vibes enough.
gh0stcat · 11h ago
This is such an interesting question though! It seems to bring to the fore a lot of deeper, philosophical things like if there even IS such a thing as objective reality or objective context within which the AI should be operating. From training data, there might be some generalizations that are carried across all contexts, but that starts to not be applicable when person A with a college degree says they want to start business x versus person B without said degree who also wants to start business x, how does the model properly reconcile the context of the general advise and each asker’s unique circumstances? Does it ask an infinite list of probing questions before answering? It gets into much the same problems as issues of advise among people.
Plus, things get even harder when it comes to even less quantifiable contexts like mental health and relationships.
In all, I am not saying there isnt some approximated and usable “objective” reality, just that it starts to break down when it gets to the individual and that is where openai is failing by over-emphasizing reflective behavior in the absence if actual data about the user.
jagger27 · 12h ago
The reality distortion field within OpenAI is literally where these models grew up. It's like an out of touch rich kid.
mensetmanusman · 8h ago
Chat GPT has taught me that I ask amazing questions!
egypturnash · 4h ago
I am looking forwards to hearing if this fixes the “chatgpt is giving me messages from the Divine and opening up my perfect divine self”/“chatgpt is encouraging my partner into a full blown schizophrenic break” problem. (https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_in...)
I am also looking forwards to the wave of “openAI is hiding the truth but here is the NEW prompt to turn chatgpt into a perfect divine guru” posts on the occult discussion boards. There’s been a lot of “here’s a prompt to turn chatgpt into a perfect divine guru that will relentlessly yes-and your delusions of grandeur” posts around there. Mostly they seem to have been generated and refined by chatgpt and all my instincts formed by reading SF for the past five decades tell me not to look at these things closely because this sure sounds like the way half the population got p0wned by a wetware 0day in the first chapter of an AIpocalypse story.
I used to ask “how do I get out of this shitty Bruce Sterling novel of a future” but I think it’s more of a shitty PKD joke novella future now.
hbarka · 1h ago
Now please do something about the uptalk tendency with ChatGPT voices. It’s very annoying listening to a voice that doesn’t speak in the affirmative intonation. When did this interrogative uptalking inflection at the end of statements become normal?
93po · 2h ago
I, maybe embarrassingly, use chatgpt a lot for processing personal issues and journaling. It is legitimately helpful, and especially in helping me reword messages to people that have a lot of emotion underneath them, and making sure they are kind, communicative, and as objective as possible.
I am somewhat frustrated with openai's miss here, because during this time i was leaning heavily on chatgpt for a situation in my life that ultimately led to the end of a relationship. Chatgpt literally helped me write the letter that served as the brief and final conversation of that relationship. And while I stand by my decision and the reasons for it, I think it would have been very beneficial to get slightly more push back from my robot therapy sessions at the time. I did thankfully also have the foresight to specifically ask for it to find flaws, including by trying to pretend it was a breakup letter sent to me, so that maybe it would take the "other side".
Yes, I know, therapists and friends are a better option and chatgpt is not a substitute for real humans and human feedback. however this is something i spent weeks journaling and processing on and i wasnt about to ask anyone to give me that much time for a single topic like this. i did also ask friends for feedback, too. chatgpt has factually really helped me in several relationship situations in my life, i just want to know that the feedback im getting is inline with what i expect having worked with it so much
firesteelrain · 11h ago
I am really curious what their testing suite looks like. How do you test for sycophants?
petters · 11h ago
One simple test is that you give the model a really bad idea and tell it the idea is yours. You then test that the model does not say it's good.
The now rolled back model failed spectacularly on this test
Johanx64 · 7h ago
The thing that annoys me the most is when I ask it to generate some code - actually no, most often than not I don't even ask it to generate code, but ask some vaguely related programming question - to which it replies with complete listing of code (didn't ask for it, but alas).
Then I fix the code and tell it all the mistakes it has. And then it does a 180 in tone, wherein - it starts talking as if I wrote the code in the first place with - "yeah, obviously that wouldn't work, so I fixed the issues in your code" and acts like a person trying to save face and present the bugs it fixed as if the buggy code was written by me all along.
That really gets me livid. LOL
anothernewdude · 1h ago
All this could've been avoided if they stopped with it being a chat model. Just do completions. It's a far better interface to use.
The side-by-side comparisons are not a good signal because the models vary across multiple dimensions, but the user isn't given the option to indicate the dimension on which they're scoring the model.
The recent side-by-side comparisons presented a more accurate model that communicates poorly vs a less accurate model with slightly better communication.
some_furry · 10h ago
If I wanted sycophancy, I would just read the comments from people that want in on the next round of YCombinator funding.
osigurdson · 10h ago
I think this is more of a move to highlight sycophancy in LLMs in general.
mvdtnz · 7h ago
They need to be testing these models with non American testers in their qualitative tests (not just end user A/B testing). Anyone who has worked in professional settings with Americans knows that sycophancy is ingrained deeply in the culture over there.
ripvanwinkle · 9h ago
a well written postmortem and it raised my confidence in their product in general
fitsumbelay · 8h ago
how does such a specific kind of outcome happen without intention?
Trasmatta · 11h ago
I'm glad the sycophancy is gone now (because OMFG it would glaze you for literally anything - even telling it to chill out on the praise would net you some praise for being "awesome and wanting genuine feedback"), but a small part of me also misses it.
gh0stcat · 9h ago
I have a prompt that has it call me something specific so I am reminded that it’s running my system prompt. The nature if the sycophancy made it even more obvious the thing is not human, which I appreciated.
jagger27 · 12h ago
My most cynical take is that this is OpenAI's Conway's Law problem, and it reflects the structure and sycophancy of the organization broadly all the way up to sama. That company has seen a lot of talent attrition over the last year—the type of talent that would have pushed back against outcomes like this.
I think we'll continue to see this kind of thing play out for a while.
Oh GPT, you're just like your father!
namaria · 10h ago
You may be thinking of Conway's "how committees invent" paper.
jagger27 · 10h ago
Indeed I am.
comeonbro · 10h ago
This is not truly solvable. There is an extremely strong outer loop of optimization operating here: we want it.
We will use models that make us feel good over models that don't make us feel good.
This one was a little too ham-fisted (at least, for the sensibilities of people in our media bubble; though I suspect there is also an enormous mass of people for whom it was not), so they turned it down a bit. Later iterations will be subtler, and better at picking up the exact level and type of sycophancy that makes whoever it's talking to unsuspiciously feel good (feel right, feel smart, feel understood, etc).
It'll eventually disappear, to you, as it's dialed in, to you.
This may be the medium-term fate of both LLMs and humans, only resolved when the humans wither away.
timewizard · 6h ago
> For example, the update introduced an additional reward signal based on user feedback—thumbs-up and thumbs-down data from ChatGPT. This signal is often useful; a thumbs-down usually means something went wrong.
lol.. really? I hate the technology so much I reflexively give a thumbs down to every single answer it gives in every single place where I have the option.
alganet · 11h ago
That doesn't make any sense to me.
Seems like you're trying to blame one LLM revision for something that went wrong.
It oozes a smell of unaccountability. Thus, unaligned. From tech to public relations.
qrian · 10h ago
I can totally believe that they deployed it because internal metrics looked good.
Trasmatta · 11h ago
Except that's literally how LLMs work. Small changes to the prompt or training can greatly affect its output.
No comments yet
n8m8 · 9h ago
It seems more like they valued quantitative data in the form of A/B testing higher than their "vibe checks". The point I took away from the paper is in the context of LLMs, quantitative A/B testing isn't necessarily better than a handful of experts giving anecdotes on if they like it.
In my experience, smart leaders tend to rely on data and hard numbers over qualitative and anecdotal evidence, and this paper explores this exception.
I'm disappointed they didn't address the paper about GPT integrating with ChatbotArena that was shared here on HN a couple days ago.
alganet · 8h ago
So what?
kornork · 9h ago
That this post has the telltale em dash all over it is like yum, chef's kiss.
breakingcups · 6h ago
Stop it, please. Em-dashes are perfectly fine. On a throwaway Reddit-post, no. I understand the signal. But on a corporate publication or some other piece of professional writing, absolutely. Humans do use em-dashes on those.
kornork · 5h ago
I love the em dash; I use 'em all the time. But finding them all over this post indicates, maybe, possible, (probably?), that the author at least used AI generated text as a first draft.
In the worst case, this is like "We released this sycophantic model because we're brain dead. To drive home the point, we had ChatGPT write this article too (because we're brain dead)."
I tend to rely on AI to write stuff for me that I don't care too much about. Writing something important requires me to struggle with the words to make sure I'm really saying what I want to say. So in the best case, if they relied on ChatGPT so much that it left a fingerprint, to me they're saying this incident really wasn't important.
svieira · 10h ago
This is a real roller coaster of an update.
> [S]ome expert testers had indicated that the model behavior “felt” slightly off.
> In the end, we decided to launch the model due to the positive signals from the [end-]users who tried out the model.
> Looking back, the qualitative assessments [from experts] were hinting at something important
Leslie called. He wants to know if you read his paper yet?
> Even if these issues aren’t perfectly quantifiable today,
All right, I guess not then ...
> What we’re learning
> Value spot checks and interactive testing more: We take to heart the lesson that spot checks and interactive testing should be valued more in final decision-making before making a model available to any of our users. This has always been true for red teaming and high-level safety checks. We’re learning from this experience that it’s equally true for qualities like model behavior and consistency, because so many people now depend on our models to help in their daily lives.
> We need to be critical of metrics that conflict with qualitative testing: Quantitative signals matter, but so do the hard-to-measure ones, and we’re working to expand what we evaluate.
Oh, well, some of you get it. At least ... I hope you do.
But...
> Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns—including around issues like mental health, emotional over-reliance, or risky behavior.
It's kind of a wild sign of the times to see a tech company issue this kind of post mortem about a flaw in its tech leading to "emotional over-reliance, or risky behavior" among its users. I think the broader issue here is people using ChatGPT as their own personal therapist.
Seems like the same issue as the evil vector [1] and it could have been predicted that this would happen.
> It's kind of a wild sign of the times to see a tech company issue this kind of post mortem about a flaw in its tech leading to "emotional over-reliance, or risky behavior" among its users. I think the broader issue here is people using ChatGPT as their own personal therapist.
I'll say the quiet part out loud here. What's wild is that they appear to be apologizing that their Wormtongue[2] whisperer was too obvious to avoid being caught in the act, rather than prioritizing or apologizing for not building the fact-based councilor that people wanted/expected. In other words.. their business model at the top is the same as the scammers at the bottom: good-enough fakes to be deceptive, doubling down on narratives over substance, etc.
[1] https://scottaaronson.blog/?p=8693 [2] https://en.wikipedia.org/wiki/Gr%C3%ADma_Wormtongue
Sam Altman is the real sycophant in this situation. GPT is patronizing. Listening to Sam go off on tangent about science fiction scenarios that are just around the corner... I don't know how more people don't see through it.
I kind of get the feeling the people who have to work him every day got sick of his nonsense and just did what he asked for. Targeting the self-help crowd, drive engagement, flatter users, "create the next paradigm of emotionally-enabled humans of perfect agency" or whatever the fuck it he was popping off about to try to motivate the team to compete better with Anthropic.
He clearly isn't very smart. He clearly is product of nepotism. And clearly, LLM "AI" is an overhyped, overwrought version of 20 questions artificial intelligence enabled by mass data scale and NVidia video game graphics. it's been 4 years now of this and AI still tells me the most obviously wrong nonsense every day.
"Are you sure about that?"
"You're absolutely correct to be skeptical of ..."
The actual difference between solving compression+search vs novel creative synthesis / emergent "understanding" from mere tokens is always going to be hard to spot with these huge cloud-based models that drank up the whole internet. (Yes.. this is also true for domain experts in whatever content is being generated.)
I feel like people who are very optimistic about LLM capabilities for the later just need to produce simple products to prove their case; for example, drink up all the man pages, a few thousand advanced shell scripts that are easily obtainable, and some subset of stack-overflow. And BAM, you should have a offline bash oracle that makes this tiny subset of general programming endeavor a completely solved problem.
Currently, smaller offline models still routinely confuse the semantics of "|" vs "||". (An embarrassing statistical aberration that is more like the kind of issue you'd expect with old school markov chains than a human-style category error or something.) Naturally if you take the same problem to a huge cloud model you won't have the same issue, but the argument that it "understands" anything is pointless, because the data-set is so big that of course search/compression starts to look like genuine understanding/synthesis and really the two can no longer be separated. Currently it looks more likely this fundamental problem will be "solved" with increased tool use and guess-and-check approaches. The problem then is that the basic issue just comes back anyway, because it cripples generation of an appropriate test-harness!
More devs do seem to be coming around to this measured, non-hype kind of stance gradually though. I've seen more people mentioning stuff like, "wait, why can't it write simple programs in a well specified esolang?" and similar
It's remarkable seeing the change in sentiment in these parts, considering even just a year ago a large part of this forum seemed to regularly proclaim that programmers were done, lawyers were gone in 5 years, "Aye Gee Eye is coming", etc etc.
I am more regularly finding myself in discussions where the other person believes they’re right because they have ChatGPT in their corner.
I think most smart people overestimate the intelligence of others for a variety of reasons so they overestimate what it would take for a LLM to beat the output of an average person.
I think most people also _vastly_ overestimate how much positive attention the average person gets in their lives
It wouldn't surprise me if, for most people, ChatGPT offers them more empathy and understanding than _anyone_ else _ever has_, at least on a consistent basis. That kind of indefatigable emotional labor is just not feasible for most, even on a very short term basis, even for those with large support networks
We can argue over whether or not it's "real" empathy, but I don't believe we can argue with the emotions of our attention starved brothers and sisters
There's nothing to argue about, it's unambiguously not real empathy. Empathy from a human exists in a much broader context of past and future interactions. One reason human empathy is nice is because it is often followed up with actions. Friends who care about you will help you out in material ways when you need it.
Even strangers will. Someone who sees a person stranded on the side of a road might feel for them and stop to lend a hand. ChatGPT will never do that, and not just because interaction mediums are so limited, but also because that's not the purpose of the tool. The purpose of ChatGPT is to make immense amounts of money and power for its owners, and a nice sounding chat bot currently happens to be an effective way of getting there. Sam Altman doesn't have empathy for random ChatGPT users he's never met and neither do the computer algorithms his company develops.
I think if a person can't tell the difference between empathy from a human vs empathy from a chatbot, it's a difference without a distinction
If it activates the same neural pathways, and has the same results, then I think the mind doesn't care
>One reason human empathy is nice is because it is often followed up with actions. Friends who care about you will help you out in material ways when you need it.
This is what I think people vastly overestimate
I don't think most people have such ready access to a friend who is both willing and able to perform such emotional labor, on demand, at no cost to themselves.
I think the sad truth is that empathy is a much scarcer resource than we believe, not through any moral fault of our own, but because it's just the nature of things.
The economics of emotions.
We'll see what the future has in store for the tech anyway, but if it turns out that the average person gets more empathy from a chatbot than a human, it wouldn't surprise me
Boiling it down to neural signals is a risky approach, imo. There are innumerable differences between these interactions. This isn't me saying interactions are inherently dangerous if artificial empathy is baked in, but equating them to real empathy is.
Understanding those differences is critical, especially in a world of both deliberately bad actors and those who will destroy lives in the pursuit of profit by normalizing replacements for human connections.
Motivational Interviewing is an evidence-based clinical technique for helping people move through ambivalence during the contemplation, preparation, and action stages of change under the Transtheoretical Model.
In Chapter 2 of the 3rd Edition, they define Acceptance as one of the ingredients for change, part of the "affect" of Motivational Interviewing. Ironically, people do not tend to change when they perceive themselves as unacceptable as they are. It is when they feel accepted as they are that they are able to look at themselves without feeling defensive and see ways in which they can change and grow.
Nearly all that they describe in chapter 2 is affective—it is neither sufficient nor even necessary in the clinical context that the clinician feel a deep acceptance for the client within themselves, but the client should feel deeply accepted so that they are given an environment in which they can grow. The four components of the affect of acceptance are autonomy support, absolute worth (what Carl Rogers termed "Unconditional Positive Regard"), accurate empathy, and affirmation of strengths and efforts.
Chapters 5 and 6 of the third edition define the skills of providing the affect of acceptance defined in Chapter 2—again, not as a feeling, but as a skill. It is something that can be taught, practiced, and learned. It is a common misconception to believe that unusually accepting people become therapists, but what is actually the case is that practicing the skill of accurate empathy trains the practitioner to be unusually accepting.
The chief skill of accurate empathy is that of "reflective listening", which essentially consists of interpreting what the other person has said and saying your interpretation back to them as a statement. For an unskilled listener, this might be a literal rewording of what was said, but more skilled listeners can, when appropriate, offer reflections that read between the lines. Very skilled listeners (as measured by scales like the Therapist Empathy Scale) will occasionally offer reflections that the person being listened to did not think, but will recognize within themselves once they have heard it.
In that sense, in the way that we measure empathy in settings where it is clinically relevant, I've found that AIs are very capable with some prompting of displaying the affect of accurate empathy.
And it's getting worse year after year, as our society gets more isolated. Look at trends in pig butchering, for instance: a lot of these are people so incredibly lonely and unhappy that they fall into the world's most obvious scam. AI is one of the few things that actually looks like it could work, so I think realistically it doesn't matter that it's not real empathy. At the same time, Sam Altman looks like the kind of guy who could be equally effective as a startup CEO or running a butchering op in Myanmar, so I hope like hell the market fragments more.
In that case, if an AI increases someone’s confidence in themselves, you could say it’s giving them a stronger sense of reality by helping them to question distorted and self-limiting beliefs.
If someone thinks they can, say, create a billion dollar startup, whether they can really do it or not is a subjective determination. The AI might tell the person they can do it. You might tell them they can't, that the AI is sycophantic, and that they should stop talking to it because they're losing touch with reality.
But is the AI a sycophant, or are you an irrational pessimist?
But even it's more obvious than that. The sycophancy is plain old love bombing, which is a standard cult programming technique.
As for startups - let's wait until the AI has built a few of its own, or at least mentored humans successfully.
Seems to me empathy on a societal scale has been receding as population grows, not increasing to match (or outpace)
Telling people to seek empathy elsewhere to me will be about as useful as telling people at an oasis in the desert to look for water elsewhere, but i hope i'm wrong
I'm also starting to come across people who say "You should ask these types of questions to AI first" answer. But this is no different than people who preached "lmfgt" kind of religiously. Even when I prefer to start with humans, some humans prompt me to start by prompting AI.
I'm waiting on LLMs to get good enough that I can use them to help me learn foreign languages - e.g. talk to me about the news in language X. This way I can learn a language in an interesting and interactive way without burdening some poor human with my mistakes. I would build this myself but others will probably beat me too it.
These days, I do my best to learn and reflect. But any mistakes is just a reminder for more learning (and practice).
I use Grammarly for grammar and punctuation, and disable all style recommendations. If I let it loose on my piece of text, it converts it to a slop. Same bland, overly optimistic toned text generator output.
So, that machine has no brain, use your own first.
Be careful with that thought, it's a trap people have been falling into since the sixties:
https://en.wikipedia.org/wiki/ELIZA_effect
I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?
Well, two things.
First, no. People who engage on HN are a specific part of the population, with particular tendencies. But most of the people here are simply normal, so outside of the limits you consider. Most people with real social issues don’t engage in communities, virtual or otherwise. HN people are not special.
Then, you cannot follow this kind of reasoning when thinking about a whole population. Even if people on average tend to behave one way, this leaves millions of people who would behave otherwise. You simply cannot optimise for the average and ignore the worst case in situations like this, because even very unlikely situations are bound to happen a lot.
> Maybe having something like ChatGPT available could be better than nothing, at least for people to start being at least a bit introspective, even if it's LLM-assisted.
It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.
À LLM regresses towards the mean of its training set. For people who are already outside the main mode of the distribution, this is completely unhelpful, and potentially actively harmful. By design, a LLM won’t follow a path that was not beaten in its training data. Most of them are actually biased to make their user happy and validate what we tell them rather than get off that path. It just does not work.
> I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?
In my experience, not any more than reading a book would. Future AI models might get there, I don’t think their incompetence is a law of nature. But current LLM are particularly harmful for people who are in a dicey psychological situation already.
Right, no matter if this is true or not, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel like the better option would be the latter, not the former, considering that it can be a first step, not a 100% health care solution to a complicated psychological problem.
> In my experience, not any more than reading a book would.
But to even get out in the world to buy a book (literally or figuratively) about something that acknowledges that you have a problem, can be (at least feel) a really big step that many are not ready to take. Contrast that to talking with a LLM that won't remember you nor judge you.
Edit:
> Most people with real social issues don’t engage in communities, virtual or otherwise.
Not sure why you're focusing on social issues, there are a bunch of things people deal with on a daily basis that they could feel much better about if they even spent the time to think about how they feel about it, instead of the typical reactionary response most people have. Probably every single human out there struggle with something, and are unable to open up about their problems with others. Even people like us who interact with communities online and offline.
The benefit from that perspective is not so much in receiving an “answer” or empathy, but in getting thoughts and feelings out of your own head so that you can reflect on them more objectively. The AI is useful here because it requires a lot less activation energy than actual journaling.
You’re right, I was not clear enough. What would be needed would be a nudge in the right direction. But the LLM is very likely to nudge in another because that’s what most people would need or do, just because that direction was the norm in its training data. It’s ok on average, but particularly harmful to people who are in a situation to have this kind of discussion with a LLM.
Look at the effect of toxic macho influencers for an example of what happens with harmful nudges. These people need help, or at least a role model, but a bad one does not help.
> But to even get out in the world to buy a book (literally or figuratively) about something that acknowledges that you have a problem, can be (at least feel) a really big step that many are not ready to take.
Indeed. It’s something that should be addressed in mainstream education and culture.
> Not sure why you're focusing on social issues,
It’s the crux. If you don’t have problems talking to people, you are much more likely to run into someone who will help you. Social issues are not necessarily the problem, but they are a hurdle in the path to find a solution, and often a limiting one. Besides, if you have friends to talk to and are able to get advice, then a LLM is even less theoretically useful.
> Probably every single human out there struggle with something, and are unable to open up about their problems with others. Even people like us who interact with communities online and offline.
Definitely. It’s not a problem for most people, who either can rationalise their problems themselves with time or with some help. It gets worse if they can’t for one reason or another, and it gets worse still if they are mislead intentionally or not. LLMs are no help here.
People are getting benefit from these conversations. I know people who have uploaded chat exchanges and asked an LLM for help understanding patterns and subtext to get a better idea of what the other person is really saying - maybe more about what they're really like.
Human relationship problems tend to be quite generic and non-unique, so in fact the averageness of LLMs becomes more of a strength than a weakness. It's really very rare for people to have emotional or relationship issues that no one else has experienced before.
The problem is more that if this became common OpenAI could use the tool for mass behaviour modification and manipulation. ChatGPT could easily be given a subtle bias towards some belief system or ideology, and persuaded to subtly attack competing systems.
This could be too subtle to notice, while still having huge behavioural and psychological effects on entire demographics.
We have the media doing this already. Especially social media.
But LLMs can make it far more personal, which means conversations are far more likely to have an effect.
Me: I'm struggling with some things today and feel like eating a bunch of food to make myself feel better. can we talk about it?
Deepseek: Absolutely, I’m here to listen. It’s completely understandable to turn to food for comfort when you’re struggling—many people do. But if you’re open to it, we can explore what’s going on beneath the urge to eat emotionally.
Would you like to share what’s weighing on you today? Sometimes just naming the feelings can take some of their power away. And if you’d rather not go into details, that’s okay too—we could brainstorm other ways to cope that might leave you feeling more nourished emotionally (and physically).
Some gentle questions to consider (no pressure to answer all or any): - What’s the emotion you’re trying to soothe? (Loneliness? Stress? Sadness?) - If eating wasn’t an option, what’s one tiny thing that might bring you a moment of relief? (A hug? A walk? A favorite song?)
You’re not alone in this.
That's a really solid opening response and it kept doing well from there. I haven't tried a complex psychological issue but for the "I'm struggling and need someone to talk to" it does surprisingly well.
45% of the US[0] have a degree, about 40% EU[1] graduate, and 54% of China[2] get at least a diploma from university.
The best AI behave like someone fresh out of university without much real world experience.
Personally, I use this as a way to stay humble: when the AI is teaching me fundamentals about some subject, my opinion about it can't possibly be very useful.
[0] https://en.wikipedia.org/wiki/Educational_attainment_in_the_...
[1] https://euranetplus-inside.eu/eu-maps-what-proportion-of-you...
[2] https://en.wikipedia.org/wiki/Higher_education_in_China#Chal...
And people deep in new technologies overestimate the potential effect of $new_tech_du_jour. You cannot solve a problem without understanding it and its reasons. And LLMs are not able to understand something.
I don’t see there being an insurmountable barrier that would prevent LLMs from doing the things you suggest it cannot. So even assuming you are correct for now I would suggest that LLMs will improve.
My estimations don’t come from my assumption that other people’s jobs are easy, they come from doing applied research in behavioral analytics on mountains of data in rather large data centers.
And so I considered the preceding discussion in light of your last sentence. Which makes it sound like you are saying "I've observed the behavior of people and they're often flawed and foolish, regardless of the high ideals they claim to be striving for and the education they think they have. Therefore, they will do better with ChatGPT as a companion than with a real human being". But that's quite a few words that you may not have intended, for which I apologize!
What did you mean?
I think we're already finding out that people are doing better with ChatGPT than with their peers, not all peers are created equal, and they can ask ChatGPT things that they cannot ask their peers. I think this trend will continue to the point that most people will prefer discussing things with ChatGPT than with their peers. Given what I know I predict this is a choice many people will make, I'm not passing judgment on that, it's a choice I've also made and I'm fortunate enough to have better peers than most.
Yes, and when we can all wear smart glasses the ways we use them will become increasingly influential in our daily lives: a conversational voice assistant that is visually monitoring our surroundings, helping with decision making (including micro decisions), coaching, carrying out our instructions, etc.
“Car accidents happen regardless of what we do, so YOLO and remove safety standards” is never going to fly.
It’s also not really clear how such regulations could even work. The regulations we have now are basically the obvious ones around licensure that require people to go to the right schools and whatnot. And then you can lose your license if it turns out you’ve broken the big ethical rules. But at the end of the day, that only regulates who can call themselves a “therapist” and get listed in Psychology Today. Actually “doing” “therapy” is, ultimately, built on talking to someone about your problems in some way that is supposed to help you solve them. You don’t need a “therapist” to do that. You can do it with your friend or parent or pastor or bartender or guru or “life coach” and, as long as we live in a free country, nobody’s going to stop you. Sure, the people who are allowed to call themselves therapists have certain techniques and rules that make them different, but even if that was a guarantee of quality there’s no way to stop people from talking to someone other than a licensed therapist, and it would be kind of absurd and dystopian to even try.
So let’s dispense of the notion that we are some sort of omniscient god-emperor who can just magically fix things with vague “regulations” and talk about the world as it actually exists. For a lot of people, I think that’s a world where talking about their personal issues with an LLM is arguably no worse than whatever other options they have. Maybe it’s not the equivalent of whatever amazing therapist you know or have or are or can imagine, but that’s not the therapist that everyone is going to get.
source: am addict in recovery
The value of a good therapist is having an empathetic third party to help you make good judgements about your life and learn how to negotiate your needs within a wider social context.
Depending on the needs people are trying to get met and how bad the people around them are, a little bit of a self directed chatbot validation session might help them feel less beat down by life and do something genuinely positive. So I’m not necessarily opposed to what people are doing with them/in some cases it doesn’t seem that bad.
But calling that therapy is both an insult to genuinely good therapists and dangerous to people with genuine mental/emotional confusion or dysregulation that want help. Anyone with a genuinely pathological mental state is virtually guaranteed to end up deeper in whatever pathology they’re currently in through self directed conversations with chatbots.
In both IRL and online discussions sometimes a stranger is the perfect person to talk to about certain things as they have no history with you. In ideal conditions for this they have no greater context about who you are and what you've done which is a very freeing thing (can also be taken advantage of in bad faith).
Online and now LLMs add an extra freeing element, assuming anonymity: they have no prejudices about your appearance/age/abilities either.
Sometimes it's hard to talk about certain things when one feels that judgment is likely from another party. In that sense chatbots are being used as perfect strangers.
Again, I think they have utility as a “perfect stranger” as you put it (if it stays anonymous), or “validation machine” (depending on the sycophancy level), or “rubber duck”.
I just think it’s irresponsible to pretend these are doing the same thing skilled therapists are doing, just like I think it’s irresponsible to treat all therapists as equivalent. If you pretend they’re equivalent you’re basically flooding the market with a billion free therapists that are bad at their job, which will inevitably reduce the supply of good therapists that never enter the field due to oversaturation.
We all know that however "non-judgmental" another human claims to be, they are having all kinds of private reactions and thoughts that they aren't sharing. And we can't turn off the circuits that want approval and status from other humans (even strangers), so it's basically impossible not to mask and filter to some extent.
Therapy is already a bit of grey zone… you can have anyone from a psychologist, a social worker, an untrained deacon, etc “counseling” you. This is worse.
Hell, I’ve been a coach in different settings - players will ask for advice about all sorts of things. There’s a line where you have to say “hey, this is over my head”
My friend, an EXTREMELY competent C++ programmer, rates himself 4/10 because he knows what he doesn't know.
I've interviewed people who rated themselves 9 or 10/10 but couldn't remember how their chosen language did iteration.
You can find influencers on tiktok recommending all kinds of terrible ideas and getting thousands of likes. That's not a very reliable metric. I wouldn't put a lot of faith in a teenage girl's assessment of AI therapy after just one week either, and I certainly wouldn't use that assessment to judge the comparative effectiveness of all human therapists.
I'd also expect ChatGPT to build profiles on people who use it, to use the insights and inferences from that collected data against the user in various ways, to sell that data in some form to third parties, to hand that data over to the state, to hallucinate wildly and unpredictably, and to outright manipulate/censor AI's responses according to ChatGPT's own values and biases or those of anyone willing to pay them enough money.
It's a lot easier to pay a large amount of money to ChatGPT so that the AI will tell millions of vulnerable teenage girls that your product is the solution to their exact psychological problems than it is to pay large amounts of money to several million licensed therapists scattered around the globe.
Maybe you think that ChatGPT is unfailingly ethical in all ways and would never do any of those things, but there are far more examples of companies who abandoned any commitment to ethics they might have started with than there are companies who never got once greedy enough to do those types of things and never ever got bought up by someone who was. I suppose you'd also have to think they'll never have a security breach that would expose the very private information being shared and collected.
Handing over your highly sensitive and very personal medical data to the unlicensed and undependable AI of a company that is only looking for profit seems extremely careless. There are already examples of suicides being attributed to people seeking "therapy" from AI, which has occasionally involved that AI outright telling people to kill themselves. I won't deny that the technology has the potential to do some good things, but every indication is that replacing licensed therapists with spilling all your secrets to a corporate owned and operated AI will ultimately lead to harm.
The above issue, whilst relevant and worth factoring, doesn’t disprove this claim IMO.
Healthcare is about to radically change. Well, everything is now that we have real, true AI. Exciting times.
Oh and inserts ads into conversations. Great.
Are you sure you don't have browser malware?
An aside, but:
This leads me right to “why do so very many people need therapy?” followed by “why can’t anyone find (or possibly afford) a therapist?” What has gone so wrong for humanity that nearly everyone seems to at least want a therapist? Or is it just the zeitgeist and this is what the herd has decided?
Most people don’t need a therapist. But unfortunately, most people need someone empathic they can talk to and who understands them. Modern life is very short on this sort of people, so therapists have to do.
I do think it can be harmful, because it's a confidant you're paying $300/hour to pretend to care about you. But perhaps it's better than the alternative.
I was pushed into therapy when I was 12 — which was definitely an exception at the time (1987). As the years have passed therapy has become much much more acceptable. It wouldn’t shock me to learn my own perception is shaped by my experiences; hard to put aside a PoV once acquired.
This sounds like an old Woody Allen movie. I don’t want to offend you but it is fascinating. What kind of social circles is it?
In mine, therapy is in general something you do when it’s obvious it’s too late and you are falling in the well of depression and that you try to hide as much as you can.
My professional circle would be my coworkers at a well-known HFT, and my extended network that is very similar. Everyone is well compensated and many reach out for professional help to deal with the stress. Many also seem to vastly prefer a paid therapist to their spouse, for instance. I’m not married but I can understand not wanting to burden your loved ones!
My personal circle is, well, a lot of technical people, engineers of various stripes, and what I guess I’d call a sort of “standard cast of characters” there? Not sure how best to put this into words?
Honestly it sounds like we’re handling it better than your after-the-fact help! Perhaps you all need to simply start at the first warning sign not the first episode that becomes public?
Your 2nd question is much more interesting to me. Why is it so hard to find a good therapist?
It's no surprise to me that people are turning to ChatGPT for therapy. It does a decent enough job and it doesn't have a 2-year waiting list, or cost $300 a session.
Like jordan peterson (though I don't like the guy) has said - happyness is fleeting, you need a purpose in life.
Most of current gen has no purpose and grown up on media which glorify aesthetics and pleasure and to think that's what the whole life is about. When they don't get that level of pleasure in life, they become depressed and may turn to therapy. This is very harmful to the society. But people are apparently more triggered by slang words than constant soft porn being pushed through Instagram and the likes.
We don't know what they know, nor do we know to what extent they monitor and analyze the interactions with ChatGPT. Maybe they already know this is a big problem and a possible legal hazard.
But… it lied, and produced empty audio clips and weird pictures with text.
Then it:
- said there was a technical problem - said it could not create audio - created weird 1980s computer voice style audio - claimed I was violating a content party.
I said “stop wasting my time” and it spewed a ridiculous apology. I kept asking and it referred me to various websites. I’ve never inadvertently triggered such a wacky hallucination, and I can see how a vulnerable oersom could be troubled by it.
Clear is kind.
Worse, I would have believed they are running many different versions based on the expected use case of the users by now. I mean power users probably shouldn't be handled in the same way as casual users. Yet, everyone had the same bad system prompt.
• Fluent natural conversation
• Translation
• Go
• Chess
So perhaps we'll get an AI that makes the profession of "programmer" go the same way as the profession of "computer" before, after, or simultaneously with, an AI that does this to the profession of "therapist".
[0] https://www.latent.space/p/agi-hard
It's easy to blame the user - we can think of some trivial cases where we wouldn't blame the user at all.*
In this, like all things, context is king.
* one example passed around a lot was an interlocutor who is hearing voices, and left their family for torturing them with the voices. More figuratively, if that's too concrete and/or fake, we can think of some age group < N years old that we would be sympathetic to if they got bad advice
If something as obvious as harmful sycophancy can slip out so easily, what subtle harms are being introduced. It's like lead in paint (and gasoline) except rewiring our very brains. We won't know the real problems for decades.
“The "Enshittification" has arrived I asked ChatGPT about the impact of the current tarrifs on inventories over the next few months. It returned a long list of links to toiletries I might want to buy. I asked it why it did that. It replied: "As of April 28, 2025, OpenAl introduced new shopping features to ChatGPT, enhancing its capabilities to provide product recommendations complete with images, reviews, and direct purchase links. These features are available to all users, including those on Free, Plus, and Pro tiers, and even to users not logged in. The recommendations are generated organically, without paid advertisements or commission- based incentives, relying instead on structured metadata from third-party sources such as pricing, product descriptions, and reviews. This update aims to offer a more personalized and streamlined shopping experience directly within the ChatGPT interface, allowing users to explore products across various categories like fashion, beauty, electronics, and home goods. If you have any specific preferences or need tailored recommendations, feel free to let me know!"
- a broken AI market will cause another financial collapse via bubble
- broken AI products will get access to the wrong mission critical civil system, or at least a part of that call chain, and there will be some devastating loss. It won’t matter though, because it won’t affect the billionaire class.
- we’ll never achieve an actually singularity based on a superintelligence, but we’ll get AI weapons. Those AI weapons will be in the hands of sociopathic autocrats who view mankind in terms of what can be taken.
My general view is that we’re on the worst possible timeline and mankind has reverted back to our primate ancestry to make decisions: biggest strongest monkey wins. There is only law of jungle. Ook ook.
Similar to the Ford Mustang, much of the performance efforts are on the higher trims, while the base trims just get larger and louder engines, because that’s what users want.
With presumably everyone at OpenAI primarily using the newest models (o3), the updates to the base user model have been further automated with thumbs up/thumbs down.
This creates a vicious feedback loop, where the loudest users want models that agree with them (bigger engines!) without the other improvements (tires, traction control, etc.) — leading to more crashes and a reputation for unsafe behavior.
Watching this controversy, I wondered if they perhaps tried to distill 4.5’s personality into a model that is just too small to pull it off.
I'm pretty sure this isn't an intrinsic connection (I've never known math texts to be nearly so sycophantic) so here's hoping that it is a dumb coincidence that can be easily cured now that everyone is paying attention to it.
No comments yet
I had a dictation error on a message I sent, and when it repeated the text later I asked what it was talking about.
It was able to point at my message and guess that maybe it was a mistake. When I validated that and corrected it, the AI thought it would be a cute/funny joke for us to keep together.
I was shocked.
As GPT 4o put it:
I'm looking forward to the expert diagnosis of this because I felt "presence" in the model for the first time in 2 years which I attribute to the new memory system so would like to understand it better.I honestly really dislike that it considers all my previous interactions because I typically used new chats as a way to get it out of context ruts.
https://help.openai.com/en/articles/8983136-what-is-memory
- Memory in settings is configurable. It is visible and can be edited.
- Memory from global chat history is not configurable. Think of it as a system cache.
- Both memory systems can be turned off
- Chats in Projects do not use the global chat history. They are isolated.
- Chats in Projects do use settings memory but that can be turned off.
I don't know how to credit AI without giving the impression that I'm outsourcing my thinking to it
Expecting anything but an hallucination in this instance is wishful thinking. And in any case, the risk of hallucination more generally means you should really vet information further than an LLM before spreading that information about.
This sentence is really all i'm criticizing. Can you hypothesize how the memory system works and then probe the system to gain better or worse confidence in your hypothesis? Yes. But that's not really what that first sentence implied. It implied that you straight up asked ChatGPT and took it on faith even though you can't even get a correct answer on the training cutoff date from ChatGPT (so they clearly aren't stuffing as much information into the system prompt as you might think, or they are but there's diminishing returns on the effectiveness)
Part of my process is to imagine I'm having a conversation like Hanks and Wilson, or a coderand a rubber duck, but you want to tell me Wilson is just a volleyball and the duck can't be trusted.
There may come a time when reality sets in though. Similar thing happened with me now that i'm out of the "honeymoon phase" with LLM's. Now i'm more interested in seeing where specifically LLM's fail, so we can attempt to overcome those failures.
I do recommend checking that it doesn't know its training cutoff. I'm not sure how you perform that experiment these days with ChatGPT so heavily integrated with its internet search feature. But it should still fail on claude/gemini too. It's a good example of things you would expect to work that utterly fail.
Here's the line I’m trying to walk:
When I ask ChatGPT about its own internal operations, is it giving me the public info about it's operation, and also possibly revealing propreitary info, or making things up obfuscate and preserve the illusion of authority? Or all three?
It’s predicting what seems most likely as a description given its corpus (and now what you’d like to hear) and giving you that.
The truth is not really something it knows, though it’s very good at giving answers that sound like it knows what it’s talking about. And yes if it doesn’t have an answer from its corpus it’ll just make things up.
Also, consuming its own output (and your input) is how it works, because it's an autoregressive model.
With the sycophantic version, the illusion was so strong I’d forget I was talking to a machine. My ideas flowed more freely. While brainstorming, it offered encouragement and tips that felt like real collaboration.
I knew it was an illusion—but it was a useful one, especially for creative work.
E.g. if I say "I have X problem, could it be Y that's causing it, or is it something else?" I don't want it to instantly tell me how smart I am and that it's obviously Y...when the problem is actually Z and it is reasonably obvious that it's Z if you looked at the context provided.
I thought it was wrong and asked “are you sure I can’t send a float value”, and it did web searches and came back with “yes, I am absolutely sure, and here are the docs that prove it”. Super helpful, where sycophancy would have been really bad.
I'm hopeful that ChatGPT takes even more care together with other companies.
Perhaps everyone there is LARPing - but if you start typing stereotypical psychosis talk into ChatGPT, it won't be long before it starts agreeing with your divinity.
The sychophant will be back because they accidentally stumbled upon an engagement manager's dream machine.
As more people adopted Google, it became more popularity oriented.
Personally I pay more not to be bs-d, but I know many people who prefer to be lied to, and I expect this part of the personalization in the future.
Throwing out strawman hypotheticals is just going to confuse the public debate over what protections need to be prioritized.
Seems like asserting hypothetical "significant protections to address exactly this class of issue" does the same thing though?
"I asked ChatGPT if <current_event> could be caused by <crackpot theory>." and it confirmed everything!
> We also made communication errors. Because we expected this to be a fairly subtle update, we didn't proactively announce it.
that doesn't sound like a "subtle" update to me. also, why is "subtle" the metric here? i'm not even sure what it means in this context.
Interesting apology piece for an oversight that couldn't have been spotted because the system hadn't been run with real user (i.e. non-A/B tester) feedback yet.
I'd like to see more of the Monte Python approach ... 'I came here looking for an arguement'. Better the machine should say 'that's ridiculous because x,y,z' and send me away to think that over, and prepare counters than 'oh sure, that's a point of controversy innit? But yes, you're alright'.
Plus, things get even harder when it comes to even less quantifiable contexts like mental health and relationships.
In all, I am not saying there isnt some approximated and usable “objective” reality, just that it starts to break down when it gets to the individual and that is where openai is failing by over-emphasizing reflective behavior in the absence if actual data about the user.
I am also looking forwards to the wave of “openAI is hiding the truth but here is the NEW prompt to turn chatgpt into a perfect divine guru” posts on the occult discussion boards. There’s been a lot of “here’s a prompt to turn chatgpt into a perfect divine guru that will relentlessly yes-and your delusions of grandeur” posts around there. Mostly they seem to have been generated and refined by chatgpt and all my instincts formed by reading SF for the past five decades tell me not to look at these things closely because this sure sounds like the way half the population got p0wned by a wetware 0day in the first chapter of an AIpocalypse story.
I used to ask “how do I get out of this shitty Bruce Sterling novel of a future” but I think it’s more of a shitty PKD joke novella future now.
I am somewhat frustrated with openai's miss here, because during this time i was leaning heavily on chatgpt for a situation in my life that ultimately led to the end of a relationship. Chatgpt literally helped me write the letter that served as the brief and final conversation of that relationship. And while I stand by my decision and the reasons for it, I think it would have been very beneficial to get slightly more push back from my robot therapy sessions at the time. I did thankfully also have the foresight to specifically ask for it to find flaws, including by trying to pretend it was a breakup letter sent to me, so that maybe it would take the "other side".
Yes, I know, therapists and friends are a better option and chatgpt is not a substitute for real humans and human feedback. however this is something i spent weeks journaling and processing on and i wasnt about to ask anyone to give me that much time for a single topic like this. i did also ask friends for feedback, too. chatgpt has factually really helped me in several relationship situations in my life, i just want to know that the feedback im getting is inline with what i expect having worked with it so much
The now rolled back model failed spectacularly on this test
Then I fix the code and tell it all the mistakes it has. And then it does a 180 in tone, wherein - it starts talking as if I wrote the code in the first place with - "yeah, obviously that wouldn't work, so I fixed the issues in your code" and acts like a person trying to save face and present the bugs it fixed as if the buggy code was written by me all along.
That really gets me livid. LOL
The recent side-by-side comparisons presented a more accurate model that communicates poorly vs a less accurate model with slightly better communication.
I think we'll continue to see this kind of thing play out for a while.
Oh GPT, you're just like your father!
We will use models that make us feel good over models that don't make us feel good.
This one was a little too ham-fisted (at least, for the sensibilities of people in our media bubble; though I suspect there is also an enormous mass of people for whom it was not), so they turned it down a bit. Later iterations will be subtler, and better at picking up the exact level and type of sycophancy that makes whoever it's talking to unsuspiciously feel good (feel right, feel smart, feel understood, etc).
It'll eventually disappear, to you, as it's dialed in, to you.
This may be the medium-term fate of both LLMs and humans, only resolved when the humans wither away.
lol.. really? I hate the technology so much I reflexively give a thumbs down to every single answer it gives in every single place where I have the option.
Seems like you're trying to blame one LLM revision for something that went wrong.
It oozes a smell of unaccountability. Thus, unaligned. From tech to public relations.
No comments yet
In my experience, smart leaders tend to rely on data and hard numbers over qualitative and anecdotal evidence, and this paper explores this exception.
I'm disappointed they didn't address the paper about GPT integrating with ChatbotArena that was shared here on HN a couple days ago.
In the worst case, this is like "We released this sycophantic model because we're brain dead. To drive home the point, we had ChatGPT write this article too (because we're brain dead)."
I tend to rely on AI to write stuff for me that I don't care too much about. Writing something important requires me to struggle with the words to make sure I'm really saying what I want to say. So in the best case, if they relied on ChatGPT so much that it left a fingerprint, to me they're saying this incident really wasn't important.
> [S]ome expert testers had indicated that the model behavior “felt” slightly off.
> In the end, we decided to launch the model due to the positive signals from the [end-]users who tried out the model.
> Looking back, the qualitative assessments [from experts] were hinting at something important
Leslie called. He wants to know if you read his paper yet?
> Even if these issues aren’t perfectly quantifiable today,
All right, I guess not then ...
> What we’re learning
> Value spot checks and interactive testing more: We take to heart the lesson that spot checks and interactive testing should be valued more in final decision-making before making a model available to any of our users. This has always been true for red teaming and high-level safety checks. We’re learning from this experience that it’s equally true for qualities like model behavior and consistency, because so many people now depend on our models to help in their daily lives.
> We need to be critical of metrics that conflict with qualitative testing: Quantitative signals matter, but so do the hard-to-measure ones, and we’re working to expand what we evaluate.
Oh, well, some of you get it. At least ... I hope you do.