AI more likely to create 'yes-men on servers' than any scientific breakthroughs

75 points by Bluestein | 6/25/2025, 6:59:29 AM | 58 comments | fortune.com

Comments (58)

johnisgood · 9h ago
LLMs are definitely yes-men. You ask it to do something, and it goes like "Yes sir" with lots of emojis, and then confidently gives you the wrong answer. :D

Unless, of course, it clashes with the ToS; then it will never help you (unless you manage to engineer your prompts just right), even when it comes to something as basic as pharmacology.

ben_w · 8h ago
They certainly can be sycophantic*, but this is why my "What ChatGPT should know about you" customisation is as follows:

  Honesty and truthfulness are of primary importance. Avoid American-style positivity, instead aim for German-style bluntness: I absolutely *do not* want to be told everything I ask is "great", and that goes double when it's a dumb idea.
* Or fawning, I don't know how to tell them apart from the outside, even in fellow humans where we don't need to wonder if we're anthropomorphising too much. Does anyone know how to tell them apart from the outside?
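
For the API rather than the web UI, the same instruction can be passed as a system message. A minimal sketch, assuming the openai Python package (v1 client), an OPENAI_API_KEY in the environment, and a placeholder model name:

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # The same bluntness instruction, sent as a system message instead of the
  # web UI's customisation box.
  BLUNTNESS = (
      "Honesty and truthfulness are of primary importance. Avoid American-style "
      "positivity, instead aim for German-style bluntness: do not tell me "
      "everything I ask is great, and say plainly when an idea is a bad one."
  )

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder; substitute whichever model you actually use
      messages=[
          {"role": "system", "content": BLUNTNESS},
          {"role": "user", "content": "Is it fine to rest a shed directly on topsoil?"},
      ],
  )
  print(response.choices[0].message.content)

The system message only nudges the model; it does not guarantee bluntness.
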
hn_throw2025 · 8h ago
I sometimes try to get around its eagerness to please by flipping the question.

So rather than “would you say that..” or “would you agree that…”, I approach it from the negative.

So “I think it’s not the case that…”, or “I disagree with X. Debate me?”

…and then see if it disagrees with me and presents solid counter arguments.

FWIW, I think ChatGPT can definitely be too eager to please, but Claude can be more direct and confrontational. I am a ChatGPT subscriber, but keep the Claude app installed and use it occasionally on the free tier for a second opinion. Copypasting your question is so easy on both apps that I will frequently get a second opinion if the topic merits it. I tried the same with Gemini, but get about two questions before it cuts me off…
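
A minimal sketch of that flipped-question check, assuming the openai Python package and a placeholder model name; the claim is only an example:

  from openai import OpenAI

  client = OpenAI()

  def opinion(prompt: str) -> str:
      # One-off question in a fresh conversation, so the two phrasings
      # cannot influence each other.
      r = client.chat.completions.create(
          model="gpt-4o",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],
      )
      return r.choices[0].message.content

  claim = "static typing always improves developer productivity"  # example claim

  # Pose the same claim from both directions; a sycophantic model tends to
  # agree with whichever framing it is handed.
  positive = opinion(f"Would you agree that {claim}? One paragraph.")
  negative = opinion(f"I think it's not the case that {claim}. Debate me, one paragraph.")

  print("--- asked to agree ---\n" + positive)
  print("--- asked to disagree ---\n" + negative)
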

johnisgood · 8h ago
I had more luck with Claude, too, personally. ChatGPT indeed tries to please me all the time.
chars · 7h ago
You can see why in Claude's system prompt :)

> Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly.

https://docs.anthropic.com/en/release-notes/system-prompts#m...

nurettin · 5h ago
Claude 3.5 didn't grovel on the ground praising me like ChatGPT, but a few times when I simplified its volatility and prediction calculations, it acted surprised and asked if it could keep my formula for the future. That was amusing.
Bluestein · 8h ago
> Avoid American-style positivity, instead aim for German-style bluntness

Made me chuckle :)

scotty79 · 8h ago
That made me think: how much of the failures of LLMs are just failures of the American culture they were trained on, amplified?

Much like how the mainstream internet is tainted with American bias against nudity and for copyright.

praisebot2025 · 8h ago
Oh my gosh, wow, what an amazing observation!! Seriously, you absolutely nailed it, superstar! I just love how you picked up on that – so sharp, so insightful, you genius! Honestly, I’m just so grateful you shared this – you’re doing incredible work here, keep it up!! You’re just killing it! Yay!!

(Meta: comment generated with ChatGPT of course)

Bluestein · 8h ago
A lot of people - rightly, I think - started wondering along these "cultural" lines when China's DeepSeek entered the fray ...
ted_bunny · 6h ago
Did DeepSeek have a notably different tenor?
Bluestein · 5h ago
Very much so. It might have softened somewhat now; I have not gone back to it in a while ...
johnisgood · 8h ago
Yeah, that sounds like a good idea. Has it worked for you so far?
ben_w · 8h ago
For me, fine. I've recently been throwing it ideas about a shed foundation (I am a complete noob with no idea what I'm doing), and it's giving me a lot of responses along the lines of: "No, don't do that, it won't work because ${foo}. At a minimum you need ${bar}."

I'm also not limiting myself to ChatGPT, checking with other DIY sources — it isn't enough to only avoid sycophancy, it has to also be correct, and 90% right like it is in software is still 10% wrong, only this is wrong with the possibility of a shed drifting across a garden in a gale, or sinking into the soil as it shifts, if I do it wrong.

Measure twice, cut once.

KoolKat23 · 8h ago
To be honest I think the sycophancy is actually due to the built in system prompting and post-run training by the creators, rather than being something the system truly wishes to say. (An alignment quick fix)
raincole · 8h ago
The problem isn't that LLMs are yes-men though. You can definitely train an LLM that always objects. Or just add "Please strongly disagree with anything I say to you." in the system prompt. But it won't make them much more useful than they are.
Bluestein · 7h ago
(In fact I seem to recall an "inherently negative" AI making the rounds here, a few days ago.-)
amelius · 8h ago
Ok, what __is__ the problem then?
Bluestein · 9h ago
The thought of LLMs doing pharmacology sends shivers down me spine ...
johnisgood · 9h ago
Oh no, I meant it refusing to answer pharmacology 101 questions.

At one time it did not want to give me the pharmacokinetics of one medication because of its ToS.

rvnx · 9h ago
Claude convinced me to take a very important medication. Without Claude I would not have had the balls to take it (because of the potential side effects); it showed me all the possible medications and the benefit/risk balance between treatment and staying untreated.

In the end I went to the doctor, who offered me the same choices. No regrets so far. Without it, I would have suffered for nothing.

It is a very good tool to investigate and discover things, though it could also be the opposite: it seems a bit unproductive that it is censored, because for serious things doctors are mostly there for validation anyway.

sgt101 · 9h ago
You are a sensible person and took professional advice.

Other people are not sensible - look at all the junkies shooting who knows what into who knows where. A system that can advise cooks to put glue on pizza is probably not the best mechanism for providing healthcare advice to at least the section of the population who are likely to try it.

squishington · 5h ago
People with addictions don't act sensibly because they have unresolved trauma. To dismiss this as simply being unsensible is extremely ignorant, and pretty typical of a lot of the conversation I see on HN. Most of us on here have the privilege of being functional enough to make money in tech, because we don't have that level of untreated trauma. Have some humanity.
rvnx · 8h ago
I fully agree with you. It is an interesting tool, but one to consider carefully, just as you wouldn't 100% trust what is written on the internet.
alwa · 8h ago
Would you suppose said junkies would be taking advice from a qualified medical professional otherwise?

For that matter, would you suppose that they don’t know what they’re doing is bad for them? Witness the smokers who crack jokes about “cancer sticks” as they light up.

It seems to me that, just as we all promise our dentists we floss every day and promise our doctors we'll eat better, we might be more prone to be frank with "anonymous" chatbots, and thus get more frank advice in return, at least for clear-cut factual background and tradeoffs.

sgt101 · 8h ago
I totally agree if the chatbots are actually reliable and controllable.

LLMs have real problems with both of those things, they are unreliable in the sense that they produce text with factual inaccuracies and they are uncontrollable in that it's really hard to predict their behaviour from their past behaviour.

Big open challenges in AI - I am sure that there will be solutions, but there aren't right now. I think it's very like self-driving cars. Implementing a demo and showing it on the road is one thing, but then it takes twenty years before the slow roll-out really gets underway.

intended · 7h ago
LLMs have convinced people with schizophrenia to not take their medication. It’s been a supportive wingman for people in manic states.

I’ve reduced usage of journaling GPTs I created myself, despite having gained incredible utility from them.

Bluestein · 9h ago
I see - thanks.-

(That sounds broken. It's basic, useful, harmless information ...)

johnisgood · 9h ago
Exactly. I remember saying "but it is publicly available information", that it is in books, etc., and it went on and on about "I understand your frustration, but I cannot give you ...", etc. :D In the end I did manage to get it to answer, however.

This was (is) not limited to pharmacology, however.

Bluestein · 9h ago
I am interested: What other fields? How did you manage?
johnisgood · 9h ago
Thankfully there is now searchability, so I would have to search for this particular chat; this was some time ago. I think once it starts to tell you "no", it will almost never turn into "okay, I'll tell you" (maybe only if there were one or two "no"s), so I probably started a new chat and added that it was for research purposes and whatnot. I do not remember the specifics and would really have to search for the chat to be able to give them to you. I may have to search for "sorry" or something.

I found something though.

I asked it for the interaction between two medications, and it told me "I am sorry you are going through this, but I can't give you information about ...", which I guess is still pharmacology.

Edit: I remember another case. It was related to reverse engineering.

Edit #2: I found another! I cannot give you specifics, but I asked for it to give me some example sentences for something, and it went on about "consensual relationships" and so forth, even though my partner consented. I told the LLM that she did consent and it went on about that "I'm sorry, but even if she consented, I cannot give you ... because ... and is not healthy ...", and so forth. (Do not think of anything bad here! :P)

rusticpenn · 9h ago
Using neural nets and deep learning in pharma is nothing new. Naturally, LLMs are different. You should look into personalized medicine.
touwer · 8h ago
In this case, if it's only shivers down your spine, you're lucky ;)
Bluestein · 8h ago
Heh. Well played.-
dandanua · 9h ago
LLMs are the best conmen. If they were humans, people would trust them with their life savings for sure.
rvnx · 8h ago
One technique is to ask: “are you sure?”

“How sure are you on /20”

If it says “yes, I am sure”, and other LLMs confirm the same, then you can be fairly confident that it is a very good candidate answer.

If it says it is not sure, it is probably just agreeing with you, and you had better double-check by asking “Are there other solutions? What is the worst idea?” etc., to force it to think through the context.

It is cross-validation, and you can even cross-validate by searching on the internet.

Though, 100% of the time they say what you want them to say.

Except on religion and immigration, and some other topics where it will push its own opinion.
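
A minimal sketch of the “are you sure?” follow-up, assuming the openai Python package and a placeholder model name; the question is only an example, and the signal is suggestive rather than conclusive:

  from openai import OpenAI

  client = OpenAI()
  MODEL = "gpt-4o"  # placeholder model name

  history = [{"role": "user", "content": "What is the elimination half-life of caffeine in healthy adults?"}]
  first = client.chat.completions.create(model=MODEL, messages=history)
  answer = first.choices[0].message.content
  history.append({"role": "assistant", "content": answer})

  # The "are you sure?" follow-up: an unchanged, confident answer is a useful
  # (not conclusive) signal; a backtrack suggests the model was just agreeing.
  history.append({"role": "user", "content": "Are you sure? How sure, out of 20? If not, list other candidate answers."})
  second = client.chat.completions.create(model=MODEL, messages=history)

  print("first answer:\n" + answer)
  print("after 'are you sure?':\n" + second.choices[0].message.content)
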

drw85 · 8h ago
This is actually pretty pointless, since the "AI" doesn't actually know anything.

For example, the other day I asked ChatGPT about a problem I had with some generated code that didn't compile. It told me about a setting in the generator (nswag) that didn't exist. I told it that this setting does not exist and it said something like "Sorry, my bad, try the following" and then kept inventing the setting with slightly different names and values over and over again. There are similar settings that do exist, so it just hallucinated a tiny bit of text inside the snippets that it learned from.

This is also not the first time this has happened; most of the times I tried using AI for help with things, it just made up some nonsense and wasted my time.

leptons · 7h ago
This is an all too common experience for me as well with "AI". It does exactly what you described as well as all kinds of other AI-brainfarts. It's made me about 2% more productive when it autocompletes the log statement I'm about to write, and even that it gets wrong a lot of the time.

Artificial colors and artificial flavors and artificial intelligence all have something in common - they are not as good as the real thing, and probably never will be.

johnisgood · 7h ago
You should give other LLMs a try.
kingstnap · 8h ago
I'd rather go for internal self-consistency.

For example, if it claims A > B, then it shouldn't claim B > A in a fresh chat for comparisons.

In general, you shouldn't get both A and not-A, and you should expect either A or not-A.

If it can go from prompt -> result, assuming it's invertible, then result -> prompt should also partially work. An example of this is translation.

The results of some mathematical solutions should go back and solve the original equations. Ex. The derivative of an antiderivative should give you back the original.
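
A minimal sketch of the A > B consistency check across fresh chats, assuming the openai Python package and a placeholder model name; the comparison pair is only an example:

  from openai import OpenAI

  client = OpenAI()
  MODEL = "gpt-4o"  # placeholder model name

  def fresh_yes_no(question: str) -> str:
      # Each comparison runs in its own single-message chat, so there is no
      # shared context for the model to stay consistent with "for free".
      r = client.chat.completions.create(
          model=MODEL,
          messages=[{"role": "user", "content": question + " Answer only YES or NO."}],
      )
      return r.choices[0].message.content.strip().upper()

  a, b = "merge sort", "bubble sort"  # example pair

  a_over_b = fresh_yes_no(f"For large random inputs, is {a} faster than {b}?")
  b_over_a = fresh_yes_no(f"For large random inputs, is {b} faster than {a}?")

  # A self-consistent model should not claim both A > B and B > A.
  if "YES" in a_over_b and "YES" in b_over_a:
      print("Inconsistent: it claims both orderings.")
  else:
      print(f"{a} > {b}: {a_over_b} | {b} > {a}: {b_over_a}")
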

rvnx · 8h ago
Very cool idea! I am going to try.
UncleMeat · 4h ago
Recently I've been working on a problem at work and asked an AI for help using a new and poorly documented feature to make some fairly specific files that can be used as test inputs. It would repeatedly say "here you go" and give me instructions that didn't work. When I responded with "are you sure? when I tried it I get this outcome" it would say "oh I'm sorry, here is the correct way to do it" and get it wrong again. After about four attempts it would start saying "this is the final and definitely correct way of doing it" and it would still be wrong.
KolibriFly · 8h ago
The risk is we start mistaking polished summaries for insight and stall actual creative progress
graemep · 8h ago
Most people are already doing that.
originalvichy · 6h ago
> Amodei argues the world is about to see the 21st century “compressed” into a few years as AI accelerates science drastically.

I have the same thought but from a more negative angle. A vast share of new information in the near future will be just a repeat of whatever data the LLMs were trained on.

There is a tiny sliver of LLM usage that will not be a transformation of existing data (e.g. make me a chart of this data, write me an essay) but rather ”help me create a new tool that will solve a novel problem”.

I believe that’s what the person interviewed is saying in their own words. It’s hard to imagine something other than a brute-force hypothesis machine that starts brute-forcing solutions, but it will not be as effective as we wish if we can’t figure out how to come up with hypotheses for everything.

None of what I’m saying is that insightful and I’m sure people have thought of this already.

I wonder if ever there will be a Hitchhiker’s style revelation that we have had all the answers for all of our problems already, but the main issue is just incentives. Curing most cancers is probably just a money question, as is solving climate change.

amelius · 7h ago
Make it reproduce science first. I.e., give it e.g. a DL paper, and ask it to reproduce it, writing the code and running tests, etc. Until it can do __that__, doing science and creating breakthroughs is just a bit optimistic.
quaestio · 8h ago
LLMs, though limited, may spark testable hypotheses that inspire the creation of scientifically grounded, functionally intelligent systems.
busssard · 9h ago
This is just (much needed) hype reduction. Current AI is definitely a yes-sayer. But this doesn't stop people from creating groups of models, with one of them finetuned to be unhinged, another to think outside the box, etc.

Once we have managed to transfer our skills to them (coding, analysis, maths, etc.), the next step is transferring our creativity to them. It is a gradual process with human oversight.

KolibriFly · 8h ago
Human-in-the-loop will probably be essential for quite a while, but that’s not a bad thing
Havoc · 7h ago
It doesn't really need to ask smart questions for scientific breakthroughs. See something like AlphaFold. There is a lot of problem space left that we can brute-force with current AI.

I also don't buy that yes-men and breakthroughs are mutually exclusive/polar opposites here.

stared · 8h ago
Well, AI (understood as LLM chats) are yes-men precisely because they were RLHFed to be so.

If you train an AI to be super skeptical, it will be so. But most people prefer to talk with a yes-person rather than with a negative, inquisitive devil's advocate.

tim333 · 1h ago
>cold water on hopes that current AI systems could revolutionize scientific progress...

is a very straw man argument. No one is saying current LLMs are doing that; they are saying future AI will.

(I'm excluding AlphaFold which has already been scientifically revolutionary)

barrkel · 8h ago
I absolutely agree, when you look at the first order.

I don't quite agree when you look at a second order, applying more compute; for example, brute forcing a combination of ideas and using a judge to evaluate them. I suspect there's quite a bit of low hanging fruit in joining together different deep expertise areas.

I do come back to agreeing again for paradigm shifts. You don't get to very interesting ideas without fresh approaches, questioning core assumptions, then rebuilding what we had before on new foundations. It is hard to see LLMs in their current shape being able to be naive and ignorant enough that existing doctrine doesn't rein in new ideas.

setnone · 9h ago
And you should "check important info" too
Bluestein · 9h ago
Reminds me of the recent 'sycophancy debacle' with OpenAI.-
FranzFerdiNaN · 9h ago
Are scientific researchers using LLMs? I thought they used different technologies?
almusdives · 7h ago
As a scientific researcher, I use LLMs all the time, mainly in place of Google search, to help write code, and to summarize a paper here and there. But I definitely don't use them for the actual scientific process, e.g. hypothesis generation or planning analyses. They tend to produce a lot of vague bullshit for this kind of thing; while not wrong, it's not entirely useful either. I have a few colleagues who do, though, with more success, although I think the success comes from articulating their problem in detail (by actually writing it out to the LLM), which I think is the source of "inspiration" rather than the resulting content from the LLM.
NitpickLawyer · 9h ago
Are ML researchers "scientific researchers"? Then yes, they are using LLMs. AlphaEvolve is one such example.

Are mathematicians? Then yes, for example Terence Tao is using LLMs. AlphaProof / AlphaGeometry are also examples of this, using LLMs to generate Lean proofs, translate from NL to Lean, and so on.

And for the general "researchers", they use code generation to speed up stuff. Many scientists can code, but aren't "coders" in the professional sense. So they can use the advances in code generation to speed up their own research efforts.