White House releases health report written by LLM, with hallucinated citations

192 riffraff 51 5/30/2025, 4:31:37 AM nytimes.com ↗

Comments (51)

mola · 1d ago
I have this pet theory that this wave of AI won't end with amazing productivity gains.

I think it'll end in mass confusion: a dumbing down of the leadership and elites, who can now cosplay as smart and eloquent.

Countries that don't embrace AI will have a massive edge over those that do, because their populations will be smarter and more capable.

TYPE_FASTER · 21h ago
It’s all about trust. Which sources of journalism are trustworthy? Is the government administration trustworthy?

You can’t trust an administration that would release a report that cites sources that do not exist. They are either so incompetent that they cannot perform the most basic fact checking possible, or they think we are idiots who can be easily misled.

mandmandam · 20h ago
> they think we are idiots who can be easily misled.

... I mean... Do you want a list of times that the American public has been easily misled?

downboots · 1d ago
The error was retracted. This is a clear case for an industry or branch devoted to ensuring accuracy. We could call it the Ministry of Truth.
SillyUsername · 1d ago
It reminds me of a story I read, about a certain "Comrade Ogilvy", who recently died a hero while serving. There are no real records of this guy, but a few lines of text and a couple of faked photographs seemed easy enough to do.
kevinventullo · 1d ago
I don’t think eloquence or the appearance of intelligence is all that valuable for political leadership these days.
acquisitionsilk · 22h ago
I have entertained a similar notion when imagining the direction industries that make software might take from here. There's a possible future where some sizeable percentage of companies go extra hard in the direction of "LLMs in, costly programmers out", and end up getting completely smashed when the LLM systems fall apart.

There might even be a couple of months of "gains" as (pointless) metrics go up, and then we might see a proper crash when stuff stops working. Especially for software which businesses rely on, surely there must be a point where they'll say enough of this crap?

Maybe not, too. Capitalism is a very surprising system, capable of absorbing shocks and morphing itself seemingly endlessly.

mrtksn · 1d ago
Are the countries with limited Internet better off with regard to the problems the Internet has created? Maybe, but they are also missing out on societal progress, because despite the issues it has also created a lot of opportunities and changed how we do things in general.
jimmydddd · 16h ago
Or maybe they will come on-line with AI after it's been sufficiently improved, and not have wasted time and resources on the early versions. Kind of like countries that never had an installed copper landline phone infrastructure, and then leap-frogged the US by going directly to a mobile infrastructure.
tokioyoyo · 1d ago
Tolerance for error in an average human is higher than one expects. It's a cultural problem that's getting more prominent in the West, because people genuinely start looking down on education. I have over-educated family members who are firmly set on the idea that "my children don't have to study hard, I'm sure they'll be smart enough to make big money in the end".
dheera · 1d ago
We had mass confusion and politician hallucination well before LLMs.

I'm not really sure anything has changed.

aorloff · 1d ago
I assure you Mrs. Buttle, the Ministry is very scrupulous about following up and eradicating any error.

If you have any complaints which you'd like to make, I'd be more than happy to send you the appropriate forms.

Braaaaazil......

[where my mind goes on every news from this admin]

Animats · 1d ago
Someone should put in an FOIA request for the prompt. Or get a congressional committee to ask for it.
chneu · 1d ago
Lol nobody is keeping records in this admin.

The only records we have are in Signal chats and we need people to screenshot those.

RossBencina · 1d ago
It would also be helpful to know which models were consulted, and whether they are certified for use in guiding national healthcare policy.
eru · 1d ago
In a representative democracy, elected leaders need to be able to use their judgement to decide what tools they and their lieutenants should use to come up with and implement the policies in service of what the voters' want.

Who is supposed to do this certification? How is it supposed to be binding?

MattPalmer1086 · 23h ago
Leaders are also accountable and should be transparent about how policy is implemented, in the absence of any overriding security considerations.
eru · 22h ago
Sure, that's what elections (and the courts) are for.

Leaders can decide to be even more accountable and transparent, if they think that'll increase their chances at the ballot box (or if they are just nice people).

skeaker · 8h ago
Your idea seems to be outright self-defeating. We need transparency to stop bad things -> exposing bad things hurts at the ballot box -> so transparency won't be granted through the ballot box (unless there are no bad things, in which case we didn't really need it). In other words, if what you say is true (and it unfortunately seems it is), we won't ever get transparency, and bad things will happen unchecked and unseen.

Instead of relying on elections, I think transparency should be mandated by the rule of law, so everyone has to participate, even those who would like to hide their bad actions.

llacb47 · 1d ago
Technically, this is title editorializing, since WH denied using AI… but they are probably lying.
zombot · 1d ago
And when you're lying on that scale, automating it with AI is likely an actual productivity gain.
esalman · 1d ago
First the bogus tariff equation, now this. Certainly not the last.

Jokes aside, it's really sad that seemingly no competent people want to work with this administration.

Arainach · 1d ago
Your causality is backwards. This administration has no interest in working with competent people. They've fired huge numbers of them, gone out of their way to make life hell for those left so they quit, and are moving faster than ever imagined possible to drive competent people in non-government jobs away from this country: by deporting them, taking away their visas, attacking them, and more.
dralley · 1d ago
The causality is bidirectional.
Arainach · 2h ago
I disagree. The first Trump administration showed that not only were some competent people interested in working in the Trump cabinet, but plenty of other competent people were content to keep doing their non-political jobs well for the government: running things, fighting for normal people and what they believed was right, pushing back against overreach.

The second Trump administration took aggressive measures against all competent people: selecting a horrifying cabinet with literally no one qualified for the roles they fill, working as fast as possible to fire as many career staff as possible, even through illegal means later rejected by the courts, and punishing any who remain. The causation runs in one direction.

ksynwa · 1d ago
This administration is a haven for charlatans and grifters. It's no surprise that they don't have competent people. The goals of competent people are diametrically opposed to what this government wants to do.
eru · 1d ago
No. There's still lots of overlap in goals.

For example, most competent people I know didn't want a nuclear war to start yesterday. Lo and behold, the administration also did not start nuclear war yesterday.

chneu · 1d ago
While there are a lot of incompetent idiots in this admin, don't forget that a lot of them are grifters who are only trying to get rich/powerful while they can.

I don't like doing the Hitler comparison, but the similarities are definitely there. A lot of the Nazis thought Hitler was a "useful idiot" that they could use and then get rid of. Trump is very similar.

ytpete · 14h ago
I would imagine this is true of many other dictators and authoritarians over time too: Putin, Kim Jong Un, etc. If you are looking for non-Hitler comparisons.
duxup · 1d ago
These people even struggle to do a bad job well enough.
platevoltage · 1d ago
Next time, they'll just add "don't include citations" in the prompt. Problem solved.
rasz · 1d ago
It's all a big joke to these people, like asking for medical advice from Grok's "not a doctor".
gloosx · 18h ago
At this point we can fully replace talking heads with AI. Let them generate reports, write fake citations, review each other’s work, sign it, worship it – all in imitation of what we once called a state. A huge win for humanity.
kcaseg · 1d ago
Reminds me of this “AI study” on climate change

https://factcheck.afp.com/doc.afp.com.39798G2

It is not the “minor citation errors” that are the most worrying, but rather the fact that LLMs aim to please you, even if you really believe your prompt was neutral (which I doubt when it comes to Kennedy and vaccines, for example).

caseysoftware · 1d ago
Can we all agree to not cite any study that a) doesn't exist, b) cannot be reproduced, or c) includes fake data?

For bonus points, the authors in cases b) and c) should be forced to pay back whoever funded their research.

eru · 1d ago
What do you mean by 'cannot be reproduced'?

If I do a simple study where I flip a coin a million times and write down the result, you are unlikely to be able to reproduce the same result.

xboxnolifes · 22h ago
I'm pretty sure if we both flipped a million coins, our results would be very similar.
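For what it's worth, the disagreement dissolves once "result" means the aggregate rather than the exact sequence: by the law of large numbers, two independent runs of a million fair flips land almost exactly on the same head frequency even though the sequences differ flip by flip. A minimal simulation sketch (function name and seeds are illustrative):

```python
import random

def head_frequency(n_flips, seed):
    """Simulate n_flips fair coin flips; return the fraction of heads."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

n = 1_000_000
p1 = head_frequency(n, seed=1)
p2 = head_frequency(n, seed=2)

# Both frequencies land well within 0.01 of 0.5 (the standard error
# here is 0.5 / sqrt(n) = 0.0005), even though the two underlying
# sequences almost certainly differ.
print(p1, p2)
```

In tgv's terms below, the aggregate frequency is what a replication reproduces "within an acceptable margin"; the exact sequence is not the reported result.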
Y_Y · 22h ago
I think you and GP are using different metrics or definitions of "result".
eru · 18h ago
Well, I'm mostly just saying that you have to be careful with your definitions. And you can't just randomly demand refunds from people.
tgv · 21h ago
In case you're not trolling: to reproduce a paper means to achieve the reported result within an acceptable margin through independent replication of the experiment. Since nobody is referring to a paper which reports one million coin flips, reproduction is not in question.
harvey9 · 1d ago
In this context we would be talking about reproducing the experimental conditions and method. Obviously it is unlikely that someone else would get the same sequence you did.
eru · 22h ago
Basically, the whole thing is a bit more complicated.

However, it's useful to require pre-registration and sharing of data etc for studies you plan to fund.

Ideally, you give researchers an in-principle approval, but then hold the funds in escrow and only disburse them after they published their data etc.

(Financial markets can provide the bridge financing between in-principle approval and actual disbursal.)

There's many journals that have open data requirements, but more often than not they are flouted. The above suggestion would give them real teeth.

anal_reactor · 1d ago
The goal here is to normalize the US government doing random shit, in order to reduce transparency. The average US citizen won't be able to tell whether another political scandal is just Trump being Trump or an actual power grab.
aristofun · 18h ago
Not really sure what’s better - hallucinating LLM or hallucinating politicians…
bananapub · 23h ago
this is obviously horrific, but they are succeeding at their broader goal: making the entire US Federal apparatus untrustworthy and operated completely by selfish idiots who do whatever they want at any given time, with the only binding force being the idiotic whims of the president. congress (almost all of it, anyway - AOC and Bernie and a few others are notable exceptions) has already completely conceded this fact and continues to do so by not impeaching the president.

to me, it is extremely hard to imagine how this deliberate destruction can be undone in less than decades.

staplung · 1d ago
I think the title here is a little misleading. The citations do indeed appear to be hallucinations, but it is not known whether the report was written partially or entirely with an LLM. I would have no trouble believing that it was, but at this point there isn't even a leak at MAHA claiming that it is.
j4coh · 1d ago
Why would you write a report and ask an LLM to make up citations once done?
comex · 1d ago
Anthropic had a slip-up like this recently with a legal filing (totally unrelated to the report, just a similar example of bad citations). After being challenged by a judge, they said:

> A Latham & Watkins associate located that article as potential additional support for Ms. Chen’s testimony using a Google search. The article exists [...]

> [...] I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors.

Anthropic could be lying, but apparently the link is indeed correct, so the account seems plausible.

However, the current situation is less understandable. The article says that "some correctly cited papers were inaccurately summarized", which suggests that AI either was used for the report itself, or at least was told to add citations without the author's input, which would be far more irresponsible than what Anthropic did. The apparently completely hallucinated "paper on direct-to-consumer advertising of prescription drugs" also doesn't look good.

The article also mentions that "[a]n early copy of the report shared with reporters did not include citations", which does support the theory that citations were added after the fact (whether or not AI was also used for the report itself).

Source for Anthropic testimony: https://storage.courtlistener.com/recap/gov.uscourts.cand.43...

staplung · 1d ago
I'm not saying they didn't. It's just that the title here says "WH releases health report written by LLM" but the article it links to does not claim that. The headline from the article is about the fake citations.

Also, I could imagine that the report, whether drafted by human or not, could have been pasted into an LLM with a prompt like "make this sound more authoritative" and the LLM dutifully added some "citations" because, what's more authoritative looking than citations?