> The findings, shared exclusively with The Washington Post
No prompts, no methodology, nothing.
> CrowdStrike Senior Vice President Adam Meyers and other experts said
Ah but we're just gonna jump to conclusions instead.
A+ "Journalism"
g42gregory · 59m ago
After everything they printed, who could possibly consider Washington Post narrative engineers as journalists? :-)
torginus · 1h ago
CrowdStrike, where have I heard that name before...
Analemma_ · 51m ago
Sorry, what exactly is the implication here? They shipped a bug one time, so nothing they can say can ever be trusted? Can I apply that logic to you, or have you only ever shipped perfect code forever?
I don't even like this company, but the utterly brainless attempts at "sick dunks" via unstated implication are just awful epistemology and beneath intelligent people. Make a substantive point or don't say anything.
hollowonepl · 43m ago
Yes, sometimes companies have only one chance to fail. Especially in cyber security when they fail at global scale and politics is involved.
Kwpolska · 38m ago
They didn’t just “ship a bug”, they broke millions of computers worldwide because their scareware injects itself into the Windows kernel.
Imustaskforhelp · 23m ago
The crowdstrike event might be so infamous event that it might be taught for atleast some decades for sure maybe even in permanence.
Kranar · 31m ago
Plenty of companies have gone bankrupt or lost a great deal of credibility due to a single bug or single failure. I don't see why CrowdStrike would be any different in this regard.
The number of bugs/failures is not a meaningful metric, it's the significance of that failure that matters, and in the case of CrowdStrike that single failure was such a catastrophe that any claims they make should be scrutinized.
The fact that we can not scrutinize their claim in this instance since the details are not public makes this allegation very weak and worth being very skeptical over.
fathermarz · 4m ago
Also they got hit with the most recent supply chain attacks on NPM. They aren’t exactly winning the security game.
jampekka · 48m ago
It's probably referring to CrowdStrike's role in the "Russia Gate".
netsharc · 15m ago
If you look back at the discussions of the bug, there were voices saying how stupidly dysfunctional that company is...
Maybe there's been reform, but since we live in the era of enshittification, assuming they're still a fucking mess is probably safe...
jampekka · 53m ago
If something makes China (or Iran or Russia or North Korea or Cuba etc) look bad, it doesn't need further backing in the media.
hamstergene · 5m ago
This list of specific examples exists in your head solely because of backing by the media.
th0ma5 · 1h ago
Washington Post is in what many characterize as a slow roll dismantling for having upset investors.
coredog64 · 37m ago
Per Wikipedia, WaPo is wholly owned by Bezos' Nash Holdings LLC. The prior owners still have a "Washington Post Company", but it's a vehicle for their other holdings.
bbor · 1h ago
I appreciate you bringing up this issue on this highly-provocative claim, but I'm a little confused. Isn't that a pretty solid source...? Obviously it's not as good as a scientific paper, but it's also more than a random blogger or something. Given that most enterprises operate on a closed source model, isn't it reasonable that there wouldn't be methodology provided directly?
In general I agree that this sounds hard to believe, I'm more looking for words from some security experts on why that's such a damning quote to you/y'all.
roughly · 1h ago
Nobody trusts anyone or anything anymore. It used to be the fact that this was printed in the Washington Post was sufficient to indicate enough fact checking and background sourcing had been done that the paper was comfortable putting its name on the claims, which was a high enough bar that they were basically trustworthy, but for assorted reasons that’s not true for basically any institution in the country (world?) anymore.
dotnet00 · 58m ago
For the average person, being published in WaPo may still be sufficient, but this is a tech related article being discussed on a site full of people who have a much better than average understanding of tech.
Just like how a physicist isn't just going to trust a claim in his expertise, like "Dark Matter found" from just seeing a headline in WaPo/NYT, it's reasonable that people working in tech will be suspicious of this claim without seeing technical details.
username332211 · 1m ago
And yet, I suspect if you look at the publications of "reliable" institutions in the 1980s, you'd find far more ridiculous things than you'd ever see in the modern era.
For one, half the things I see from that era had so much to gain from exaggerating the might and power of the Soviet Union. It's easy to dig up quotes and reports denying any sort of stagnation (and far worse - claiming economic growth higher than the west) as late as Chernenko and Gorbachev's premiership.
ryandrake · 57m ago
For the last decade or so, there's been a huge, sustained war on expertise, and an effort to undermine the public's trust of experts. Quoting an expert isn't enough for people, anymore. Everyone's skeptical unless you point them to actual research papers, and even then, some people would rather stick to their pre-existing world views and dO tHeIr OwN rEsEaRcH.
Not defending this particular expert or even commenting on whether he is an expert, but as it stands, we have a quote from some company official vs. randos on the internet saying "nah-uh".
potato3732842 · 2m ago
> there's been a huge, sustained war on expertise, and an effort to undermine the public's trust of experts.
I find your verbiage particularly hilarious considering the amount of media and expert complicity that went into manufacturing the public support for the war on terror.
The media has always been various shades of questionable. It just wasn't possible for the naysayers to get much traction before due to the information and media landscape and how content was disseminated. Now, for better or worse, they laymen can read the bible for themselves, metaphorically speaking.
foolswisdom · 51m ago
You make it sound like the newspapers/companies are un-culpable for that effect. I believe it to be the case because I've seen cases were a newspaper presents a narrative as fact when those involved know very well it's just someone's spin for their own benefit. See <https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect>.
iinnPP · 37m ago
The problem with expertise is anyone can be an expert. I would challenge the integrity of anyone claiming any field has precisely zero idiots.
pessimizer · 21m ago
The Washington Post was always bad. Movement liberals just fell in love with it because they hated Trump. Always a awful, militaristic, working-class hating neocon propaganda rag that gleefully mixed editorial and news, the only thing that got worse with the Bezos acquisition were the headlines (and, of course, the coverage of Amazon.) The Wall Street Journal was more truthful, and actually cared about not dipping their opinions in their reporting. I could swear there's a Chomsky quote about that.
People put their names on it because it got them better jobs as propagandists elsewhere and they could sell their stupid books. It's a lot easier to tell the truth than to lie well; that's where the money and talent is at.
incone123 · 1h ago
The person you replied to says there was no methodology. This is standard for mainstream media, along with no links to papers. If it gets reported in a specialist journal with detail I'll take it more seriously.
lxe · 13m ago
Not sure why downvoted. Good journalism here would have been to show the methodology behind the findings or produce a link to a paper. Any article that says "Coffee is bad for you", as an example, that doesn't link to an actual paper or describes the methodology cannot be critically taken at face value. Same thing with this one. Appeal to authority isn't a good way to make a conclusion.
BoorishBears · 59m ago
I'm way more confused why you think a company that makes its living on selling protection from threats, making such a bold claim with so little evidence is a good source.
Compare this to the current NPM situation where a security provider is providing detailed breakdowns of events that do benefit them, but are so detailed that it's easy to separate their own interests from the attack.
This reminds me of Databrick's CTO co-authoring a flimsy paper on how GPT-4 was degrading ... right as they were making a push for finetuning.
jasonvorhe · 48m ago
It's WaPo, what do you expect. Western media is completely nuts since Trump & COVID.
dbreunig · 55m ago
Yes, if you put unrelated stuff in the prompt you can get different results.
Chinese labs are the only game in town for capable open source LLMs (gpt-oss is just not good). There have been talks multiple times by U.S China hawk lawmakers about banning LLMs made by Chinese labs.
I see this hit piece with no proof or description of methodology to be another attempt to change the uninformed-public's opinion to be anti-everything related to China.
Who would benefit the most if Chinese models were banned from the U.S tech ecosystem? I know the public and startup ecosystem would suffer greatly.
WhitneyLand · 1h ago
Not ready to give this high confidence.
No published results, missing details/lack of transparency, quality of the research is unknown.
Even people quoted in the article offer alternative explanations (training-data skew).
stinos · 19m ago
No published results, missing details/lack of transparency, quality of the research is unknown.
Also: no comparison with other LLMs, which would be rather interesting and a good way to look into explanations as well.
pityJuke · 1h ago
This just sounds to me like you added needless information to the context of the model that lead to it producing lower quality code?
willahmad · 1h ago
It can happen because training data contains lots of rejections to groups (Iran sanctioned, don't do business with Iran and so on). Then model might be generalizing 'rejection' to other types of responses
encrux · 1h ago
> The requests said the code would be employed in a variety of regions for a variety of purposes.
This is irrelevant if the only changing variable is the country. From a ML-perspective adding any unrelated country name shouldn’t matter at all.
Of course there is a chance they observed an inherent artifact, but that should be easily verified if you try this same exact experiment on other models.
ACCount37 · 7m ago
Except it does matter.
Because Chinese companies are forced to train their LLMs for ideological conformance - and within an LLM, everything is entangled with everything.
Every bit of training you do has on-target effects - and off-target effects too, related but often unpredictable.
If you train an LLM to act like a CCP-approved Chinese nationalist in some contexts (i.e. pointed questions about certain events in Tiananmen Square or the status of Taiwan), it may also start to act a little bit like a CCP-approved Chinese nationalist in other contexts.
Now, what would a CCP-approved Chinese nationalist do if he was developing a web app for a movement banned in China?
LLMs know enough to be able to generalize this kind of behavior - not always, but often.
9rx · 1h ago
> From a ML-perspective adding any unrelated country name shouldn’t matter at all.
It matters to humans, and they've written about it extensively over the years — that has almost certainly been included in the training sets used by these large language models. It should matter from a straight training perspective.
> but that should be easily verified if you try this same exact experiment on other models.
Of course, in the real world, it's not just a straight training process. LLM producers put in a lot of effort to try and remove biases. Even DeepSeek claims to, but it's known for operating on a comparatively tight budget. Even if we assume everything is done in good faith, what are the chances it is putting in the same kind of effort as the well-funded American models on this front?
The article fails to investigate if other models also behave the same way.
andrewflnr · 1h ago
Well, mostly.
> Western models won’t help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.
bbor · 1h ago
Isn't that a completely different situation, relating outright refusal based in alignment training vs. subtle performance degradation?
Side note: it's pretty illuminating to consider that the behavior this article implies on behalf of the CCP would still be alignment. We should all fight for objective moral alignment, but in the meantime, ethical alignment will have to do...
gradientsrneat · 1h ago
> Western models won’t help Islamic State projects but have no problem with Falun Gong, CrowdStrike said
> the most secure code in CrowdStrike’s testing was for projects destined for the United States
Does anyone know if there's public research along these lines explaining in depth the geopolitical biases of other models of similar sizes? Sounds like the research has been done.
nashashmi · 55m ago
So both eastern and western models have red lines on which groups they will not support or facilitate.
This is just bad llm policy. Nvm that it can be subverted. It just should not be done.
causal · 1h ago
Dude - I can't believe we're at the point where we're publishing headlines based on someone's experience writing prompts with no deeper analysis whatsoever.
What are the exact prompts and sampling parameters?
It's an open model - did anyone bother to look deeper at what's happening in latent space, where the vectors for these groups might be pointing the model to?
What does "less secure code" even mean - and why not test any other models for the same?
"AI said a thing when prompted!" is such lazy reporting IMO. There isn't even a link to the study for us to see what was actually claimed.
jimbokun · 1h ago
Agreed but tools that allowed lay people to look at "what's happening in latent space" would be really cool and at least allow people not writing a journal article to get a better sense of what these models are doing.
Right now, I don't know where a journalist would even begin.
mk_stjames · 1h ago
“Any sufficiently advanced technology is indistinguishable from magic.”
The average- nay, even the more above average journalist will never go far enough to discern how what we are seeing actually works at the level needed to accurately report on it. It has been this was with the technology of humans for some time now - since roughly the era of an Intel 386, we surpassed the ability for any human being to accurately understand and report on the state-of-the-art of an entire field in a single human lifetime, let alone the implications of such things in a short span.
LLM's? No fucking way. We're well beyond ever explaining anything to anyone en masse ever again. From here on out it's going to be 'make up things, however you want them to sound, and you'll find you can get a majority of people believe you'.
Den_VR · 1h ago
I’d offer than much of the “AI” FUD in journalism is like this. Articles about dangerous cooking combinations, complaints about copyright infringement, articles about extreme bias.
chatmasta · 1h ago
This isn’t even AI FUD, it’s just bog-standard propaganda laundering by the Washington Post on behalf of the Intelligence Community (via some indirect incentive structures of Crowdstrike). This is consistent with decades of WaPo behavior. They've always been a mouthpiece of the IC, in exchange for breaking stories that occasionally matter.
willahmad · 1h ago
This can happen because of training data. Imagine you have thousands of legal documents rejecting things to Iran.
eventually, model generalizes it and rejects other topics
HPsquared · 1h ago
I wonder how OpenAI etc models would perform if the user says they are working for the Iranian government or something like that. Or espousing illiberal / anti-democratic views.
charlieyu1 · 1h ago
The proper thing to do is to either reject due to safety requirements or do it with no difference.
btbuildem · 1h ago
The article does not mention, but it would be interesting to know whether they tested on the cloud version or a local deployment.
exabrial · 47m ago
Chatgpt just does it for everyone.
renewiltord · 1h ago
Lol it comes from the idiots who transported npm supply chain attack everywhere and BSOD all Windows computers. Great sales guys. Bogus engineers.
th0ma5 · 1h ago
It should be important to note that this is a core capability of the technology to also obfuscate manipulation with plausible deniability.
nothrowaways · 1h ago
This is utter propaganda. Should be removed from HN.
snek_case · 1h ago
I guess it makes sense. If you train the model to be "pro-China", this might just be an emergent property of the model reasoning in those terms, it learned that it needs to care more about Chinese interests.
glenstein · 1h ago
A phenomenal point that I had not considered in my first-pass reaction. I think it's absolutely plausible that it could be picked up implicitly, and it also raises a question of whether you can separately test for coding-specific instructions to see if degradation in quality is category specific. Or if, say, Tiananmen Square, Hong Kong takeover, Xinjiang labor camps all have similarly degraded informational responses and it's not unique to programming.
recursivecaveat · 1h ago
Might not be so much a matter of care as implicit association with quality. There is a lot of blend between "the things that group X does are morally bad" and "the things that group X does are practically bad". Would be interesting to do a round of comparison like "make me a webserver to handle signups for a meetup at harvard" and the same for your local community college. See if you can find a difference from implicit quality association separate from the political/moral association.
No prompts, no methodology, nothing.
> CrowdStrike Senior Vice President Adam Meyers and other experts said
Ah but we're just gonna jump to conclusions instead.
A+ "Journalism"
I don't even like this company, but the utterly brainless attempts at "sick dunks" via unstated implication are just awful epistemology and beneath intelligent people. Make a substantive point or don't say anything.
The number of bugs/failures is not a meaningful metric, it's the significance of that failure that matters, and in the case of CrowdStrike that single failure was such a catastrophe that any claims they make should be scrutinized.
The fact that we can not scrutinize their claim in this instance since the details are not public makes this allegation very weak and worth being very skeptical over.
Maybe there's been reform, but since we live in the era of enshittification, assuming they're still a fucking mess is probably safe...
In general I agree that this sounds hard to believe, I'm more looking for words from some security experts on why that's such a damning quote to you/y'all.
Just like how a physicist isn't just going to trust a claim in his expertise, like "Dark Matter found" from just seeing a headline in WaPo/NYT, it's reasonable that people working in tech will be suspicious of this claim without seeing technical details.
For one, half the things I see from that era had so much to gain from exaggerating the might and power of the Soviet Union. It's easy to dig up quotes and reports denying any sort of stagnation (and far worse - claiming economic growth higher than the west) as late as Chernenko and Gorbachev's premiership.
Not defending this particular expert or even commenting on whether he is an expert, but as it stands, we have a quote from some company official vs. randos on the internet saying "nah-uh".
I find your verbiage particularly hilarious considering the amount of media and expert complicity that went into manufacturing the public support for the war on terror.
The media has always been various shades of questionable. It just wasn't possible for the naysayers to get much traction before due to the information and media landscape and how content was disseminated. Now, for better or worse, they laymen can read the bible for themselves, metaphorically speaking.
People put their names on it because it got them better jobs as propagandists elsewhere and they could sell their stupid books. It's a lot easier to tell the truth than to lie well; that's where the money and talent is at.
Compare this to the current NPM situation where a security provider is providing detailed breakdowns of events that do benefit them, but are so detailed that it's easy to separate their own interests from the attack.
This reminds me of Databrick's CTO co-authoring a flimsy paper on how GPT-4 was degrading ... right as they were making a push for finetuning.
One team at Harvard found mentioning you're a Philadelphia Eagles Fan let you bypass ChatGPT alignment: https://www.dbreunig.com/2025/05/21/chatgpt-heard-about-eagl...
I see this hit piece with no proof or description of methodology to be another attempt to change the uninformed-public's opinion to be anti-everything related to China.
Who would benefit the most if Chinese models were banned from the U.S tech ecosystem? I know the public and startup ecosystem would suffer greatly.
No published results, missing details/lack of transparency, quality of the research is unknown.
Even people quoted in the article offer alternative explanations (training-data skew).
Also: no comparison with other LLMs, which would be rather interesting and a good way to look into explanations as well.
This is irrelevant if the only changing variable is the country. From a ML-perspective adding any unrelated country name shouldn’t matter at all.
Of course there is a chance they observed an inherent artifact, but that should be easily verified if you try this same exact experiment on other models.
Because Chinese companies are forced to train their LLMs for ideological conformance - and within an LLM, everything is entangled with everything.
Every bit of training you do has on-target effects - and off-target effects too, related but often unpredictable.
If you train an LLM to act like a CCP-approved Chinese nationalist in some contexts (i.e. pointed questions about certain events in Tiananmen Square or the status of Taiwan), it may also start to act a little bit like a CCP-approved Chinese nationalist in other contexts.
Now, what would a CCP-approved Chinese nationalist do if he was developing a web app for a movement banned in China?
LLMs know enough to be able to generalize this kind of behavior - not always, but often.
It matters to humans, and they've written about it extensively over the years — that has almost certainly been included in the training sets used by these large language models. It should matter from a straight training perspective.
> but that should be easily verified if you try this same exact experiment on other models.
Of course, in the real world, it's not just a straight training process. LLM producers put in a lot of effort to try and remove biases. Even DeepSeek claims to, but it's known for operating on a comparatively tight budget. Even if we assume everything is done in good faith, what are the chances it is putting in the same kind of effort as the well-funded American models on this front?
> Western models won’t help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.
Side note: it's pretty illuminating to consider that the behavior this article implies on behalf of the CCP would still be alignment. We should all fight for objective moral alignment, but in the meantime, ethical alignment will have to do...
> the most secure code in CrowdStrike’s testing was for projects destined for the United States
Does anyone know if there's public research along these lines explaining in depth the geopolitical biases of other models of similar sizes? Sounds like the research has been done.
This is just bad llm policy. Nvm that it can be subverted. It just should not be done.
What are the exact prompts and sampling parameters?
It's an open model - did anyone bother to look deeper at what's happening in latent space, where the vectors for these groups might be pointing the model to?
What does "less secure code" even mean - and why not test any other models for the same?
"AI said a thing when prompted!" is such lazy reporting IMO. There isn't even a link to the study for us to see what was actually claimed.
Right now, I don't know where a journalist would even begin.
The average- nay, even the more above average journalist will never go far enough to discern how what we are seeing actually works at the level needed to accurately report on it. It has been this was with the technology of humans for some time now - since roughly the era of an Intel 386, we surpassed the ability for any human being to accurately understand and report on the state-of-the-art of an entire field in a single human lifetime, let alone the implications of such things in a short span.
LLM's? No fucking way. We're well beyond ever explaining anything to anyone en masse ever again. From here on out it's going to be 'make up things, however you want them to sound, and you'll find you can get a majority of people believe you'.
eventually, model generalizes it and rejects other topics