Anthropic irks White House with limits on models’ use

112 points by mindingnever | 9/17/2025, 5:57:39 PM | 45 comments | semafor.com ↗

Comments (45)

impossiblefork · 49m ago
Very strange writing from semafor.com

>For instance, an agency could pay for a subscription or negotiate a pay-per-use contract with an AI provider, only to find out that it is prohibited from using the AI model in certain ways, limiting its value.

This is, of course, quite false. They know the restrictions when they sign the contract.

bri3d · 24m ago
This whole article is weird to me.

This reads to me like:

* Some employee somewhere wanted to click the shiny Claude button in the AWS FedRAMP marketplace

* Whatever USG legal team was involved said "that domestic surveillance clause doesn't work for us" and tried to redline it.

* Anthropic rejected the redline.

* Someone got mad and went to Semafor.

It's unclear that this has even really escalated prior to the article, or that Anthropic are really "taking a stand" in a major way (after all, their model is already on the Fed marketplace); it just reads like a typical fed contract negotiation with a squeaky wheel in it somewhere.

The article is also full of other weird nonsense like:

> Traditional software isn’t like that. Once a government agency has access to Microsoft Office, it doesn’t have to worry about whether it is using Excel to keep track of weapons or pencils.

While it might not be possible to enforce them as easily, many, many shrink-wrap EULAs restrict the ways in which software can be used. Almost always there is an EULA carve-out with a different tier for lifesaving or safety uses (due to liability/compliance concerns) and for military uses (sometimes for ethics reasons, but usually out of a desire to extract more money from those customers).

giancarlostoro · 13m ago
> due to a desire to extract more money from those customers

If it gives you high-priority support, I don't care; if it's the same tier of support, then that's just obnoxiously greedy.

matula · 32m ago
There are (or at least WERE) entire divisions dedicated to reading every letter of the contract and terms of service, and usually creating 20 page documents seeking clarification for a specific phrase. They absolutely know what they're getting into.
darknavi · 28m ago
I have a feeling that in today's administration, which largely "leads by tweet," many traditionally "inefficient" steps have been removed from government processes, probably including software onboarding.
bt1a · 29m ago
Perhaps it's the finetune of Opus/Sonnet/whatever being served to the feds that is the source of the refusal :)
andsoitis · 6m ago
Don't tech companies change their ToS quite frequently, and sometimes in ways that are against the spirit of the terms as they stood when you started using the service?
ajross · 26s ago
This is a contract, not a click-through license. You can't do that.

(Legally you can't do it with a click-through either, but the lack of a contract means that the recourse for the user is just to stop buying the service.)

jdminhbg · 31m ago
Are you sure that every restriction that’s in the model is also spelled out in the contract? If they add new ones, do they update the contract?
mikeyouse · 24m ago
The contracts will usually say "You agree to the restrictions in our TOS" with a link to that page, which allows them to update the TOS without new signatures.
giancarlostoro · 11m ago
Usually, contracts will note that you will be notified of changes ahead of time, if it's a good-faith contract and company, that is.
impossiblefork · 7m ago
Here in Sweden a contract is a specific thing, otherwise it isn't a contract at all: agreeing to conditions that can be changed unilaterally by the other party simply isn't a contract, and is therefore just a bullshit paper of very dubious legal validity.

I know that some things like this are accepted in America, and I can't judge how it would be dealt with. I assume that contracts between companies and other sophisticated entities are actual contracts with unchangeable terms.

owenthejumper · 26m ago
This feels like a hit piece by Semafor. A lot of the information in there is simply false. For example, Microsoft's AI agreement prohibits:

"...cannot use...For ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal data, including biometric data, without the individual’s valid consent."

cbm-vic-20 · 19s ago
There's nothing stopping Microsoft from hammering out different terms for certain customers, like governments.
saulpw · 52m ago
Gosh, I guess the SaaS distribution model might give companies undesirable control over how their software can be used.

Viva local-first software!

nathan_compton · 43m ago
In general I applaud this attitude, but I am glad they are saying no to surveillance.
saulpw · 37m ago
Me too, actually, but this is some "leopards ate their face" schadenfreude that I'm appreciating for the moment.
_pferreir_ · 38m ago
EULAs can impose limitations on how you use on-premises software. Sure, you can ignore the EULA, but you can also do so on SaaS, to an extent.
ronsor · 36m ago
With SaaS, you can be monitored and banned at any moment. With EULAs, at worst you can be banned from updates, and in reality you probably won't get caught at all.
MangoToupe · 36m ago
Are EULAs even enforceable? SaaS providers at least have the right to terminate service at will.
sfink · 5m ago
First, contracts often come with usage restrictions.

Second, this article is incredibly dismissive and whiny about anyone ever taking safety seriously, for pretty much any definition of "safety". I mean, it even points out that Anthropic has "the only top-tier models cleared for top secret security situations", which seems like a direct result of them actually giving a shit about safety in the first place.

And the whining about "the contract says we can't use it for surveillance, but we want to use it for good surveillance, so it doesn't count. Their definition of surveillance is politically motivated and bad"! It's just... wtf? Is it surveillance or not?

This isn't a partisan thing. It's barely a political thing. It's more like "But we want to put a Burger King logo on the syringe we use for lethal injections! Why are you upset? We're the state so it's totally legal to be killing people this way, so you have to let us use your stuff however we want."

LeoPanthera · 49m ago
One of the very few tech companies who have refused to bend the knee to the United States' current dictatorial government.
jschveibinz · 12m ago
This is a false statement and doesn't belong on this forum
tene80i · 7m ago
Which part?
jimbo808 · 36m ago
It's startling how few are willing to. I'm rooting for them.
chrsw · 10m ago
Can we trust this though? “Cooperate with us and we’ll leak fake stories about how frustrated we are with you as cover”.

And I’m not singling out Anthropic. None of these companies or governments (i.e. people) can be trusted at face value.

chatmasta · 33m ago
Are government agencies sending prompts to model inference APIs on remote servers? Or are they running the models in their own environment?

It’s worrying to me that Anthropic, a foreign corporation (EDIT: they’re a US corp), would even have the visibility necessary to enforce usage restrictions on US government customers. Or are they baking the restrictions into the model weights?

bri3d · 28m ago
1) Anthropic are US-based; maybe you're thinking of Mistral?

2) Are government agencies sending prompts to model inference APIs on remote servers?

Of course; look up FedRAMP. Depending on the assurance level necessary, cloud services run either on cloud carve-outs in US datacenters (with various "US Persons Only" rules enforced to varying degrees) or, for the highest levels, in specific assured environments (the AWS Secret Region, for example).

3) It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.

There's no evidence they do; it's just lawyers vs. lawyers here as far as I can tell.

jjice · 30m ago
> It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.

"Foreign" to who? I interpretted your comment as foreign to the US government (please correct me if I'm wrong) and I was confused because Anthropic is a US company.

chatmasta · 29m ago
Ah my mistake. I thought they were French. I got them confused with Mistral.

The concern remains even if it’s a US corporation though (not government owned servers).

toxik · 26m ago
Anthropic is pretty clearly using the Häagen-Dazs approach here: call yourself Anthropic and your product Claude so you seem French. Why?
mcintyre1994 · 17m ago
According to Claude, it’s named after Claude Shannon, who was American.
chatmasta · 24m ago
Hah, it was indeed the Claude name that had me confused :D
bt1a · 25m ago
Everyone spies and abuses individuals' privacy. What difference does it make? (Granted, I would agree with you if Anthropic were indeed a foreign-based entity, so am I contradicting myself wonderfully?)
jjice · 27m ago
Ah yes - Mistral is the largest of the non-US, non-Chinese AI companies that I'm aware of.

> The concern remains even if it’s a US corporation though (not government owned servers).

Very much so, I completely agree.

itsgrimetime · 30m ago
Anthropic is US-based - unless you meant something else by "foreign corporation"?
SilverbeardUnix · 1h ago
Honestly makes me think better of Anthropic. Let's see how long they stick to their guns. I believe they will fold sooner rather than later.