>For instance, an agency could pay for a subscription or negotiate a pay-per-use contract with an AI provider, only to find out that it is prohibited from using the AI model in certain ways, limiting its value.
This is simply false. They know the restrictions when they sign the contract.
bri3d · 2h ago
This whole article is weird to me.
This reads to me like:
* Some employee somewhere wanted to click the shiny Claude button in the AWS FedRamp marketplace
* Whatever USG legal team was involved said "that domestic surveillance clause doesn't work for us" and tried to redline it.
* Anthropic rejected the redline.
* Someone got mad and went to Semafor.
It's unclear that this has even really escalated prior to the article, or that Anthropic are really "taking a stand" in a major way (after all, their model is already on the Fed marketplace) - it just reads like a typical fed contract negotiation with a squeaky wheel in it somewhere.
The article is also full of other weird nonsense like:
> Traditional software isn’t like that. Once a government agency has access to Microsoft Office, it doesn’t have to worry about whether it is using Excel to keep track of weapons or pencils.
While it might not be possible to enforce them as easily, many, many shrink-wrap EULAs restrict the ways in which software can be used. Almost always there is an EULA carve-out or a different tier for lifesaving or safety uses (due to liability/compliance concerns) and for military uses (sometimes for ethics reasons, but usually due to a desire to extract more money from those customers).
axus · 1h ago
A classic:
THIS SOFTWARE PRODUCT MAY CONTAIN SUPPORT FOR PROGRAMS WRITTEN IN JAVA. JAVA TECHNOLOGY IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED, OR INTENDED FOR USE OR RESALE AS ONLINE CONTROL EQUIPMENT IN HAZARDOUS ENVIRONMENTS REQUIRING FAILSAFE PERFORMANCE, SUCH AS IN THE OPERATION OF NUCLEAR FACILITIES, AIRCRAFT NAVIGATION OR COMMUNICATION SYSTEMS, AIR TRAFFIC CONTROL, DIRECT LIFE SUPPORT MACHINES, OR WEAPONS SYSTEMS, IN WHICH THE FAILURE OF JAVA TECHNOLOGY COULD LEAD DIRECTLY TO DEATH, PERSONAL INJURY OR SEVERE PHYSICAL OR ENVIRONMENTAL DAMAGE.
GLdRH · 1h ago
I never knew Java was so dangerous
gowld · 41m ago
Everything is dangerous by default. That's the point.
dgfitz · 34m ago
Look up DoD (DoW?) MIL-STD-882 and LOR (Level of Rigor) ratings. This is a fancy way of saying "Java can't do that, because we haven't certified a toolchain for it."
And for bonus points, go find the last certified compilers for LOR1 rating that follow 882 guidelines.
Now you’ve scratched the surface of safety-critical software. Actually writing it is a blast. I think most web developers would weep in frustration. “Wait, I can’t allocate memory that way? Or that way? Or in this way not at all?! There’s no framework?! You mean I need to do all this to verify a button click??!!”
salynchnew · 1h ago
Could also be an article placed by a competitor + a squeaky wheel.
giancarlostoro · 1h ago
> due to a desire to extract more money from those customers
If it gets you higher-priority support, I don't care; if it's the same tier of support, then that's just obnoxiously greedy.
matula · 2h ago
There are (or at least WERE) entire divisions dedicated to reading every letter of the contract and terms of service, and usually creating 20 page documents seeking clarification for a specific phrase. They absolutely know what they're getting into.
darknavi · 2h ago
I have a feeling in today's administration which largely "leads by tweet" that many traditional "inefficient" steps have been removed from government processing, probably including software on-boarding.
anjel · 45m ago
I have a legal education, but reading TOS and privacy policy docs at account creation is too time consuming by design.
One of my favorite new AI prompts: you are my attorney and an expert in privacy law and online contracts of adhesion. Review the TOS agreement at [url] and the privacy policy at [url], and brief me on all areas that should be of concern to me.
Takes 90 seconds from start to finish, and reveals how contemptuously illusory these agreements are when SO MANY reserve the right to change anything with no duty to disclose changes.
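That prompt can be parameterized so the same review runs against any pair of documents. A minimal sketch; the wording and the example URLs are placeholders, and the resulting string would be sent as a user message to whatever chat-style LLM API you use:

```python
def build_tos_review_prompt(tos_url: str, privacy_url: str) -> str:
    """Assemble the reviewer prompt described above for a chat-style LLM."""
    return (
        "You are my attorney and an expert in privacy law and online "
        "contracts of adhesion. Review the TOS agreement at "
        f"{tos_url} and the privacy policy at {privacy_url}, and brief "
        "me on all areas that should be of concern to me, especially "
        "any clauses that let the provider change terms without notice."
    )

prompt = build_tos_review_prompt(
    "https://example.com/tos", "https://example.com/privacy"
)
print(prompt)
```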
dannyisaphantom · 1h ago
Can confirm these teams are still around. There is now an additional "SME review group" that must comb through any and all flagged AI-related issues, send them back down for edits, and give final approval before docs are sent to the provider for response. Turnaround has gotten much slower (relatively).
gowld · 33m ago
Or you can use personal accounts to bypass red tape for government business.
bt1a · 2h ago
Perhaps it's the finetune of Opus/Sonnet/whatever that is being served to the feds that is the source of the refusal :)
andsoitis · 1h ago
Don’t tech companies change ToS quite frequently and sometimes in ways that’s against the spirit of what the terms were when you started using it?
ajross · 1h ago
This is a contract, not a click through license. You can't do that.
(Legally you can't do it with a click-through either, but the lack of a contract means that the recourse for the user is just to stop buying the service.)
jdminhbg · 2h ago
Are you sure that every restriction that’s in the model is also spelled out in the contract? If they add new ones, do they update the contract?
mikeyouse · 2h ago
The contracts will usually say “You agree to the restrictions in our TOS” with a link to that page which allows for them to update the TOS without new signatures.
PeterisP · 1h ago
All the US megacorps tend to send me emails saying "We want to change the TOS; here's the new TOS that'll be valid from date X, and be informed that you have the right to refuse it" (in which case they'll probably terminate the service, though I'm quite sure that for a paid subscription they would have to refund the remaining portion) - so they can change the TOS, but not without at least some form of agreement, even if it's an implicit one 'by continuing to use the service'.
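Since a contract that incorporates a linked TOS lets the provider amend terms without new signatures, one defensive habit is to snapshot the terms yourself and diff them on a schedule. A minimal sketch; fetching the page is omitted, and the two versions here are made-up placeholder texts:

```python
import hashlib

def tos_fingerprint(tos_text: str) -> str:
    """Return a stable fingerprint of a TOS document's text."""
    return hashlib.sha256(tos_text.strip().encode("utf-8")).hexdigest()

def tos_changed(old_fingerprint: str, current_text: str) -> bool:
    """Compare a stored fingerprint against the currently published text."""
    return tos_fingerprint(current_text) != old_fingerprint

# Store a fingerprint when you sign up, then re-check periodically.
v1 = "We may share data with partners."
v2 = "We may share data with partners and affiliates."
baseline = tos_fingerprint(v1)
print(tos_changed(baseline, v1))  # False: terms unchanged
print(tos_changed(baseline, v2))  # True: silently amended
```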
giancarlostoro · 1h ago
Usually, contracts will note that you will be notified of changes ahead of time, if it's a good faith contract and company that is.
impossiblefork · 1h ago
Here in Sweden contracts are a specific thing, otherwise it's not a contract, so agreeing to conditions that can be changed by the other party simply isn't a contract and therefore is just a bullshit paper of very dubious legal validity.
I know that some things like this are accepted in America, and I can't judge how it would be dealt with. I assume that contracts between companies and other sophisticated entities are actual contracts with unchangeable terms.
mindcrime · 1h ago
> I know that some things like this are accepted in America
Not really. Everything you said about contracts above applies to contracts in America, last time I checked. Disclaimer: IANAL; my legal training amounts to 1 semester of "Business Law" in college.
impossiblefork · 53m ago
In theory yes, but you also have this stuff where people agree to get medical treatment and the price isn't specified.
This would be a non-contract in Swedish law, for example.
gowld · 32m ago
The contract's restriction is on the usage of the model, not the behavior of the model.
owenthejumper · 2h ago
This feels like a hit piece by Semafor. A lot of the information in it is simply false. For example, Microsoft's AI Agreement prohibits:
"...cannot use...For ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal data, including biometric data, without the individual’s valid consent."
cbm-vic-20 · 1h ago
There's nothing stopping Microsoft hammering out different terms for certain customers, like governments.
gowld · 26m ago
Semafor is Ben Smith's blog, trying to imitate a reputable newspaper like Financial Times.
LeoPanthera · 2h ago
One of the very few tech companies who have refused to bend the knee to the United States' current dictatorial government.
jimbo808 · 2h ago
It's startling how few are willing to. I'm rooting for them.
chrsw · 1h ago
Can we trust this though? “Cooperate with us and we’ll leak fake stories about how frustrated we are with you as cover”.
And I’m not singling out Anthropic. None of these companies or governments (i.e. people) can be trusted at face value.
astrange · 4m ago
They don't do that. They're not capable of cooperating with anyone, it's maximum punishment all the time. It's unclear if they can keep secrets either.
sitzkrieg · 35m ago
because of this they're probably on borrowed time in this political climate
FortuneIIIPick · 1h ago
Dictatorial suggests a "ruler with total power". The US has three branches of government. That hasn't changed, ever.
izzydata · 1h ago
The definition of a dictatorial government is rule by either a single person or a small group of people. So having three branches of government doesn't necessarily prevent a dictatorship if they are all working together to enact authoritarian control without constitutional limits.
But really this is just pointless semantics. It doesn't matter what it is called it is still a problem.
PeterisP · 1h ago
Technically (and legally) the USSR also had the same three branches of government; just all controlled by the same party.
vkou · 1h ago
Two of them jump at the command of the other one: one out of fear (because he has ended the career of every rep who has crossed him), and the other because it has been packed with lifetime-appointment sycophants who put loyalty above anything else.
Russia (or literally any other dictatorial regime) also has three branches of government and a token opposition, for all the good it does.
Just because you have a nice piece of paper that outlines some kind of de jure separation of powers, doesn't mean shit in practice. Russia (and prior to it, the USSR) has no shortage of such pieces of paper.
FortuneIIIPick · 1h ago
That's a ridiculous take. Seriously outlandish. The US has always had and continues to have three working branches of government. That is a factual statement because it is indeed a fact.
otterley · 1h ago
It’s not a fact, because it depends upon a subjective interpretation of the word “working.” Some might argue, for example, that if the President can cow Congress into subservience, then the three branches of government are no longer in balance with each other, and thus the constitution is no longer “working” as intended.
mjparrott · 1h ago
It can be true that the constitution is not working as intended, AND the US is a far cry from a country like Russia in terms of operating as a constitutional republic / democracy. It is not subjective to say the US is more of a democratic country than Russia.
otterley · 45m ago
I think Russia was being used as an extreme example. It wasn't being lauded as a model nation, far from it.
chankstein38 · 48m ago
I always think it's funny how people who have strong opinions based on nothing love to out themselves by just repeating that something is fact. clap clap We're all convinced, for sure! ;)
otterley · 47m ago
It's straight out of Charlie Kirk's playbook.
mrbombastic · 55m ago
The power of the purse is currently being usurped by the executive branch with no pushback from a Republican Congress; armed forces are being deployed to American cities; media corporations are being forced to host admin-installed bias police; due process is a joke; museums are being forced to remove information the admin finds objectionable. You can bury your head in the sand if you like, but there are plenty of us who won't.
ricardobeat · 47m ago
How do you explain Trump unilaterally renaming the Ministry of Defense, without legislative approval? Is it a “working branch” if their constitutionally granted power is easily sidestepped?
otterley · 44m ago
Do you mean the Department of Defense? The U.S. doesn't have ministries.
jschveibinz · 1h ago
This is a false statement and doesn't belong on this forum
9dev · 47m ago
I don’t get this notion. Politics has a place on HN like any other interesting topic does, whether you like it or not
tene80i · 1h ago
Which part?
baggy_trough · 1h ago
I should think that it was very obvious that America does not have a dictatorial government; this is hyperbole.
tene80i · 31m ago
No need for that. I was just asking someone to clarify what they were saying.
Regardless of what you think about the government, that wasn’t a statement in the above. The statement was about tech companies. So it wasn’t clear.
Gosh, I guess the SaaS distribution model might give companies undesirable control over how their software can be used.
Viva local-first software!
nathan_compton · 2h ago
In general I applaud this attitude but I am glad they are saying no to doing surveillance.
saulpw · 2h ago
Me too, actually, but this is some "leopards ate their face" schadenfreude that I'm appreciating for the moment.
_pferreir_ · 2h ago
EULAs can impose limitations on how you use on-premises software. Sure, you can ignore the EULA, but you can also do so on SaaS, to an extent.
ronsor · 2h ago
With SaaS, you can be monitored and banned at any moment. With EULAs, at worst you can be banned from updates, and in reality, you probably won't get caught at all.
MangoToupe · 2h ago
Are EULAs even enforceable? SaaS at least have the right to terminate service at will.
Terretta · 57m ago
Here's an entertaining example from 20 years ago:
By using the Apple Software, you represent and warrant that you ... also agree that you will not use these products for any purposes prohibited by United States law, including, without limitation, the development, design, manufacture or production of missiles, or nuclear, chemical or biological weapons. -- iTunes
No production of missiles with iTunes? Curses, foiled again.
j2kun · 1h ago
The US government can train their own damn LLM if they want an unrestricted one so bad.
First, contracts often come with usage restrictions.
Second, this article is incredibly dismissive and whiny about anyone ever taking safety seriously, for pretty much any definition of "safety". I mean, it even points out that Anthropic has "the only top-tier models cleared for top secret security situations", which seems like a direct result of them actually giving a shit about safety in the first place.
And the whining about "the contract says we can't use it for surveillance, but we want to use it for good surveillance, so it doesn't count. Their definition of surveillance is politically motivated and bad"! It's just... wtf? Is it surveillance or not?
This isn't a partisan thing. It's barely a political thing. It's more like "But we want to put a Burger King logo on the syringe we use for lethal injections! Why are you upset? We're the state so it's totally legal to be killing people this way, so you have to let us use your stuff however we want."
SilverbeardUnix · 2h ago
Honestly makes me think better of Anthropic. Lets see how long they stick to their guns. I believe they will fold sooner rather than later.
FrustratedMonky · 1h ago
Wasn't a big part of AI 2027 that government employees became overly reliant on AI and couldn't function without it? So I guess we're still on track to hit that timeline.
gowld · 29m ago
> The policy doesn’t specifically define what it means by “domestic surveillance” in a law enforcement context and appears to be using the term broadly, creating room for interpretation.
> Other AI model providers also list restrictions on surveillance, but offer more specific examples and often have carveouts for law enforcement activities. OpenAI’s policy, for instance, prohibits “unauthorized monitoring of individuals,” implying consent for legal monitoring by law enforcement.
This is unintentionally (for the author) hilarious. It's a blatant misinterpretation of the language, while complimenting the clarity of the language. Who "authorizes" "monitoring of individuals"? If an executive agency monitors an individual in violation of a court order, is that "authorized"?
chatmasta · 2h ago
Are government agencies sending prompts to model inference APIs on remote servers? Or are they running the models in their own environment?
It’s worrying to me that Anthropic, a foreign corporation (EDIT: they’re a US corp), would even have the visibility necessary to enforce usage restrictions on US government customers. Or are they baking the restrictions into the model weights?
bri3d · 2h ago
1) Anthropic are US based, maybe you're thinking of Mistral?
2) Are government agencies sending prompts to model inference APIs on remote servers?
Of course, look up FedRAMP. Depending on the assurance level necessary, cloud services run on either cloud carve-outs in US datacenters (with various "US Person Only" rules enforced to varying degrees) or for the highest levels, in specific assured environments (AWS Secret Region for example).
3) It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.
There's no evidence they do, it's just lawyers vs lawyers here as far as I can tell.
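The FedRAMP point above amounts to a routing rule: the assurance level of the workload decides which hosting environment may serve it. An illustrative sketch only; the level names are real FedRAMP/DoD terms, but this mapping and the environment labels are simplified assumptions, not an authoritative table:

```python
# Simplified, assumed mapping from assurance level to hosting tier.
HOSTING_BY_LEVEL = {
    "fedramp-low": "commercial-us-region",
    "fedramp-moderate": "commercial-us-region",
    "fedramp-high": "govcloud-carveout",
    "dod-il5": "govcloud-carveout",
    "dod-il6": "assured-environment",  # e.g. AWS Secret Region
}

def hosting_for(level: str) -> str:
    """Resolve the minimum-acceptable hosting environment for a workload."""
    try:
        return HOSTING_BY_LEVEL[level]
    except KeyError:
        raise ValueError(f"unknown assurance level: {level}")

print(hosting_for("fedramp-high"))  # govcloud-carveout
```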
itsgrimetime · 2h ago
Anthropic is US-based - unless you meant something else by "foreign corporation"?
jjice · 2h ago
> It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.
"Foreign" to who? I interpretted your comment as foreign to the US government (please correct me if I'm wrong) and I was confused because Anthropic is a US company.
chatmasta · 2h ago
Ah my mistake. I thought they were French. I got them confused with Mistral.
The concern remains even if it’s a US corporation though (not government owned servers).
toxik · 2h ago
Anthropic is pretty clearly using the Häagen-Dazs approach here: call yourself Anthropic and your product Claude so you seem French. Why?
mcintyre1994 · 1h ago
According to Claude, it’s named after Claude Shannon, who was American.
astrange · 1m ago
But it might also be the albino alligator in the California Academy of Sciences in SF.
chatmasta · 2h ago
Hah, it was indeed the Claude name that had me confused :D
bt1a · 2h ago
Everyone spies and abuses individuals' privacy. What difference does it make? (Granted I would agree with you if Anthropic were indeed a foreign based entity, so am I contradicting myself wonderfully?)
jjice · 2h ago
Ah yes - Mistral is the largest of the non-US, non-Chinese AI companies that I'm aware of.
> The concern remains even if it’s a US corporation though (not government owned servers).
Very much so, I completely agree.
g42gregory · 1h ago
No judgement here, but a US-based corporation refusing services to the US Government?
While the terms of service are what they are, the US Government can withdraw its military contracts from Anthropic (or refuse future ones, if there aren't any so far), or softly suggest that its own contractors limit their business dealings with Anthropic. Then Anthropic will have a hard time securing compute from NVIDIA, AWS, Google, MSFT, Oracle, etc.
This won't last.
e_i_pi_2 · 57m ago
I'm sure this sort of unofficial blacklisting is fairly common, but it does seem very opposed to the idea of a free market. It definitely doesn't seem like Anthropic was trying to make some sort of point here, but it would be cool if all the AI companies had a ToS saying it can't be used for any sort of defense/police/military purposes
g42gregory · 39m ago
I am not even sure what free market is, aside from Economics textbooks and foreign policy positioning. Whatever it may be, I don't think we had it for quite some time.