Anthropic revokes OpenAI's access to Claude

105 points by minimaxir | 8/1/2025, 9:50:28 PM | wired.com

Comments (41)

ankit219 · 1h ago
The article does not say anything substantial, just some opposing viewpoints.

1/ OpenAI's technical staff were using Claude Code (via the API, not the Max plans).

2/ An Anthropic spokesperson says API access for benchmarking and evals will remain available to OpenAI.

3/ OpenAI said it's using the APIs for benchmarking.

I guess model benchmarking is fine, but tool benchmarking is not. Presumably OpenAI was trying to see whether its product works better than Claude Code (each with its own proprietary models) on certain benchmarks, and that is what Anthropic revoked access over. How they caught it is far more remarkable. It's one thing to use Sonnet 4 to solve a problem on LiveBench; it's slightly different to do it via the harness, where Anthropic never published any results themselves. Not saying this is the right stance, but it seems to be the stance.

hinkley · 38m ago
Feels like something a Jepsen or the like should be doing, instead of competitors trying to clock each other directly. I can see why they would feel uncomfortable about this situation.
vineyardmike · 26m ago
I feel like it's not hard to "get caught" if they were doing something wrong. It's pretty standard practice to inspect traffic and requests to ensure they comply with the ToS. LLM input is literally text, so it's quite easy to audit, even in an automated manner.

The API keys are surely associated with OpenAI, so they can take additional care with any customer associated with banned behaviors like competing model development.
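A minimal sketch of what such an automated check could look like; the patterns, request shape, and watchlist here are invented for illustration, not anything Anthropic has described:

```typescript
// Hypothetical ToS screening of inbound API traffic.
// Patterns, request shape, and watchlist are invented for illustration.
interface ApiRequest {
  apiKeyOwner: string; // organization the key is registered to
  prompt: string;      // raw text sent to the model
}

// Naive signals that a request belongs to a competing eval harness
// or model-development pipeline rather than ordinary product use.
const SUSPECT_PATTERNS: RegExp[] = [
  /benchmark (suite|harness)/i,
  /training (data|corpus) generation/i,
  /distill(ation)? target/i,
];

function flagForReview(req: ApiRequest, watchlist: Set<string>): boolean {
  // Keys registered to organizations tied to banned behaviors
  // (like competing model development) get extra scrutiny.
  const elevated = watchlist.has(req.apiKeyOwner);
  const hits = SUSPECT_PATTERNS.filter((p) => p.test(req.prompt)).length;
  return elevated ? hits > 0 : hits > 1;
}

// Example: a single hit is enough to flag a watchlisted customer.
console.log(
  flagForReview(
    { apiKeyOwner: "openai", prompt: "Run benchmark harness case 12" },
    new Set(["openai"]),
  ),
); // true
```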

I think the only interesting news story, and the real reason Anthropic would want the story out, is the implication that OpenAI may use Sonnet + Claude Code for development, even on the eve of the GPT-5 release. Implying that your competitor doesn't use their own flagship product because yours is better is a big opportunity.

luke-stanley · 3h ago
"OpenAI was plugging Claude into its own internal tools using special developer access (APIs)"

Unless it's actually some internal Claude API that OpenAI was using with an OpenAI benchmarking tool, this sounds like a hyped-up way for Wired to phrase it.

Almost like: `Woah man, OpenAI HACKED Claude's own AI mainframe until Sonnet slammed down the firewall man!` ;D Seriously though, why phrase API use of Claude as "special developer access"?

I suppose it's reasonable to disagree on what's acceptable for safety benchmarking, e.g. where you draw the line between "hey, that's stealing" and "they were able to find safety weak spots in their model". I wonder what the best labs are like at efficiently hunting for weak areas!

Funnily enough, I think Anthropic have banned a lot of people from their API, myself included - and all I did was see if it could read a letter I got. They never responded to my query to sort it out! But what does it matter, if people can just use OpenRouter?

dylan604 · 3h ago
> Seriously though, why phrase API use of Claude as "special developer access"?

Isn't that precisely what an API is? Normal users do not use the API directly; other programs, written by developers, use it to access Claude from their apps. That's like asking why an SDK is described as a special kit for developers to build software that works with something they wish to integrate into their app.
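For concreteness, this is roughly all that "special developer access" amounts to: one HTTP call. A sketch, with placeholder model ID and prompt:

```typescript
// Ordinary Claude API usage - the "special developer access" in question.
// Model ID and prompt are placeholders; check the docs for current models.
// Assumes a runtime with fetch (Node 18+) and the key in an env var.
const response = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
  },
  body: JSON.stringify({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello, Claude" }],
  }),
});
console.log(await response.json());
```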

stavros · 2h ago
If I'm an OpenAI employee and I use Claude Code via the API, I'm not doing some hacker-fu. I'm just using a tool a company released, for the purpose they released it for.

I understand that they were technically "using it to train models", which, given OpenAI's stance, I don't have much sympathy for, but it's not the "special developer hackery" this makes it sound like.

viraptor · 3h ago
Because it's not "special developer access". It's just normal developer access. The phrasing gives the impression they accessed something other users cannot.
dgfitz · 1h ago
Normal users use the API constantly; they just don't realize it.

Isn't half the schtick of LLMs making software development available to the layman?

chowells · 3h ago
> According to Anthropic’s commercial terms of service, customers are barred from using the service to “build a competing product or service, including to train competing AI models”

That's... quite a license term. I'm a big fan of tools that come with no restrictions on their use in their licenses. I think I'll stick with them.

ethan_smith · 2h ago
These anti-competitive clauses are becoming standard across all major AI providers - Google, Microsoft, and Meta have similar terms. The industry is converging on a licensing model that essentially creates walled gardens for model development.
dougSF70 · 2h ago
Also, Twitter's ToS for firehose access said you could not recreate a Twitter client.
dude250711 · 3h ago
Well, OpenAI had been whining about DeepSeek back in the day, so it is fair in a way.
beefnugs · 29m ago
Dumbest thing they could do. Why would you cut off insight into what your competitors are doing?
ramoz · 23m ago
Because they don't blatantly read people's prompts. They have a confidential inference architecture.

They don't target and analyze specific users or organizations - that would be fairly nefarious.

The only exception would be if there are flags for trust and safety. https://support.anthropic.com/en/articles/8325621-i-would-li...

bitwize · 3h ago
For years it was a license violation to use Microsoft development tools to build a word processor or spreadsheet. It was also a violation of your Oracle license to publish benchmark results comparing Oracle to other databases.

If you compete with a vendor, or give aid and comfort to their competitors, do not expect the vendor to play nice with you, or even keep you on as a customer.

gruez · 2h ago
>For years it was a license violation to use Microsoft development tools to build a word processor or spreadsheet.

source?

ack_complete · 1h ago
You have to go pretty far back; it was in the Visual C++ 6.0 EULA, for instance (for lack of a better link):

https://proact.eu/wp-content/uploads/2020/07/Visual-Basic-En...

It wasn't a blanket prohibition, but a restriction on some parts of the documentation and redistributable components. It was definitely weird to see that in the EULA for a toolchain. This was removed later on, though I forget whether it was because they changed their mind or removed the components.

sroussey · 3h ago
Doesn’t the ban on benchmarking Oracle still stand today?
DaSHacka · 3h ago
Hmm so "because you split spending between us and a competitor, we'll force you to give the competitor the whole share instead!"

Certainly a mindset befitting Microsoft and Oracle, if I ever saw one.

david38 · 2h ago
I can understand the benchmark issue. It often happens that when someone benchmarks something, it's biased or wrong in some way.

I don't believe such a ban should be legal, but I see why they would be butt-hurt.

valtism · 3h ago
Would something like that hold up in court?
compootr · 2h ago
They can choose who they do and don't want to do business with.
manquer · 1h ago
Law does not work like that.

- Contracts can have unenforceable terms that a court can declare null and void; any decision not to renew the contract in the future would have no bearing on the current one.

- There are plenty of restrictions on when/whether you can turn down business; for example, FRAND contracts or patents don't allow you to choose not to work with a competitor, and so on.

ygjb · 3h ago
Good luck with that! Most of the relevant model providers include similar terms (Grok, OpenAI, Anthropic, Mistral - basically everyone except some open-model providers).
chowells · 3h ago
You're like 50% of the way there...
palata · 2h ago
Can't we say it's "fair use"? They do whatever they want by calling it "fair use"; I don't see why I couldn't.
modeless · 3h ago
OpenAI Services Agreement: "Customer will not [...] use Output to develop artificial intelligence models that compete with OpenAI’s products and services"

Live by the sword, die by the sword.

spwa4 · 1h ago
Didn't a whole bunch of AI companies make the news for refusing to respect law X in AI training? So far, X has been:

* copyright law

* trademark law

* defamation law (ChatGPT often reports wrong facts about specific people, products, companies, ... most seriously, claiming someone was guilty of murder. Getting ChatGPT to say obviously wrong things about products is trivial)

* contract law (bypassing scraping restrictions they had agreed to as a company beforehand)

* harassment (ChatGPT made pictures depicting specific individuals doing ... well, you can guess where this is going. Everything you can imagine. Women, of course. Minors. Politics. Company politics ...)

So far, they seem to have gotten away with everything.

raincole · 1h ago
> defamation law

Not sure if you're serious... you think OpenAI should be held responsible for everything their LLM ever said? You can't make a token generator unless the tokens generated always happen to represent factual sentences?

spwa4 · 35m ago
Given that they publish everything their AI says? That this is, in fact, the business model (they publish everything their AI says, for money)? Quite frankly, yes.

If I told people you are a murderer, for money, I'd expect to be sued and I'd expect to be convicted.

Taylor_OD · 3h ago
So it begins!
throwawayoldie · 3h ago
Let's hope.
Buttons840 · 2h ago
Who will pay me for my AI chat histories?

Seriously, make a browser extension that people can turn on and off (no need to be dishonest here), and pay people to upload their AI chats, and possibly all the other content they view.

If Reddit won't let you scrape, pay people to automatically upload the Reddit comments they view normally.

If Claude cuts you off, pay people to automatically upload their Claude conversations.

Am I crazy? Am I hastening dystopia?
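A minimal sketch of the capture half as a browser-extension content script; the endpoint, storage key, and DOM selector are all made-up placeholders:

```typescript
// content-script.ts - opt-in capture of visible chat text (Manifest V3).
// The endpoint, storage key, and selector below are placeholders.
const UPLOAD_ENDPOINT = "https://example.com/api/upload"; // hypothetical

async function maybeUploadVisibleChat(): Promise<void> {
  // Honor the user's on/off toggle, set by the extension's popup.
  const { capturing } = await chrome.storage.local.get("capturing");
  if (!capturing) return;

  // Grab whatever chat transcript is rendered on the page.
  const nodes = document.querySelectorAll("[data-message]"); // placeholder selector
  const messages = Array.from(nodes).map((n) => n.textContent ?? "");
  if (messages.length === 0) return;

  await fetch(UPLOAD_ENDPOINT, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ url: location.href, messages }),
  });
}

maybeUploadVisibleChat();
```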

bit1993 · 2h ago
Then I would simply use AI to generate chat histories and get paid (:
manquer · 1h ago
That is not a problem if the price paid is lower than what generating synthetic data of similar size would cost.
bit1993 · 1h ago
Great point. Verifying the synthetic data also has a cost, I wonder if it is cheaper than generating it?
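Back-of-envelope on both points, with every number an invented assumption:

```typescript
// All prices invented for illustration.
const tokensPerConversation = 2_000;

// Faking an upload means generating it with a frontier-model API,
// at an assumed ~$10 per million output tokens:
const syntheticCost = tokensPerConversation * (10 / 1_000_000); // $0.02

// Screening each upload with a cheap classifier model,
// at an assumed ~$1 per million input tokens:
const screeningCost = tokensPerConversation * (1 / 1_000_000); // $0.002

// Faking is unprofitable whenever the payout is below the faker's
// generation cost, so cap the payout under syntheticCost minus your
// own screening overhead:
const maxSafePayout = syntheticCost - screeningCost;
console.log(`pay at most ~$${maxSafePayout.toFixed(3)} per conversation`); // ~$0.018
```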
BoorishBears · 2h ago
I've done a lot of post-training and data collection for post-training.

I think if you're not OpenAI/Anthropic sized (in which case you can do better), you're not going to get much value out of it.

It's hard to usefully post-train on wildly varied inputs, and post-training is all most people can afford.

There's too much noise to improve things unless you do a bunch of cleaning and filtering that's also somewhat expensive.
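To make "cleaning and filtering" concrete, a toy first pass over scraped chat logs might look like this; the thresholds and heuristics are invented, and real pipelines go much further:

```typescript
// Toy cleaning/filtering pass before post-training on scraped chat logs.
// Thresholds and heuristics are invented; real pipelines go much further.
interface ChatTurn {
  prompt: string;
  response: string;
}

function cleanForPostTraining(raw: ChatTurn[]): ChatTurn[] {
  const seen = new Set<string>();
  return raw.filter(({ prompt, response }) => {
    // Drop near-trivial or truncated samples.
    if (prompt.trim().length < 20 || response.trim().length < 50) return false;
    // Drop exact duplicates, which are common in scraped logs.
    const key = prompt + "\u0000" + response;
    if (seen.has(key)) return false;
    seen.add(key);
    // Drop obvious refusals; they add noise rather than signal.
    if (/^i (can'?t|cannot|won'?t)\b/i.test(response.trim())) return false;
    return true;
  });
}
```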

If you constrain the task (for example, use past generations from your own product) you get much further along though.

I've thought about building a Chrome plugin to do something useful for ChatGPT web users doing a task relevant to what my product does, then letting them opt into sharing their logs.

That's probably a bit more tenable for most users since they're getting value, and if your extension can do something like produce prompts for ChatGPT, you'll get data that actually overlaps with what you're doing.

bethekidyouwant · 3h ago
This article says absolutely nothing and appears to be an ad for Anthropic.
rs186 · 3h ago
Do you have an adblocker on?
trilogix · 3h ago
Deep state enters the chat... put your houses in order, else a new CEO in Q ;)