I like that they are being transparent. But are these partners also going to be transparent?
ctoth · 1d ago
I actively enjoy programming with Aider/Claude Code, and still there's just something that feels very squicky about this.
"Pay to train your replacement! We'll give you a discount!"
After all, why do they need the tokens where I tell the AI what to do, if not to teach the AI to tell itself what to do? They're collecting our developer workflows to build the next level of automation. And we're still paying for it!
disqard · 1d ago
Not related directly to the partner program, but:
"By default, Anthropic doesn’t train our generative models on your content. This commitment to privacy is core to how we build our products and services."
IIUC, this is actually different from Gemini and ChatGPT, which sets Claude apart.
alphabettsy · 18h ago
Those are opt-out and Google afaik doesn’t give you the option unless you have a paid membership or use paid APIs.
ramesh31 · 1d ago
Nice, but input tokens are dirt cheap anyways. We need a 10x drop in output costs.
bionhoward · 23h ago
They seem so slimy, first they train on all of the Internet, then they turn around and say we can’t use their outputs to develop “competing products or services” (aka, anything),
THEN they act like user data isn't used for training by default, but if you look at the training part of their terms, it says they do train on "feedback", and the decompiled version of Claude Code reads:
// By using Claude Code, you agree that all code acceptance or rejection decisions you make,
// and the associated conversations in context, constitute Feedback under Anthropic's Commercial Terms,
// and may be used to improve Anthropic's products, including training models.
// - You are responsible for reviewing any code suggestions before use.
This just seems like cover-your-ass bullshit if every single use of Claude Code counts as feedback they train on.
rgbrenner · 16h ago
you could just not upvote / downvote the AI answer? what do you expect those buttons to do if they don't send anthropic the vote?
astrange · 8h ago
Apparently they don't do anything.
> To date we have not used any customer or user-submitted data to train our generative models.
https://www.anthropic.com/news/claude-3-5-sonnet