For those who use AI/LLMs that retrain on your input, I assume you realize this commoditizes your intellectual work? It effectively lets them use it the same way they already used copyrighted intellectual property. This is much like the commons appropriations made for railroad development, the reinterpretation of fair use, etc.
WaxProlix · 3h ago
At least in the Settings pane, the slider is kinda ambiguous as to whether you're opted in or not.
How is it that in 2025 UI is worse than what we had in Windows 98? A checkbox would be unambiguous here.
4b11b4 · 2h ago
Thought so too. I assume it was checked by default, so I hit it once.
bloomca · 2h ago
I don't understand why you would opt in to share your data. Is it because you believe it would help improve the model and you would benefit from that? Or is it something altruistic?
jimmont · 1h ago
I think it's just a general lack of awareness of the effect, or in many instances alternate economic incentives, like academics who want to commoditize their intellectual output across all available distribution channels. Tyler Cowen, for example. The AI companies are in a race to the bottom.
dpcx · 3h ago
I don't love that users are opted in by default, but I'm happy that they're at least offering an opt-out.
roughly · 2h ago
I dunno, I feel like we’ve seen this play out often enough: the “option to opt out” is absolutely going to be the first feature slated for elimination on the product roadmap. “After all, only 5% of customers are using it.”
jkaplowitz · 2h ago
I agree with everything you’ve said, but I’m also happy that they’re forcing both new and existing users to make a choice before continuing to use Claude under the new terms, rather than silently starting to train on data from existing users who take no action.
Like you, I would have preferred that the UI for the choice didn’t make opt-in the default. But at least this is one of the rare times a US company isn’t simply assuming or circumventing consent for existing users in countries without EU-style privacy laws who ignore the advance notification. So thank you, Anthropic, for that form of respect.
lostmsu · 2h ago
Were they not using the data from Claude Code for training before this change? After this change, will they not train on my code if I switch this off (Claude Pro sub)?
jkaplowitz · 2h ago
From their FAQ at the bottom of the linked page:
“Previous chats with no additional activity will not be used for model training.”
So I guess they weren’t. You can switch it off and keep it that way.
lenerdenator · 2h ago
I mean, it's great that there's at least an opt-out, but the whole appeal of Anthropic, and of giving them money, was that they explicitly didn't do anything with your data. Or at least that was the impression I had.
When you see this kind of thing it makes you wonder what else they'll try to do to get around your opt-out.
owebboy · 4h ago
eek. opt-in default. 5 year retention. i knew that something like this was coming, but it's a hard pill to swallow
https://postimg.cc/2V7mM77C vs https://postimg.cc/1nF1HGzh