The Default Trap: Why Anthropic's Data Policy Change Matters

50 points by laurex | 8/30/2025, 5:12:06 PM | natesnewsletter.substack.com ↗

Comments (11)

furyofantares · 2h ago
You have to choose in order to use Claude, it's not the type of default where you're opted in unless you go find the setting. This blog post misrepresents this.

I haven't seen what the screen for new users looks like; perhaps it "nudges" you in the direction they want by starting the UI with the box checked so you have to uncheck it. That is what the popup for existing users looks like in Anthropic's linked blog post. That post says they require you to choose when signing up, and that existing users have to choose in order to keep using Claude. In Claude Code I had to choose, and it was just a straight question in the terminal.

I think nudge-style defaults are worth criticizing, but you lose me when your article makes false implications.

tln · 2h ago
Yeah this blog post is wrong on multiple points.

The new-user prompt looks the same as far as I can tell: it defaults to on and uses the somewhat oblique phrasing "You can help improve Claude".

serf · 1h ago
Shame that their pre-dominance raison d'être ("we won't train on you") changed the moment the model and software became dominant and sought after.

Their customer service (or total lack thereof) had already burned me into a cancellation beforehand; the policy changes would probably have had a similar effect. A shame, because I love the product (claude-code) -- oh well, I bet this behavior is going to kick up a lot of alternatives soon.

kukkeliskuu · 51m ago
The risk is that if I have created something proprietary and novel, it becomes trivial for somebody else to recreate it using Claude Code, if that same thing was used to train the model being used.

Somebody (tm) will probably turn this against Anthropic and use Claude Code to recreate an open source Claude Code.

jaggederest · 24m ago
It's already not too hard to feed obfuscated JavaScript into Claude Code and get it to spit out what it does. It's not 100%, but what it can do is pretty surprising.
rectang · 3h ago
I look forward to Claude's improvements after it learns from conversations with users about suicide.
ChrisArchitect · 58m ago
rkagerer · 31m ago
The presently-top comment thread in that first link was enlightening: https://news.ycombinator.com/item?id=45062852

If true, someone should grab a quick screencap vid of the dark pattern.

Madmallard · 1h ago
How is this legal?

"1. Help improve Claude by allowing us to use your chats and coding sessions to improve our models

With your permission, we will use your chats and coding sessions to train and improve our AI models. If you accept the updated Consumer Terms before September 28, your preference takes effect immediately.

If you choose to allow us to use your data for model training, it helps us:

    Improve our AI models and make Claude more helpful and accurate for everyone
    Develop more robust safeguards to help prevent misuse of Claude
We will only use chats and coding sessions you initiate or resume after you give permission. You can change your preference anytime in your Privacy Settings."

The only valid way to interpret this is as opt-in.

But it's LITERALLY opt out.

"Help improve Claude

Allow the use of your chats and coding sessions to train and improve Anthropic AI models."

This toggle defaults to on.

This should not be legal.

Aeolun · 2h ago
Hmm, so now your options for data retention are 30 days or 5 years. Not really a great or reasonable choice.
sheepscreek · 2h ago
TL;DR This is the money shot

> So here's my advice: Treat every AI tool like a rental car. Inspect it every time you pick it up.

Disappointed in Anthropic -- especially the 5-year retention, regardless of how you opt.