If you have a Claude account, they're going to train on your data moving forward

126 points by diggan | 30 comments | 8/29/2025, 11:39:26 AM | old.reddit.com

Comments (30)

AlecSchueler · 2h ago
Am I the only one that assumed everything was already being used for training?
hexage1814 · 31m ago
This. It's the same innocence as people who believe that when you delete a document on Google/META/Apple/Microsoft servers, it "really" gets deleted. Google most likely has a backup of every piece of information it has indexed in the last 20 years or so. It would make the Internet Archive envious.
giancarlostoro · 15m ago
With the privacy laws out there, I do genuinely think things eventually get purged even from backups. I remember a really cool YouTube video shared here on HN that Google no longer has public: it walked through the journey of an email and all the behind-the-scenes things, from physical security at a data center to the patented hard drive shredders they use once drives are to be tossed. I wish Google had kept that video public and online; it was a great watch.

I know once you delete something on Discord it's poof, and that's the end of that. I've reported things that, if anyone at Discord could access a copy of them, they would have called the police. There's a lot of awful trolls on chat platforms that post awful things.

diggan · 5m ago
> I know once you delete something on Discord it's poof, and that's the end of that. I've reported things that, if anyone at Discord could access a copy of them, they would have called the police. There's a lot of awful trolls on chat platforms that post awful things.

That's not what Discord themselves say. Is that coming from Discord, the police, or someone else?

> Once you delete content, it will no longer be available to other users (though it may take some time to clear cached uploads). Deleted content will also be deleted from Discord’s systems, but we may retain content longer if we have a legal obligation to preserve it as described below. Public posts may also be retained for 180 days to two years for use by Discord as described in our Privacy Policy (for example, to help us train models that proactively detect content that violates our policies). - https://support.discord.com/hc/en-us/articles/5431812448791-...

Seems there's something that decides whether the content gets deleted faster or kept for between 180 days and 2 years. So even for Discord, "once you delete something on Discord it's poof" isn't 100% accurate.

bwillard · 6m ago
Officially (it's up to you whether you believe they're following their own policies), all of these companies have published statements on how long they keep data after deletion (retention that customers broadly want, to support recovery if something goes wrong).

- Google: active storage for "around 2 months from the time of deletion" and in backups "for up to 6 months": https://policies.google.com/technologies/retention?hl=en-US

- Meta: 90 days: https://www.meta.com/help/quest/609965707113909/

- Apple/iCloud: 30 days: https://support.apple.com/guide/icloud/delete-files-mm3b7fcd...

- Microsoft: 30-180 days: https://learn.microsoft.com/en-us/compliance/assurance/assur...

So if it turns out they are storing data longer, there can be consequences (GDPR, CCPA, FTC enforcement).

lemonberry · 2h ago
You are not.
A4ET8a8uTh0_v2 · 1h ago
I mean, I'm sure there are individuals who still believe in the basic value of the word within the framework of our civilization. But having seen those words not just twisted beyond recognition to fit a specific idea, but simply ignored when they were no longer convenient, it would be a surprise if a cynical stance were not more common now.

The question is: how does that affect their choices? How much ends up being gated that previously would have ended up in the open?

Me: I am using a local variant (and attempting to build something I think I can control better).

0xbadc0de5 · 10m ago
I kind of already assumed they were. I've got some pretty niche use-cases that I'd like to see the models get better at thinking their way through. I benefit from their training on my interactions. So I'll opt in. But I'll also recognize that others might not feel that way, so the services should provide a way for users to opt out.
macintux · 2h ago
diggan · 2h ago
At least this submission has the original text Anthropic sent out to people :) But yeah, Perplexity gives a better summary for outsiders I guess.
esafak · 15m ago
"If you use Claude for Work, via the API, or other services under our Commercial Terms or other Agreements, then these changes don't apply to you."
I_am_tiberius · 1h ago
Criminal, evil thieves.
flerchin · 38m ago
Maybe of value to users if done correctly. The way it is right now, you can't teach the model anything. When it gets something wrong, it will probably get the same thing wrong again in another chat.
phallus · 31m ago
That's not how LLMs work.
javier_e06 · 1h ago
I use AI to solve problems, not to check the weather or decide what to wear. As such, it makes sense for AI to remember when it hits the nail on the head.
leetbulb · 1h ago
Agreed. Typically I would be against something like this, but in this case, have it.
AlexandrB · 23m ago
How do you feel about this data being used to target advertising at you in the inevitable rush to monetize these AI products?
homarp · 1h ago
internet2000 · 1h ago
I’m fine with that.
dudefeliciano · 21m ago
you are fine with paying 20, 90, or 200 euros a month AND having your data mined? i must be getting old...
wat10000 · 1h ago
Rather misleading title. Missing the important “unless you ask them not to” part. Sounds like a bit of a dark pattern to push you into accepting it and that’s not cool, but you do get a choice.
ratg13 · 1h ago
I can understand training AIs on books, and even internet forums, but I can't see how training an AI on lots of dumb questions, with probably an excessive number of grammar and spelling errors, will somehow make it smarter.
nrclark · 51m ago
Depends on how you’re using the data. There’s a pretty strong correctness signal in the user behavior.

Did they rephrase the question? Probably the first answer was wrong. Did the session end? Good chance the answer was acceptable. Did they ask follow-ups? What kind? Etc.
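A minimal sketch of the heuristics described above, assuming you could log what the user does after each answer. All event names, labels, and the decision rule here are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: mining a weak "correctness signal" from
# chat-session behavior. Event names and the labeling rule are
# invented for illustration, not any vendor's actual pipeline.

def weak_label(events: list[str]) -> str:
    """Assign a weak quality label to an assistant answer based on the
    ordered list of user actions that followed it."""
    if "rephrased_question" in events:
        # Asked the same thing in different words: first answer
        # probably missed the mark.
        return "probably-bad"
    if not events:
        # Session ended silently: answer was probably acceptable
        # (or the user just ragequit, so this signal is noisy).
        return "probably-good"
    if "follow_up" in events:
        # Drilled deeper; could be engagement or confusion.
        return "ambiguous"
    return "unknown"
```

Even under these assumptions, the labels are only weak supervision; the ragequit case shows why each one would need to be aggregated over many sessions rather than trusted individually.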

dudefeliciano · 16m ago
> Did the session end? Good chance the answer was acceptable.

Or that the user just ragequit

mrweasel · 21m ago
They train AI on Reddit and Stack Overflow questions, I can't see it getting any worse.
dahsameer · 1h ago
> and even internet forums

i would say internet forums also include a lot of dumb questions

ratg13 · 55m ago
Agreed, but people generally take a small pause before saying stuff online.

In 'private', people are less ashamed of their ignorance, and also know they can say gibberish and the AI will figure it out.

hkon · 1h ago
With the amount of times Claude is visiting my websites I'd say they are very desperate for data.
SirFatty · 1h ago
"going forward" ;-)
gooob · 1h ago
and now the LLM gets to observe itself, heh heh heh