If you have a Claude account, they're going to train on your data moving forward
210 points by diggan on 8/29/2025, 11:39:26 AM | 73 comments | old.reddit.com ↗
The cynic in me wonders if part of Anthropic's decision process here was that, since nobody believes you when you say you're not using their data for training, you may as well do it anyway!
Giving people an opt-out might even increase trust, since people can now at least see an option that they control.
This is why I love-hate Anthro, the same way I love-hate Apple. The reason is simple: Great product, shitty MBA-fueled managerial decisions.
But including paid accounts and doing 5-year retention is confounding.
I know once you delete something on Discord it's poof, and that's the end of that. I've reported things that, if anyone at Discord could have accessed a copy, they would have called the police. There are a lot of awful trolls on chat platforms who post awful things.
That's not what Discord themselves say, is that coming from Discord, the police or someone else?
> Once you delete content, it will no longer be available to other users (though it may take some time to clear cached uploads). Deleted content will also be deleted from Discord’s systems, but we may retain content longer if we have a legal obligation to preserve it as described below. Public posts may also be retained for 180 days to two years for use by Discord as described in our Privacy Policy (for example, to help us train models that proactively detect content that violates our policies). - https://support.discord.com/hc/en-us/articles/5431812448791-...
There seems to be some process that decides whether the content should be deleted faster, or kept for between 180 days and 2 years. So even for Discord, "once you delete something on Discord it's poof" isn't 100% accurate.
Yes, of course, to both of those. Discord is a for-profit business with a limited number of humans who can focus on things, so the less they have to focus on, the better (in the minds of the people running the business, at least). So why do anything when you can do nothing and everything stays the same? Of course, when someone has a warrant they really do have to act, but unless there is one, there's no incentive for them to do anything about it.
- Google: active storage for "around 2 months from the time of deletion" and in backups "for up to 6 months": https://policies.google.com/technologies/retention?hl=en-US
- Meta: 90 days: https://www.meta.com/help/quest/609965707113909/
- Apple/iCloud: 30 days: https://support.apple.com/guide/icloud/delete-files-mm3b7fcd...
- Microsoft: 30-180 days: https://learn.microsoft.com/en-us/compliance/assurance/assur...
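For comparison, those windows can be sketched as data (figures taken from the linked policies above; the dictionary and helper name are mine, not from any vendor):

```python
from datetime import date, timedelta

# Approximate post-deletion retention windows, in days,
# as described in each vendor's linked policy.
RETENTION_DAYS = {
    "Google (active storage)": 60,   # "around 2 months"
    "Google (backups)": 180,         # "up to 6 months"
    "Meta": 90,
    "Apple/iCloud": 30,
    "Microsoft": 180,                # upper end of the 30-180 day range
}

def purge_deadline(deleted_on: date, vendor: str) -> date:
    """Latest date the data should still exist after a deletion request."""
    return deleted_on + timedelta(days=RETENTION_DAYS[vendor])

print(purge_deadline(date(2025, 8, 29), "Apple/iCloud"))  # 2025-09-28
```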
So if it ends up that they are storing data longer there can be consequences (GDPR, CCPA, FTC).
The question is: how does that affect their choices? How much ends up being gated that previously would have ended up in the open?
Me: I am using a local variant (and attempting to build something I think I can control better).
It annoys me greatly that I have no tick box on Google to tell them "go and adapt the models I use to my Gmail, Photos, Maps, etc." I don't want Google to ever be mistaken about where I live - I have told them 100 times already.
This idea that "no one wants to share their data" is just assumed, and it permeates everything. Like the soft-ball interviews a popular science communicator did with DeepMind folks working in medicine: every question was prefixed by a litany of caveats that were all about 1) people's assumed aversion to sharing their data, and 2) the horrors and disasters that will befall us should we share it.

I have not suffered any horrors. I'm not aware of any major disasters. I am aware of major advances in medicine in my lifetime, and ultimately that process does involve controlled data collection and experimentation. Looks like a good deal to me, tbh. I go out of my way to tick all the NHS boxes too, to "use my data as you see fit". It's an uphill struggle: the defaults are always "deny everything", the tick boxes never go away, and there is no master checkbox "use any and all of my data and never ask me again" to tick.
As we've seen LLMs fully regenerate text from their sources (or at least come close enough), aren't you the least bit worried about your personal correspondence magically appearing in the wild?
I am sure they will have a corporate carve-out, otherwise it makes them unusable for some large corps.
I wonder how much they can rely on the data and what kind of "knowledge" they can extract. I never give feedback, and most of the time (let's say 5 out of 6) the result cc produces is simply wrong. How can they know whether the result is valuable or not?
At the end of the day it doesn't matter. You got the wrong answer and didn't complain, so why would they care?
Edit: I just logged in to opt out, they presented me with the switch directly. It was two clicks.
Disclaimer: not a Claude user (not even a prospective one)
It’s the reverse. This was opt-in and is now opt-out. "Opt" means choose, so when “the default is opt-in” it means the option is “no” by default and you have the option to make it “yes”.
This is what the comment I was replying to said. I took that to mean "you have to opt out (ie you're opted in by default)".
Feels like the complaint is precisely that people don’t want them to make this change.
> this is exactly how I'd want them to do it.
Seems naive to believe it will always be done like this, especially for new users.
Anyway, I’ll block them like I do everything.
But now that you bring up ads, I guarantee you that those will somehow be incorporated in Claude soon.
"ccusage" is telling me I would have spent $2010.90 in the last month if I was paying via the API, rather than $200.
But I also do feel Claude Code is quite a bit better than other things I've used, even with the same model. I'm not sure why, though: it's a fairly simple program with only a few prompts and a few tools, so it seems like others could catch up immediately by learning some lessons from it.
I upgraded after I hit the equivalent spend in API fees in a month.
Did they rephrase the question? Probably the first answer was wrong. Did the session end? Good chance the answer was acceptable. Did they ask follow-ups? What kind? Etc.
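Those behavioral signals could be sketched as a toy heuristic (the field names, branches, and labels are invented for illustration; nothing here is from Anthropic):

```python
def guess_answer_quality(session: dict) -> str:
    """Toy classifier for implicit feedback inferred from user behavior."""
    if session.get("rephrased_question"):
        return "probably wrong"       # user restated the same question
    if session.get("follow_ups", 0) > 0:
        return "engaged"              # depends on what kind of follow-ups
    if session.get("ended_after_answer"):
        return "probably acceptable"  # or the user just ragequit
    return "unknown"

print(guess_answer_quality({"rephrased_question": True}))  # probably wrong
```

The "session ended" branch is exactly the ambiguous one: the same signal covers both a satisfied user and a ragequit.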
Or that the user just ragequit
I would counter that internet forums also include a lot of dumb questions.
In 'private', people are less ashamed of their ignorance, and also know they can say gibberish and the AI will figure it out.