OpenAI is retaining all ChatGPT logs "indefinitely." Here's who's affected

116 points by Bender | 54 comments | 6/6/2025, 3:21:24 PM | arstechnica.com ↗

gnabgib · 21h ago
Related: OpenAI slams court order to save all ChatGPT logs, including deleted chats (1101 points, 2 days ago, 906 comments) https://news.ycombinator.com/item?id=44185913
ViktorRay · 20h ago
Here is the direct link to OpenAI's official response:

https://openai.com/index/response-to-nyt-data-demands/

That official response was discussed on this website yesterday. Here is a link to the discussion:

https://news.ycombinator.com/item?id=44196850

JumpCrisscross · 18h ago
“This does not impact ChatGPT Enterprise or ChatGPT Edu customers.

This does not impact API customers who are using Zero Data Retention endpoints under our ZDR amendment.”

The court order seems reasonable. OpenAI must retain everything it can retain and has not promised not to retain.

senko · 16h ago
I cannot find it now because OpenAI blocks archive.org (oh, the irony), but previously their API privacy policy said no data retention beyond a 30 (or 90, I can't recall) day period for abuse monitoring. I know because I was researching this for an (EU) customer.

Now, the promise is still to not train on API inputs/outputs, but the retention promise is nowhere to be found, unless you're an Enterprise customer (ZDR).

Moreover, at least in my understanding of the order, the court ordered them to keep ALL data, not "all except the data you promised not to keep":

> OpenAI is NOW DIRECTED to preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying), whether such data might be deleted at a user’s request or because of “numerous privacy laws and regulations” that might require OpenAI to do so.

So in effect, the statement you quoted is false, and OpenAI is actually in breach of their privacy policies.

I asked ChatGPT to parse this in case I misunderstood it, and its interpretation was:

> The company is ordered to preserve all output log data that would otherwise be deleted, going forward, regardless of the reason for deletion (user request, privacy law, default retention policy, etc.).

> In other words: They must stop deleting any output log data, even if it would normally be deleted by default or at a user's request.

> This is more than just keeping data they would normally retain — it's a preservation order for data they would otherwise destroy.

As a result, OpenAI is now unusable for serious business use in Europe. Since the competition (Anthropic, Google) is not affected by this court order, the only loser here is OpenAI.

JumpCrisscross · 16h ago
> in effect, the statement you quoted is false, and OpenAI is actually in breach of their privacy policies

The statement is from OpenAI’s own press release. I still wouldn’t argue that their lies somehow make the court order unreasonable.

> OpenAI is now unusable for serious business use in Europe

This is entirely in OpenAI’s control. They could convert everyone in the EU to ZDR. They choose not to, and that’s their right. (As it is the right of the EU to deem that noncompliance.)

throwaway314155 · 17h ago
From bio: > Trade private equity

> The court order seems reasonable.

Checks out.

JumpCrisscross · 16h ago
Fair enough, I’m by default deferential to our courts.

In this case, however, there is a simple solution if OpenAI doesn’t want to save data: don’t retain it in the first place. If OpenAI committed to privacy as a value, their protests would have merit. But in this case it sounds like they want the ability to retain the data as well as delete it despite being in the midst of litigation. That’s simply not a privilege anyone is afforded by the courts. (We’re also talking about the Times versus a $100+ bn tech giant whose CEO has direct lines to heads of state. There is no David to default sympathy to.)

thegrim33 · 17h ago
Could also just look at their submission history; all they do is post political content every single day of their life. Such a person is not going to have overly impartial/intelligent takes on issues. I wish there was a plugin that could do sentiment analysis and auto-filter out such users from submissions/comments shown to me. Or just a button I could manually click to never see a person's posts again.
kleiba · 19h ago
> Late Thursday, OpenAI confronted user panic over a sweeping court order requiring widespread chat log retention—including users' deleted chats—after moving to appeal the order that allegedly impacts the privacy of hundreds of millions of ChatGPT users globally.

When "delete" actually means "hide from my view", you can only hope that you live in a country with strong privacy and data protection laws.

sixothree · 19h ago
Even when companies are honest about how "delete" works, they will use weasel language such as "delete from history" or "delete from inbox" instead of actually doing the thing the user intends.
bombcar · 18h ago
Part of that is that the company doesn't even know what it does internally: sometimes data is purged instantly; other times it gets marked deleted in a database table that never gets purged until the fifteen-billion-row table takes down the service.
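A rough sketch of that soft-delete pattern (table and column names are made up for illustration, not anything any particular company actually uses):

    # "Delete" flags the row; nothing is ever physically removed.
    import sqlite3
    from datetime import datetime, timezone

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE chats (
            id INTEGER PRIMARY KEY,
            user_id INTEGER NOT NULL,
            body TEXT NOT NULL,
            deleted_at TEXT  -- NULL = visible, timestamp = "deleted"
        )
    """)
    conn.execute("INSERT INTO chats (user_id, body) VALUES (1, 'hello')")

    def soft_delete(chat_id):
        conn.execute(
            "UPDATE chats SET deleted_at = ? WHERE id = ?",
            (datetime.now(timezone.utc).isoformat(), chat_id),
        )

    def visible_chats(user_id):
        # What the user sees: only rows not flagged as deleted.
        return conn.execute(
            "SELECT id, body FROM chats WHERE user_id = ? AND deleted_at IS NULL",
            (user_id,),
        ).fetchall()

    soft_delete(1)
    print(visible_chats(1))                                       # [] -- gone from the UI
    print(conn.execute("SELECT COUNT(*) FROM chats").fetchone())  # (1,) -- still on disk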
fizx · 17h ago
At a tech company of >1k engineers, this gets audited regularly by a dedicated data retention team.
jajko · 18h ago
A company that wants to know, knows. The more central this data is to its reason for existing, the better and more current that knowledge is. Don't give these aholes a pass they never deserved; they don't have any moral high ground or moral "credits" to burn. OpenAI is no different.
xeromal · 18h ago
Half the time this is due to people wanting to recover from accidentally deleting stuff.

IMO hard deleting things is generally a bad practice until the user wants to delete their entire account or some other very explicit action is executed.

josefritzishere · 18h ago
It is still standard practice at most large firms to delete data after certain intervals of time. The failure to have and follow a data deletion policy is a huge legal risk... even if that interval is 10 years. Forever is the one definitively wrong answer.
xeromal · 18h ago
That's what I meant by very explicit action.
yreg · 16h ago
I have no experience with this, but I imagine actually deleting files from some giant collection of data that needs to be safely backed up is borderline impossible, no?

I expect that the big tech companies have all kinds of cold storage backups and no one is going to actually go spelunking in those archives to physically delete my data when I delete an email. It's more likely that they will delete the keys to decrypt it, but even the keys must be safely stored somewhere, so it's the same problem just with less data.
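A toy illustration of that "delete the key, not the data" idea (often called crypto-shredding); purely a sketch of the general technique, not a claim about how any particular provider actually works:

    # Each user's data is encrypted with a per-user key before it reaches backups;
    # "deleting" the data means destroying the key, not touching the backups.
    from cryptography.fernet import Fernet, InvalidToken

    user_keys = {"alice": Fernet.generate_key()}
    backup = {"alice": Fernet(user_keys["alice"]).encrypt(b"an old email")}

    del user_keys["alice"]  # the "delete": the key is gone, the ciphertext is not

    try:
        Fernet(Fernet.generate_key()).decrypt(backup["alice"])  # wrong key
    except InvalidToken:
        print("backup copy still exists, but can no longer be decrypted")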

LadyCailin · 16h ago
> When "delete" actually means "hide from my view", you can only hope that you live in a country with strong privacy and data protection laws.

I do, but presumably that doesn't matter, as the US thinks its legal code outweighs the legal code for Europeans living in Europe. Joke's on Europeans for allowing Americans to take over the world stage for too long, I suppose.

longnguyen · 20h ago
Shameless plug: I built Chat Vault to help you import all chat data from ChatGPT, Claude, Grok and Le Chat (Mistral) into a native Mac app.

You can search, browse and continue your chats 100% offline.

It’s free while in beta https://testflight.apple.com/join/RJx6sP6t

subarctic · 18h ago
Is this something I could do with openwebui?
accrual · 17h ago
Open WebUI does support import/export from a JSON file, but may need a translation for ChatGPT data.
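For what it's worth, a hedged sketch of that translation step: flattening a ChatGPT data export (conversations.json) into plain role/text records another tool could import. The export format is undocumented and may change; the field names below ("mapping", "message", "author", "content.parts", "create_time") are assumptions based on past exports, so verify them against your own file:

    import json

    def flatten_conversation(conv):
        nodes = conv.get("mapping", {}).values()
        msgs = [n["message"] for n in nodes if n.get("message")]
        msgs.sort(key=lambda m: m.get("create_time") or 0)  # roughly chronological
        out = []
        for m in msgs:
            parts = (m.get("content") or {}).get("parts") or []
            text = "\n".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                out.append({"role": m.get("author", {}).get("role", "unknown"),
                            "text": text})
        return out

    with open("conversations.json", encoding="utf-8") as f:
        conversations = json.load(f)

    flat = [{"title": c.get("title") or "untitled",
             "messages": flatten_conversation(c)}
            for c in conversations]
    print(f"flattened {len(flat)} conversations")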
nopelynopington · 19h ago
Nice
pier25 · 19h ago
> The order impacts users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI’s application programming interface (API)

So how does this work with services using the API like Copilot or Cursor?

Is OpenAI now storing all the code sent to the API?

dawnerd · 19h ago
I think it's only safe to assume the answer is yes.
tintor · 18h ago
Depends on whether Cursor / Copilot are using the Zero Data Retention API or not.
Dachande663 · 18h ago
Question for the crowd: if using the OpenAI service in Azure, is that included in the retention? OpenAI say API access but don’t specify if that’s just their endpoints or anyone running their models.
filmgirlcw · 17h ago
You’d have to check with Microsoft. OpenAI says that this doesn’t apply to customers with a Zero Data Retention endpoint policy, but my recollection is that Azure OpenAI doesn’t fall into that category unless it’s something that is explicitly paid for. That said, OpenAI also says that ChatGPT Enterprise customers aren’t impacted (aside from their standard policies around how long it takes to delete data, which they say is within 30 days), but only Microsoft would know if their API usage counts as “enterprise” or not.
triceratops · 20h ago
It would be a bit much if this linked to an NYT article.
creaturemachine · 20h ago
With the requisite paywall-bypass link as the first comment.
puppycodes · 20h ago
I'd be very surprised if they weren't already doing this; the major change, however, might be attribution for those queries?

If you think this is scary you should see what google has been doing for a decade.

Trasmatta · 19h ago
I seriously doubt they were already doing this. What would the benefit have been? The vast majority of users will never delete their chats, so it's not like they lose a ton of data by hard deleting conversations.
puppycodes · 18h ago
The benefit is that you can train models on all the data you receive, and you also want some audit trail of how that data got there. This is just a hunch though!
Trasmatta · 18h ago
What I mean is that the vast majority of conversations never get deleted by users anyway. So why risk breaking privacy laws (and their own privacy policy) for the percentage that do?
gmuslera · 19h ago
It's not only ChatGPT users who are affected; users of other hosted LLMs are too, since most of those providers can get the same kind of orders from the jurisdictions they belong to.
pxc · 21h ago
> Magistrate Judge Ona Wang granted the order within one day of the NYT's request. She agreed with news plaintiffs that it seemed likely that ChatGPT users may be spooked by the lawsuit and possibly set their chats to delete when using the chatbot to skirt NYT paywalls.

Are users who deliberately skirt paywalls ever shy about it? Since when?

paulddraper · 20h ago
Since lawsuits I guess.
pxc · 19h ago
Are they going after individual readers like those old RIAA and MPAA campaigns? Or does this just refer to the lawsuit between these two companies?
jonplackett · 20h ago
If removing paywalls works then why do they need everyone’s chats to prove it?

Just set up your own account, bypass a paywall, save that chat log.

This seems completely unnecessary.

stuffoverflow · 20h ago
This whole case is even more stupid when you take into account how NYT's paywall is implemented. Anyone can bypass it just by refreshing the page and stopping the load immediately after the contents become visible.

I don't know what ChatGPT uses to browse the web, but it wouldn't surprise me if it repeated stuff from those paid articles because it uses wget or something similar that doesn't support JS, so the paywalls weren't effective.

howard941 · 16h ago
Didn't this stop working about 6 months ago, when the site added a delay about 4 paragraphs in, saying it was checking your IP address before sending the rest of the article?
wvenable · 19h ago
The paywall news sites want to have their cake and eat it too -- they want web crawlers, like Google, to read the full contents of the article but hide it from site visitors.

If they simply put all their content behind the paywall, entirely and effectively, then this would be a non-issue.

If ChatGPT is getting this content it's literally because they allow it.

gruez · 19h ago
>The paywall news sites want to have their cake and eat it too -- they want web crawlers, like Google, to read the full contents of the article but hide it from site visitors.

There's nothing contradictory about this? Plenty of companies give free access to journalists/reviewers/influencers, with the hope that they'll draw in paying customers. Wanting to only give free access to certain people isn't "wanting to have their cake and eat it too". It's standard business practice and well within the rights of publishers/rights holders.

wvenable · 18h ago
Yes, there is. They don't want ChatGPT to have access, but they don't prevent ChatGPT from accessing it. Technically, they're giving everyone free access. By actually preventing access, they would completely mitigate this problem.
lcnPylGDnU4H9OF · 16h ago
> They don't want ChatGPT to have access but they don't prevent access by ChatGPT.

They don't want ChatGPT to do certain things after accessing it and they don't prevent access by ChatGPT. They don't mind if ChatGPT accesses it.

nhinck2 · 19h ago
I mean you can also walk out of a store without paying before they can stop you. Doesn't change the nature of the offence.
pxc · 18h ago
The point is that there's no need to resort to using something like ChatGPT to avoid the paywall, so most people who want to avoid the paywall wouldn't bother with using ChatGPT to do it.
mjburgess · 19h ago
In the air at the moment there seems to be a counter-revolutionary sentiment against the managerial class, the administrative state, and institutions of mass monitoring and the like. There have been several events (or forces) intruding into the lives of the masses -- covid, AI, adtech -- which are making visible the "exorbitant privilege" of being a rent-seeker on top of people's mass communication.

Impositions like this, of course, put the rent-seeking in people's faces -- but the privilege always remains, pending a court order or an executive demand. It feels like there's a mass loss of patience with this, a kind of "lay anarchism" that is presently spreading across the popular internet. Many are exhausted by the power that accrues to AI companies from the non-consensual theft of the online social commons. Others, by political censorship. Others, by the security state. And so on. But it's all directed towards the same privilege made possible by internet centralisation.

Who will channel this politically, and where it will go, isn't of course clear at the moment. The anger seems febrile, and it's more than some fussy libertarians. The economic conditions of egregious rent-seeking and the political conditions of mass control are visible to the people. It all seems a little mid-19th C.

crmd · 19h ago
I personally agree fully with your sentiment, but I don’t see any place in today’s American polity where this spark could catch fire, because there’s no donor class that would ultimately benefit. It would have to be like the French Revolution.
mjburgess · 19h ago
Well, each of these institutions is targeting people in power; it may only be a matter of "one more wrong move" before there's a critical mass of the political elite willing to tear it down. I think DOGE was never actually a cost-saving exercise; that was just a disguise for an attempt to destroy the counter-political power of the administrative state -- and it isn't so hard to imagine a political party trying again, this time competently. And so on with each of these institutions. There's quite a lot of bipartisan support now for breaking up monopolies, dramatic restrictions on the power of executive police forces, and the like. For different reasons on either side, but the targets are the same.
JumpCrisscross · 18h ago
> DOGE was never actually a cost saving exercise, that was just a disguise for an attempt to destroy the counter-political power of the administrative state

It was a distraction from tax policy. Raising regressive taxes on imports, cutting services to the poor and cutting taxes on the rich.

JumpCrisscross · 18h ago
> It would have to be like the French Revolution

The elites after the French Revolution were not only mostly the same as before, they escaped with so much money and wealth that it's actually debated whether they increased their wealth share through the chaos [1].

If America started teetering towards a meltdown-type revolution, the first folks to move in would be foreign powers.

[1] https://www.jstor.org/stable/650023

nancyminusone · 18h ago
Plus, they had an emperor just 5 years after the revolution was finished
JumpCrisscross · 16h ago
I used to find the popular fascination with the myth of the French Revolution having been terrible for the rich and empowering for the poor bemusing. But I’m wondering, now, if it’s dangerous.