Proposal: AI Content Disclosure Header

44 points | exprez135 | 28 comments | 8/26/2025, 9:08:34 PM | ietf.org

Comments (28)

nrmitchi · 21m ago
This seems like a (potential) solution looking for a nail-shaped problem.

Yes, there is a huge problem with AI content flooding the field, and being able to identify/exclude it would be nice (for a variety of purposes)

However, the issue isn't that content was "AI generated"; as long as the content is correct and is what they were looking for, users don't really care.

The issue is content that was generated en masse, is largely not correct/trustworthy, and serves only to game SEO/clicks/screentime/etc.

A system where the content you are actually trying to avoid has to opt in is doomed to failure. Is the purpose/expectation here that search/CDN companies attempt to classify and identify "AI content"?

weddpros · 2h ago
Maybe we should avoid training AI with AI-generated content: that's a use case I would defend.

Still, I believe MIME would be the right place to say something about the media, rather than the transport protocol.
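
For example, a hypothetical media-type parameter (nothing any registry defines today, purely an illustration) would travel with the content itself rather than the transport:

    Content-Type: text/html; charset=utf-8; ai-disclosure=machine-generated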

On a lighter note: we should consider second-order consequences. The EU Commission will demand its own EU-AI-Disclosure header be sent to EU citizens, and will require consent from the user before showing them AI-generated stuff. The UK will require age validation before showing AI stuff to protect the children's brains. France will use the header to compute a new tax on AI-generated content, due from all online platforms that want to show AI-generated content to French citizens.

That's a Pandora's box I wouldn't even talk about, much less open...

ronsor · 24m ago
> The EU Commission will demand its own EU-AI-Disclosure header be sent to EU citizens, and will require consent from the user before showing them AI-generated stuff. The UK will require age validation before showing AI stuff to protect the children's brains. France will use the header to compute a new tax on AI-generated content, due from all online platforms that want to show AI-generated content to French citizens.

I think the recent drama related to the UK's Online Safety Act has shown that people are getting sick of country-specific laws simply for serving content. The most likely outcome is sites either block those regions or ignore the laws, realizing there is no practical enforcement avenue.

blibble · 1h ago
> Maybe we should avoid training AI with AI-generated content: that's a use case I would defend.

if this takes off I'll:

   - tag my actual content (so they won't train on it)
   - not tag my infinite spider web of automatically generated slop output (so it'll poison the models)
win win!
ronsor · 27m ago
then they'll start ignoring the header and it'll be useless

(of course, it was never going to be useful)

AKSF_Ackermann · 4h ago
It feels like a header is the wrong tool for this. Even if you hypothetically wanted to disclose that, would you expect a blog CMS to offer the feature? Or a web browser to surface it?
throwaway13337 · 4h ago
Can we have a sponsored-content disclosure header instead?

I'd love to browse without that.

It does not bother me that someone used a tool to help them write if the content is not meant to manipulate me.

Let's solve the actual problem.

handfuloflight · 53m ago
We already have those legally mandated disclosures per the FTC.
judge123 · 1h ago
I'm genuinely torn. On one hand, transparency is good. But on the other, I can totally see this header becoming a lazy filter for platforms to just automatically demote or even block any AI-assisted content. What happens to artists using AI tools, or writers using it for brainstorming?
xgulfie · 1h ago
They can adapt or get left behind
woah · 1h ago
Seems like someone just trying to get their name on a published IETF standard for the bragging/resume rights
userbinator · 19m ago
Approximately as useless as "do not track".
xgulfie · 2h ago
This is like asking the fox to announce itself before entering the henhouse
rossant · 4h ago
Interesting initiative but I wonder if the mode provides sufficient granularity. For example, what about an original human-generated text that is entirely translated by an AI?
kelseyfrog · 26m ago
It certainly doesn't cover the case of mixed-origin content. Say for example, a dialog between a human and AI or even mixed-model content.

For those, my instinct is to fall back to markup, which would seem to work quite well. There is still the pesky issue of AI content in non-markup formats - think JSON, which doesn't have the same orthogonal flexibility for annotating metadata.
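
A rough sketch of that markup fallback (the data-ai-disclosure attribute is made up here, not something the draft or HTML defines):

    <article>
      <p>Human-written framing of the exchange...</p>
      <blockquote data-ai-disclosure="machine-generated">
        Model output quoted verbatim...
      </blockquote>
    </article>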

dijksterhuis · 4h ago
> what about an original human-generated text that is entirely translated by an AI?

probably ai-modified -- the core content was first created by humans, then modified (translated into another language). translating back would hopefully return you the original human generated content (or at least something as close as possible to the original).

    | class             | author | modifier/reviewer | 
    | ----------------- | ------ | ----------------- | 
    | none              | human  | human/none        | 
    | ai-modified       | human  | ai                | <--*
    | ai-originated     | ai     | human             |
    | machine-generated | ai     | ai/none           |
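so the translated-article case might be served with something like this (assuming the header is named AI-Disclosure and carries a mode key, which is my reading of the draft, not a confirmed syntax):

    AI-Disclosure: mode=ai-modified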
patrickhogan1 · 2h ago
The bigger challenge here is that we already struggle with basic metadata integrity. Sites routinely manipulate creation dates for SEO - I regularly see 5-year-old content timestamped as "published yesterday" to game Google's freshness signals.

While this doesn't invalidate the proposal, it does suggest we'd see similar abuse patterns emerge once this header becomes a ranking factor.

grumbel · 3h ago
Completely the wrong way around. We are heading into a future where everything will be touched by AI in some way, be it things like Photoshop Generative Fill, spell check, subtitles, face filters, upscaling, translation or just good old algorithmic recommendations. Even many smartphones already run AI over every photo they take.

Doing it in an HTTP header is furthermore extremely lossy: files get copied around and that header ain't coming with them. It's not a practical place to put that info, especially when we have Exif inside the images themselves.

The proper way to handle this is to mark authentic content and keep a trail of how it was edited, since that's the rare thing you might want to highlight in a sea of slop. https://contentauthenticity.org/ is trying to do that.

layer8 · 2h ago
Why only for HTTP? This would be appropriate for MIME multipart/mixed part headers as well. ;)

Maybe better define an RDF vocabulary for that instead, so that individual DIVs and IMGs can be correctly annotated in HTML. ;)

ivape · 31m ago
This is a gentlemen's agreement humans will not keep. Not how our species works.
vntok · 3h ago
This feels like the Security Flag proposal (https://www.ietf.org/rfc/rfc3514.txt)
gruez · 2h ago
or end up like california prop 65 warnings: https://en.wikipedia.org/wiki/1986_California_Proposition_65
ugh123 · 3h ago
Hoping I don't need to click on something, or have something obstructing my view.
odie5533 · 2h ago
The cookie banner just got 200px taller.
GuinansEyebrows · 3h ago
Maybe an ignorant question, but at the dictionary level, how would one indicate that multiple providers/models went into the resulting work (based on the example given)? Is there a standard for nested lists?
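
If the header really is an RFC 8941 Structured Fields dictionary (which the example seems to suggest, though I'm assuming here), inner lists would be the obvious fit, e.g.:

    AI-Disclosure: mode=ai-originated, model=("gpt-4o" "claude-3.5-sonnet"), provider=("openai" "anthropic")

but I don't know whether the draft actually allows list-valued members.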
shortrounddev2 · 4h ago
Years ago people were arguing that fashion magazines should have to disclose if they photoshopped pictures of women to make them look skinnier. France implemented this law, and I believe other countries have as well. I believe that we should have similar laws for AI generated content.
xhkkffbf · 4h ago
I'm all for some kind of disclosure, but where do we draw the line? I use a pretty smart grammar and spell checker, one that's got more "AI" in it to analyze the sentence structure. Is that AI content?
stillpointlab · 3h ago
According to the spec, yes a grammar checker would be subject to disclosure:

> ai-modified Indicates AI was used to assist with or modify content primarily created by humans. The source material was not AI-generated. Examples include AI-based grammar checking, style suggestions, or generating highlights or summaries of human-written text.