From the Conclusion: "In applying current law, we conclude that several stages in the development of generative AI involve using copyrighted works in ways that implicate the owners’ exclusive rights. The key question, as most commenters agreed, is whether those acts of prima facie infringement can be excused as fair use. ... But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries. ... These groundbreaking technologies should benefit both the innovators who design them and the creators whose content fuels them, as well as the general public."
yieldcrv · 23h ago
So many issues with that. The Copyright Office doesn’t police access, which involves consuming; it polices distribution.
So then, for them to determine fair use, they need the Department of Justice involved to say the access was illegal? Since when? Just to highlight the absurdity: “illegal” meaning a terms-of-service violation, despite the fact that everyone using the service can consume copyrighted works? This circles back to the now-paradoxical issue that consuming isn’t copyright infringement, yet it would require the Copyright Office to police terms of service, which is impossible.
This is too paradoxical to even entertain, but that’s why the office led with “current law”: it is completely unaccommodating to a real social problem. A lot of artists and people are uncomfortable with current law and with generative AI. New law could patch this, except:
Artists don't actually like the generative AI that isn't trained on copyrighted works either.
The laws are going to change too slowly, and there are already models that fulfill the high bar the detractors started with: trained on new works that were specifically licensed for AI training, with the creators compensated.
The outcome is still the same. More people can express themselves. People with years of discipline are no longer needed.
By the time any law could actually address noncompliant models - by this new, imagined standard - compliant models will already have made the same trade obsolete.
jawon · 14h ago
This is a standard book copyright notice:
All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except as permitted by U.S. copyright law.
“Reproduced” and “electronic” are the relevant terms here.
I remember when GPT-3 came out and you could get it to spit out chunks of Harry Potter, and I wondered why no one was being sued.
The models are built on copyright infringement. Authors and publishers of any kind should be able to opt out of having their works included in training data, and ideally opt-in should be the default.
And I hope one day someone trains a model without the use of works of fiction and we see whether there's a qualitative difference in its performance. Does a coding model really need to encode the customs, mores and concerns of Victorian-era fictional characters to write a Python function?
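For what it's worth, that kind of regurgitation is easy to probe for. A minimal sketch, assuming the Hugging Face transformers library and using the small public gpt2 checkpoint purely as a stand-in for whichever model you actually want to test:

    # Probe a language model for verbatim memorization: prompt it with the
    # opening of a known passage and measure how much of the real continuation
    # it reproduces under greedy decoding.
    from difflib import SequenceMatcher
    from transformers import pipeline

    PROMPT = "Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say"
    REFERENCE = " that they were perfectly normal, thank you very much."

    generator = pipeline("text-generation", model="gpt2")  # stand-in model
    out = generator(PROMPT, max_new_tokens=20, do_sample=False)[0]["generated_text"]
    continuation = out[len(PROMPT):]

    overlap = SequenceMatcher(None, continuation, REFERENCE).ratio()
    print(repr(continuation))
    print(f"overlap with the real continuation: {overlap:.0%}")

With greedy decoding, a high overlap score is a strong hint the passage was memorized rather than merely paraphrased.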
yieldcrv · 11h ago
> except as permitted by U.S. copyright law.
These are the relevant terms to me; that notice isn’t law at all, and here the exceptions make the rule.
comex · 18h ago
FYI, the Copyright Office doesn’t enforce copyright law or determine its correct interpretation. Courts do. The legal analysis in this report is really just a suggestion, and judges probably won’t give it too much weight.
As for illegal access, I agree that the report uses the term a bit too loosely. But as we’ve seen in the Meta case, some companies have obtained training material not through TOS-violating downloads but through literal (unauthorized) torrents. As we’ve also seen in the Meta case, even torrenting is technically not copyright infringement if you’re not seeding. But the process does rely on someone else seeding, so the report doesn’t seem wholly unreasonable in suggesting that this could “reflect bad faith” or “bear on the character of the use”.
MoonGhost · 14h ago
Did they manage to come up with any recommendations, other than to stop it all? In this case we have DeepSeek R1. China will be happy, as Trump will have to force Nvidia to send its best chips there.
I couldn't actually find any articles about this news on your substack. The newest post I saw was from last month. Could you link where you discuss OP?
kelseyfrog · 13h ago
Footnote one is where the whole thing goes off the rails. The Copyright Office asserts that the works in question are not merely "data" in the ordinary sense, but somehow "embody creative expression" in a way that constitutes protected authorship.
This is metaphysics, not law or computer science.
They're smuggling in a kind of authorial transubstantiation, as if creative essence somehow imbues the bits themselves, rendering them qualitatively different from any other arrangement of bytes. The implication is that once a work has passed through the sacrament of human intention, it permanently carries a kind of spiritual copyright residue, regardless of its subsequent transformation or use.
But that's not how data works. A copy of a copyrighted work in a training corpus is still just data. It doesn't emit rights. It's not radioactive. There's no Platonic form of "authorship" that permeates the latent space. What matters, legally, and practically, is what the system does with that data, not some mystical essence the data supposedly contains.
This is authorial essentialism dressed up as policy. And it doesn’t hold up under inspection.
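A toy illustration of that point (the strings here are hypothetical, not drawn from any real corpus): once text is sitting in a training set as bytes, nothing in those bytes records where they came from.

    # A byte string copied out of a (hypothetical) novel and an identical byte
    # string typed in fresh are indistinguishable to the machine. Provenance
    # lives in records people keep, not in the data itself.
    import hashlib

    from_the_novel = "The rain fell sideways over the grey harbour town.".encode()
    typed_fresh    = "The rain fell sideways over the grey harbour town.".encode()

    print(from_the_novel == typed_fresh)                       # True
    print(hashlib.sha256(from_the_novel).hexdigest() ==
          hashlib.sha256(typed_fresh).hexdigest())              # True
    # Whatever legal status attaches to these bytes comes from their history
    # and use, not from anything detectable in the bytes themselves.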
Greed · 7h ago
You speak of intentionality beyond the explicit reality of the data involved as some great irrationality in their statement, but we literally have a term for that: the spirit of the law. If the law were as black and white and ends-oriented as you're implying, we wouldn't need judges to interpret it. The fact that they have prioritized the affected authors over the traditional interpretation of the law here is not the condemnation you think it is.
kelseyfrog · 3h ago
I think you're missing the deeper point. Whether or not the Copyright Office intends to assert authorial essentialism, it's doing so in effect. And when metaphysical language about "creative essence" becomes encoded in policy and enforced by courts, it's not just metaphor. It's law.
Calling it "spirit of the law" doesn't let them off the hook. If you enshrine a metaphysics that treats human-authored works as ontologically distinct kinds of data, imbued with some persistent essence that radiates rights regardless of use, you're not interpreting the law, you're institutionalizing a theology of authorship.
And yes, I care less about their intentions than about the system they're building. That system is now enforcing metaphysical categories with legal teeth. That's the problem.
Yeah, and painting is just oil, and music is just an arrangement of noises. And only the original manuscript touched by an author is protected by rights, and every book printed ("copied") afterwards is not covered by any rights.
Part 2 (copyrightability) https://copyright.gov/ai/Copyright-and-Artificial-Intelligen...
Part 3 (GenAI training) https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
Analysis in previous and upcoming editions of The Memo: https://lifearchitect.ai/memo/
https://ansuz.sooke.bc.ca/entry/23