Bill to Restrict AI Companies' Unauthorized Use of Copyrighted Works for Training

27 OutOfHere 11 7/22/2025, 2:42:43 AM deadline.com ↗

Comments (11)

CaptainFever · 5h ago
This is an expansion of copyright law, which, just as a reminder, is already pretty insane with its 100-year durations and all.
Frieren · 6h ago
> and liable legally—when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent

This part is even more important. Personal data is being used to train models. It's all very dystopian, with a cyberpunk flavor.

pyman · 6h ago
We failed to stop Microsoft and Facebook from using our private data and WhatsApp messages to train their algorithms. Now we need to learn from the mess they created and stop Microsoft and OpenAI from using our conversations with AI to train their models, build LLM versions of ourselves, and sell them to banks, recruiters, or anyone willing to pay good money to get inside our minds.
OutOfHere · 50m ago
This is the AI-killer bill that would hand a decisive victory to China.
nunez · 2h ago
Glad to finally see big-name politicians rally around this. That it's a bipartisan effort was extremely surprising.
nullc · 2h ago
This would basically grant Facebook and Google a monopoly on AI, as they'll make training on your material part of their TOS and then be the only players with enough market power to get adequate amounts of training material.
OutOfHere · 49m ago
It would grant China an even bigger victory since China's models do not have to abide by any US copyrights.
pyman · 6h ago
Imagine if we stole all the documents stored on Google's private servers and all their proprietary code, research, and everything they've built, and used it to create a new company called Poogle that competes directly with them.

And just like that, after 24 hours of stealing all their IP, we launch:

- Poogle Maps

- Poogle Search

- Poogle Docs

- Poogle AI

- Poogle Phone

- Poogle Browser

And here's the funny part: we claim the theft of their data is "fair use" because we changed the company's name and rewrote their code in another language.

Doesn't sound right, does it? So why are Microsoft (OpenAI, Anthropic) and Google financing the biggest act of IP theft in the history of the internet and telling people and businesses that stealing their private data and content to build competing products is somehow "fair use"?

Just like accountants log every single transaction, companies should log every book, article, photo, or video used to train their models, and compensate the copyright holders every time that content is used to generate something new.
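As a rough illustration of what that kind of ledger could look like, here's a minimal sketch in Python. Every name, field, and design choice in it is hypothetical, made up for the example; it isn't how any of these companies actually run their pipelines.

```python
# Hypothetical sketch of a per-item training ledger, in the spirit of the
# accounting analogy above. Field names and the CSV format are illustrative only.
import csv
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class TrainingRecord:
    content_hash: str    # fingerprint of the work that went into the corpus
    title: str
    rights_holder: str
    license: str         # e.g. "licensed", "public-domain", "unknown"
    ingested_at: str


def log_training_item(path: str, raw_bytes: bytes, title: str,
                      rights_holder: str, license: str) -> TrainingRecord:
    """Append one ingested work to an auditable CSV ledger."""
    record = TrainingRecord(
        content_hash=hashlib.sha256(raw_bytes).hexdigest(),
        title=title,
        rights_holder=rights_holder,
        license=license,
        ingested_at=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(record).keys()))
        if f.tell() == 0:  # first write: emit the header row
            writer.writeheader()
        writer.writerow(asdict(record))
    return record


# Example: record a single book before it is added to a training corpus.
if __name__ == "__main__":
    log_training_item("training_ledger.csv", b"full text of the book ...",
                      title="Some Novel", rights_holder="Jane Author",
                      license="unknown")
```

Nothing about this is hard to build; the point is that an auditable record per work would make attribution and compensation tractable instead of a black box.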

The whole "our machines are black boxes, they’re so intelligent we don't even know what they're doing" excuse doesn't cut it anymore.

Stop with the nonsense. It's software, not voodoo.

pyman · 5h ago
Also, did OpenAI make its API publicly available to generate revenue, or to share responsibility and distribute the ethical risk across developers, startups, and enterprise customers, hoping that widespread use would eventually influence legal systems over time?

Let's be honest: the US government and defence sector have massive budgets for AI, and OpenAI could have taken that route, just like SpaceX did, especially after claiming they're in a tech war with China. But they didn't, which feels contradictory and raises some red flags.

edgineer · 2h ago
Poor analogy. Also, AI companies do hobble their models so they can't, e.g., draw Mickey Mouse.
pyman · 1h ago
So are you saying the theft is selective and intentional, and that they don't target Disney because Disney has a global army of top lawyers? You've just reinforced my point.