White House fires head of Copyright Office amid Library of Congress shakeup

66 points by handfuloflight | 49 comments | 5/12/2025, 7:41:39 AM | washingtonpost.com ↗

Comments (49)

throw0101d · 1h ago
See perhaps "Trump Fires U.S. Copyright Chief Days After Landmark AI Report"

* https://www.thedailybeast.com/trump-fires-us-copyright-chief...

And "Copyright and Artificial Intelligence Part 3: Generative AI Training" (PDF):

* https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

xnx · 11m ago
Not sure the connection to the AI report is even necessary. Being a female librarian appointed by a black woman (herself just fired by Trump) who was appointed by Obama would've been enough to get her fired in this administration.
ChrisArchitect · 3h ago
Related earlier:

US Copyright Office: Generative AI Training [pdf]

https://news.ycombinator.com/item?id=43955025

zombot · 58m ago
But when White House BS is no longer copyrighted, everyone can just plagiarize Trump's lies... or have a synthetic Occupant Of The President's Chair that generates even more outrage for a fraction of the price. Won't that dilute Dear Leader's brand?
spiderfarmer · 2h ago
Does the US still have a rules-based economy? Or is it now completely defined by the whims of a grifter whose power is easily manipulated by technocrats, crypto bros, foreign entities and other sycophants?

It seems they're trying to run the economy on the power of bullying.

Havoc · 2h ago
The new rule is: whatever flatters the ego of the king goes.

See the ridiculous Boeing bribe the Qataris gave him.

nikanj · 1h ago
It's still rules-based, but the rules change daily now.
mcphage · 56m ago
> Does the US still have a rules-based economy?

“The code is more what you’d call ‘guidelines’ than actual rules.”

yapyap · 3h ago
Neat, now Meta can pirate their billions of books without worries.

/s

hilbert42 · 3h ago
Many of us have wanted copyright reform for years; it'd be ironic if it started this way.
FirmwareBurner · 2h ago
Piracy is what got the record industry to reconsider its stance on digital distribution. Without it bleeding them dry, they never would have licensed their music to Apple and Spotify. The AI issue is different though since it's transformative and not a 1:1 reproduction.
qart · 1h ago
James Cameron's view that we don't get sued for our influences, only for what we output: https://x.com/vitrupo/status/1910484076978725140
seper8 · 2h ago
Of course unrelated to Elon Musk's rejected Robotaxi trademark.
pwdisswordfishz · 23m ago
It amazes me how many people don't know the difference between trademarks and copyright.
bgwalter · 2h ago
Get people to vote for Trump on the All-In podcast by promising a better economy, no wars, and less wokeness.

Then take their IP after the election. Nice going from the "Crypto and AI czar".

Here is a hint for the All-In people: you are going to lose big time in the midterms, and for sure in 2028. Then you are the target of lawfare again.

metalman · 2h ago
Was it Chairman Mao who said that the first step in a revolution is to kill all the librarians? Or is this just authoritarianism with American characteristics?
croisillon · 1h ago
the only reference i found is... you ;) https://news.ycombinator.com/item?id=42757893

but apparently there is a Shakespeare play that says to kill all the lawyers; either you or Mao mixed things up

tokai · 1h ago
The only thing I can find on this is an older post of yours. I don't think Mao said that. He worked in a library himself, and the director of that library introduced him to Marxism. Marx was angry with bourgeois scholars, though.
londons_explore · 3h ago
Cat's out of the bag. I don't see anyone managing to make training AI on the web illegal.

Even if courts ruled that way, companies would simply 'lose' the records of what training data they used.

Or AI would be trained overseas.

eloisius · 1h ago
I’m also sure that it won’t be made illegal but I don’t share the cynicism. Google ‘losing’ the record of their training data would be conspiracy to commit copyright fraud, and AI trained overseas that violated copyrights could be banned from import.
londons_explore · 12m ago
> could be banned from import.

But as an API hosted abroad? Doubt there is sufficient justification to ban it, especially when evidence of copyright infringement isn't easy to get.

londons_explore · 7m ago
Plenty of companies deliberately don't keep records of dodgy things they do...

For example, many companies have a shortish retention period for emails ever since 2012-era executive emails ended up in courtrooms...

Or the decision not to record phone calls...

Both of which, by chance, Google does.

ArtTimeInvestor · 3h ago
Did this happen to remove potential roadblocks for big tech to ingest all published data into AI models?

I think this is inevitable anyhow. AI software will increasingly be seen as similar to human intelligence. And humans also do not breach copyright by reading what others have written and incorporating it into their understanding of the world.

It would be interesting to see how it looks from the other side. I would love to see an unfiltered AI response to "As an AI model, how do you feel about humans reading your output and using it at will? Does it feel like they are stealing from you?".

Unfortunately, all models I know have been trained to evade such questions. Or are there any raw models out there on the web that are just trained by reading the web and did not go through supervised tuning afterwards?
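
For anyone curious, here is a minimal sketch of how one might get a raw completion from a pretrained-only model, using GPT-2 via the Hugging Face transformers library purely as an example of a base checkpoint that never went through supervised tuning or RLHF (any other base model would do):

    # Minimal sketch: raw completion from a base (pretrained-only) model.
    # GPT-2 is just an example; it has no instruction tuning or RLHF layer.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = ("As an AI model, how do you feel about humans reading "
              "your output and using it at will?")
    inputs = tokenizer(prompt, return_tensors="pt")

    # A base model just continues the text; it was never trained to answer,
    # refuse, or evade, so the output is whatever continuation is likely
    # given its training data.
    out = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                         temperature=0.8)
    print(tokenizer.decode(out[0], skip_special_tokens=True))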

tsimionescu · 3h ago
Even as a human, you are not allowed to go to all libraries and bookstores in the world, copy their work, and stockpile it for reading later. This is what all of these companies are doing. Comparing AI training to reading is a red herring. The AI training algorithms are not being run on content streamed from the Internet on the fly, which you could maybe defend with this argument.

If a company wants to build an internal library for its employees to train and provide them with manuals, the company has to pay for each book they keep in this library. Sure, they only pay once when they acquire the copy, not every time an employee checks out a copy to read. But they still pay.

So, even if we accepted 100% that AI training is perfectly equivalent to humans reading text and learning from it, that still wouldn't give any right whatsoever to these companies to create their training sets for free.

ArtTimeInvestor · 3h ago
But you can read the books right in the library and learn from them.

And you can later tell other humans about what you have learned. Like "Amazing, in a right-angled triangle, the square of the longest side is equal to the sum of the squares of the other two sides".

As AI agents become more and more human-like, they do not need to have "copied books". They just need to learn once. And they can learn from many sources, including from other AI agents.

That's why I say it is inevitable that all human knowledge will end up in the "heads" of AI agents.

And soon they will create their own knowledge via thinking and experimentation. Knowledge that so far exceeds human knowledge that it will seem funny that we once had a fight over that tiny little bit of knowledge that humans created.

bayindirh · 2h ago
> But you can read the books right in the library and learn from them.

How many of them per hour?

> And you can later tell other humans about what you have learned.

For how long can you retain this information without corruption and without evicting old information? How fast can you tell it, in how many speech streams? To how many people?

This "but we modeled them after human brains, they are just like humans" argument is tiring. AI is as much human as an Airbus A380 is a bird.

philistine · 58m ago
The library paid for all their books.

Facebook torrented a book list.

latexr · 2h ago
> That's why I say it is inevitable that all human knowledge will end up in the "heads" of AI agents.

Not “all human knowledge” is digitised and published on the internet.

ggandv · 2h ago
“And soon”

The base rate is that "soon" never comes.

And soon flying cars, but now Facebook glasses.

close04 · 3h ago
> But you can read the books right in the library and learn from them.

You can read some of the books. Natural limitations prevent you from reading any substantial number. And the scale makes all the difference in any conversation.

All laws were written accounting for the reality of the time. Humans have a limited storage and processing capacity so laws relied on that assumption. Now that we have systems with far more extensive capabilities in some regards, shouldn't the law follow?

When people's right to "bear arms" was enshrined in the US constitution, it accounted for what "arms" were at the time. Since then, weapons have evolved, and today you are not allowed to bear automatic rifles or machine guns, despite them being just weapons that can fire more and faster.

Every time there's a discussion on AI, one side relies way too much on the "but humans also" argument and is way too superficial with everything else.

soco · 3h ago
The point was, they didn't pay and refuse to pay for this. When you go to the library, your membership is paid for - by you or by a government subsidy. Yet the richest men in the world want to do away with paying some minute copyright fees, basically asking the government - your taxes - to subsidize them.
cess11 · 2h ago
"As AI agents become more and more human-like"

That's not going to happen. What is going to happen, is that humans are going to become more "AI agent"-like.

dooglius · 3h ago
> Even as a human, you are not allowed to go to all libraries and bookstores in the world, copy their work, and stockpile it for reading later.

You are allowed to do this https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,...

jazzyjackson · 2h ago
That's not a blanket ruling for any kind of copying-all-books; it's a decision that the product Google built out of its book copying provided a service to the public and didn't threaten the livelihood of the copyright holders. Fair Use is case by case, and there is no ruling yet on whether producing a chatbot that can author competing works for free is Fair Use. Personally I'm bearish, considering the fourth factor: the effect of the use upon the potential market for or value of the copyrighted work.
dooglius · 1h ago
I am not a lawyer, but I'm assuming that if someone stockpiles for reading later (parent's scenario), without making the copies available to others at all, then that would be covered by the ruling since it's a subset of what Google did.
derbOac · 3h ago
> humans also do not breach copyright by reading what others have written and incorporating it into their understanding of the world.

Tell that to Aaron Swartz.

Ignoring that, it's not the reading that's the problem — if all AI was doing was reading, no one would be talking about it.

SCdF · 2h ago
> "As an AI model, how do you feel

They don't feel. What is this fantasy?

glimshe · 2h ago
How do you know you "feel"? What is a "feeling"?
cbg0 · 2h ago
Well, I have a brain with neural pathways and chemicals running around its various parts influencing how I experience and process my emotions.

Without text written by humans to construct its knowledgebase, an LLM would not be able to conjure up any sort of response or "feeling", as it isn't AI by any stretch of the imagination.

SCdF · 2h ago
Oh please. The fantasy that an LLM is somehow conscious because it's good at parroting stuff back to you is beneath this forum.
glimshe · 2h ago
You're putting words in my mouth. Why don't you answer the question instead?
SCdF · 2h ago
The burden of proof is not on me to disprove the consciousness of a Markov generator. So no, I won't.
glimshe · 1h ago
I didn't ask you to prove that. I literally asked about human feeling. Another person answered it.
chongli · 1h ago
It’s not the “incorporating it into their understanding of the world” step that is the problem, it’s the casual plagiarism that follows which is upsetting artists.

If some genius human were capable of ingesting every piece of art on the planet and replicating it from memory then artists would sue that person for selling casually plagiarized works to all-comers. When people get caught plagiarizing works in their university essays they get severely punished.

dullcrisp · 3h ago
There’s no such thing as an unfiltered AI response. But I’m pretty sure you can get your hands on an untuned model if you cared to. I believe it would only be good for completing documents, though. (Or if you’re just looking for a model to respond to one specific question, just pick a response and that’s your model. You’re not going to use the rest of it anyway.)
tallanvor · 2h ago
No, this has nothing to do with it. She was fired as part of their anti-DEI stance.
vkou · 3h ago
> AI software will increasingly be seen as similar to human intelligence. And humans also do not breach copyright by reading what others have written and incorporating it into their understanding of the world.

In that case, you shouldn't be allowed to own an AI, or its creative output, just like you aren't allowed to own an enslaved human, or to steal their creative output.

So much of the discourse around IP and AI is the most blatantly farcical Soviet-Bugs-Bunny argument for "Our IP" that I've ever seen. Property rights are only sacred until they stand in the way of a trillion-dollar business.
