Web-scraping AI bots cause disruption for scientific databases and journals

26 points by tchalla | 6/10/2025, 8:25:01 PM | nature.com

Comments (8)

atonse · 6h ago
How was this not a problem before with search engine crawlers?

Is this more of an issue with having 500 crawlers rather than any single one behaving badly?

Ndymium · 54m ago
Search engine crawlers generally respected robots.txt and limited themselves to a trickle of requests, likely scaled to the relative popularity of the website. These bots do neither: they will crawl anything they can access and send enough requests per second to drown your server, especially if you're a self-hoster running your own little site on a dinky server.

Search engines never took my site down, these bots did.
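
For contrast, this is roughly what the "well-behaved" crawler described above does before fetching anything: check robots.txt and honor the site's crawl delay. A minimal sketch using Python's standard-library robotparser; the site URL and user-agent string are placeholders, not from the article:

```python
from urllib import robotparser
import time

SITE = "https://example.org"   # placeholder site
USER_AGENT = "ExampleBot"      # placeholder crawler name

# Fetch and parse the site's robots.txt before crawling anything.
rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

def polite_fetch(path: str) -> None:
    url = f"{SITE}{path}"
    # 1. Respect disallow rules.
    if not rp.can_fetch(USER_AGENT, url):
        print(f"skipping {url}: disallowed by robots.txt")
        return
    # 2. Respect the site's requested crawl delay (default to 1 s if unset).
    delay = rp.crawl_delay(USER_AGENT) or 1.0
    time.sleep(delay)
    print(f"fetching {url}")   # a real crawler would issue the HTTP request here

polite_fetch("/articles/some-paper")
```

The AI scrapers being complained about skip both steps, which is why even a small site can be overwhelmed.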

OutOfHere · 7h ago
Requiring proof-of-work (PoW) could help for simple requests: the server rejects a request until it includes a nonce that satisfies the challenge. Unfortunately, this collective PoW could burden power grids even more, wasting energy, money, and computation on every transmission. Such is life. It would be a lot better to just upgrade the servers, but that's never going to be sufficient.
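
For concreteness, a minimal sketch of the nonce check described above, assuming a SHA-256 hash-prefix difficulty target; the difficulty value and challenge format are illustrative, not any particular product's protocol:

```python
import hashlib
import secrets

DIFFICULTY = 20  # require 20 leading zero bits in the hash (illustrative setting)

def issue_challenge() -> str:
    """Server side: hand the client a random challenge string."""
    return secrets.token_hex(16)

def verify(challenge: str, nonce: str) -> bool:
    """Server side: accept a request only if its nonce meets the difficulty target."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

def solve(challenge: str) -> str:
    """Client side: brute-force a nonce; this is where the visitor's CPU (and power) goes."""
    counter = 0
    while not verify(challenge, str(counter)):
        counter += 1
    return str(counter)

challenge = issue_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
print(f"accepted request with nonce {nonce}")
```

Verification on the server is one hash; solving on the client is on the order of 2^DIFFICULTY hashes, which is exactly the energy cost the rest of this subthread objects to.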
Bjartr · 6h ago
OutOfHere · 6h ago
Yes, although the concept is simple enough in principle that a homegrown solution also works.
Zardoz84 · 6h ago
We are wasting power on feeding statistical parrots, and we need to waste additional power to avoid being DoSed by that feeding.

We would be better off without that useless waste of power.

treyd · 6h ago
What do you propose we, as website owners, do to prevent our websites from being DoSed in the meantime? And how do you suppose we convince (or beg) the corporations running AI scraping bots to be better users of the web?
OutOfHere · 4h ago
This should be an easy question for an engineer. It depends on whether the constraint is CPU, memory, the database, or the network.
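
Neither comment names a concrete mitigation, but a common first step regardless of which resource is the bottleneck is per-client rate limiting in front of the expensive path. A token-bucket sketch; the capacity, refill rate, and client key are illustrative assumptions:

```python
import time
from collections import defaultdict

CAPACITY = 10        # burst size per client (illustrative)
REFILL_RATE = 1.0    # tokens added per second (illustrative)

# client id -> (tokens remaining, timestamp of last update)
buckets: dict[str, tuple[float, float]] = defaultdict(
    lambda: (float(CAPACITY), time.monotonic())
)

def allow_request(client_id: str) -> bool:
    """Return True if the client may proceed, False if it should get a 429."""
    tokens, last = buckets[client_id]
    now = time.monotonic()
    # Refill tokens according to elapsed time, capped at the bucket capacity.
    tokens = min(CAPACITY, tokens + (now - last) * REFILL_RATE)
    if tokens < 1:
        buckets[client_id] = (tokens, now)
        return False
    buckets[client_id] = (tokens - 1, now)
    return True

# Example: a client hammering the server is cut off once its burst is spent.
for i in range(15):
    print(i, allow_request("198.51.100.7"))
```

This helps against a single misbehaving client; the harder case raised earlier in the thread is hundreds of distinct crawlers, each individually under the limit.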