> Reddit claims in its complaint that Anthropic’s scraper bots ignored the social network’s robots.txt files
A custom scraper can simply ignore robots.txt. Why couldn't Reddit detect the scraping and/or rate-limit it (which only slows the scrape down rather than stopping it)? Would letting the scraping proceed create legal jeopardy and liability for Anthropic? And is paying whatever fines result just a cost of doing business for Anthropic?
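To the point about robots.txt being unenforceable: compliance is entirely opt-in on the client side. A polite bot parses the file and checks each URL before fetching; a scraper that wants the data just never performs that check. A minimal sketch with Python's standard `urllib.robotparser` (the rules below are illustrative, not Reddit's actual robots.txt):

```python
from urllib import robotparser

# robots.txt is purely advisory: a well-behaved client parses it
# and consults can_fetch() before every request. A scraper that
# skips this step faces no technical barrier at all.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",   # hypothetical rule: forbid all crawling
])

# A polite bot would check this and decline to fetch:
allowed = rp.can_fetch("MyBot", "https://www.reddit.com/r/all")
print(allowed)  # False -- but nothing stops an impolite client
                # from issuing the request anyway
```

Which is why enforcement has to come from elsewhere: server-side bot detection, rate limiting, IP blocking, or, as in Reddit's complaint, the courts.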