Create Missing RSS Feeds with LLMs

2 points by alastairr | 1 comment | 5/9/2025, 4:00:16 PM | taras.glek.net

Comments (1)

PaulHoule · 12h ago
The general story about the LLM-scraper problem has three parts: (1) companies like OpenAI run badly implemented web crawlers to get training data; (2) with LLMs, scrapers could do content understanding (inference) that would make them far more useful; and (3), which I think is even more impactful, LLMs will empower people to write scrapers who would never have written them before.

I kinda laugh at (3) because it's been a running gag for me that management vastly overestimates the effort to write scrapers and crawlers, because they've been burned by vastly underestimating the effort to develop what look like simple UI applications.

They usually think "this will be a hassle to maintain", but it usually isn't, because: (a) the target web sites usually never change in a significant way, since UI development is such a hassle, and (b) the target web sites usually never change in a significant way, since Google will punish them if they do. [1]

It takes like 10 minutes to write a scraper if you do it all the time and have an API like BeautifulSoup at your fingertips, probably 20 minutes to vibe code it if you don't.
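
To make that concrete, here's roughly what the ten-minute version looks like with requests and BeautifulSoup. This is just a sketch against a hypothetical gallery page; the div.gallery-item selector and the field names are made up for illustration and would get swapped out per site:

    import requests
    from bs4 import BeautifulSoup

    def scrape_gallery(url: str) -> list[dict]:
        """Fetch a page and pull out image URLs and captions."""
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        items = []
        # "div.gallery-item" is a hypothetical selector -- the handful
        # of per-site rules mentioned below usually live right here
        for div in soup.select("div.gallery-item"):
            img = div.find("img")
            if img and img.get("src"):
                items.append({
                    "src": img["src"],
                    "caption": img.get("alt", ""),
                })
        return items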

I am still using the same HTML scraper to process image galleries today that I used to process Flickr galleries back in the '00s. For a while the pattern was "fight with OAuth to log into an API for 45 minutes" or "spend weeks figuring out how to parse MediaWiki markup", and then "get the old scraper working in less than 15 minutes". Frequently the scraper works perfectly out of the box, sometimes it works 80% out of the box, and it always gets to 100% by adding a handful of rules.

I work on a product that has a React-based site, and it seems the "state of the art" in scraping a URL [2] like

   https://example.com/item/8788481
is to download the HTML and then all the JavaScript and CSS and other stuff with no cache (for every freaking page), run the JavaScript, and have something scrape the content out of the DOM, whereas they could just go to

   https://example.com/api/item/8788481
and get the data they want in JSON format, which could be processed like item["metadata"]["title"] or just stuffed into a JSONB column and queried any way you like. Login is not "fight with OAuth" but something like "POST the username and password to https://example.com/api/login with a client that has a cookie jar". I don't really think "most people are stupid" that often, but I think it all the time when web scraping is involved.
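
That whole flow is just a few lines. A sketch, assuming the example.com endpoints above return JSON and guessing that the login takes a JSON body (the field names are assumptions):

    import requests

    session = requests.Session()  # a client with a cookie jar

    # POST the username and password; field names here are hypothetical
    session.post(
        "https://example.com/api/login",
        json={"username": "me", "password": "secret"},
        timeout=30,
    )

    # the session carries the login cookie on subsequent requests
    resp = session.get("https://example.com/api/item/8788481", timeout=30)
    resp.raise_for_status()
    item = resp.json()

    # processed like item["metadata"]["title"], as above
    print(item["metadata"]["title"])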

[1] They even have a patent for it! People who run online ad campaigns A/B test everything, but the last thing Google wants is for an SEO to be able to settle questions like "will my site rank higher if I put a certain phrase in a <b>?"

[2] ... as in, we see people doing it in our logs