I was wrong about robots.txt

54 points by EPendragon | 7/17/2025, 12:35:57 AM | evgeniipendragon.com

Comments (40)

Falkon1313 · 2h ago
This is kinda amusing.

robots.txt's main purpose back in the day was curtailing search-engine penalties when you got stuck maintaining a badly built dynamic site that had tons of dynamic links and effectively got penalized for duplicate content. It was basically a way of saying "Hey search engines, these are the canonical URLs; ignore all the other ones with query parameters or whatever that give almost the same result."

It could also help keep 'nice' crawlers from getting stuck crawling an infinite number of pages on those sites.
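
For example, something like this (hypothetical paths, and note the * wildcard was a later, Google-era extension rather than part of the original convention):

    User-agent: *
    # Steer crawlers away from the endless query-parameter
    # variants and toward the canonical URLs.
    Disallow: /*?sort=
    Disallow: /*?sessionid=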

Of course it never did anything for the 'bad' crawlers that would hammer your site! (And there were a lot of them, even back then.) That's what IP bans and such were for. You certainly wouldn't base it on something like User-Agent, which the user agent itself controlled! And you wouldn't expect the bad bots to play nicely just because you asked them.

That's about as naive as the Do-Not-Track header, which was basically kindly asking companies whose entire business is tracking people to just not do that thing that they got paid for.

Or the Evil Bit proposal, to suggest that malware should identify itself in the headers. "The Request for Comments recommended that the last remaining unused bit, the "Reserved Bit" in the IPv4 packet header, be used to indicate whether a packet had been sent with malicious intent, thus making computer security engineering an easy problem – simply ignore any messages with the evil bit set and trust the rest."

MiddleMan5 · 56m ago
It should be noted here that the Evil Bit proposal was an April Fools RFC https://datatracker.ietf.org/doc/html/rfc3514
pi_22by7 · 1h ago
So it did the same work that a sitemap does? Interesting.

Or maybe more like the opposite: robots.txt tells bots what not to touch, while sitemaps point them to what should be indexed. I didn't realize its original purpose was to manage duplicate-content penalties, though. That adds a lot of historical context to how we think about SEO controls today.

tbrownaw · 12m ago
> And you wouldn't expect the bad bots to play nicely just because you asked them.

Well, yes, the point is to tell the bots what you've decided to consider "bad" and will ban them for. So that they can avoid doing that.

Which of course only works to the degree that they're basically honest about who they are or at least incompetent at disguising themselves.

dumbfounder · 2h ago
I created a search engine that crawled the web way back in 2003. I used a proper user agent that included my email address. I got SO many angry emails about my crawler, which played as nice as I was able to make it play. Which was pretty nice, I believe. If it wasn't Google, people didn't want it. That's a good way to prevent anyone from ever competing with Google. It isn't just about that preview for LinkedIn; it's about making sure the web is accessible to everyone and everything that is trying to make its way. Sure, block the malicious ones. But don't just assume that every bot is malicious by default.
Jach · 9m ago
I guess back in 2003 people would expect an email to actually go somewhere; these days I would expect it to either go nowhere or just be part of a campaign to collect server-admin emails for marketing/phishing purposes. Angry emails are always a bit much, but I wonder if they aren't sent as often anymore in general, or if people just stopped posting them publicly to point and laugh at, wondering what goes through someone's mind to get upset enough to send one.

My somewhat silly take on seeing a bunch of information like emails in a user agent string is that I don't want to know about your stupid bot. Just crawl my site with a normal user agent, and if there's a problem I'll block you based on that problem. It's usually not a permanent block, and it's also usually set up with something like fail2ban, so it's not usually an instant request drop. If you want to identify yourself as a bot, fine, but take a hint from googlebot and keep the user agent short, with just your identifier and an optional short URL. Lots of bots respect this convention.

But I'm just now reminded of some "Palo Alto Networks" company that started dumping their garbage junk in my logs. They have the audacity to include messages in the user agent like "If you would like to be excluded from our scans, please send IP addresses/domains to: scaninfo@paloaltonetworks.com" or "find out more about our scans in [link]". I put a rule in fail2ban to see if they'd take a hint (how about your dumb bot detects that it's blocked and stops/slows of its own accord?), but I forgot about it until now; it seems they're still active. We'll see if they stop after being served nothing but zipbombs for a while before I just drop every request with that UA. It's not that I mind the scans; I'd just prefer to not even know they exist.
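
(For the curious, a fail2ban filter keyed on a user agent takes only a few lines. A hypothetical sketch, assuming the stock "combined" access-log format, not necessarily the exact rule described above:)

    # /etc/fail2ban/filter.d/noisy-scanner.conf (hypothetical name)
    [Definition]
    # <HOST> is fail2ban's placeholder for the client IP; in a
    # "combined" log line the final quoted field is the user agent.
    failregex = ^<HOST> .* "[^"]*paloaltonetworks\.com[^"]*"$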

TylerE · 1h ago
That's easy to say when it's your bot, but I've been on the other side long enough to know that the problem isn't your bot, it's the 9000 other ones just like it, none of which will deliver traffic anywhere close to worth the resources consumed by scraping.
kijin · 29m ago
True. Major search engines and bots from social networks have a clear value proposition: in exchange for consuming my resources, they help drive human traffic to my site. GPTBot et al. will probably do the same, as more people use AI to replace search.

A random scraper, on the other hand, just racks up my AWS bill and contributes nothing in return. You'd have to be very, very convincing in your bot description (yes, I do check out the link in the user-agent string to see what the bot claims to be for) in order to justify using other people's resources on a large scale and not giving anything back.

An open web that is accessible to all sounds great, but that ideal only holds between consenting adults. Not parasites.

tomrod · 2h ago
> But don’t just assume that every bot is malicious by default.

I'll bite. It seems like a poor strategy to trust by default.

ronsor · 2h ago
I'll bite harder. That's how the public Internet works. If you don't trust clients at all, serve them a login page instead of content.
tickettotranai · 1h ago
In fairness this appears to be the direction we are headed anyway
__loam · 2h ago
It sucks that we're living in a landscape where bad actors take advantage of that way of doing things.
KTibow · 1h ago
It sucks more that Cloudflare/similar have responded to this with "if your handshake fingerprints more like curl than like Chrome/Firefox, no access for you".
edoceo · 1h ago
Or getting a CAPTCHA in Chrome when visiting a site you've been to dozens of times (Stack Overflow). Now I just skip that content; it's probably in my LLM already anyway.
realusername · 3m ago
It's the same thing as the anti-piracy ads: you only annoy legitimate customers. This aggressive CAPTCHA campaign just makes Stack Overflow decline even faster than it would have otherwise.
sltkr · 1h ago
The really bad actors are going to ignore robots.txt entirely. You might as well be nice to the crawlers that respect robots.txt.
PeterStuer · 30m ago
Even if you want to play nice, robots.txt is a catch-22, as accessing it is taken as a signal that you are a 'bot' by misconfigured anti-bot 'solutions'.
chasebank · 1h ago
Bad actors will always exploit whatever systems are available to them. Always have, always will.
yodon · 1h ago
What astounds me is there are no readily available libraries crawler authors can reach for to parse robots.txt and meta robots tags, to decide what is allowed, and to work through the arcane and poorly documented priorities between the two robots lists, including what to do when they disagree, which they often do.

Yes, there's an ancient Google reference parser in C++11 (which is undoubtedly handy for that one guy who is writing crawlers in C++), but not a lot for the much more prevalent Python and JavaScript crawler writers who just want to check if a path is OK or not.

Even if bot writers WANT to be good, it's much harder than it should be, particularly when lots of the robots info isn't even in the robots.txt files, it's in the index.html meta tags.
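
(To be fair, Python's standard library does ship a bare-bones parser for the robots.txt half of this. A minimal sketch, with a hypothetical bot name; the meta-tag side is still entirely on you:)

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the file

    # Answers only for robots.txt rules; <meta name="robots"> tags
    # in the pages themselves still need separate handling.
    if rp.can_fetch("MyBot/1.0", "https://example.com/some/path"):
        print("allowed to crawl")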

PaulKeeble · 3h ago
The problem isn't the robots that do follow robots.txt, it's all the bots that don't. robots.txt is largely irrelevant now; compliant bots don't represent most of the traffic problem. They certainly don't include the bots that are going to hammer your site without any regard, because those bots don't follow robots.txt.
anonnon · 2h ago
Not sure why you were downvoted. I have zero confidence that OpenAI, Anthropic, and the rest respect robots.txt however much they insist they do. It's also clear that they're laundering their traffic through residential ISP IP addresses to make detection harder. There are plenty of third-parties advertising the service, and farming it out affords the AI companies some degree of plausible deniability.
s-mon · 2h ago
Having worked on bot detection in the past: some really simple, old-fashioned attacks worked by doing the opposite of what the robots.txt file says.

While I doubt it does much today, that file really only matters to those who want to play by the rules, and that is not an awful lot of the free web anymore, I'm afraid.

kookamamie · 34m ago
You shouldn't worry about LinkedIn, the cancer of the internet.
donohoe · 1h ago
I try to stay away from negative takes here, so I’ll keep this as constructive as I can:

It’s surprising to see the author frame what seems like a basic consequence of their actions as some kind of profound realization. I get that personal growth stories can be valuable, but this one reads more like a confession of obliviousness than a reflection with insight.

And then they posted it here for attention.

spookie · 10m ago
They posted it here because they wouldn't appear on Google otherwise (:
zem · 29m ago
it's mostly that they didn't think of the page-preview fetcher as a "crawler", and did not intend for their robots.txt to block it. it may not be profound, but it's at least not a completely trivial realisation. and heck, an actual human-written blog post can only improve the average quality of the web.
ceautery · 1h ago
LinkedIn is by far the worst offender in post previews. The doctype tag must be all lowercase. The HTML document must be well-formed (the meta tags must be in an explicit <head> block, for example). You must have OG meta tags for url, title, type, and image. The url meta tag gets visited, even if it's the same address the inspector is already looking at.
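
(Concretely, a minimal head along those lines, with placeholder values:)

    <!doctype html>
    <html>
    <head>
      <meta property="og:url" content="https://example.com/post" />
      <meta property="og:title" content="Example post" />
      <meta property="og:type" content="article" />
      <meta property="og:image" content="https://example.com/preview.png" />
      <title>Example post</title>
    </head>
    <body>...</body>
    </html>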

Fortunately, the post inspector helps you suss out what's missing in some cases, but c'mon, man, how much effort should I spend helping a social media site figure out how to render a preview? Once you get it right, to quote my 13-year-old: "We have arrived, father... but at what cost?"

orionblastar · 3h ago
This is a mistake that many websites make: in trying to block all robots, they break the robots that serve their blog posts to users.
xnx · 3h ago
Agree. If you don't want it out there, put it in your journal or require a login.
reaperducer · 2h ago
Not every web site is a blog. Not every web site can be legally put behind a login.
dylan604 · 2h ago
What kind of information legally cannot be put behind a login?
PeterStuer · 24m ago
The worst offenders I come across: official government information that needs to be public, placed behind Cloudflare, preventing even their M2M feeds (RSS, Atom, ...) from being accessed.
therein · 1h ago
Maybe he's talking about stuff you're required by law to disclose but don't really want seen too widely. Like a code of conduct, terms of service, retractions, or public apologies.
bryanhogan · 1h ago
Yes, there's often not much reason to block bots that abide by the rules. It just makes your site not show up on other search indexes and introduces problems for users. Malicious bots won't care about your robots.txt anyway.
happymellon · 27m ago
Most bots don't serve the blog post to users.
qwerty2000 · 1h ago
Dude can't even grammar.
babuloseo · 1h ago
Isn't LinkedIn dead?
PeterStuer · 19m ago
I feel it is morphing into Twitter/Facebook/Instagram more each day.

It used to be this ultra-fake eternal job-interview site, but people now seem uninhibited enough to go on wild political rants even there.

kookamamie · 33m ago
One can dream.
nxpnsv · 30m ago
It can still be more dead