Forget IPs: using cryptography to verify bot and agent traffic

71 points by todsacerdoti · 5/15/2025, 1:22:14 PM · blog.cloudflare.com

Comments (21)

lockhead · 7h ago
This would certainly help with detecting legit bots, but as an origin you would still have the same issue as before: you still need to discern between "real" users and all the malicious traffic. The number of "good" bots is far smaller, and thanks to their good behavior and transparent data they are much easier to identify even without this kind of mechanism. So to make real use of this, users would also need to do it, and suddenly "privacy hell" would be too kind a name for the result.
ok123456 · 5h ago
What about an SMTP proof of work extension? Smaller SMTP relays that would typically have a harder time sending mail could opt in to solve a problem to increase the chance of delivery. The difficulty of the problem could be inversely related to reputation.
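For a sense of how that could work, here's a minimal hashcash-style sketch in Python; the 0-100 reputation scale, the challenge string, and the bit counts are all made-up assumptions for illustration, not part of any SMTP extension:

    import hashlib
    import itertools

    def required_bits(reputation: int, max_bits: int = 20) -> int:
        """Difficulty is inversely related to sender reputation (0 = unknown, 100 = trusted)."""
        return max(0, round(max_bits * (100 - reputation) / 100))

    def leading_zero_bits(digest: bytes) -> int:
        bits = 0
        for byte in digest:
            if byte:
                return bits + 8 - byte.bit_length()
            bits += 8
        return bits

    def solve(challenge: str, bits: int) -> int:
        """Relay side: grind counters until the hash clears the difficulty bar."""
        for counter in itertools.count():
            digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
            if leading_zero_bits(digest) >= bits:
                return counter

    def verify(challenge: str, counter: int, bits: int) -> bool:
        """Receiver side: a single hash checks the submitted proof."""
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        return leading_zero_bits(digest) >= bits

    # A low-reputation relay has to grind harder than a well-known one.
    bits = required_bits(reputation=20)                        # -> 16 bits of work
    proof = solve("mail-from=example.org;ts=1747314134", bits)
    assert verify("mail-from=example.org;ts=1747314134", proof, bits)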
PaulHoule · 8h ago
There is a lot of talk about AI training being a driver of bot activity, but I think AI inference is also a driver, in two ways.

(1) It's always been easy to write bots [1] [2]. If you knew beautifulsoup well you could often write a scraper in 10 minutes; now people ask ChatGPT to write a scraper for them and have one ready in 15 minutes, so they're discovering how easy it is, and that you don't have to limit yourself to public APIs that are usually designed to limit access, not expand it.
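For illustration, the kind of ten-minute scraper being described is roughly this (assuming requests and beautifulsoup4 are installed; the URL and CSS selector are placeholders):

    import requests
    from bs4 import BeautifulSoup

    # Placeholder target; any listing page with predictable markup works the same way.
    resp = requests.get("https://example.com/articles",
                        headers={"User-Agent": "demo-scraper/0.1"}, timeout=30)
    resp.raise_for_status()

    soup = BeautifulSoup(resp.text, "html.parser")
    for link in soup.select("article h2 a"):          # placeholder selector
        print(link.get_text(strip=True), "->", link.get("href"))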

(2) Instead of using content to train an AI you can feed it into an AI for inference. For instance, you can tell the AI to summarize pages or to extract specific facts from pages or to classify pages. It's increasingly possible to develop a workflow like: classify 30,000 RSS feed items, select 300 items that the user will probably find interesting, crawl those 300 pages looking for hyperlinks to scientific journal articles or other links that would be better to post, crawl those links to see if the journal articles are open access, weigh various factors to decide what's likely to be the best link, do specialized image extraction so I can make a good social post, etc. It's not too hard to do but it all comes falling down if the bot has to click on fire hydrants endlessly.
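A rough skeleton of that kind of inference-driven workflow; classify_interest and looks_open_access are hypothetical stand-ins for the model calls, and the feed URL is a placeholder:

    import feedparser                      # assumed: pip install feedparser
    import requests
    from bs4 import BeautifulSoup

    def classify_interest(title: str, summary: str) -> float:
        """Stand-in for an LLM classification call; a crude keyword score here."""
        text = f"{title} {summary}".lower()
        return sum(w in text for w in ("crawler", "protein", "compiler")) / 3.0

    def looks_open_access(url: str) -> bool:
        """Stand-in for the 'is this journal article open access?' check."""
        return "arxiv.org" in url

    entries = feedparser.parse("https://example.org/feed.xml").entries   # placeholder feed
    ranked = sorted(entries, reverse=True,
                    key=lambda e: classify_interest(e.get("title", ""), e.get("summary", "")))

    for entry in ranked[:300]:             # keep the most promising items
        html = requests.get(entry.link, timeout=30).text
        links = [a["href"] for a in BeautifulSoup(html, "html.parser").find_all("a", href=True)
                 if "doi.org" in a["href"]]
        best = next((u for u in links if looks_open_access(u)), entry.link)
        print(entry.get("title", ""), "->", best)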

[1] Polite crawlers limit how many threads they have running against a single server; if you only have one thread per server you are unlikely to overload it. If you want a crawler with a large thread count that crawls a large number of servers, it can be a hassle to implement this, particularly if you want to maximize performance or run a large distributed crawler. However, a lot of the time I do a crawling project that targets one site or five sites, or that maybe crawls 1000 documents a day, and in those cases a single-threaded crawler is fine.
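A minimal sketch of that per-server politeness limit, assuming an asyncio crawler built on aiohttp: one semaphore per host caps concurrency against any single server while the overall crawl stays parallel.

    import asyncio
    from collections import defaultdict
    from urllib.parse import urlparse

    import aiohttp                         # assumed dependency

    PER_HOST_LIMIT = 1                     # one in-flight request per server
    host_locks = defaultdict(lambda: asyncio.Semaphore(PER_HOST_LIMIT))

    async def polite_fetch(session: aiohttp.ClientSession, url: str) -> str:
        host = urlparse(url).netloc
        async with host_locks[host]:       # queue behind other requests to the same host
            async with session.get(url) as resp:
                return await resp.text()

    async def crawl(urls: list[str]) -> list[str]:
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(polite_fetch(session, u) for u in urls))

    # asyncio.run(crawl(["https://example.com/", "https://example.org/robots.txt"]))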

[2] For some reason, my management has always overestimated the work of building scrapers, I think because they've been burned by UI development, which is always underestimated. The fact that UI development is such a bitch actually helps with crawler development -- you might be afraid that the target site is going to change, but between the high cost of making changes and the fact that Google will trash your SEO if you change anything about your site, the target site won't change.

showerst · 7h ago
Agreed on all points except [2]: I run many scrapers, and sites change _all the time_, often changing markup for seemingly random reasons. One government site I scrape flips its ids and classes between camelCase and snake_case every couple of weeks; it makes me wonder if it's a developer pulling a fast one on the client.
dboreham · 7h ago
Hearing this makes me suspect some tool auto-generates the ids and its config gets changed every couple of weeks by some spaces-vs-tabs battle between devs.
unsolved73 · 7h ago
Interesting proposal.

The current situation is getting worse day after day because everybody wants to ScRaPe 4lL Th3 W38!!

Verifying an Ed25519 signature is almost free on modern CPUs; I just wonder why they went with an obscure RFC for HTTP signatures instead of using plain JSON Web Tokens in a header.

JWTs are universal. Parsing this custom format will certainly lead to a few interesting bugs.
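To put rough numbers on "almost free", here's a hedged sketch with the Python cryptography package: a raw Ed25519 sign/verify round-trip plus a timing loop, not Cloudflare's or any JWT library's actual code path.

    import time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Stand-in for whatever gets signed (a JWT payload, an HTTP signature base, ...).
    message = b'{"agent":"example-bot","ts":1747314134}'
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)      # raises InvalidSignature on failure
        print("signature ok")
    except InvalidSignature:
        print("signature rejected")

    # Rough timing; on a modern machine this lands around tens of microseconds per verify.
    n = 2000
    start = time.perf_counter()
    for _ in range(n):
        public_key.verify(signature, message)
    elapsed = time.perf_counter() - start
    print(f"{n} verifications in {elapsed:.3f}s ({elapsed / n * 1e6:.0f} µs each)")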

nubinetwork · 8h ago
Using IPs requires next to no CPU power... if I have to start picking apart HTTP requests and running algorithms on the traffic, I might as well not even run websites, including personal ones.
probably_wrong · 6h ago
Wasn't that the argument against HTTPS, namely that it was too costly to run [1]? I also run fail2ban [2] on my servers and I rarely even notice it's there.

I'm not saying you should sit down with the iptables manual and start going through the logs, but I can see the idea taking off if all it takes is (say) one apt-get and two config lines.

[1] https://stackoverflow.com/questions/1035283/will-it-ever-be-...

[2] https://github.com/fail2ban/fail2ban

elithrar · 6h ago
IPs as identifiers aren’t great: in a world of both CGNAT (more shared IPs) and a ton of sketchy residential proxies, they’ve become poor proxies for identity of a “thing”.
kbolino · 6h ago
IPs are slowly getting worse as identifiers over time, but IP and IP range bans are like port-shifting SSH: you can often get a lot of defense against low-effort attacks for similarly low amounts of effort.
molticrystal · 7h ago
You are right that IP checks are lightweight, though you're overlooking that setting up a TCP/IP handshake is also algorithm-heavy; it just feels transparent because hardware and kernel optimizations keep it light on the CPU. TLS, with its certificate checks, key exchanges, and the whole negotiation, is a CPU-heavy activity, especially on servers. Most of that asymmetric crypto, like verifying certificates, isn't helped much by hardware accelerators like AES-NI, which mainly speed up session encryption. TLS is already tons of work, so HTTP Message Signatures and mTLS are like piling more hay on the stack: it's extra work, but you're already doing a lot at that point.

The real complaint should be about having to adopt yet another standard, and whether it will discriminate against applications like legacy RSS readers, since they're considered a type of bot.

kbolino · 6h ago
IP bans are usually enforced before the TCP handshake even completes: the server receives a SYN packet, checks the source address against the blocklist, and if it's blocked, drops the packet before proceeding any further in the TCP state diagram.
ecb_penguin · 8h ago
This already happens with TLS, JWT verification, etc.


dboreham · 7h ago
The subtext surely is: "and we're going to charge for crawler traffic next".
senectus1 · 8h ago
This clever bunny did something very similar... (but self-hosted)

https://xeiaso.net/blog/2025/anubis/

I love the approach. If I could be arsed blogging I'd probably set it up myself.

mshockwave · 6h ago
I might be wrong, but it seems like Anubis asks the _client_ to solve the cryptographic challenges, while the approach Cloudflare describes here asks the server to verify a (cryptographic) signature?
ralferoo · 6h ago
Altcha is a similar thing: https://github.com/altcha-org/altcha

I recently implemented something very similar to its obfuscation via proof-of-work (https://altcha.org/docs/obfuscation/) in my C++ REST backend and Flutter front-end, and I use it for rate-limiting on APIs that allow creating a new account or sending sign-up e-mails.

I have an authentication token that's wrapped with AES-GCM using a random IV, and the client is given the key, the IV stem, and a maximum count for the IV.
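A rough sketch of that obfuscation-as-proof-of-work idea as I read it: the client is handed everything except the IV counter and has to grind through counters until GCM authentication succeeds. This is an illustrative reconstruction under that assumption, not the parent's actual code; MAX_COUNTER and the token are made up.

    import os
    import secrets
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    MAX_COUNTER = 50_000                   # difficulty knob (made up for this sketch)

    # "Server" side: wrap the token under IV = 8-byte stem || 4-byte secret counter.
    key = AESGCM.generate_key(bit_length=128)
    iv_stem = os.urandom(8)
    counter = secrets.randbelow(MAX_COUNTER)
    wrapped = AESGCM(key).encrypt(iv_stem + counter.to_bytes(4, "big"),
                                  b"auth-token-for-signup", None)
    # Client receives key, iv_stem, MAX_COUNTER and wrapped -- but not the counter.

    # "Client" side: grind through counters until GCM authentication succeeds.
    aes = AESGCM(key)
    for guess in range(MAX_COUNTER):
        try:
            token = aes.decrypt(iv_stem + guess.to_bytes(4, "big"), wrapped, None)
            print("solved at counter", guess, "->", token)
            break
        except InvalidTag:
            continue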

superkuh · 8h ago
I do not think that more in-house cloudflare-only "standards" open washed through their IETF employees, both of which raise the friction to participation in the web even higher for actual humans, are the way to go. Especially setups which again rely on centralized CAs and have tiny expiring lifetimes. Seems like pretty soon there'll only be one or two browsers which can even hope to access sites behind cloudflare's infrastructure. They might as well just start releasing their own browser and the transformation to AOL will be complete.
ecb_penguin · 8h ago
> I do not think that more in-house cloudflare-only "standards" open washed through their IETF employees

As someone with multiple RFCs, this is the way it's always been done. Industry has a problem, there's some collaboration with other industry or academia, someone submits a draft RFC. People are either free to adopt it or not. Sometimes there's competing proposals that are accepted, and sometimes the topic dies entirely.

> both of which raise the friction to participation in the web even higher for actual humans

Absolutely nothing wrong with this, as it's site owners that make the decision for their own sites. Yep, I do want some friction. The tradeoff saves me a ton of money. Heck, I could block most ASNs and email domains and still keep 99% of my customers.

> Seems like pretty soon there'll only be one or two browsers which can even hope to access sites behind cloudflare's infrastructure

This proposal is about bots identifying themselves through open HTTP headers.

superkuh · 8h ago
>This proposal is about bots identifying themselves through open HTTP headers.

The problem is that to CF, everything that isn't Chrome is a bot (only a slight exaggeration). So browsers that aren't made by large corporations wouldn't have this. It's like how CF uses CORS.

CORS isn't only CF, but it's an example of them requiring obscure things no one else really uses, and using them in weird ways that cause most browsers to be unable to handle them. The HTTP header CA signing is yet another of these things, and weird modifications of TLS flags fall right in there too. It's basically Proof-of-Chrome via a Gish gallop of new "standards" they come up with.

>Absolutely nothing wrong with this, as it's site owners that make the decision for their own sites.

I agree. It's their choice. I am just laying out the consequences of these mostly uninformed choices. They won't be aware, at first, that they're blocking a large number of their actual human visitors. I've seen it play out again and again with sites and CF. Eventually the sites are doing so much work maintaining their whitelists of UAs and IPs that one wonders why they use CF at all if they're doing the job themselves anyway.

And that's not even getting started on the bad and aggressive defaults for CF free accounts. In the last month or two they have slightly improved this, so there's some hope. They know they are a problem because they're so big:

"It was a decision I could make because I’m the CEO of a major Internet infrastructure company." ... "Literally, I woke up in a bad mood and decided someone shouldn't be allowed on the Internet. No one should have that power." - Cloudflare CEO Matthew Prince

(ps. You made some good and valid points re: the IETF process status quo, personal choice, etc. It's not me doing the downvotes.)