This is James Kettle, who more or less invented HTTP/1.1 desync attacks, and has delivered several years of Black Hat talks about them; he's basically the unofficial appsec keynote at Black Hat.
oidar · 7h ago
Isn't this just an announcement? I thought HN didn't allow "announcement" posts.
superkuh · 8h ago
HTTP/1.1 is inherently more resistant to centralized political and social pressure than HTTP/2 and HTTP/3, since those have baked-in (to 99.9999% of user agents and libs) requirements for CA TLS. It's also far more robust over long time periods.
I understand that for business and institutional use cases HTTP/1.1 is undesirable. But for human use cases, like long-lasting and robust websites that don't just become unvisitable every ~3 years (with CA cert expirations, etc, etc), HTTP+HTTPS on HTTP/1.1 is irreplaceable.
Browsers, lib devs, and web developers should consider the needs of human persons and not just corporate persons. This is a misguided declaration at best, and one whose context needs to be clearly defined.
altairprime · 8h ago
Desync attacks do not affect static and public content, which is the only form of “long lasting and robust websites” available; so it is perfectly reasonable to continue serving such content over HTTP with nothing to fear from desyncs.
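For readers unfamiliar with the attack class: a classic CL.TE desync abuses a front-end proxy and a back-end server disagreeing about where a request body ends, which is why only proxied, connection-reusing deployments are exposed. A minimal illustrative payload (hypothetical host, not a working exploit against anything):

```python
# Illustrative CL.TE request-smuggling payload. A front-end that honors
# Content-Length forwards 13 bytes of body; a back-end that honors
# Transfer-Encoding instead sees a zero-length chunked body and treats
# the trailing bytes as the start of the *next* request on the shared
# connection. Directly-served static files involve no such shared
# parsing boundary.
payload = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"          # hypothetical host
    b"Content-Length: 13\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"SMUGGLED"                       # prefix poisoning the next request
)

head, _, body = payload.partition(b"\r\n\r\n")
assert len(body) == 13  # matches the Content-Length the front-end trusts
```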
dvfjsdhgfv · 4h ago
There is an enormous campaign, by both companies and security enthusiasts, promoting the view that serving static content over HTTP should, as the article says, "die".
altairprime · 4h ago
It took them twenty years to remove ftp support, and the chances of squid proxy removing http/1.1 support are virtually nil. The rhetoric is unsurprising, though still disappointingly short-sighted, given the unwillingness of for-profit enterprises to invest in architectural advances such as HTTP/2 or QUIC/3.
Much more likely is that https:// URLs served by http/1.1 will be marked as Insecure in browsers, leading to autofill and form submission warnings, and eventually triggering security interstitials as badcerts do today; that is an immediately plausible case that can be argued, defended, and scheduled for the future now by browsers (and by e.g. PCI compliance specs!), especially as attacks on http/1.1 improve. The vast majority of sites would then, once pressured, upgrade to h1+ https:// with Alt-Svc to maintain basic compatibility rather than downgrade to h1-only http://, resolving the majority of threats posed by downgrade attacks without having to cut http/1.1 off altogether.
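A sketch of what that upgrade path can look like in practice: the site keeps answering over TLS for h1/h2 clients while advertising HTTP/3 via the Alt-Svc header, so returning clients switch transports on their own (hypothetical nginx config, assuming an nginx build with QUIC support):

```nginx
server {
    listen 443 ssl;             # TLS over TCP: HTTP/1.1 and HTTP/2 clients
    listen 443 quic reuseport;  # QUIC listener: HTTP/3 clients
    http2 on;

    server_name example.com;    # hypothetical host

    # Advertise HTTP/3 availability; clients cache it for 86400 seconds.
    add_header Alt-Svc 'h3=":443"; ma=86400' always;
}
```

The point of Alt-Svc here is that no URLs change: the same https:// origin serves old and new protocols during the transition.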
Perhaps someday it will be more difficult to access http/1.1 resources in a browser, but ftp (and gopher!) resources still exist to this day and remain accessible using tools for that purpose. Regardless, I encourage donating to Archive.org, and also ensuring that any static content you visit over http:// is mirrored by them.
ps. Let’s Encrypt would have to update http-01 to work with h2+ over 443/tcp,udp by specifying that it performs no validation whatsoever of the server’s existing TLS certificate properties, in order to continue functioning as intended in a post-h1 environment. They would likely specify that as http-02 and require h2+, bind https:// verification URLs to it, and eventually deprecate http-01. They could do this at any time - perhaps they will!
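For context on how http-01 works today (RFC 8555): the CA fetches http://&lt;domain&gt;/.well-known/acme-challenge/&lt;token&gt; over port 80 and expects a key authorization derived from the account key. A rough sketch of that derivation, with a made-up token and key (a real ACME client uses its own account key):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical account public key (JWK); values are placeholders.
jwk = {"e": "AQAB", "kty": "RSA", "n": "hypothetical-modulus"}

# RFC 7638 thumbprint: required members only, lexicographic order,
# no whitespace in the JSON serialization.
thumbprint = b64url(
    hashlib.sha256(
        json.dumps(jwk, separators=(",", ":"), sort_keys=True).encode()
    ).digest()
)

token = "made-up-token"  # issued by the ACME server per challenge
key_authorization = f"{token}.{thumbprint}"
# Served as the response body at /.well-known/acme-challenge/made-up-token
```

Any hypothetical "http-02" over h2+/443 would presumably keep this derivation and change only the transport on which the CA performs the fetch.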