I don't know, arguing that http/2 is safer overall is a... bold claim. It is sufficiently complex that there is no implementation in the Python standard library, and even third-party library support is all over the place. requests doesn't support it; httpx has experimental, partial, pre-1.0 support. Python http/2 servers are virtually nonexistent. And it's not just Python - I remember battling memory leaks, catastrophic deadlocks, and more in the grpc-go implementation of http/2, in its early days.
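For what it's worth, the httpx route looks roughly like this today (a sketch, assuming the optional h2 extra is installed; example.org is just a placeholder):

    # Requires `pip install httpx[http2]`; support is still pre-1.0 and may change.
    import httpx

    with httpx.Client(http2=True) as client:
        resp = client.get("https://example.org/")
        # http_version reports what was actually negotiated via ALPN:
        # "HTTP/2" if the server offers it, otherwise "HTTP/1.1".
        print(resp.http_version, resp.status_code)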
HTTP 1.1 connection reuse is indeed more subtle than it first appears. But http/2 is so hard to get right.
Bender · 4h ago
Speaking of http/2 [1] - August 14, 2025
The underlying vulnerability, tracked as CVE-2025-8671, has been found to impact projects and organizations such as AMPHP, Apache Tomcat, the Eclipse Foundation, F5, Fastly, gRPC, Mozilla, Netty, Suse Linux, Varnish Software, Wind River, and Zephyr Project. Firefox is not affected.
[1] - https://www.securityweek.com/madeyoureset-http2-vulnerabilit...
These sound to me like they are mostly problems with protocol maturity rather than with its fundamental design. If hypothetically the whole world decided to move to HTTP/2, there'd be bumps in the road, but eventually at steady state there'd be a number of battle-tested implementations available with the defect rates you'd expect of mature widely used open-source protocol implementations. And programming language standard libraries, etc., would include bindings to them.
jcdentonn · 6h ago
Not sure about servers, but we have had http/2 clients in Java for a very long time.
jiehong · 6h ago
nghttp2 is a C lib that can be used to serve HTTP/2 in many cases. Rust has the http2 crate.
Perhaps it isn't that easy, but it could be shared and reused just about everywhere.
cyberax · 6h ago
An HTTP/2 client is pretty easy to implement. Built-in framing removes a lot of the complexity, and if you don't need multiple streams, you can simplify the overall state machine.
Perhaps something like an "HTTP/2-Lite" profile is in order? A minimal profile with just 1 connection, no compression, and so on.
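To illustrate the framing point: every HTTP/2 frame starts with a fixed 9-byte header, so the length-handling core is tiny. A sketch (field layout per RFC 9113; the function itself is hypothetical, not from any particular library):

    def parse_frame_header(header: bytes):
        # Fixed 9-byte HTTP/2 frame header:
        #   24-bit payload length, 8-bit type, 8-bit flags,
        #   1 reserved bit + 31-bit stream identifier.
        assert len(header) == 9
        length = int.from_bytes(header[0:3], "big")
        frame_type = header[3]
        flags = header[4]
        stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF
        return length, frame_type, flags, stream_id

    # Knowing the exact payload length up front is what removes the
    # Content-Length / chunked / connection-close guesswork of HTTP/1.1.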
spenczar5 · 6h ago
Isn't the original post about servers? A minimal client doesn't help with server security.
I would endorse your idea, though, speaking more broadly! That does sound useful.
mittensc · 6h ago
The article is a nice read on request smuggling.
It over-reaches with its argument for disallowing http/1.1.
Parsers should be better.
Moving to another protocol won't solve the issue.
It will be written by the same careless engineers.
So the same companies will have the same issues, or worse...
We just lose readability/debuggability/accessibility.
ameliaquining · 5h ago
It's not correct to attribute all bugs to carelessness, and therefore assume that engineer conscientiousness is the only criterion affecting defect rates. Some software architectures, protocol designs, programming languages, etc., are less prone than others to certain kinds of implementation bugs, by leaving less room in the state space for them to hide undetected. Engineers of any skill level will produce far more defects if they write in assembly, than if they write the same code in a modern language with good static analysis and strong runtime-enforced guarantees. Likewise for other foundational decisions affecting how to write a program.
The post makes the case that HTTP/2 is systematically less vulnerable than HTTP/1 to the kinds of vulnerabilities it's talking about.
mittensc · 4h ago
> It's not correct to attribute all bugs to carelessness
Sure, just the bugs in the link.
Content-Length + Transfer-Encoding should be a bad request.
The RFC is also not respected: "Proxies/gateways MUST remove any transfer-coding prior to forwarding a message via a..."
Content-Length: \r\n7 is also a bad request.
Just those mean whoever wrote the parser didn't even bother to read the RFC...
No parsing failure checks either...
That kind of person will mess up HTTP/2 as well.
It's not a protocol issue if you can't even be bothered to read the spec.
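For concreteness, the sort of strict check being described is only a few lines. A sketch (hypothetical helper; header names assumed lowercased, values collected per name):

    def validate_message_framing(headers: dict[str, list[str]]) -> None:
        # Both headers present is a classic smuggling vector: reject, don't pick one.
        if "content-length" in headers and "transfer-encoding" in headers:
            raise ValueError("400 Bad Request: Content-Length with Transfer-Encoding")
        for value in headers.get("content-length", []):
            # Accept only a plain ASCII digit string; things like "\r\n7",
            # "+7" or "7, 7" are bad requests, not values to be repaired.
            if not (value.isascii() and value.isdigit()):
                raise ValueError("400 Bad Request: malformed Content-Length")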
> The post makes the case that HTTP/2 is systematically less vulnerable than HTTP/1 to the kinds of vulnerabilities it's talking about.
Fair enough, I disagree with that conclusion. I'm really curious what kind of bugs the engineers above would add with HTTP/2; it will be fun.
The section "How secure is HTTP/2 compared to HTTP/1?" (https://portswigger.net/research/http1-must-die#how-secure-i...) responds to this. In short, there's an entire known class of vulnerabilities that affects HTTP/1 but not HTTP/2, and it's not feasible for HTTP/1 to close the entire vulnerability class (rather than playing whack-a-mole with bugs in individual implementations) because of backwards compatibility. The reverse isn't true; most known HTTP/2 vulnerabilities have been the kind of thing that could also have happened to HTTP/1.
Is there a reason you don't find this persuasive?
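For reference, the vulnerability class in question is ambiguity over where one HTTP/1.1 message ends and the next begins. A textbook CL.TE example (not taken from the article):

    # A front-end that honours Content-Length forwards all 6 body bytes;
    # a back-end that honours Transfer-Encoding sees the chunked body end at
    # "0\r\n\r\n" and treats the trailing "G" as the start of the NEXT request
    # on the reused connection; that leftover byte is the desync.
    smuggled = (
        b"POST / HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Content-Length: 6\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"G"
    )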
nitwit005 · 4h ago
I'll note articles about HTTP/2 vulnerabilities have been posted with some regularity here: https://news.ycombinator.com/item?id=44909416
The new features/behaviors in the new protocol inherently create new classes of vulnerabilities. That above link relates to an issue with RST_STREAM frames. You can't have issues with frames if you lack frames.
It's quite possible the old issues are worse than the new ones, but it's not obvious that's the case.
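For anyone curious what "an issue with frames" looks like at the byte level, an RST_STREAM frame is tiny. A sketch of building one by hand (layout per RFC 9113; the helper is hypothetical):

    def rst_stream_frame(stream_id: int, error_code: int = 0x8) -> bytes:
        # RST_STREAM = frame type 0x3, payload is a single 32-bit error code
        # (0x8 is CANCEL). MadeYouReset-style attacks revolve around how
        # servers account for streams that end up being reset.
        payload = error_code.to_bytes(4, "big")
        header = (
            len(payload).to_bytes(3, "big")                # 24-bit payload length (4)
            + bytes([0x03])                                # type: RST_STREAM
            + bytes([0x00])                                # flags: none defined
            + (stream_id & 0x7FFFFFFF).to_bytes(4, "big")  # reserved bit cleared
        )
        return header + payload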
JdeBP · 5h ago
My WWW site has been served up by publicfile for many years now, and reading through this I kept having the same reaction, over and over, which is that the assumption that "websites often use reverse proxies" is upgraded in the rest of the article to the assumption that everyone always uses back-ends and proxies. It's as if there is a monocultural world of HTTP/1.1 WWW servers; and not only does the author discount everything else apart from the monoculture, xe even encourages increasing the monoculture as a survival tactic, only then to state that the monoculture must be killed.
The irony that near the foot of the article it encourages people to "Avoid niche webservers" because "Apache and nginx are lower-risk" is quite strong, given that my publicfile logs show that most of the continual barrage of attacks a public WWW server like mine is subject to are query parameter injection attempts, and attacks quite evidently directed against WordPress, Apache, AWS, and this claimed "lower risk" software. (There was another lengthy probe to find out where WordPress was installed a couple of minutes ago, as I write this. Moreover, the attacker who has apparently sorted every potentially vulnerable PHP script into alphabetical order and just runs through them must be unwittingly helping security people, I would have thought. (-:)
Switching from my so-called "niche webserver", which does not have these mechanisms to be exploited, to Apache and nginx would be a major retrograde step. Not least because djbwares publicfile nowadays rejects HTTP/0.9 and HTTP/1.0 by default, and I would be going back to accepting them, were I foolish enough to take this paper's advice.
"Reject requests that have a body" might have been the one bit of applicable good advice that the paper has, back in October 1999. But then publicfile came along, in November, whose manual has from the start pointed out (https://cr.yp.to/publicfile/httpd.html) that publicfile httpd rejects requests that have content lengths or transfer encodings. It's a quarter of a century late to be handing out that advice as if it were a new security idea.
And the whole idea that this is "niche webservers" is a bit suspect. I publish a consolidated djbwares that incorporates publicfile. But the world has quite a few other cut down versions (dropping ftpd being a popular choice), homages that are "inspired by publicfile" but not written in C, and outright repackagings of the still-available original. It's perhaps not as niche as one might believe by only looking at a single variant.
I might be in the vanguard in the publicfile universe of making HTTP/0.9 and HTTP/1.0 not available in the default configuration, although there is a very quiet avalanche of that happening elsewhere. I'm certainly not persuaded by this paper, though, to consider that I need do anything at all about HTTP/1.1, since it is based entirely upon a worldview that publicfile is direct evidence is not a universal truth. I have no back-end servers, no reverse proxies, no CGI, no PHP, no WordPress, no acceptance of requests with bodies, and no vulnerability to these "desync" problems that are purportedly the reason that I should switch over to the monoculture and then switch again because the monoculture "must die".
superkuh · 6h ago
> If we want a secure web, HTTP/1.1 must die.
Yes, the corporations and institutions and their economic transactions must be the highest and only priority. I hear that a lot from commercial people with commercial blinders on.
They simply cannot see beyond their context and realize that the web and http/1.1 are used by human people who don't have the same use cases or incredibly stringent identity verification needs. Human use cases don't matter to them because they are not profitable.
Also, this "attack" only works on commercial-style complex CDN setups. It wouldn't affect human-hosted webservers at all. So yeah, commercial companies, abandon HTTP, go to your HTTP/3 with all its UDP-only and CA-TLS-only and no self-signing and no cleartext. And leave the actual web on HTTP/1.1 HTTP+HTTPS alone.
GuB-42 · 6h ago
Yes!
Let's get real, online security is mostly a commercial thing. Why do you think Google pushed so hard for HTTPS? Do you really think it is to protect your political opinions? No one cares about them, but a lot of people care about your credit card.
That's one thing on which I disagree with the people who made Gemini, a "small web" protocol for people who want to escape the modern web with its ads, tracking and bloat. They made TLS a requirement. Personally, I would have banned encryption. There is a cost, but it is a good way to keep commercial activity out.
I am not saying that the commercial web is bad, it may be the best thing that happened in the 21st century so far, but if you want to escape from it for a bit, I'd say plain HTTP is the way to go.
Note: of course, if you need encryption and security in general for non-commercial reasons, use it, and be glad the commercial web is helping you with that.
jsnell · 6h ago
The author is only arguing against HTTP/1.1 for use between proxies and backends. Explicitly so:
> Note that disabling HTTP/1 between the browser and the front-end is not required
plorkyeran · 5h ago
The fact that this is a footnote at the end of a long article is a rather significant problem with the article.
layer8 · 6h ago
It requires rather careful reading to understand that. Most of the site sounds like they want to eliminate HTTP/1.1 wholesale.
cyberax · 6h ago
> Also, this "attack" only works on commercial style complex CDN setups. It wouldn't effect human hosted webservers at all.
All you need is a faulty caching proxy in front of your PHP server. Or maybe that nice anti-bot protection layer.
It really, really is easy to get bitten by this.