This site is more ads than content. At least on my mobile, around 3/4 of the screen is covered by ads, with the smallest possible close buttons.
bc569a80a344f9c · 2h ago
Some random comments on the article:
Stateful packet filters usually perform sequence checking and verify that there are no invalid connection states that affect the client, but the primary benefit is that they let you put subsequent packets on a fast path. Imagine a firewall with many hundreds or thousands of rules. Usually, firewalls also have “object group” concepts where instead of listing a single IP/subnet/mask tuple (be it a host or a network) as the source or destination, you can create a named list of them and refer to the list. In the actual implementation that explodes to one rule per list item. Firewall rules are processed in order, firing on either first match or last match (hi, pf, we love you anyway). There are certainly some really neat tricks to make that as efficient as it can be, but it’s still a lot of computation to determine whether a packet should pass. That doesn’t scale well. So instead, conceptually, we only check the rules on the first packet of a connection and then slam the connection’s details into a data structure we can look up against for subsequent packets much more efficiently.
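The slow-path/fast-path split can be sketched in a few lines of Python. This is a toy model, not how any real firewall is implemented: the rule list is walked only for a flow’s first packet, and accepted flows are answered afterwards from a hash-table lookup. All names here (`FlowKey`, `RULES`, `conntrack`) are made up for the demo.

```python
# Toy stateful filter: O(rules) evaluation on the first packet of a flow,
# O(1) dict lookup for every packet after that.
from typing import NamedTuple

class FlowKey(NamedTuple):
    proto: str
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

# Ordered rule list, first match wins (the expensive path).
RULES = [
    (lambda k: k.proto == "tcp" and k.dst_port == 22, "drop"),
    (lambda k: k.proto == "tcp" and k.dst_port in (80, 443), "accept"),
    (lambda k: True, "drop"),  # default deny
]

conntrack: dict = {}  # established-connection table: FlowKey -> verdict

def filter_packet(key: FlowKey) -> str:
    if key in conntrack:                  # fast path: hash lookup
        return conntrack[key]
    verdict = next(v for pred, v in RULES if pred(key))  # slow path
    if verdict == "accept":
        conntrack[key] = verdict          # later packets skip the rule walk
    return verdict
```

Real implementations key the table on the same 5-tuple idea but also track TCP state and time entries out; the point here is only that rule evaluation happens once per connection, not once per packet.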
Packet filters aren’t all that useful anymore by themselves. They’re certainly one layer of defense, but won’t get you past any audits. Deep packet inspection (or L7 firewalls, or application firewalls, or whatever you want to call them) comes with its own problems though, and may require some interesting architectures to address. A huge amount of Internet traffic is encrypted (TLS), and rightly so. A network firewall that simply sits in the path like in the diagram in the article can’t inspect that traffic meaningfully. There are some interesting techniques for finding certain markers even in encrypted packets, but obviously you can’t filter on a specific HTTP verb (as a contrived example).
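To make the HTTP-verb example concrete, here’s a toy demonstration: a content-matching rule that works on cleartext sees only noise once the payload is encrypted. The single-byte XOR “cipher” below is a deliberately silly stand-in for TLS, just so the example runs without real crypto.

```python
# A naive L7 rule applied before and after "encryption". Because every
# plaintext byte is ASCII (< 0x80), XOR with 0xAA flips the high bit and
# no ASCII string like b"POST" can survive into the ciphertext.

def xor_mask(data: bytes, mask: int = 0xAA) -> bytes:
    return bytes(b ^ mask for b in data)

def rule_blocks_post(payload: bytes) -> bool:
    """Naive L7 rule: match any payload containing an HTTP POST verb."""
    return b"POST" in payload

request = b"POST /login HTTP/1.1\r\nHost: example.com\r\n\r\n"
ciphertext = xor_mask(request)

print(rule_blocks_post(request))     # True  - rule works on cleartext
print(rule_blocks_post(ciphertext))  # False - same rule fails on the wire
```

The markers that do survive encryption are metadata: packet sizes and timing, and cleartext handshake fields such as the SNI hostname, which is what those “interesting techniques” generally key on.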
If you’re firewalling user traffic you have control over, you can install a wildcard certificate on the firewall and act as a proxy, transparently decrypting all traffic. Many enterprises do this. If you’re trying to protect your servers from Internet users you have no control over, you generally want to decrypt on a load balancer that can also do the security work, or pass the traffic from there unencrypted through a firewall, and then re-encrypt to the server if that’s warranted.
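As a sketch of that second pattern (decrypt on the load balancer, inspect, then re-encrypt toward the server), an nginx-style config might look like this; the certificate paths, upstream address, and host names are all placeholders:

```nginx
# TLS terminates at the load balancer, so inspection/WAF logic here
# sees plaintext; traffic is re-encrypted on the way to the backend.
upstream app_backend {
    server 10.0.1.10:8443;            # placeholder backend
}

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/tls/example.com.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/tls/example.com.key;

    location / {
        proxy_pass https://app_backend;   # re-encrypt to the server
        proxy_ssl_verify on;              # and verify its certificate
        proxy_ssl_trusted_certificate /etc/nginx/tls/internal-ca.crt;
    }
}
```

Whether re-encryption to the backend is warranted depends, as the comment says, on your threat model for the network segment behind the load balancer.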
For logging, firewalls in front of busy networks can generate non-trivial amounts of log traffic, and unlike other network telemetry, sampling usually won’t do. This can require using interesting protocols (not syslog, certainly not HTTP) that can be hardware accelerated.
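A quick back-of-envelope calculation shows why per-connection logging gets heavy fast. Both numbers below are hypothetical round figures chosen only to show the order of magnitude; the comment doesn’t name the protocols, but binary flow-export formats like NetFlow/IPFIX are typical examples of the hardware-friendly alternatives to syslog.

```python
# Rough log-volume estimate for a busy edge firewall (hypothetical inputs).
new_conns_per_sec = 500_000   # connection setups/sec
bytes_per_record = 150        # one flow-log record on the wire

log_bytes_per_sec = new_conns_per_sec * bytes_per_record
print(f"{log_bytes_per_sec / 1e6:.0f} MB/s "
      f"({log_bytes_per_sec * 8 / 1e9:.1f} Gbit/s) of log traffic")
```

Even at these modest assumptions that’s 75 MB/s of logs, sustained, before any retransmits or text framing overhead, which is why a chatty text protocol in the data path is a non-starter.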