Daniel Karrenberg, co-author of RFC1918, said this 2017-10-06 on the NANOG mailing list:
> On 05/10/2017 07:40, Jay R. Ashworth wrote:
> > Does anyone have a pointer to an *authoritative* source on why
> >
> > 10/8
> > 172.16/12 and
> > 192.168/16
> >
> > were the ranges chosen to enshrine in the RFC? ...
>
> The RFC explains the reason why we chose three ranges from "Class A,B &
> C" respectively: CIDR had been specified but had not been widely
> implemented. There was a significant amount of equipment out there that
> still was "classful".
>
> As far as I recall the choice of the particular ranges were as follows:
>
> 10/8: the ARPANET had just been turned off. One of us suggested it and
> Jon considered this a good re-use of this "historical" address block. We
> also suspected that "net 10" might have been hard coded in some places,
> so re-using it for private address space rather than in inter-AS routing
> might have the slight advantage of keeping such silliness local.
>
> 172.16/12: the lowest unallocated /12 in class B space.
>
> 192.168/16: the lowest unallocated /16 in class C block 192/8.
>
> In summary: IANA allocated this space just as it would have for any
> other purpose. As the IANA, Jon was very consistent unless there was a
> really good reason to be creative.
>
> Daniel (co-author of RFC1918)
>>> This is a fuzzy recollection of something I believe I read, which might
>>> well be inaccurate, and for which I can find no corroboration. I
>>> mention it solely because it might spark memories from someone who
>>> actually knows:
>>>
>>> A company used 192.168.x.x example addresses in some early
>>> documentation. A number of people followed the manual literally when
>>> setting up their internal networks. As a result, it was already being
>>> used on a rather large number of private networks anyway, so it was
>>> selected when RFC 1597 was adopted.
>> sun
> Wasn't 192.9.200.x Sun's example network?
of course you are correct. sorry. jet lag and not enough coffee.
---
So no answers.
nickdothutton · 5h ago
I worked in the early 90s getting UK companies connected. The number of people who had copied Sun's (and HP's, and others') addresses out of the docs was enormous. One of them was a very well known token ring network card vendor.
Not everyone thought this was a good idea, and I still maintain the alternative path would have led to a better internet than the one we have today.
zokier · 4h ago
As the authors themselves note, RFC 1597 was merely formalizing already-widespread common practice. If the private ranges had not been standardized, people would still have created private networks, just using some random squatted blocks. I cannot see that being a better outcome.
wongarsu · 4h ago
The optimist in me wants to claim that not assigning any range for local networks would have led to us running out of IPv4 addresses in the late 90s, leading to the rapid adoption of IPv6, along with some minor benefits (merging two private networks would be trivial, and far fewer NATs in the world would mean better IP-based security and P2P connectivity).
The realist in me expects that everyone would have used one of the ~13 /8 blocks assigned to the DoD.
jvanderbot · 4h ago
The realist in me thinks that we'd probably have had earlier adoption of V6 but the net good from that is nil compared to the headaches.
V6 is only good when V4 is exhausted, so it's tautological to call it a benefit of earlier exhaustion of V4, or am I missing something? I'm probably missing something.
saghm · 3h ago
I'm guessing the reason they think it would have been better is that right now the headaches come from us being in a weird limbo state where we're kinda out of IPv4 addresses but not really at the point where everything supports IPv6 out of necessity. If the "kinda" were more definitive, there would potentially have been enough of a forcing factor that everyone would make sure to support IPv6, and the headaches would have been worked out by now.
high_priest · 5h ago
Can you please elaborate? How would such a minute change lead to "a better internet"?
emacsen · 4h ago
I'm not the OP or author, but the argument against private network addresses is that such addresses break the Internet in some fundamental ways. Before I elaborate on the argument, I want to say that I have mixed feelings on the topic myself.
Let's start with a simple assertion: Every computer on the Internet has an Internet address.
If it has an Internet Address, it should be able to send packets to any computer on the Internet, and any other computer on the Internet should be able to send packets to it.
Private networks break this assumption. Now we have machines which can send packets out but can't receive them, at least not without firewall rule exceptions or other firewall tricks to make it work. Even then, maybe 10-25% of the time it doesn't work.
But it goes beyond firewall rules... with IP addresses being tied to a device, every ISP would be giving every customer a block of addresses, both commercial and residential customers.
We'd also have seen fast adoption of IPv6 when IPv4 ran out. Instead we seem to be stuck in perpetual limbo.
On team anti-private networking addresses:
- Worse service from ISPs
- IPv4 still in use past when it should have been replaced
- Complex workarounds for getting through firewalls
I'm sure we all know the benefits of private networks, so I don't need to reiterate them.
tzs · 34m ago
> I'm sure we all know the benefits of private networks, so I don't need to reiterate it
That is I think the key. Private networks have sufficient benefit that most places will need one.
The computers and devices on our private network will fall into 3 groups: (1) those that should only communicate within our private network, (2) those that sometimes need to initiate communication with something outside our network but should otherwise have no outside contact, and (3) those that need to respond to communication initiated from something outside our network.
We could run our private network on something other than IP, but then dealing with cases #2 and #3 is likely going to be at least as complicated as the current private IP range approach.
We could use IP but not have private ranges. If we have actual assigned addresses that work from the outside for each device we are then going to have to do something at the router/firewall to keep unwanted outside traffic from reaching the #1 and #2 types of devices.
If we use IP but do not have assigned addresses for each device and did not have the private ranges I'd expect most places would just use someone else's assigned addresses, and use router/firewall rules to block them off from the outside. Most places can probably find someone else's IP range that they are sure contains nothing they will ever need to reach so should be safe to use (e.g., North Korea's ranges would probably work for most US companies). That covers #1, but for #2 and #3 we are going to need NAT.
I think nearly everyone would go for IP over using something other than IP. Nobody misses the days when the printer you wanted to buy only spoke AppleTalk and you were using DECnet.
At some point, when we are in the world where IP is what we have on both the internet and our private networks but we do not have IP ranges reserved for private networks, someone will notice that this would be a lot simpler if we did have such ranges. Routers can then default to blocking those ranges and using NAT to allow outgoing connections. Upstream routers can drop those ranges so even if we misconfigure ours it won't cause problems outside. Home routers can default to one of the private ranges so non-tech people trying to set up a simple home network don't have to deal with all this.
If for some reason IANA didn't step in and assign such ranges, my guess is that ISPs would. They would take some range within their allocation, configure their routers to drop traffic using those addresses, and tell customers to use those on their private networks.
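In the world we actually got, those reserved ranges are baked into standard tooling; for instance, Python's stdlib `ipaddress` module knows them. A quick illustrative sketch (the addresses are made-up examples):

```python
import ipaddress

# The RFC 1918 ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
# Note 172.32.0.1 falls just outside the /12 and is therefore public.
for addr in ["10.1.2.3", "172.16.0.1", "172.32.0.1", "192.168.1.1", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
```

This is exactly the check a router's "drop private ranges" rule encodes, just expressed in code.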
zokier · 4h ago
> every ISP would be giving every customer a block of addresses, both commercial and residential customers.
Or, more likely, you would still have received only a handful of addresses and would have needed to be far more considerate about what you connected to your network, restricting the use of IP significantly. Stuff like IPX and AppleNet would probably have been more popular. The situation might have been more like what we had with POTS phones: residential houses generally had only one phone number for the whole house, and you just had to share the line between all the family members.
emacsen · 2h ago
The phone company would have been happy to sell you more phone lines. I knew people who had some.
But you're right that, as dumb as it is, ISPs would likely have charged per "device" (i.e., per IP address).
Before 1983 in the US, you could only rent a phone, not own one (at least not officially) and the phone company would charge a rental fee based on how many phones you had rented from them. Then, when people could buy their own phones, they still charged you per phone that you had connected! You could lie, but they charged you.
Like I said, I have mixed feelings about NATs, but you're right that the companies would have taken advantage of customers.
Hilift · 3h ago
Most SMB companies did not have IP addresses in 1994 when RFC 1597 was published, although the ranges were known. The well-known companies did, however, and some of those still hold the older full class B assignments. It was common for those companies to use those public IP addresses internally, and some do to this day, although RFC 1918 addresses were also in use.
Since NetWare was very popular in businesses and it was possible (and common) to use only the IPX protocol on endpoints, you could configure endpoints to use a host that had both an IPX and an IP address as the proxy, and not put an IP address on most endpoints. That was common partly because NetWare charged for the DHCP and DNS add-ons. When Windows became more popular, endpoints likely moved to RFC 1918 addresses around ~1996.
B1FF_PSUVM · 2h ago
> It was common for those companies to use those public IP addresses internally to this day
Yep, a desktop PC with its own IPv4 address. Back in the day, no firewall afaik.
weinzierl · 5h ago
Since the posting does not give a real answer:
192 is 11000000 in binary.
So it is simply the block with the first two bits of the first octet set.
168 is a bit more difficult. It is 10101000, a nice pattern, but I don't know why this specific pattern.
marcusb · 5h ago
I don't think this does anything to explain why 192.168/16 was chosen specifically. Three netblocks (10/8, 172.16/12, and 192.168/16) were selected from the class A, B, and C address spaces to accommodate private networks of various sizes. Class C addresses by definition have the two most significant bits set in their first octet and the third set to 0 (i.e., 192 - 223.)
192 in the first octet starts the class C space, but 10 and 172 do not have the same relationship in classes A and B.
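A toy Python sketch of the classful boundaries being described (illustrative only, not from the thread):

```python
# Classful address classes are determined by the leading bits of the
# first octet: 0xxxxxxx = A, 10xxxxxx = B, 110xxxxx = C.
def ip_class(first_octet: int) -> str:
    if first_octet < 128:
        return "A"
    elif first_octet < 192:
        return "B"
    elif first_octet < 224:
        return "C"
    return "D/E"

# 192 is the very first class C octet, but 10 and 172 do not
# begin their respective classes (those start at 0 and 128).
print(ip_class(10), ip_class(172), ip_class(192))
print(f"{192:08b}", f"{168:08b}")  # 11000000 10101000
```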
weinzierl · 5h ago
Yes, you are right. I researched a bit and there are other reserved blocks next to 168 that obviously don't have a nice pattern. So the 101010 is just a coincidence.
drewolbrich · 1h ago
101010 in decimal is 42.
Hikikomori · 5h ago
192 is the first class C block; 168 was likely the next available when RFC 1918 was written.
michaelcampbell · 4h ago
This is probably apocryphal, and I'm probably getting the details wrong anyway, but tangentially related to this, when I worked for a small network security firm (later purchased by Cisco, as most were), we had a customer that used, I'm told, the IP ranges typically seen in North Korea as their internal network. They TOLD us they did it because the addresses wouldn't conflict with anything they cared about, and no one had told them about 1918 + NAT, which I find dubious.
This was in the tens of thousands of devices.
zettabomb · 4h ago
Weirdly enough, there are a few systems at my workplace which are in the 192.9.200.x subnet! They're only about 20 years old, though. We are actively looking to replace the entire system.
EvanAnderson · 3h ago
I've done work for several municipalities and police departments in western Ohio and found 192.9.200.0/24 in several. They all had a common vendor who did work back in the 90s and was the source.
Aeolun · 4h ago
From another post on here:
> > Wasn't 192.9.200.x Sun's example network?
> of course you are correct. sorry. jet lag and not enough coffee.
Is it? What section do you mean? I don't see anything in there about private networks or 192.168.0.0/16 (in CIDR notation, which didn't exist at the time).
gausswho · 4h ago
While I've got some eyeballs on the subject, I'm tiring of mistyping this across my local network devices. How many of you folks alias this, and in what way? /etc/hosts works for my *nix machines, but not my phones, I think?
I'm also tired of remembering ports, if there's a way of mapping those. Should I run a local proxy?
somat · 3h ago
DNS (cue the "now you have two problems" meme).
Theoretically, SRV records can be set in DNS to solve the port issue; realistically, nothing uses them, so you are probably out of luck there. The way SRV records work is that you ask the network "where is the foo service?" (SRV _foo._tcp.my.network.) and DNS says "it's at these machines and ports" (SRV 1 (priority) 1 (weight) 9980 (port) misc.my.network. (target)). See https://www.rfc-editor.org/rfc/rfc2782
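To make that concrete, the RDATA in the SRV example above splits into the four fields RFC 2782 defines. A toy parse, reusing the comment's own made-up record:

```python
# "priority weight port target" is the SRV RDATA layout per RFC 2782.
rdata = "1 1 9980 misc.my.network."
priority, weight, port, target = rdata.split()
service = {
    "priority": int(priority),  # lower value = preferred
    "weight": int(weight),      # tie-breaker among equal priorities
    "port": int(port),          # this is what would free us from remembering ports
    "target": target,           # hostname to connect to
}
print(service["port"], service["target"])
```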
My personal low-priority project is to put MAC addresses in DNS; I am about as far as "I could fit them in an AAAA record".
As for specific software recommendations, I am probably not a good source. I run a couple of small OpenBSD machines (APU2s) that serve most of my home networking needs. But I am a sysadmin by trade; while I like it, I am not sure how enjoyable others would find the setup.
denkmoon · 2h ago
DNS obviously. It’s easy, don’t let memes put you off.
For port mapping, it depends on what specifically you're aiming for. SVCB/HTTPS records are nice for having many HTTPS servers on a single system.
akerl_ · 4h ago
I just stick all my DNS records in a normal DNS server. In my case I'm terraforming some Route53 zones. So I have a subdomain off a real domain I own that I use for LAN gear, and it all has real DNS.
For ports, anything that can just be run on 443 on its own VM, I do that. For things that either can’t be made to run on 443, or can’t do their own TLS, etc, I have a VM running nginx that handles certificates and reverse proxying.
t-3 · 3h ago
Local proxies are nice for these kinds of things, but most phones are running some kind of mDNS service so try setting up avahi/openmDNS to advertise services.
Thorrez · 4h ago
10.0.0.1 or 10.1.1.1 would be a bit easier to type. You could migrate there.
jerkstate · 4h ago
mDNS works well for names on your local network, you can integrate it with your dhcp server, works on hosts and phones. I don't have a good answer for ports.
Sharlin · 5h ago
User bmacho cites this Superuser question [1] in a reply to a downvoted comment at the bottom of this thread. It’s much more illuminating than the OP emails; Michael Hampton’s answer in particular is amazing. I had never heard of Jon Postel before.
Reading this makes me a bit sad and reminds me that I'm older now and lucky to have grown up during the golden age of the Internet.
Sharlin · 2h ago
Mm. I’m an older millennial, so solidly in the Web 1.0 generation, but never had the chance to use the internet before the web took off. I missed BBSs too, which were big where I’m from (probably bigger than the pre-Web internet, outside universities at least). I was fourteen when Postel died in 1998. My earliest memories of internet use are probably from ’96 or so, using library or school computers after classes.
bluedino · 4h ago
Working at a large company that was allocated a massive block of IPs in the early days, being one off from a reserved subnet has resulted in so many typos.
alvarete · 5h ago
Weirdly enough, the network I grew up with during the 90s used 127.26.0.X instead of the widespread 192.168.
It created a big trauma when I joined uni and hit the wall. I suppose this is how Americans feel about the metric system :p
emmelaich · 5h ago
(2009)
der_gopher · 5h ago
For real? I thought it somehow relates to bits and bytes...
youknow123 · 6h ago
They needed private IP ranges that wouldn't conflict with the real internet. 192.168 was just sitting there unused, so they grabbed it along with 10.x.x.x and 172.16-31.x.x.
Symbiote · 5h ago
Read the article rather than making something up.
marcusb · 5h ago
It isn't an article, but a mailing list post, and the post starts out with:
This is a fuzzy recollection of something I believe I read, which might well be inaccurate, and for which I can find no corroboration. I mention it solely because it might spark memories from someone who actually knows:
Spoiler: it sparks one memory from one person, who winds up being mistaken.
Offering an alternative hypothesis seems reasonable given the content of the post.
* https://superuser.com/a/1257080/38062
[1] https://superuser.com/questions/784978/why-did-the-ietf-spec...