The Internet itself is growing, so "50%" does still represent a growing number of users. Also Google's stats are missing half a billion v6 users from China.
There's no risk at all if you're using your own allocated prefix, because those are managed by IANA/RIRs/LIRs to not overlap.
Incidentally, if you find yourself experiencing an RFC1918 clash, one simple way of fixing it is to use NAT64 to map the remote side's RFC1918 into a /96 from your v6 allocation. You can write the last 32 bits of a v6 address in v4 format, so this leads to addresses like 2001:db8:abc:6401::192.168.0.10 and 2001:db8:abc:6402::192.168.0.10, which don't overlap from your perspective.
(If you wanted something simpler to type you could put them at e.g. fd01::192.168.0.10... but then you do start running the risk of collisions with other people who also thought they could just use a simple ULA prefix.)
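For concreteness, here's a quick sketch of that mapping using Python's ipaddress module (the 2001:db8 prefixes are from the documentation range, so purely illustrative):

```python
import ipaddress

def embed_v4_in_v6(prefix96: str, v4: str) -> ipaddress.IPv6Address:
    """Embed a 32-bit v4 address into the low 32 bits of a v6 /96."""
    net = ipaddress.IPv6Network(prefix96)
    assert net.prefixlen == 96
    # Indexing a /96 network by the v4 address's integer value puts
    # those 32 bits into the bottom of the v6 address.
    return net[int(ipaddress.IPv4Address(v4))]

# The same RFC1918 address behind two different remote sites maps to
# two distinct v6 addresses, so they no longer clash from your side:
a = embed_v4_in_v6("2001:db8:abc:6401::/96", "192.168.0.10")
b = embed_v4_in_v6("2001:db8:abc:6402::/96", "192.168.0.10")
print(a)  # 2001:db8:abc:6401::c0a8:a
print(b)  # 2001:db8:abc:6402::c0a8:a
```

An actual NAT64 box does the v4/v6 header translation, of course; this just shows why the two mapped addresses can't collide.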
Adding two bytes would have been just as much work as adding 12 bytes, and would have left us with too few addresses rather than too many. The MAC address space is now 64 bits and L3 is necessarily less dense than L2, so 128 bits is the smallest power of 2 where we can be reasonably sure we won't end up with too few addresses.
Considering how hard deploying a new L3 protocol is, we're only going to get one shot at it so it's a lot better to end up with too many addresses rather than too few.
Ehm, but IPv6 packets still have an L2 header too, right? Which already includes the MAC address. So that 64-bit MAC address space is duplicated; it's not like you're saving any bits. It was a pretty arbitrary decision to accommodate the MAC address inside the IPv6 address, and these days the interface identifier is usually randomised for privacy purposes anyway, so that part of an IPv6 address doesn't have to be the size of a MAC address.
L3 has nothing to do with MAC addresses anyway, so I've always found that a pretty weird decision. Sure, it avoids having to implement ARP, but we need that again now with the randomisation. And ARP is a once-every-few-minutes kind of thing anyway.
I'm pretty sure that if we'd just gone for "a couple bytes extra" we'd have run out again long ago. It's the L3 transition itself that carries the complexity. I remember it well in the 2000s; nobody in telecoms wanted to touch it. And when IPv6 was designed in '93 or so, the installed base was tiny. It would have been a piece of cake to get the transition over with then.
The point of L3 is to aggregate hosts into networks, so that routing only has to keep track of network prefixes instead of the individual MAC address of every machine. (The amount of routing updates needed for the latter would scale as something like O(hosts²) which just wouldn't work for large networks, let alone the Internet.)
The aggregation necessarily "wastes" L3 addresses, so if you think you'll have enough machines to justify an L2 address size of n bits, then that also implies needing an L3 address size of n+m bits, where m reflects how sparsely packed your L3 address space is. Anything smaller than that will be too small to cover the full extent of your L2 address space.
> It was a pretty arbitrary decision to accommodate the MAC address inside the IPv6 address [...] Sure, it avoids having to implement ARP
You're thinking of SLAAC, which picks the address by slapping the MAC/EUI-64 into the right-hand 64 bits, but this is just a convenient way of picking the address bits. There's no special significance to those bits, and you still need to do neighbour resolution (NDP, IPv6's replacement for ARP).
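For illustration, here's a sketch of the modified-EUI-64 construction that classic SLAAC used to fill in those right-hand 64 bits (the prefix and MAC here are made-up examples; modern stacks usually randomise the interface identifier instead):

```python
import ipaddress

def slaac_eui64(prefix64: str, mac: str) -> ipaddress.IPv6Address:
    """Build a SLAAC address from a /64 prefix and a 48-bit MAC.

    Modified EUI-64: split the MAC in half, insert ff:fe in the
    middle, and flip the universal/local bit of the first octet."""
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02  # flip the U/L bit
    iid = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    net = ipaddress.IPv6Network(prefix64)
    assert net.prefixlen == 64
    # Index the /64 by the 64-bit interface identifier.
    return net[int.from_bytes(iid, "big")]

addr = slaac_eui64("2001:db8::/64", "00:11:22:33:44:55")
print(addr)  # 2001:db8::211:22ff:fe33:4455
```

Note the result is just an address; the router never interprets those low 64 bits, which is the point being made above.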
> I'm pretty sure that if we'd just gone for "a couple bytes extra" we'd have long been completely over.
We still can't get people to stop hardcoding socket(AF_INET, ...) or manually crafting sockaddr_in structures. This is the minimum amount of work that will always be needed, regardless of how many extra bits are involved, and even this part hasn't been quick.
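For example, the family-agnostic way to write this in Python is to let getaddrinfo choose the socket family instead of hardcoding AF_INET; a minimal sketch of that pattern:

```python
import socket

def connect(host: str, port: int) -> socket.socket:
    """Connect without hardcoding AF_INET or building sockaddr_in by
    hand: getaddrinfo returns whatever families the host resolves to,
    and we try them in order."""
    err = None
    for family, type_, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, type_, proto)
        try:
            s.connect(sockaddr)
            return s
        except OSError as e:
            s.close()
            err = e
    raise err or OSError("no addresses found")
```

The same code then works unchanged for v4, v6, or any hypothetical bigger address family, which is exactly the property the hardcoded version lacks.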
v4-mapped v6 (rfc4038), not 6to4, right? That was only a transition feature, not how v6 was rolled out by default; I don't even have such an address. You enable v6 and it suddenly means you're reachable at the new addresses, typically assigned via SLAAC. Also, I don't know if v4-mapped was ever meant to be split beyond /32s.
But like the similar proposals, it fails to avoid a dual-stack scenario.
Note that going from v4 to v6 also doesn't mean changing all your routes and addresses; you just enable it and everything still works. Network components can get updated behind the scenes without user impact. There's no flag day though; ISPs can start using the new address space as soon as they want, rather than being forced to wait until everybody else is ready.
You do have to change everything to actually use v6 rather than having v4 on the side. Re: flag day, I guess you could use the new address space of v4x before everyone is ready, provided the other side supports it.
Having partial deployment of IPv4x is no different than having partial deployment of IPv6: you have islands of it and have to fall back to the 'legacy' IPv4-plain protocol when the new protocol fails to connect:
Getting "free" new-protocol addresses is also nothing new:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
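That derivation is mechanical; a quick sketch with Python's ipaddress module reproduces the RFC's own example:

```python
import ipaddress

def six_to_four_prefix(v4: str) -> ipaddress.IPv6Network:
    """Append the 32 bits of a public v4 address to 2002::/16,
    giving the /48 6to4 prefix for that host/network."""
    bits = int(ipaddress.IPv4Address(v4))
    # 2002 in the top 16 bits, the v4 address in the next 32.
    return ipaddress.IPv6Network(((0x2002 << 112) | (bits << 80), 48))

print(six_to_four_prefix("192.0.2.4"))  # 2002:c000:204::/48
```

(Python prints the compressed form, 2002:c000:204::/48, which is the same prefix as the 2002:c000:0204::/48 in the quote.)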
Ok, I see what you mean then by 6to4 already supporting what v4x proposed, but that path wasn't taken either. It's a special feature with rare usage rather than the minimum default way of doing v6.
We did take that path. If you zoom in on the start of https://www.google.co.uk/intl/en/ipv6/statistics.html you can see that 6to4+Teredo was more common than native until 2010. (Okay, that's combined with Teredo, but I think probably 6to4 is the majority of it because Windows will use 6to4 without needing a registry change or custom application behavior.)
You see it as a special feature with rare usage because it competed with native and lost. People overwhelmingly preferred deploying v6 natively instead of deploying 6to4. Obviously more people would be using it if it was the only option, but if you're trying to come up with an alternative to the way v6 went then "an approach that v6 took but which nobody wanted to use when native v6 was the alternative" might not be a very promising place to start.
It's not like the v6 spec said "here are two options on equal footing, go figure out which is better". 6to4 was a temporary bolt-on. I had to look up what things were actually like in 2010 because I really don't remember getting a 2002: address, which brought me to rfc6343, which explains exactly why 6to4 suffered. A big part of it is that even a v6-enabled ISP didn't give you a 2002: prefix; instead you relied on anycast to reach a relay server. Why would an ISP or user ever want that mess? Either they do v6, or, more likely, they don't care and focus on v4.
So actually I take it back, 6to4 was pretty different from this ipv4x idea. I misunderstood and thought a v6-enabled ISP would give you a 2002: under 6to4, which you could later subdivide.
Ignoring that, suppose 6to4 had worked like v4x... The idea that those two years showed native was preferable: that verdict came from the small minority who even cared about v6 to begin with (0.25% of the Internet at the time). This was a primary, not a final.
It doesn't rely on anycast. The ISP gives you a v4 address from which you generate the 2002:: prefix, and when talking to another 6to4 network you send the packets directly from your v4 address to their v4 address. Inside those packets is an extra 80 bits of addressing that the source and destination networks can use.
Or, ISPs that want to do so can use one of their own v4 addresses to get the corresponding /48 6to4 prefix, split it up and hand that out to customers while handling the v4 packets themselves.
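Sketching that ISP-side variant in Python (192.0.2.4 is a documentation address standing in for one of the ISP's own v4 addresses): the /48 derived from a single v4 address subdivides into, say, 256 customer /56s:

```python
import ipaddress

# One of the ISP's own public v4 addresses (illustrative).
isp_v4 = int(ipaddress.IPv4Address("192.0.2.4"))
# The corresponding 6to4 /48: 2002::/16 plus the 32-bit v4 address.
isp_48 = ipaddress.IPv6Network(((0x2002 << 112) | (isp_v4 << 80), 48))

# Carve the /48 into 256 customer /56s; each /56 still holds 256 /64s.
customers = list(isp_48.subnets(new_prefix=56))
print(len(customers))   # 256
print(customers[0])     # 2002:c000:204::/56
print(customers[1])     # 2002:c000:204:100::/56
```

The ISP terminates the v4 tunnelling itself, so customers just see ordinary v6 prefixes.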
Looking at the 15 or so years afterwards shows native v6 deployment at 50% of the Internet, and approximately nobody interested in deploying 6to4 outside of these posts where people keep reinventing it but with different names and an assertion that if only the IETF had thought of this, everybody would have loved it. I think that's decent evidence that they wouldn't.
What you're describing is what rfc6343 called "router 6to4" as opposed to the anycast variant, but anycast was what got used. "In practice, there are few if any deployments of Router 6to4 following these recommendations. Mostly, Anycast 6to4 has been deployed." But even router 6to4 isn't easy like the v4x-like ideas: "not designed to be an unmanaged solution."
To whoever wishes the IETF had thought of ipv4x: I'll bet the IETF did think of it and decided they wanted to start with a clean slate in v6. I can understand why, in large part because of v4's fragmented routes, and also the /8 haves and have-nots. But v4x at least had a way to gradually defrag things once it was fully adopted. If you're suggesting staying on 6to4 forever, that would have meant continuing to use vanilla ipv4 between subnets.
Also, all those compromise ideas kept NAT around, at least on day one. Many v6 proponents seem to think your device on any random guest network should be able to receive inbound connections, enabling true P2P everywhere, which implies having no router firewall. Others recommend a router firewall. I'm not sure what the IETF's initial position was on this, but they later recommended that residential routers give the user a clear choice between closed and open. v4x with its NAT would just default to closed. In reality, I've never seen a consumer router clearly say what it does with inbound v6 traffic.
I don't think that RFC is quite correct. Both types are set up the same way (the only difference is whether the dest IP of a packet is 6to4 or native) and it was designed to be unmanaged in the scenario we're imagining here. Windows machines configured it automatically if they received a public IP, for example.
But if it is, it's basically saying that people really didn't want to take this approach.
> To whoever wishes the IETF had thought of ipv4x, I'll bet the IETF thought of it and decided they wanted to start with a clean slate in v6.
They obviously did think of it, since v4x and 6to4 take pretty much the same approach. Both of them put extra address bits into the start of the payload. Both of them indicate they've done this with some flag in the header. Both of them send packets through unmodified v4 routers. About the only difference is that v4x consumes the entire 128-bit address space while 6to4 sits in a /16.
Everything that you've said can be done in v4x can also be done with v6 and 6to4 (possibly combined with some of the other transition options), and it doesn't make it any easier either unless you handwave away a bunch of the work that needs to happen.
> But v4x at least had a way to gradually defrag things once it was fully adopted. If you're suggesting staying on 6to4 forever, that would've meant continuing to use vanilla ipv4 between subnets.
6to4 prefixes can be routed natively without needing to be tunnelled inside v4 packets, so you don't need to do that. That's how ISP-side 6to4 would work, but you could also do the same thing in the DFZ. If you're announcing v4 addresses you could also make a BGP announcement for the 6to4 prefixes that correspond to those v4 addresses, allowing packets sent to those prefixes to reach you natively instead of getting tunnelled. Presumably this is about the same as what you were imagining v4x would have done to get rid of vanilla v4.
I don't see how you could gradually defrag things in v4x, unless you include ways that would also work with 6to4.
> Many v6 proponents seem to think your device on any random guest network should be able to receive inbound connections, enabling true P2P everywhere, which implies having no router firewall
A few do, but it's rare. The main point is that the IP of a machine should be the same no matter where you are in the network, and nobody should be using clashing address spaces. None of that implies no firewall.
NAT features heavily in the v6 transition. It's how you handle connectivity to and from v4 hosts that can't do v6. There's just no need for it between two v6 hosts.
If your goal is just to roll out support for v6 addresses without actually using them, then you don't need to change anything about your network to do that. (Remember going from XP to Vista? That added support for v6 in Windows, but you didn't need to change your network for it.) This is the same as your described rollout of support for v4x.
Once the world is ready and your ISP actually issues you with v6, then you need to change some things (I would argue not everything). Again, this is the same as for v4x. The only difference is that in the v4x case, you put this part at the end and ignored it.
Ignoring the bit you don't like for one of them but not for the other isn't a fair comparison.
Yeah I'm with you on not needing to change much to have v6 without actually using it.
But when the world is actually ready for v6, you do need to change everything. And it's worse than that, the world only becomes ready by people changing everything before the world is ready. You said I put this part at the end of v4x, and yes that's the point, the ordering of changes matters.
So your suggested ordering is for everybody to add support for your v4x addresses in every OS, device, software, API, protocol etc, and then only allow them to make use of the extra address space after everybody else has also added support everywhere?
To be clear, you don't need to change much in order to have v4x without using the extra address space... but starting to use that extra space means changing everything in the same sense that starting to use the extra address space from v6 means changing everything.
At least with v6 you don't have to wait for the rest of the world to be ready. You can start using it straight away. What you're asking for would give people _less_ motivation to do the work, so the "the world only becomes ready" problem gets worse. How is this better?
To answer your first question, if both hosts understand v4x, you can still use the extra space before everyone else is using it. v6 has the same limitation except all routers in between also need to understand it, unless you've set up 6to4 which is more configuration. But this isn't even the main issue.
How is v4x easier? Because from many parties' perspectives (end users, ISPs, hosts), there is little to no change. Yes, whoever makes the router or OS has to deal with the new protocol, but then when you buy a new router or update your OS, it supports and truly uses v4x without you needing to configure anything. Unlike v6, where even if you don't make a router or OS, you very much have to deal with it as a sysadmin or maybe even as an end user.
How is v4x more motivating? Well, to most people it's not; it's just easier. Those who already own large v4 blocks might be more motivated to support v4x than v6, but that's a double-edged sword.
I'm not saying it's simply better though. There are downsides. It's better if your only goal is to extend the address space.
Why do you think sysadmins and end users wouldn't need to deal with v4x?
I understand it in the situation where you aren't using the expanded address space, because then you can just present v4 addresses to the sysadmin/user and they don't need to do anything differently. But... then you don't get any expanded address space.
If you want to actually use the expanded address space, then they need to deal with it in the same ways and for the same reasons they need to deal with v6. There doesn't appear to be any aspect of v4x that would alter or avoid that need.
v4 doesn't even manage one IP per person. It's fundamentally completely insufficient in a world with personal computing devices. Even if you declared every single IP in v4 to be wasted and demanded we repurposed them all, it wouldn't be enough to fix that.
The multicast and reserved blocks total 32 /8s. Before IANA runout, we were going through over one /8 per month, so this would represent less than 2.5 years of allocations. We've already spent decades buying more time for people to migrate to v6; we don't need another 2.5 years that people will just immediately squander.
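The arithmetic, for anyone checking (the roughly-one-/8-per-month burn rate is the figure from the parent comment):

```python
# Reclaiming the multicast (224.0.0.0/4) and reserved (240.0.0.0/4)
# blocks frees 16 + 16 = 32 /8s.
reclaimed_slash8s = 16 + 16
addresses = reclaimed_slash8s * 2**24  # ~537M addresses total

# At exactly one /8 per month this is 32 months; the actual pre-runout
# rate was a bit over one per month, hence "less than 2.5 years".
years = reclaimed_slash8s / 12
print(addresses, round(years, 1))  # 536870912 2.7
```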
NAT is not in any way a feature. I'll admit it can be a useful tool in your toolbox sometimes, but otherwise it's just a completely unnecessary complication that breaks things and wastes time and effort. It's not something to be building the Internet on. You want each device to get an IP and you don't want two devices to have the same IP, because that's how machines on the Internet send packets to each other -- which is the entire point of having the Internet at all.
> All of "the global internet" is in 2002::/16, which effectively gives 32 bits of assignable space. Exactly the same as IPv4 [...] the assumption that all routable prefixes are in 2001::/16 as specified
Global allocations are actually coming from the entirety of 2000::/3, not 2002::/16 or 2001::/16 (and there are another five untouched /3s in case we need them). So far about 0.2% of it has been allocated to RIRs, and most of that RIR space has not yet been allocated to anybody. We're clearly not exhausting it at anything like the rate of v4.
v6 is less complex than v4 in practice due to not needing NAT, and gains us far more than you're thinking. A /48 contains 65k subnets of effectively infinite hosts each, which is similar to a /8 but with no limit on hosts per network, and there are something on the order of a trillion of them in total rather than 256 of them.
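The numbers behind that, assuming global unicast allocations come out of 2000::/3:

```python
# /64 subnets inside one /48, and /48s inside 2000::/3.
subnets_per_48 = 2 ** (64 - 48)
slash48s = 2 ** (48 - 3)
print(subnets_per_48)   # 65536 subnets, each with 2^64 host addresses
print(slash48s)         # 35184372088832, i.e. tens of trillions of /48s
```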
What you're describing there is just an approach to store NAT state inside every packet instead of on the router. I'm not sure that's even an improvement on v4, but in any case it wouldn't increase the size of the address space so it wouldn't help with the one thing driving the need for IPv6.
IPv4 will be with us for a long, long time. My point is that we're stuck with the combination in some form or another and it didn't have to be that way.
Embedding the address extension using the extension mechanism built into IPv4 would have allowed for a true upgrade and a single IP address space. Two nodes with eight-byte IP addresses would exchange packets without any rewriting. That's an address-space expansion, not just a weird way to do NAT. With perfect foresight, I have no doubt the IETF could have embraced NAT as a transition technology quickly obsoleted by broad adoption.
There is no extension mechanism built into v4 for longer addresses.
Of course v4 is going to be with us for a long time. We can't make existing v4-only devices go away because we have no way to enforce a flag day on the Internet, so there was never any way around that. But you can run networks without v4 just fine if you want to (including the ability to connect to v4-only hosts, or let them connect to you), so you can in fact turn v4 off on your network.
> That's an address-space expansion, not just a weird way to do NAT
You said the server would have a proper v4 address and the client would put its RFC1918 address into an extension header. You don't get any extra address space that way. But if you do try to give nodes 8-byte addresses... you immediately get the same situation we have in v6, because nodes and software that only know how to deal with 4-byte addresses won't know how to deal with your 8-byte addresses. You'll end up having to use the same approaches v6 uses to deal with the same problem.
> I have no doubt the IETF could have embraced NAT as a transition technology quickly obsoleted by broad adoption
What do you think RFC 2765/2766 are about? Or RFCs 6144~6147?
Two 6to4 networks will communicate directly between each other without using a relay, so it will still work for that. Although you ought to be able to use native v6 these days.
If you can't deploy v6 (whether native or 6to4) on the remote side for whatever reason, NAT64 is useful for dealing with conflicting RFC1918. You map each instance of RFC1918 you need to access into different v6 /96s, and then they don't conflict from your perspective. (But like NAT44, it only works for outbound connections; inbound ones need a port forward.)