IPv6's failure was mostly caused by the IETF's ivory tower dwellers, who seem to generally have no practical experience or understanding whatsoever of how networks are actually built and run today, especially at the small to mid scale.
Small site multihoming, for example, is an absolute disaster. Good luck if you're trying to add a cellular backup to your residential DSL connection.
IETF says you should either have multiple routers advertising multiple provider-assigned prefixes (a manageability nightmare), or that you should run BGP with provider independent address space; have fun getting your residential ISP or cellular carrier onboard with this idea.
IETF has a history of being hostile to network operators. I mean actual network operators - not the people who show up at conferences or work the mailing lists and just happen to get a paycheck from a company that runs a network (with zero production access, not on call, not directly involved in running shit). It's gotten better in the last few years in certain areas (and credit to the people who have been willing to fight the good fight). But it's very much a painful experience where you see good ideas shot down and tons of people who want to put their fingerprint on drafts/proposals - it's still a very vendor-heavy environment.
Even the vendor representatives are mostly getting paid to post on mailing lists and show up at conferences.
They're not building products, and they're not supporting, visiting or even talking to their customers. Design-by-committee is a full time job that people actually building things for a living tend to not have time for.
The fact is that already in 1993 routing tables were too big; a "flat" address space was always going to mean huge routing tables; and because IPv6 is still "flat", routing tables have only grown larger.
The fix would have been to have a subset of the address space that is routed as usual for bootstrapping (e.g., the router address -> AS number mapping), and then do all other routing on the basis of AS numbers _only_. This would have allowed us to move prefix -> AS number mappings into... well, DNS or something like it (DNS sucks for prefix mapping, but it could have been extended to not suck for it), and all routing would be done based on AS numbers, making routing tables in routers _very small_ by comparison to now. Border routers could then have had tiny amounts of RAM and worked just fine. IP packets could have carried AS numbers in addition to IP addresses; routers in the middle would use only the AS numbers, and the routers at the destination AS would know the routes to the destination IPs.
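To make the idea concrete, here's a toy sketch (all names and numbers hypothetical, with a dict standing in for the "DNS or something like it" mapping service): edge routers resolve prefix -> AS once on ingress, and the core forwards on a table keyed only by AS number.

    import ipaddress

    # Hypothetical mapping service (the "DNS or something like it"):
    # prefix -> origin AS. Only consulted at the edge, on ingress.
    PREFIX_TO_ASN = {
        ipaddress.ip_network("2001:db8:a::/48"): 64500,
        ipaddress.ip_network("2001:db8:b::/48"): 64501,
    }

    # Core routing table: one entry per AS number instead of one per
    # prefix, which is what would keep border routers' RAM tiny.
    ASN_NEXT_HOP = {64500: "core-east", 64501: "core-west"}

    def ingress(dst):
        """Edge router: stamp the packet with the destination AS;
        core routers then switch on the AS number alone."""
        addr = ipaddress.ip_address(dst)
        for prefix, asn in PREFIX_TO_ASN.items():
            if addr in prefix:
                return asn, ASN_NEXT_HOP[asn]
        raise LookupError("no prefix -> AS mapping for " + dst)

    print(ingress("2001:db8:a::1"))  # -> (64500, 'core-east')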
But, no. Great missed chance.
Well, we still could do this with IPv6, but it would be a lot of heavy lifting now.
EDIT: Ah, I see draft-savola-multi6-asn-pi existed.
> a cellular backup to your residential DSL connection
Hmm, what's the problem? I suppose your home devices should never be exposed to the public internet anyway, and should only be accessible via a VPN like WireGuard. NAT64 is a thing if your home network is IPv4.
BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
> BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
Yes, an interface can have two separate IPv6 addresses, but that doesn't make it easy.
If you do the easy and obvious thing of setting up two routers that each advertise their own prefix with your preferred priority when they're available (and advertise it as unavailable when they're not), your devices will likely configure themselves with addresses on both prefixes, which is great.
Then when you open a new TCP connection (for example), they'll pick a source address more or less randomly... There's an RFC rule (RFC 6724) to select the source address sharing the longest matching prefix with the destination address, which is useful if the match is pretty long, but not so useful when the choice is between 2001:: and 2602::.
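To illustrate with made-up prefixes, here's a quick check of that longest-matching-prefix rule; both candidate sources share only a few leading bits with a typical remote destination, so the rule picks essentially at random:

    import ipaddress

    def common_prefix_len(a, b):
        # Number of leading bits two IPv6 addresses share.
        x = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
        return 128 - x.bit_length()

    # One configured source address per upstream prefix (made up):
    sources = ["2001:db8:aaaa::10", "2602:db8:bbbb::10"]
    dest = "2a00:1450:4001::8a"  # some unrelated remote host

    for src in sources:
        print(src, "->", common_prefix_len(src, dest), "matching bits")
    # Both print 4: neither source is a meaningfully better match,
    # so the "wrong" uplink's prefix gets chosen half the time.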
Anyway, once the source address is selected, the machine will send the packet to whichever router most recently sent an announcement. Priorities only count among prefixes in the same announcement. If you manage to get a connection established, future packets will use the same source address, but will be sent as appropriate for the most recently received advertisement.
This is pretty much useless; if you want it to work well, you're better off with NAT66 and a smart NAT box.
This is so, and it's the same if you use IPv4. IPv6 does not bring any regression here; sadly, no progress either. If you have a server that listens to requests though, such as an HTTP server, I don't see how this setup would be grossly inadequate for the purpose.
I would experiment with advertising two default routes, one with a significantly higher metric than the other. Most or all outgoing traffic would then go through one link. If you want to optimally load both uplinks, you likely need a more intelligent (reverse) load balancer.
> If you have a server that listens to requests though, such as an HTTP server, I don't see how this setup would be grossly inadequate for the purpose.
That's the problem. It sounds like it would work if you do this. The documentation suggests multihoming like this would work. When your server gets a request, it sends back the response from the address it received on... but the problem is which router it sends to; when it sends to the correct router, everything is good, and when it sends to the wrong router, that router's ISP should drop the packets, because they come from a prefix it doesn't know about.
> I would experiment with advertising two default routes, one with a significantly higher metric than the other.
Sounds like it would work, but as far as I've found, the priority metric only counts if the prefixes are in the same advertisement. If each router advertises its own prefix, what actually happens is that the most recent advertisement wins as the default route.
As I recall, I tried Windows, Linux, and FreeBSD and it was circa 2020. 25 years in, bad OS support for a supposed feature means the feature doesn't work.
> BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
Because it breaks your network when that router goes away. Your switch ACLs, firewall rules, and DNS records all become invalid because they contain addresses that no longer exist, that your devices continue trying to reach anyway.
Ah, I understand what you likely mean saying "small site multihoming": not a Web site (where it would be trivial), but e.g. a small office.
But with multi-homing you would need to actively test which of your uplinks has Internet access anyway, wouldn't you? And you would have to react somehow when one of your uplinks goes down.
It's easiest to do by abstracting your site away. Make it use a LAN, and do port-forwarding and proxying through a box that knows about the multiple uplinks, and handles the switch-over when one of them goes down. I don't see how it might be easier with IPv4 than with IPv6.
I still assume that you don't want the internals of your office network directly accessible via the public Internet, even when you easily can; VPNs exist for a reason.
In the IPv4 world, it's easy. Just use NAT, and forward everything over your preferred bearer. Have your router ping 8.8.8.8 or something periodically from that WAN interface to verify reachability. If your preferred link goes down, make your backup link the primary route, clear your NAT translation table, and your local devices remain mostly oblivious that anything happened.
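A bare-bones sketch of that watchdog, assuming a Linux box as the router (the interface names, gateways, and probe target are placeholders; real router firmware does the same thing natively):

    import subprocess, time

    PRIMARY = ("ppp0", "192.0.2.1")     # hypothetical DSL iface, gateway
    BACKUP = ("wwan0", "198.51.100.1")  # hypothetical cellular iface, gateway
    PROBE = "8.8.8.8"

    def link_up(iface):
        # Probe reachability out of a specific WAN interface.
        return subprocess.run(
            ["ping", "-c", "3", "-W", "2", "-I", iface, PROBE],
            stdout=subprocess.DEVNULL).returncode == 0

    current = None
    while True:
        want = PRIMARY if link_up(PRIMARY[0]) else BACKUP
        if want != current:
            iface, gw = want
            subprocess.run(["ip", "route", "replace", "default",
                            "via", gw, "dev", iface])
            # Clear NAT state so existing flows re-bind to the new path.
            subprocess.run(["conntrack", "-F"])
            current = want
        time.sleep(30)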
> It's easiest to do by abstracting your site away. Make it use a LAN, and do port-forwarding and proxying through a box that knows about the multiple uplinks, and handles the switch-over when one of them goes down. I don't see how it might be easier with IPv4 than with IPv6.
In the IPv6 world, this is pretty much what you have to do. A whole lot of extra complexity and expense that you didn't have previously.
Extra complexity and expense? You're describing basically the same thing they are. A router that does NAT and decides which link to send the packets over based on connection testing.
And IPv6 has the benefit of a significantly simpler 1:1 NAT.
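For what it's worth, the translation itself really is simple; here is a toy version of the NPTv6 idea in a few lines (real RFC 6296 NPTv6 additionally adjusts one 16-bit word so transport checksums stay valid, which this sketch skips):

    import ipaddress

    def npt66(addr, inside, outside):
        # Rewrite the routing prefix of addr from `inside` to `outside`,
        # keeping the host bits; a stateless 1:1 mapping, no port games.
        inner = ipaddress.ip_network(inside)
        outer = ipaddress.ip_network(outside)
        assert inner.prefixlen == outer.prefixlen
        host_bits = 128 - inner.prefixlen
        host = int(ipaddress.ip_address(addr)) & ((1 << host_bits) - 1)
        return str(ipaddress.ip_address(int(outer.network_address) | host))

    # e.g. translate a ULA-numbered host to the ISP-assigned prefix:
    print(npt66("fd00:aaaa::42", "fd00:aaaa::/48", "2001:db8:1::/48"))
    # -> 2001:db8:1::42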
NPTv6 is rarely used, and so its real world implementations tend to be poorly tested and buggy.
The answer in this case ends up being solutions like explicit web proxies, or alternatively a VPN concentrator or the like from which you can receive a routable prefix delegation, and then run multiple tunnels to satisfy your own availability or policy routing needs. Either way, you’re building some complex infrastructure to overcome regressions imposed upon you at layer 3.
You should be using dynamic DNS, and firewall rules should be on the subnet boundary in this scenario; any decent firewall (including the free pfSense/OPNsense) supports ACLs that follow IPv6 address changes.
That doesn't solve the problem. DNS remains broken until each and every device, assuming VERY generously that it is capable of dynamic DNS at all, realises that one of its prefixes has disappeared and updates its DNS records. With DNS TTLs and the common defaults for prefix and router lifetimes (the RA router lifetime defaults to about 30 minutes; the RFC 4861 default prefix valid lifetime is 2,592,000 seconds, i.e. 30 days), that can take anywhere from 30 minutes to 30 days.
> and firewall rules should be on the subnet boundary in this scenario; any decent firewall (including the free pfSense/OPNsense) supports ACLs that follow IPv6 address changes.
This requires you to assign one VLAN per device, unless perhaps you've got lots of money, space, and power to buy high end switches that can do EVPN-VXLAN so that you can map MAC addresses to SGTs and filter on those instead.
> each and every device ... updates its DNS records.
What device on your office LAN should maintain its own DNS records? Advertise your own caching DNS server over DHCP(6), give its responses a short TTL (10 sec), make it expire the relevant entries, or the whole cache, when one of your links goes down. I suppose dnsmasq should handle this easily.
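A crude sketch of that trigger, assuming dnsmasq on a Linux router (interface name hypothetical); dnsmasq clears its cache on SIGHUP, so the watcher only needs to notice the link-state change:

    import pathlib, subprocess, time

    WAN = "ppp0"  # hypothetical uplink interface
    last = None

    while True:
        state = pathlib.Path(
            "/sys/class/net/%s/operstate" % WAN).read_text().strip()
        if state != last:
            # Uplink came up or went down: make dnsmasq forget answers
            # that may reference the now-unreachable prefix.
            subprocess.run(["pkill", "-HUP", "dnsmasq"])
            last = state
        time.sleep(5)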
It seems that the discussion turned away from a multi-homed setup (pooling the bandwidths of two normally reliable links) to an HA/failover setup (with two unreliable links, each regularly down).
It either needs to be able to update DNS by itself (a la Active Directory), or it needs to be able to give the DHCP server a sensible hostname in order for DHCP to make this update on its behalf, which most IoT devices cannot.
The amount of ignorance in these IPv6 posts is astounding (there seems to be one every two months). It isn't hard at all. I'm just a homelabber and I have a dual-stack setup for WAN access (an HE tunnel is set up on the router since Bell [my ISP] still doesn't give IPv6 addresses/prefixes to non-mobile users), but my OpenStack and Ceph clusters are all IPv6-only, and it's easy peasy. Plus subnetting is a heck of a lot less annoying than with IPv4, not that that was difficult either.
“it’s easy peasy” says guy who demonstrably already knows and has time to learn a bunch of shit 99.9% of people don’t have the background or inclination to.
People like you talking about IPv6 have the same vibe as someone bewildered by the fact that 99.9% of people can’t explain even the most basic equation of differential or integral calculus. That bewilderment is ignorance.
"The shit about IPv4" was easy to learn and well documented and supported.
"The shit about IPv6" is a mess of approaches that even the biggest fanboys can't agree on and are even less available on equipment used by people in prod.
IPv6 has failed to achieve wide adoption in three decades; calling it "easy" is outright denying the reality, and shows the utter obliviousness of the people trying to push it while failing to realize where the issues are.
Could you share a list of IPv6 issues that IPv4 does not exhibit? Something that becomes materially harder with IPv6? E.g., "IPv6 addresses are long and unwieldy, hard to write down or remember". What else?
Traffic shaping in v6 is harder than v4. At least it was for me, because NDP messages were going into the shaping queue, but then getting lost, since the queue only had a 128-bit address field, and 128 bits isn't actually enough for link-local addresses (they also need an interface index). When the traffic shaping allowed traffic through immediately, the NDP traffic would be sent, but if it needed to be queued, the adapter index would get lost (or something) and the packets disappeared. So I'd get little bursts of v6 until the NDP entries timed out, and small queues meant a long time before it would work again.
Not an issue in IPv4, because ARP isn't IPv4, so IP traffic shaping ignores it automatically.
Software support is a big one. I ran pfSense. It did not support changing IPv6 prefixes. It still barely does. So something as simple has having reliable IPv6 connectivity and firewall rules with pfSense was impossible just a few years ago for me.
Android doesn't support DHCPv6, so I can't tell it my preferred NTP server, and Android silently ignores your local DNS server if it is advertised with an IPv4 address while the Android device got an IPv6 address.
Without DHCPv6, dynamic DNS is required for all servers. Even a 56-bit prefix is too much to remember, especially when it changes every week. So then you need to install and configure a dynamic DNS client on every server in your network.
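In practice that means every server running something like the following on a timer (the zone, server address, interface, and record name are all placeholders; a real setup would also use TSIG keys):

    import subprocess

    HOST = "server1.example.internal."  # hypothetical AAAA record
    DNS_SERVER = "192.0.2.53"           # hypothetical DNS server

    def current_global_v6(iface="eth0"):
        # First global-scope IPv6 address on the interface.
        out = subprocess.run(
            ["ip", "-6", "-o", "addr", "show", "dev", iface,
             "scope", "global"],
            capture_output=True, text=True)
        return out.stdout.split()[3].split("/")[0]

    # RFC 2136 dynamic update via nsupdate: replace the AAAA record.
    addr = current_global_v6()
    script = ("server %s\n"
              "update delete %s AAAA\n"
              "update add %s 300 AAAA %s\n"
              "send\n") % (DNS_SERVER, HOST, HOST, addr)
    subprocess.run(["nsupdate"], input=script, text=True)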
"I already know enough to be productive, can the rest of the world please freeze and stop changing?"
This is not even that unreasonable. Sadly, the number of IP devices in the world by now far exceeds the IPv4 address space, and other folks want to do something about that. They hope the world won't freeze but would sort of progress.
Network engineering is a profession requiring specific education. At a high level it’s not different from calculus. You learn certain things and then you learn how to apply them in the real life situations.
It’s not hard for people who get an appropriate education and put some effort into it. Your lack of education is not my ignorance.
The company where I work has deployments across the world with a few hundred thousand hardware hosts (in datacenters), VMs, and containers, plus deployments in a few clouds; also a bunch of random hardware from a multitude of vendors, multiple lines linking datacenters and clouds, and some lines to more specific service providers that we use.
All of it is IPv4 based. IPv6 is maybe in the distant future, somewhere on the edge, in case our clients demand it.
I find this completely fine. I don't see much (if any) upside in migrating a large existing network to anything new at all, as long as the currently deployed IPv4 is an adequate solution inside it (and it obviously is).
Public-interfacing parts can (and should) support IPv6, but I don't see much trouble exposing your public HTTP servers (and maybe mail servers) using IPv6, because most likely your hosting / cloud providers do 99.9% of it already, out of the box (unless it's AWS, haha), and the rare remaining cases, like, I don't know, a custom VPN gateway, are not such a big deal to handle.
I ran the network team at an organization with hundreds of thousands of hardware hosts in tens-of-megawatts data centers, millions of VMs and containers, links between data centers, and links to ISPs and IXes. We ran out of RFC 1918 addresses around 2011-2012 and went IPv6-only. IPv4 is delivered as a service to nodes requiring it via an overlay network. We intentionally simplified the network design by doing so.
I should have been gentler and less arrogant, yes. Sincerely though, please explain how IPv6 is in any way more difficult than a properly set up IPv4 enterprise. What tools are not available?
I left my job as an NE/architect over 15 years ago, but the show stopper back then revolved around how to handle routing with firewalling. Firewalling was the biggest roadblock due to needing traffic symmetry. I'm doing my best to remember why we stopped at just providing v6 at the edge for site-specific Internet-hosted services and never pushed it further.
Mind you, our team discussed this numerous times over a few years and never came up with a solution that didn't look like it would require us to completely fork-lift what we were doing. The whole team was FOR getting us to v6, so there was no dogmatic opposition.
Consider this:
25k employee company. Four main datacenter hubs spread out across the USA with 200 remote offices evenly dual-homed into any two of the four.
All four of the DCs had multi-ISP Internet access, advertising their separate v4 blocks and hosting Internet services. The default route was redistributed into the IGP from only two locations, sites A and B; i.e., two of the four DCs were egress for Internet traffic from the population of users and all non-Internet-facing servers. IGP metrics were gently massaged so as to fairly equally use both sites.
All outbound traffic flowed naturally out of the eastern or western site based on IGP metrics. This afforded us a tertiary failover for outbound traffic in the event that both of the Internet links into one of the two egress sites were down; e.g., if both of site A's links (say, Level 3 and AT&T) were down, the route through site A was lost, and all the egress traffic was then routed out site B (and vice versa). This worked well with IPv4 because we used NAT to masquerade all the internal v4 space as site X's public egress block, therefore all the return traffic was routed appropriately.
BGP advertisements were either as-path prepended or supernetted (don't remember which) such that if site A went down, site B, C, or D would get its traffic, and tunnel it via GRE to the appropriate DC hub's external segment.
The difficulty was that traffic absolutely had to flow symmetrically because of the firewalls in place, and easily could for v4 because NAT was happening at every edge.
With v6 it just didn't seem like there was any way to achieve the same routing architecture / flexibility, particularly with multi-homing into geographically disparate sites.
I'm not sure anymore where we landed, but I remember it being effectively insurmountable. I don't think it was difficult for Internet-hosted services, but the effort seemed absolutely not worth it for everything on the inside of the network.
I want to send my ssh traffic via my low-latency, reliable connection, and route my streaming via another connection. That's just a routing rule and srcnat in IPv4.
That’s before you go on to using PBR. I want to route traffic with different dscp via different routes.
Ultimately I want the routing to be handled by the network, not by the client.
Without NAT, my understanding is that the right way in v6 is to issue addresses from every upstream's prefix, and then send a message to each end device asking it to use a specific IP address to route traffic, and hope every client implements RFC 4191 the right way.
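For contrast, the IPv4 version of "ssh via the reliable link, everything else via the other" really is just a mark, a routing table, and a masquerade rule; a sketch on a Linux router (interface and gateway names are placeholders):

    import subprocess

    # Hypothetical uplinks: wan0 = low-latency DSL, wan1 = cellular.
    cmds = [
        # Mark ssh traffic from the LAN so it can be routed separately.
        ["iptables", "-t", "mangle", "-A", "PREROUTING",
         "-p", "tcp", "--dport", "22", "-j", "MARK", "--set-mark", "1"],
        # Marked packets consult table 100, whose default route is wan0.
        ["ip", "rule", "add", "fwmark", "1", "table", "100"],
        ["ip", "route", "add", "default",
         "via", "192.0.2.1", "dev", "wan0", "table", "100"],
        # srcnat per egress interface, so replies return on the same link.
        ["iptables", "-t", "nat", "-A", "POSTROUTING",
         "-o", "wan0", "-j", "MASQUERADE"],
        ["iptables", "-t", "nat", "-A", "POSTROUTING",
         "-o", "wan1", "-j", "MASQUERADE"],
    ]
    for c in cmds:
        subprocess.run(c, check=True)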
The "proper" way would be to get your own ASN and use BGP to route the traffic.
If you want to use a secondary WAN link as a backup for when the other goes down, you could have the backup link's router advertise itself with a lower priority. (So I guess hope everything implements RFC 4191, like you said.)
You can use NAT66/NPTv6 if you want (though it's icky I guess).
Nat 4, it’s trivial. But IPv6 tell me how terrible nat is despite it being the only solution in both the v6 and v4 world.
Sadly my 4G provider will not peer via BGP with me, even if I could provide an AS and a sufficiently large IP range.
I think my home ISP will actually peer with me, but I’d have to tunnel to them over my non-fibre connection, and there’s reduced resilience in that case.
At work that wouldn’t help at all, there are very few providers for many of our branch offices.
So once again IPv6 only works with "icky" NAT, or on simple 1990s-style connections, and not in the real world of multiple providers. Now sure, I can do NPT, which means I don't need to keep track of state, but then if I don't keep track of state I lose the benefits of a stateful firewall.
As such, the only benefit of NAT on v6 is that source ports will never need to change, even if client 1 and client 2 both send to server 1 port 1234 from source port 5555. This helps with a handful of crappy protocols which embed layer 4 data (the port number) in a layer 6 or 7 protocol.