I’d like a turnkey k3s setup and a 10” rack designed for consumers, set up to host your Minecraft server, store your media, and be incrementally upgradeable.
Setting aside the address scarcity issue, how is IPv6 going to simplify the routing table? If anything, wouldn't it just be an explosion in the number of addresses?
I mean a million is objectively a large number if it's all on paper, but to me, that's not a particularly large data set for talking about the entire freaking internet.
And how cheap of an SoC can handle that in memory? A better question might be whether you could even make a system on a chip that couldn't handle that much memory.
The small ISP that serves my home has six IPv4 prefixes and one IPv6 prefix.
The small hosting provider I use has, I think, 7 v4 prefixes, but it could be one v6 prefix (if they supported v6, which they sadly don't). Maybe not --- a lot of their /22s are advertised as four /24s to allow a DDoS mitigation provider to attract traffic when needed; but it'd probably still be fewer prefixes with v6.
Not every ASN looks the same, but many of them would advertise a lot fewer prefixes if they could get contiguous addresses; it's just not possible/reasonable to get contiguous allocations for v4.
Since the routing table is organized around prefixes, if there is a complete migration, the routing table will probably be smaller.
Huh
A single prefix is easier on the router than a dozen.
I should hope so?
Isn’t this kind of like saying the grade 1 math test is easier than the grade 12 math test?
The thing is that the abundance of IPv6 addresses enables fewer prefixes to be used, by allowing addresses to be allocated in much larger chunks.
For instance, Comcast (AS 7922) owns about 2^26 IPv4 addresses, distributed across 149 different prefixes. Almost all of these prefixes are non-contiguous with each other, so they each require separate routing table entries. Comcast can't consolidate those routes without swapping IP address blocks with other networks, and it can't grow its address space without acquiring new small blocks. (Since no more large blocks are available, as this article discusses.)
In contrast, Comcast owns about 2^109 IPv6 addresses, which are covered by just 5 prefixes (two big ones of 2^108 each, and three smaller ones). It can freely subdivide its own networks within those prefixes, without ever running out of addresses, and without having to announce new routes.
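To make the aggregation point concrete, here's a minimal Python sketch using the standard-library ipaddress module. The prefixes are documentation ranges chosen purely for illustration, not anyone's real allocations: contiguous blocks collapse into a single routing table entry, while scattered blocks can't be merged at all.

```python
import ipaddress

# Contiguous allocations aggregate into a single announcement:
contiguous = [ipaddress.ip_network(p)
              for p in ("203.0.113.0/25", "203.0.113.128/25")]
print(list(ipaddress.collapse_addresses(contiguous)))
# -> [IPv4Network('203.0.113.0/24')]   one routing table entry

# Scattered blocks share no common prefix, so nothing can be merged:
scattered = [ipaddress.ip_network(p)
             for p in ("192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24")]
print(list(ipaddress.collapse_addresses(scattered)))
# -> three separate entries, no aggregation possible
```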
The theory might be that an organisation would end up advertising a single prefix, rather than whatever they have now (say 40 networks with various prefixes).
It's not just any memory. When it comes to core infrastructure routers, those routes need to fit into specialized and expensive CAM (Content Addressable Memory) to do the lookups in hardware. And on every single one.
Right but that's still not really answering his question. Sure, the constant factor is higher for router TCAM memory. Still: you can sum this post up as "in the late 1990s, tier-1 carriers filtered advertisements for all but the 'swamp' range down to /19s or smaller prefixes; now everything is the 'swamp'". Why is that?
Because IPv4 address scarcity means small blocks get sold as they are available to people in completely different parts of the Internet. With IPv6 the address space is so large that they can easily keep the blocks in one piece.
No, obviously, I get that (we buy a lot of IPv4 space --- and I'm actually happier with the current regime than I was with the "supplicate to ARIN" regime). I'm just wondering what technologically happened to make universal /24 advertisements fine. I assume it's just that routers got better.
The transition to 7200 VXRs as core routers really hit a tipping point around 2000. They could handle millions of entries in their FIBs, which really relieved the pressure. Subsequent devices had to match that.
On the IPv6 side: by 2002, nobody was really experimenting with A6 records any more, and EUI-64 was needless. Both were parts of IPv6 designed to facilitate "easy" renumbering, so that single prefixes could be replaced with larger ones. But the ISPs weren't complaining any more about table size.
It's interesting to consider that the IPv4 address space is only 32 bits wide. Back in the early 2000s, asking for 4GB of RAM was unthinkable, but today (well, last year) that's not even a big ask. If your routing decision can fit in a single byte (which interface to use next), you could load the entire thing as a 4GB table easily; 8GB if you need two bytes for the next hop. Multicast might be a problem, but since multicast doesn't work on the backbone anyway, I think we can ignore it.
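As a rough sketch of that idea (assuming a 64-bit machine with more than 4 GB of free RAM; the function names are made up for illustration), the whole forwarding decision collapses to a single array index:

```python
import ipaddress

# One byte of next-hop/interface index per IPv4 address: 2^32 bytes = 4 GiB.
table = bytearray(2**32)

def install_route(prefix: str, next_hop: int) -> None:
    """Write the same next-hop index into every address the prefix covers.
    (In this naive sketch, more-specific routes must be installed after
    less-specific ones so they overwrite the broader entry.)"""
    net = ipaddress.ip_network(prefix)
    start = int(net.network_address)
    table[start:start + net.num_addresses] = bytes([next_hop]) * net.num_addresses

def lookup(addr: str) -> int:
    """Longest-prefix match becomes a single array index into the flat table."""
    return table[int(ipaddress.IPv4Address(addr))]

install_route("198.51.100.0/24", 7)
print(lookup("198.51.100.42"))  # -> 7
```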
> I'm just wondering what technologically happened to make universal /24 advertisements fine. I assume it's just that routers got better.
Routers had to get better (more TCAM capacity) because there wasn't much choice. Nobody wants to run two border routers, each with the table for half the /8s, or something terrible like that. And you really can't aggregate /24 announcements when consecutive addresses are unrelated.
The issue is that in the default-free zone, every peer that gives you a full table gives you about 1 million routes. And core infrastructure doesn't get refreshed every 5 years, from what I've heard...
> Unfortunately it means those languages will be the permanent coding platforms.
Not really.
I suspect training volume plays a role in debugging a certain class of errors, so there is an advantage to Python/TS/SQL in those circumstances: if, as an old boss once told me, you code by the bug method :)
The real problems I've had that hint at training data vs logic have been with poorly documented old versions of current languages.
To me, the most amazing capability is not the code they generate but the facility for natural language analysis.
My experience is that agent tools enable polyglot systems, because we can now use the right tool for the job, not just the most familiar one.
WML/WAP got a bad rap, I think, largely because of the way it was developed and imposed/introduced.
But it was not insane, and it represented a clarity of thought that then went missing for decades. Several things that were in WML are quite reminiscent of interactions designed in web components today.
Game graphics are still a high margin silicon business. Someone will do it.
Frankly, the graphics chops are plenty strong for a decade of excellent games. The big push in the next couple of decades will probably be AI-generated content to make games bigger, more detailed, and more immersive.
Wasn't there a Google cross-app logging framework and request-tracking project 15 years ago?
Did Grafana die when I wasn't looking? Does Datadog still make money?
What's weird about this article is that it's the same thing that was being said 20 years ago. Is this a sign of people not learning from the better parts of Java deployment stacks?
We simply need an engineering generation of 50-mile-range PHEVs. It would electrify a huge percentage of low-efficiency driving, wouldn't be too big a burden on the grid, would educate more people on EV-style driving, would add regen braking, and should still be able to provide high-torque towing and driving.
Yeah, Ford makes the F-150 PowerBoost, which is a hybrid version, but with no plug-in capability. I'd love to see a 50-mile plug-in hybrid version of their truck line (Maverick, Ranger, F-150).
In theory, AI should be good at helping build interfaces between cloud backups and home server apps, because AI should be good at APIs.
In theory