Cables on overhead high voltage lines are mounted using stacks of ceramic insulators, but here they're seemingly just sleeved in some protection and hung on a tunnel wall. Why is that?
Overhead conductors use air as the insulator. Underground cables use an insulating jacket. In the past it was really difficult to build cables with voltage ranges in the 10s of thousands of volts without additional complexity like a dielectric oil being pumped through the cable. I think modern dielectrics are significantly better though.
Yeah, the wires in the new London tunnels are XLPE. Despite being first used in the late 60s, it took a long time to come into common use. Much of the surrounding infrastructure is still very old, though.
The cost of oil-insulated cables that can do 132kV is about £900 a metre. Whilst there are HV cables on the outskirts of London, they are much rarer in zones 1-3.
I assume that the cost of pylons with bare cables is much, much cheaper. The problem is planning permission and physical clearance, and no one wants to live near HV cables (that they know of, anyway. There are a bunch of 33kV cables buried outside posh people's houses in zone 5, and a bunch in canals.)
Overhead high voltage conductors are not insulated with a coating, probably for many reasons but certainly for cost and heat dissipation.
That means the path through the air to any conducting material needs a certain distance, and that holds even when wet or iced over or whatever else can happen up there.
Overhead lines need big ceramic stacks because the air is the insulation. In tunnels, the insulation is in the cable itself, and the tunnel just provides structure, cooling, and controlled geometry.
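As a very rough back-of-envelope (my own numbers, assuming dry air breaks down at about 3 kV/mm in a uniform field; real clearances are sized for lightning/switching impulses, rain and pollution, so they come out far larger):

    # Rough sketch only; 3 kV/mm is the usual textbook figure for dry air.
    line_to_line_kv = 132
    line_to_ground_kv = line_to_line_kv / 3 ** 0.5       # ~76 kV
    breakdown_kv_per_mm = 3
    min_dry_air_gap_mm = line_to_ground_kv / breakdown_kv_per_mm
    print(f"~{min_dry_air_gap_mm:.0f} mm of dry air at nominal voltage")  # ~25 mm, vs well over a metre in practice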
My immediate question is: if all of that was on-disk data duplication, why did it affect download size? Can't a small download be expanded into the optimal layout on the client side?
It didn't. They downloaded 43 GB instead of 152 GB, according to SteamDB: https://steamdb.info/app/553850/depots/ Now it is 20 GB => 21 GB. Steam is pretty good at deduplicating data in transit from their servers. They are not idiots that will let developers/publishers eat their downstream connection with duplicated data.
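As a rough sketch of why on-disk duplication doesn't have to inflate the download (this isn't Steam's actual depot/manifest format, just the general idea of chunk-level deduplication in transit):

    import hashlib

    CHUNK = 1 << 20  # 1 MiB; real content-delivery chunking schemes differ

    def transfer_sizes(paths):
        """Total bytes on disk vs. bytes that must cross the wire if
        identical chunks are only transferred once."""
        seen, total, unique = set(), 0, 0
        for path in paths:
            with open(path, "rb") as f:
                while block := f.read(CHUNK):
                    total += len(block)
                    digest = hashlib.sha256(block).digest()
                    if digest not in seen:
                        seen.add(digest)
                        unique += len(block)
        return total, unique

Whole-file copies and repeated asset blocks collapse to a single transfer; only the client-side write-out ends up at full size.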
Furthermore, this raises the possibility of a "de-debloater" that HDD users could run, which would duplicate the data into its loading-optimized form, if they decided they wanted to spend the space on it. (And a "de-de-debloater" to recover the space when they're not actively playing the game...)
> to recover the space when they're not actively playing the game
This would defeat the purpose. The goal of the duplication is to place the related data physically close together on the disk. Hard links, removing then replacing, etc. wouldn't preserve the physical spacing of the data, meaning the terribly slow read head has to physically sweep around more.
I think the sane approach would be to have an HDD/SSD switch for the file lookups, with all the references pointing to the same file in the SSD case.
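Roughly something like this, with made-up names and paths rather than any engine's real API:

    # Hypothetical asset lookup with an HDD/SSD switch; everything here is illustrative.
    DEDUPED = {"rock_diffuse": "shared/rock_diffuse.tex"}                    # single shared copy
    PER_LEVEL = {("level_03", "rock_diffuse"): "level_03/pack.bin:0x8F000"}  # duplicated copy packed beside its level

    def resolve(level, asset, storage):
        if storage == "ssd":
            return DEDUPED[asset]            # random reads are cheap, so share one copy
        return PER_LEVEL[(level, asset)]     # HDD: use the copy that sits next to the level data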
Sure, but defrag is a very slow process, especially if you're re-bloating (since it requires shifting things to make space), and definitely not something that could happen in the background, as the player is playing. Re-bloating definitely wouldn't be good for a quick "Ok, I'm ready to play!".
Depending on how the data duplication is actually done (with texture atlasing, for example, the actual bits can be very different after image compression), it can be much harder to do rote bit-level deduplication. They could potentially ship the code to generate all of that locally, but then they have to deal with a lot of extra rights/contracts to do so (proprietary codecs/tooling is super, super common in gamedev).
It's also largely because devs/publishers honestly just don't think about it; they've been doing it for as long as optical media has been prevalent (early/mid 90s). For the last few years devs have actually been taking a look and realizing it doesn't make as much sense as it used to, especially when, as in this case, the majority of the time is spent on runtime generation, or when they require a 2080 as minimum spec: what's the point of optimizing for one low-end component if most people running it are on high-end systems?
Hitman recently (4 years ago) did a similar massive file shrink and mentioned many of the same things.
Sure it can - it would need either special pre- and post-processing or lrzip ("long range zip") to do it automatically. lrzip deserves to be better known; it often finds significant redundancy in huge archives like VM images.
I came into the comments specifically to ask if this flag existed. I feel bad that the author developed this whole flow just because they didn't know about this, but that's pretty common with git.
Discoverability is a big problem, especially for CLI tools, which can't afford to show small hints or "what's new" popups. I myself learned it from someone else, not the docs.
update-refs only works in the narrow case where every branch starts from the tip of the previous one. Your helper might still be useful if it properly "replants" the whole tree while keeping its structure.
No, as far as I can tell, it's basically just doing update-refs. But in my defense, when I went looking for the option I discovered that, for some reason, my git manpages are from an old version that predates it.
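For anyone else landing here, assuming a reasonably recent git (2.38 or newer, if I remember right):

    git rebase -i --update-refs main              # carries the intermediate branch refs along
    git config --global rebase.updateRefs true    # make it the default for future rebases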
A VPN app can still solve it by locally resolving the configured domain to a special local IP, which gets mapped to the real IP on their side. You'd need to encode the original DNS name into the protocol somehow, so that the remote side knows which real IP to access, but it is certainly doable.
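A minimal sketch of the fake-IP idea, using an assumed reserved range (198.18.0.0/15, the benchmarking block sometimes used for fake-IP schemes) and made-up function names:

    import ipaddress

    # Hand out fake IPs for configured domains, remember the mapping,
    # and translate back to the hostname when traffic enters the tunnel.
    _pool = ipaddress.ip_network("198.18.0.0/15").hosts()
    _ip_by_domain, _domain_by_ip = {}, {}

    def fake_resolve(domain):
        if domain not in _ip_by_domain:
            ip = str(next(_pool))
            _ip_by_domain[domain], _domain_by_ip[ip] = ip, domain
        return _ip_by_domain[domain]

    def tunnel_egress(dst_ip):
        # The remote side needs the original name to find the real IP.
        return _domain_by_ip.get(dst_ip, dst_ip)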
Having worked with QUIC extensively lately, I must say that having the protocol entirely in userspace is truly liberating. It also helps that almost everything, including headers, is encrypted; as a result, middleboxes can't interfere and "optimise" your flow.
For high-latency, high-packet-loss links like the one described in the article, you'll probably need pre-emptive retransmits, and I am not sure that simply tuning parameters will get you there. Retransmitting before loss is detected costs bandwidth, but I suspect it will improve end-to-end latency.
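A sketch of what I mean, not tuned for any real link: resend anything that has been outstanding for a fraction of the RTT, before normal loss detection (RTO or duplicate ACKs) would fire.

    import time
    from dataclasses import dataclass

    @dataclass
    class Outstanding:
        data: bytes
        sent_at: float
        resent_early: bool = False

    def preemptive_resend(unacked, rtt, send, factor=0.5):
        """Duplicate anything in flight longer than factor*RTT.
        Trades extra bandwidth for lower tail latency on lossy links."""
        now = time.monotonic()
        for pkt in unacked.values():
            if not pkt.resent_early and now - pkt.sent_at >= factor * rtt:
                send(pkt.data)          # speculative duplicate copy
                pkt.resent_early = True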
Aren't FEC codes already used in satellite transmissions? I recall reading that the patents around FEC codes had something to do with the satellite industry.
Definitely. That's the only reason I know about them. Usually you have to do a little more, because the errors are very bursty, so you have to use very wide windows or interleave in order to spread the errors out so that the redundancy can cover them.
If you're saying "that's a link-layer problem" then I agree, but it would be better to change your link-level encoding strategy than to just start sending multiple copies at the transport layer.
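To make the interleaving point concrete, a toy block interleaver: write symbols row by row, transmit column by column, so a burst of length up to the number of rows hits at most one symbol per row (i.e. per codeword), which the FEC can then correct.

    def interleave(symbols, rows, cols):
        """Row-major write, column-major read over a rows x cols block."""
        assert len(symbols) == rows * cols
        return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

    def deinterleave(symbols, rows, cols):
        # The inverse permutation is the same operation with rows/cols swapped.
        return interleave(symbols, cols, rows)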