
It didn't. They downloaded 43 GB instead of 152 GB, according to SteamDB: https://steamdb.info/app/553850/depots/ Now it is 20 GB => 21 GB. Steam is pretty good at deduplicating data in transit from its servers. They are not idiots who will let developers/publishers eat their downstream bandwidth with duplicated data.

https://partner.steamgames.com/doc/sdk/uploading#AppStructur...
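The mechanics are roughly content-addressed chunks: the client only fetches chunks whose hashes it doesn't already have, so duplicated files cost almost nothing on the wire. A toy Python sketch of the idea (chunk size and hash choice are my assumptions; Steam's real depot/manifest format differs):

  import hashlib

  CHUNK_SIZE = 1024 * 1024  # 1 MiB; made-up figure

  def chunk_hashes(path):
      # Hash the file in fixed-size pieces.
      with open(path, "rb") as f:
          while chunk := f.read(CHUNK_SIZE):
              yield hashlib.sha1(chunk).hexdigest()

  def bytes_to_download(manifest_hashes, local_hashes):
      # Duplicated files contribute identical chunks, and the set
      # collapses them -- which is how 152 GB of installed files can
      # be ~43 GB on the wire. (Rough estimate: treats every missing
      # chunk as full-size.)
      missing = set(manifest_hashes) - set(local_hashes)
      return len(missing) * CHUNK_SIZE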

Furthermore, this raises the possibility of a "de-debloater" that HDD users could run, which would duplicate the data into its loading-optimized form, if they decided they wanted to spend the space on it. (And a "de-de-debloater" to recover the space when they're not actively playing the game...)

The whole industry could benefit from this.
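If a publisher shipped a manifest mapping each duplicated path to its canonical file, the tool pair would be almost trivial. A rough sketch (the manifest format and all names here are invented for illustration):

  import shutil
  from pathlib import Path

  # Hypothetical manifest: duplicated path -> canonical source path,
  # both relative to the game directory.
  Manifest = dict[str, str]

  def rebloat(manifest: Manifest, game_dir: Path) -> None:
      # Physically re-copy each duplicate so related assets end up
      # on disk again in the loading-optimized layout.
      for dup_path, canonical in manifest.items():
          dst = game_dir / dup_path
          dst.parent.mkdir(parents=True, exist_ok=True)
          shutil.copyfile(game_dir / canonical, dst)

  def debloat(manifest: Manifest, game_dir: Path) -> None:
      # Reclaim the space by deleting the duplicates again.
      for dup_path in manifest:
          (game_dir / dup_path).unlink(missing_ok=True)

As noted below, the fresh copies would still need a defrag pass to actually end up contiguous.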


> to recover the space when they're not actively playing the game

This would defeat the purpose. The goal of the duplication is to place the related data physically close together on the disk. Hard links, removing then replacing, etc. wouldn't preserve the physical spacing of the data, meaning the terribly slow read head has to physically sweep around more.

I think the sane approach would be an HDD/SSD switch for the file lookups, with all the references pointing to the same file on SSD.
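Something like this, say (a sketch; the rotational check is the Linux sysfs heuristic, and the canonical/duplicate layout is assumed from the comments above):

  import os
  import shutil

  def is_rotational(device: str) -> bool:
      # Linux-only heuristic: "1" means a spinning disk.
      with open(f"/sys/block/{device}/queue/rotational") as f:
          return f.read().strip() == "1"

  def place_duplicate(canonical: str, dup_path: str, on_hdd: bool) -> None:
      if on_hdd:
          # Real copy: the bytes land near their neighbours on the platter.
          shutil.copyfile(canonical, dup_path)
      else:
          # Hard link: same inode, zero extra space; seek locality is
          # irrelevant on an SSD.
          os.link(canonical, dup_path)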


So you'd have to defrag after re-bloating, to make all the files contiguous again. That tool already exists, and the re-bloater could just call it.

Sure, but defrag is a very slow process, especially after re-bloating (since it requires shifting things around to make space), and it's not something that could happen in the background while the player is playing. Re-bloating definitely wouldn't make for a quick "OK, I'm ready to play!".

I imagine it'd be equivalent to a download task, just one that doesn't consume bandwidth.


