I'd put quite a bit on it (for reasonable interpretations of 'quite a bit'). I know from personal experience that web pages and blogs die like flies, and that the Internet Archive - the only live possibility for pages in general - often simply doesn't have copies, or the copies are unavailable. (Lost a number of _Mainichi Shimbun_ articles, IIRC, to a robots.txt that went up years after the articles did; the IA respects it anyway. Sometimes the domain changes hands and the new owners put up a restrictive robots.txt; I hate that too.)
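To make the mechanism concrete, here's a minimal sketch (Python standard library only) of the check involved: if the domain's current robots.txt disallows the Archive's crawler, the Wayback Machine has historically declined to serve even copies it already holds. The domain and URL below are placeholders, not the actual Mainichi pages.

    # Hypothetical illustration: does the *current* robots.txt block the
    # Internet Archive's crawler ("ia_archiver")? If it does, the Wayback
    # Machine has historically hidden even pages it archived years earlier.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://example.com/robots.txt")
    rp.read()  # fetch and parse whatever robots.txt is live right now
    print(rp.can_fetch("ia_archiver", "https://example.com/2005/old-article.html"))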
11TB of HTML & images goes a very long way. To put that in perspective, that's in the same area as a full copy of all Wikimedia projects, including the English Wikipedia and Commons.
It's just HTML/text and it's 11TB gzipped as of 2008. I bet the archive has grown since then, but the valuable data is the oldest. Uncompressed it's probably in the 20TB range.
I'd also bet quite a bit that a significant fraction of the data is unavailable anywhere else. Google Reader started in 2005. Ditto for Google Blog Search. Even if they crawled the same feeds as Bloglines and kept the data indefinitely, they'd still be missing those first two years.
That seems a little low to me if you're starting with an 11TB archive. Doesn't gzip usually shrink HTML/text to more like 1/5 or 1/10 of its original size, rather than only about half?
That depends greatly on the nature of the original data. Some kinds of data are already compressed, such as images, and most websites apply gzip or DEFLATE (gzip is built on DEFLATE) compression server-side if your browser indicates it understands it. Re-compressing data that was already compressed gains almost nothing, which is why a mixed archive can fall well short of the 1/5 or 1/10 you'd see on plain text.
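To see how wide the swing is, here's a rough sketch (Python standard library only); the inputs are stand-ins, not Bloglines data, and real-world HTML usually lands somewhere between the two extremes:

    # Repetitive HTML-like text compresses to a tiny fraction of its size;
    # bytes that are effectively already compressed barely shrink at all.
    import gzip, os

    def gzip_ratio(data: bytes) -> float:
        return len(gzip.compress(data)) / len(data)

    html = b"<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>\n" * 2000
    incompressible = os.urandom(128_000)  # stand-in for images or pre-gzipped pages

    print(f"HTML-like text:  {gzip_ratio(html):.3f} of original size")
    print(f"incompressible:  {gzip_ratio(incompressible):.3f} of original size")

An archive that's mostly plain text sits near the first number; one padded out with content stored pre-compressed creeps toward the second.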
You seem to have firsthand knowledge that this data has been stored permanently by Bloglines; what I'm curious about is why you think it wasn't stored by any other company that crawls the web for a living.
If a tree falls in Google's archives and no non-Googler can access it, does it really exist? Anyway.
As AngryParsley says, the specific Google services in question inherently have a gap of a few years. I have already given examples from my own trifling experience of looking up a few hundred dead sites where the sole public archive, the IA, fails. Scale that up to hundreds of thousands or millions of sites...