Hacker News

Many of the sites I visit frequently are exactly like that: tons of requests with tiny payloads.

The nytimes.com homepage makes 100+ requests to tiny images.

Same thing for the yahoo.com homepage.

An ebay.com listing page makes many requests to small thumbnails of items on sale.

And so on... This makes it a perfectly fair benchmark IMHO.



I don't know how you're assessing those pages, but bear in mind that

- Counting images can be misleading, since well-optimized sites use spritesheets or data URIs.

- If you're using something like Chrome's dev console to view requests, many of them are non-essential and are intentionally deferred until after the page is functional.

- HTTP connection caps are per host. The benchmark is making hundreds of requests to one host, whereas a real page might make a dozen requests to the main server, a dozen to some CDN for static files, and a dozen to miscellaneous third parties.

- The benchmark is simulating an uncached experience; with a realistic blend of cached and uncached loads, HTTP/1.1 and HTTP/2 performance would be much more comparable.
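The per-host point above can be illustrated with a short sketch. The request log below is entirely hypothetical (the hosts and paths are made up), but it shows why spreading requests across a main server, a CDN, and third parties matters: browsers cap parallel HTTP/1.1 connections per host (commonly around 6), so per-host counts, not the total request count, govern parallelism.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical request log for one page load (illustrative URLs only).
requests = [
    "https://example.com/",
    "https://example.com/api/feed",
    "https://cdn.example.net/css/main.css",
    "https://cdn.example.net/js/app.js",
    "https://ads.thirdparty.io/pixel.gif",
]

# Group requests by host: the browser's connection cap applies to each
# of these buckets independently, not to the page as a whole.
per_host = Counter(urlparse(u).netloc for u in requests)
for host, count in per_host.items():
    print(host, count)
```

A benchmark that fires hundreds of requests at a single host saturates one cap; a real page with the same total spread over a dozen hosts would not.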

HTTP/2 is an improvement, but if people expect a "5-15X" difference, they're in for a big disappointment.
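The data-URI optimization mentioned in the first bullet can be sketched as follows: instead of fetching a tiny image as a separate request, the bytes are base64-encoded and embedded directly in the markup. The 1x1 GIF below is just a stock placeholder for illustration.

```python
import base64

# A minimal 1x1 transparent GIF, hard-coded here as a stand-in for any
# tiny image asset (icons, spacers, tracking pixels, etc.).
gif_bytes = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
    b"\x00\x02\x02D\x01\x00;"
)

# Embedding the image as a data URI means it ships inside the HTML/CSS
# payload and costs zero additional HTTP requests.
data_uri = "data:image/gif;base64," + base64.b64encode(gif_bytes).decode("ascii")
print(f'<img src="{data_uri}">')
```

This is why raw request counts from dev tools can overstate how much a well-optimized page would benefit from HTTP/2 multiplexing: the tiniest assets are often already inlined.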



