
The tools you’re talking about are useless for measuring this kind of thing. We’re talking about potential differences well below microseconds, and you’re proposing using tools that (presuming I correctly understand which you mean) report answers in milliseconds, with noise rates of milliseconds (and a lot more if you try scaling it up with things like a million elements in a row). It is possible to benchmark this stuff, but the way you describe is utterly unsound.
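For what it's worth, sub-microsecond effects can be benchmarked soundly, just not with millisecond-resolution tools: you amortize the operation over many iterations and take the minimum of several repeats to suppress scheduler noise. A minimal sketch in Python (the operation being timed is purely illustrative, not anything from this thread):

```python
import timeit

# A sub-microsecond operation. Timing a single run with a wall-clock tool
# that reports milliseconds is hopeless; amortize over many runs instead.
op = "''.join(('a', 'b', 'c'))"

# timeit executes the statement `number` times and reports the total, so the
# per-operation cost is total/number; repeat() + min() suppresses noise from
# scheduling and frequency scaling.
runs = timeit.repeat(op, number=1_000_000, repeat=5)
per_op_ns = min(runs) / 1_000_000 * 1e9
print(f"~{per_op_ns:.0f} ns per operation")
```

Taking the minimum rather than the mean is deliberate: external interference only ever adds time, so the fastest repeat is the closest estimate of the true cost.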

Unless presented with concrete steps to reproduce what you’re talking about, I refuse to believe you.

(Mind you, I’m not denying that there are differences here, just that they’re measurable this way on even vaguely plausible documents.)



I’m not talking, as you are, about “A parses faster than B”. I’m talking about “A causes the parser to start over, B doesn’t, so B is faster”. Resets make a difference that doesn’t require microbenchmarks to see; it’s in the realm of milliseconds. That’s how I was able to load pages within a single frame at 60 Hz, which was the threshold I wanted to hit: it meant my webdev friend didn’t realize he was already on the next page when he clicked the link. Feel free to refuse to believe me.
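The shape of that claim can be illustrated without a browser: if some construct forces a parser to throw away work and start over, the extra cost scales with how much of the document was already consumed, which for a realistically sized page lands in milliseconds, not microseconds. A rough sketch using Python's stdlib HTML parser, where the "reset" is simulated by simply reparsing the already-consumed prefix (a stand-in for the behavior being described, not how any particular browser implements it):

```python
import time
from html.parser import HTMLParser

class CountingParser(HTMLParser):
    """Minimal parser that counts start tags, so the loop does real work."""
    def __init__(self):
        super().__init__()
        self.tags = 0

    def handle_starttag(self, tag, attrs):
        self.tags += 1

# A plausible mid-sized document: ~10k elements.
doc = "<html><body>" + "<div class='x'>text</div>" * 10_000 + "</body></html>"

def parse_once(html):
    p = CountingParser()
    p.feed(html)
    p.close()
    return p.tags

def parse_with_restart(html):
    # Simulated reset: the parser gets halfway through, is forced to start
    # over, and then parses the whole document from the beginning.
    parse_once(html[: len(html) // 2])
    return parse_once(html)

t0 = time.perf_counter()
parse_once(doc)
t1 = time.perf_counter()
parse_with_restart(doc)
t2 = time.perf_counter()

extra_ms = ((t2 - t1) - (t1 - t0)) * 1000
print(f"restart overhead ~ {extra_ms:.1f} ms")
```

The point is not the absolute numbers (a stdlib parser is far slower than a browser's), but that a restart is an O(document) penalty, so the gap between the two variants is directly visible at millisecond resolution rather than needing a microbenchmark.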



