By no means do I think microbenchmarks convey no useful information. My point is that they are never useful in isolation, because you can only act on the numbers once you know why they behave the way they do. In this case it seems the only thing being measured was String::compareToIgnoreCase, and the difference had nothing to do with the optimizer; in other cases the cause will be something else.
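To make the failure mode concrete, here is a minimal sketch (not the original benchmark; the strings, loop counts, and pass structure are my assumptions) of the kind of naive hand-rolled timing loop that produces numbers you can't act on without asking why:

```java
// Illustrative sketch of a naive microbenchmark, NOT a sound methodology.
// It times String::compareToIgnoreCase with System.nanoTime across several
// passes. The first pass runs mostly interpreted; later passes run
// JIT-compiled code, so identical loops can report very different times.
public class NaiveBenchmark {
    public static void main(String[] args) {
        String a = "Hello, World";
        String b = "hello, world";

        for (int pass = 0; pass < 3; pass++) {
            long start = System.nanoTime();
            int sink = 0; // consume each result so the loop can't be dead-code eliminated
            for (int i = 0; i < 1_000_000; i++) {
                sink += a.compareToIgnoreCase(b);
            }
            long elapsed = System.nanoTime() - start;
            System.out.println("pass " + pass + ": " + elapsed + " ns (sink=" + sink + ")");
        }
    }
}
```

Even with the result consumed, the per-pass numbers say little on their own: warmup, inlining decisions, and on-stack replacement all shift them, which is why frameworks like JMH exist and why "the numbers speak for themselves" fails.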
I wrote a blog post in response to someone's attempt to investigate different implementations of a small function through a sequence of microbenchmarks without asking why. I go into detail about a bunch of ways they ended up being wrong because they thought the numbers spoke for themselves. This only gets worse when talking about JITs, since their behaviour is even less local.
https://medium.com/@veedrac/learning-the-value-of-good-bench...