Yes. However, I want to expressly avoid the word "efficiency" because we intend to eventually capture some metric(s) related to developer efficiency (e.g., lines of code and so on). I don't want the word "efficiency" to then become ambiguous, at least within our results site.
I struggled for a while with various alternatives, but ultimately went with "overhead" even though, as you point out, a higher number is better. I'm open to other suggestions, but for the time being, I'd like to avoid efficiency.
Edit: The trouble with inverting the metric ("lower is better") is that in some cases, a framework achieves better performance than its underlying platform due to custom components or test implementation particulars. See, for instance, the multiple-query test, in which Revel presently exceeds Go (I know at least one Go SME is researching why this is).
Call it "framework throughput". Then explain that it is the portion of the raw performance of the nominal underlying platform that is available to serve pages.
In a footnote you can point out that frameworks can do things like offload work to specialized components to get throughput above 100% on your tests.
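To make the proposal concrete, here is a minimal sketch of how "framework throughput" could be computed as the share of the underlying platform's raw requests/sec that the framework retains. The function name and all numbers are invented for illustration; they are not from the benchmark site.

```python
# Hypothetical "framework throughput" metric: the framework's measured
# requests/sec expressed as a percentage of the raw requests/sec of its
# nominal underlying platform. All figures below are made up.

def framework_throughput(framework_rps: float, platform_rps: float) -> float:
    """Return framework throughput as a percentage of the platform's raw RPS."""
    return framework_rps / platform_rps * 100.0

# A framework serving 45,000 req/s on a platform that peaks at 60,000 req/s:
print(f"{framework_throughput(45_000, 60_000):.0f}%")  # prints "75%"

# A framework that offloads work to specialized components can exceed 100%,
# which is the footnote case mentioned above:
print(f"{framework_throughput(66_000, 60_000):.0f}%")  # prints "110%"
```

This keeps the convention uniformly "higher is better" while still accommodating the above-100% cases in a footnote rather than as an anomaly.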
As others have mentioned, it was really quite confusing to see "higher is better" and then "lower is better" in the same set of graphs. Stick with one convention. It works. :)
Interesting. That is actually an argument to leave the overhead chart as-is (possibly renamed) but change the latency chart to a "higher is better" chart in some way. I don't disagree that a single convention would be preferable, but it would be unusual to represent latency in a higher-is-better fashion.
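For what a higher-is-better latency presentation could even look like, one option is to invert latency into per-connection responses per second. This is purely a hypothetical sketch of the inversion being discussed, not something the results site does:

```python
# Hypothetical inversion of latency into a higher-is-better figure:
# mean latency in milliseconds -> responses per second for one connection.
# Example latencies are invented.

def inverse_latency_score(latency_ms: float) -> float:
    """Convert mean latency (ms) into responses/sec for a single connection."""
    return 1000.0 / latency_ms

print(inverse_latency_score(2.0))   # prints "500.0" (2 ms latency)
print(inverse_latency_score(50.0))  # prints "20.0" (50 ms latency)
```

The unfamiliarity of such a chart is exactly the objection raised above: readers expect latency on a lower-is-better axis.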
http://www.techempower.com/benchmarks/#section=data-r6&hw=i7...