
You make a very good point. I had to do a trade study of different microprocessors (for embedded software) when I worked at a more engineering-focused organization, because one of our requirements was that a certain algorithm had to complete in under 30 seconds (or something like that---I don't remember the exact time requirement). One of the things we had to do was try to quantify how fast the algorithm would run on each processor, which as you mentioned was nearly impossible.

We ended up looking up the MIPS [1] and FLOPS [2] for each processor, whether it supported SIMD [3] instructions, and how many pieces of data its SIMD instructions could process in a single instruction. Some processors didn't support floating point math, so we had to estimate how fast it could be emulated with integer registers. Then we had to come up with a formula to estimate the run time based on this data and the nature of the algorithm (what percentage of the code uses floating point vs. integer instructions, what percentage can take advantage of SIMD, etc.).
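The kind of back-of-the-envelope formula we used can be sketched roughly like this. Every parameter name and number here is hypothetical, chosen only to illustrate the shape of the estimate, not the actual trade-study values:

```python
def estimate_runtime_s(
    total_instructions: float,   # instruction count of the algorithm
    float_fraction: float,       # fraction of instructions that are floating point
    simd_fraction: float,        # fraction of the work that can use SIMD
    mips: float,                 # integer millions of instructions per second
    flops: float,                # floating point operations per second
    simd_width: int,             # data elements processed per SIMD instruction
    fp_emulation_penalty: float = 1.0,  # >1 if floats are emulated with integer ops
) -> float:
    """Crude run-time estimate from published processor figures."""
    int_instr = total_instructions * (1.0 - float_fraction)
    fp_instr = total_instructions * float_fraction * fp_emulation_penalty

    int_time = int_instr / (mips * 1e6)
    fp_time = fp_instr / flops
    scalar_time = int_time + fp_time

    # Amdahl-style adjustment: only the SIMD-friendly fraction speeds up.
    return scalar_time * ((1.0 - simd_fraction) + simd_fraction / simd_width)
```

The formula glosses over exactly the effects described below (pipeline stalls, cache misses, compiler quality), which is why the resulting numbers were so hard to trust.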

The problem with this is that even the published performance data, such as MIPS, aren't a measure of a single attribute that we can apply in any meaningful way. For example, MIPS is affected by clock speed, which instructions are executed (some may execute in a single cycle, others take several cycles), branch prediction, and cache behavior. How do you quantify stalled instructions? The final execution time is the result of an interplay between the hardware's characteristics and the algorithm's characteristics (as compiled by a certain compiler) over time.

Perhaps a more appropriate approach for software engineering would be statistics. Given normal distributions for MIPS, branch misprediction rates, etc., similar distributions for the algorithm's characteristics, and perhaps the language and compiler used, maybe we could give a confidence interval for an algorithm's run time.
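A minimal Monte Carlo sketch of that idea: draw the hardware characteristics from assumed distributions, compute a run time for each draw, and read a confidence interval off the sample quantiles. All the distributions and parameters below are made up for illustration:

```python
import random

def sample_runtime_s(rng: random.Random) -> float:
    # Assumed distributions; the means and standard deviations are hypothetical.
    mips = max(1.0, rng.gauss(150.0, 15.0))                 # effective MIPS
    miss_rate = min(max(rng.gauss(0.05, 0.01), 0.0), 1.0)   # branch mispredict rate
    miss_penalty_cycles = 10                                # cycles lost per mispredict
    clock_hz = 200e6
    instructions = 2e9                                      # algorithm's instruction count

    base = instructions / (mips * 1e6)
    stall = instructions * miss_rate * miss_penalty_cycles / clock_hz
    return base + stall

def runtime_confidence_interval(n: int = 10_000, level: float = 0.95, seed: int = 42):
    """Return (low, high) bounds covering `level` of the sampled run times."""
    rng = random.Random(seed)
    samples = sorted(sample_runtime_s(rng) for _ in range(n))
    low = samples[int(n * (1 - level) / 2)]
    high = samples[int(n * (1 + level) / 2) - 1]
    return low, high
```

The upper bound could then be checked directly against a requirement like the 30-second one above, e.g. accept the processor only if `high < 30.0`.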

But like you said, in many cases it's probably easier (and cheaper) to just build the algorithm and then measure it. [4]

[1] Millions of (integer) instructions per second.

[2] Floating point operations per second.

[3] Single instruction multiple data.

[4] Not to down-play the difficulty of that task. I'm sure it's very complicated as well.


