Ask HN: How many CPU / GPU cycles are wasted unnecessarily?
2 points by stefanos82 on Aug 26, 2022 | 7 comments
For years I have been asking myself this question: how much code actually wastes CPU / GPU cycles, assuming optimization flags are set to their highest option possible?

Is there actually such waste even with those flags set, or not?

I'm really curious.



How do you define wasted CPU cycles? The compiler wrongly emitting redundant code? The programmer doing something particularly silly? Python running code that would be 10x as efficient if rewritten in C?


I'm not good at describing my thoughts in English as it's not my native language, but I will do my best.

A reply from Mike Pall [1], the creator of LuaJIT, made me appreciate assembly a lot more than I thought I would, and I really wish there were more resources - books and videos - to teach us the low-level fundamentals we should have known in the first place.

The most common reply I read nowadays is of the following nature:

    avoid premature optimization; if your code creates bottlenecks that cause serious performance problems, throw hardware at it (more CPUs and RAM) and be done with it.
I mean... I get it, but shouldn't we have faster applications nowadays that consume less memory and energy, given factors such as multicore CPUs and RAM that run at incredible speeds?

And how badly designed must libraries be to need so many resources anyway?

I have the impression I'm missing something here...

[1] http://lua-users.org/lists/lua-l/2011-02/msg00742.html


I was thinking more about how much unnecessary and useless software gets run at all.

At the same time, a failed ML training session could account for more waste than another company's entire useful footprint.

I've personally been spending a fair amount of time recently deduping builds in CI, yet another source of waste.


Yeah, I'm old school; most of the time I do my job with simple shell scripts that produce the results I want.

I really live and breathe the UNIX philosophy, and it works wonders for me.


Don't look at Deno then; its author claims the UNIX philosophy is dead and that JavaScript is (or should be) the universal programming language.


A piece of code that should take, for instance, 3 to 5 CPU cycles to produce the desired output, instead wasting something like 25 to 40 CPU cycles.
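
To make that concrete with a toy C sketch of my own (not anything from the thread): calling strlen() in a loop condition re-scans the whole string on every iteration, so the output is identical but the cycle count balloons.

    #include <string.h>

    /* Wasteful: strlen() walks the whole string on every
       iteration, turning a linear scan into O(n^2) work. */
    int count_spaces_slow(const char *s) {
        int n = 0;
        for (size_t i = 0; i < strlen(s); i++)
            if (s[i] == ' ')
                n++;
        return n;
    }

    /* Same output, O(n): hoist the length out of the loop.
       Compilers can sometimes do this for you, but only when
       they can prove the loop body never modifies the string. */
    int count_spaces_fast(const char *s) {
        int n = 0;
        size_t len = strlen(s);
        for (size_t i = 0; i < len; i++)
            if (s[i] == ' ')
                n++;
        return n;
    }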


If someone wrote a linear search on something that could be done with a binary search, that may burn a lot of unneeded cycles (or it may not, due to cache misses in the binary search case). I don't think this is possible to quantify; you would need experts to go through the original code to find optimization opportunities, and even then there would be an infinite number of ways to achieve the same result or write the same program, many of which could be more efficient.
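
A minimal C sketch of that trade-off (my own illustration, assuming a sorted array of ints): both functions return the index of key, but which one wastes cycles depends on the array size and on cache behaviour.

    #include <stddef.h>

    /* O(n) comparisons, but a predictable, cache-friendly
       sequential scan; often fastest for small arrays. */
    ptrdiff_t linear_search(const int *a, size_t n, int key) {
        for (size_t i = 0; i < n; i++)
            if (a[i] == key)
                return (ptrdiff_t)i;
        return -1;
    }

    /* O(log n) comparisons on a sorted array, but each probe
       can be a cache miss and the branches are hard to
       predict, so fewer comparisons need not mean fewer cycles. */
    ptrdiff_t binary_search(const int *a, size_t n, int key) {
        size_t lo = 0, hi = n;  /* half-open range [lo, hi) */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] < key)
                lo = mid + 1;
            else if (a[mid] > key)
                hi = mid;
            else
                return (ptrdiff_t)mid;
        }
        return -1;
    }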



