Hacker News

> have certain amounts of time which they need to take in order to tie voltages high or low, to either read or output a bit.

When you overclock, you raise the voltage, which helps the transistors switch within those timing windows and goes a long way. You would often hit thermal limits first, so heat dissipation is a major factor.
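To make the voltage/timing trade-off concrete, here is a minimal sketch using the alpha-power delay model (gate delay roughly proportional to V / (V - Vt)^alpha). All numbers (threshold voltage, alpha, the two supply voltages) are illustrative assumptions, not data for any real part:

```python
def max_clock(v_supply, v_threshold=0.35, alpha=1.3, k=1.0):
    """Hypothetical max stable clock for one critical path (arbitrary units),
    per the alpha-power law: delay ~ V / (V - Vt)**alpha."""
    delay = k * v_supply / (v_supply - v_threshold) ** alpha
    return 1.0 / delay

base = max_clock(1.20)   # illustrative stock voltage
oced = max_clock(1.35)   # illustrative modest overvolt

print(f"relative frequency headroom: {oced / base:.3f}x")

# Dynamic power scales roughly as f * V^2, so the same bump costs
# disproportionately more heat -- which is why cooling becomes the limit:
power_ratio = (oced / base) * (1.35 / 1.20) ** 2
print(f"relative dynamic power: {power_ratio:.3f}x")
```

With these made-up constants the frequency headroom is a few percent while dynamic power grows considerably faster, which matches the observation that thermals bite first.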



This is not my experience from the olden Athlon XP days. Even if you add the extra voltage and provide enough cooling, the CPU starts to encounter many internal errors and needs to re-run a lot of instructions to get sane results.

This causes heat and load spikes, and while you see no errors, the increased frequency doesn't give you any real-world performance gains. Returning to a slower configuration actually gives a much snappier, more performant system.


Athlon was 20 years ago.

It feels unfair to generalize across time and architectures.


It's not about architecture. It's about physics. Once you pass a certain threshold, leakage currents start to wreak havoc inside any silicon. So beyond voltage, current, and temperature, you're limited by the silicon itself.
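A rough sketch of why this runs away: subthreshold leakage grows exponentially as the thermal voltage kT/q rises with die temperature (and in real silicon the threshold voltage also drops as it heats, amplifying the effect). The threshold voltage and subthreshold factor below are illustrative assumptions, not measurements:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K, so K_B * T gives kT/q in volts

def leakage_rel(temp_k, v_threshold=0.35, n=1.5):
    """Relative subthreshold leakage (arbitrary units):
    I_leak ~ exp(-Vt / (n * kT/q))."""
    v_thermal = K_B * temp_k
    return math.exp(-v_threshold / (n * v_thermal))

cool = leakage_rel(330.0)   # ~57 C die
hot  = leakage_rel(370.0)   # ~97 C die
print(f"leakage grows ~{hot / cool:.1f}x over a 40 K rise")
```

The exponential means a modest temperature rise multiplies leakage, which produces more heat, which raises leakage further, a feedback loop no amount of added voltage fixes.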

I stopped overclocking systems after Athlon XP. This is why I gave that example.

Even without overclocking and overheating, I've seen, and still see, partially cooked processors that shut down half of their FPU pipelines to stay reliable, albeit with orders-of-magnitude lower performance.

Modern CPUs, with their machine-check exception (MCE) handling and more advanced microcode, are more complex than ever, and there's much more than meets the eye.





