
>Linux kernel is now compiled with -fno-delete-null-pointer-checks

Like many other large C codebases, it also uses -fno-strict-aliasing and -fno-strict-overflow (which is a synonym for "-fwrapv -fwrapv-pointer").



-fwrapv introduces runtime bugs on purpose! The last thing you want is an unexpected situation where n is an integer and n+1 is somehow less than n. And of course that bug has a good chance of leading to UB elsewhere, such as a bad subscript. If you want to protect against UB on int overflow, -ftrapv (not -fwrapv) is the only sane approach. Then at least you'll trap, similar to range-checked subscripts.

It is sad that we don't get hardware assistance for that trap on any widespread CPU, at least none that I know of.


I can easily test whether n+1 < n with -fwrapv.

Without it, you have to do convoluted things like rearranging the expression into unnatural forms (moving the addition to the right but inverting it to a subtraction, etc.), special-casing INT_MAX/INT_MIN, and so on, which you then have to hope the compiler is smart enough to optimize, which it often isn't (oh how ironic).


It's not to protect from UB, it's to protect from the optimiser deleting your bounds checks.


On x86 you can put an INTO instruction after each arithmetic operation to trap if the overflow flag is set. (INTO exists only in 32-bit mode; in 64-bit code you'd use a JO branch to a trap instead.)


We've got a few components written in C that I'm (partially) responsible for. It's mostly maintenance, but for reasons like this I run that code with -O0 in production, and add all those kinds of flags.

I'd be curious to know how much production code written in C today is that performance critical, i.e. actually depends on all those bonkers exploits of UB for optimization. The Linux kernel seems to do fine without them.


I'm fairly confident in declaring the answer to your question: None.

Most programs rarely issue all the instructions a CPU can handle simultaneously; they are stuck waiting on memory or on serial dependency chains. An extra compile-out-able conditional typically doesn't touch memory and sits off the critical dependency path, which makes it virtually free.

So the actual real-world overhead ends up below 1%, and in most cases is indistinguishable from 0.

If you care that much about 1% you are probably already writing the most performance critical parts in Assembly anyway.


> If you care that much about 1% you are probably already writing the most performance critical parts in Assembly anyway.

I call this the hotspot fallacy, and it's a common one. It assumes there is a relatively small performance-critical part that can be rewritten in assembly. Yes, sometimes there is a hotspot, but by no means always. A lot of the people who care about 1% are running gigabyte-sized binaries on datacenter-scale computers with no hotspots.



