Ouch. I think part of the blame lies with the build configuration. IMO build configurations shouldn't degrade silently like this. If the user is OK without a 32-bit vDSO, they should have to say so explicitly.
It's worth noting that even on x86, -m32 isn't as complete as a real i?86 build of gcc. It's "complete enough" for the kernel and many other things, but I found it very difficult, for instance, to build a 32-bit program that didn't rely on SSE, since the 64-bit toolchain assumes SSE is available.
In theory you should just be able to set -march to the lowest-common-denominator CPU you expect your code to run on, and the compiler will avoid SSE where appropriate.
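A quick way to check whether a given -march still pulls in SSE is to look at the assembly the compiler emits for some float math (a sketch; the default behavior depends on how your gcc was configured):

    /* sse_check.c - does the compiler emit SSE for plain float math?
     * Hypothetical invocations; inspect the generated sse_check.s:
     *   gcc -m32 -march=i586 -S sse_check.c   # pre-SSE CPU: x87 code (fadds)
     *   gcc -m32 -S sse_check.c               # toolchain default: may use
     *                                         #   SSE (addss/%xmm0) instead
     */
    float add(float a, float b) {
        return a + b;   /* float arithmetic is where SSE vs. x87 shows up first */
    }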
What's the use case for running a 32-bit binary on a 64-bit CPU/OS? Is there any advantage? Or is it simply to avoid having to compile twice to support two architectures?
I once read an article about a project that placed everything into 64 KB blocks. Any pointer within a block is 16-bit, and you can also have references to other blocks.
The first advantage is that the pointers are tiny, so the objects take up a lot less space. The other is that when it's time to serialize and deserialize the data, you don't have to do any processing at all, since the blocks never contain raw machine addresses.
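A minimal sketch of that layout (the names here are mine, not the article's): in-block pointers are plain 16-bit offsets, and cross-block references carry a block id plus an offset, so a block's bytes can be written to disk and read back verbatim.

    #include <stdint.h>

    #define BLOCK_SIZE (64 * 1024)

    typedef uint16_t in_block_ptr;          /* offset within the same block */

    typedef struct {                        /* reference into another block */
        uint32_t     block_id;
        in_block_ptr offset;
    } block_ref;

    typedef struct {
        uint8_t bytes[BLOCK_SIZE];
    } block;

    /* Resolve a 16-bit offset against its block's base address. Because
     * blocks store offsets rather than machine addresses, serializing a
     * block is just writing its bytes out as-is. */
    static inline void *resolve(block *b, in_block_ptr p) {
        return &b->bytes[p];
    }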
You can still use 32-bit wide addresses in 64-bit mode and zero-extend them into 64-bit registers.
If you have a statically compiled binary, all you need is compiler support for a 32-bit address mode on the otherwise 64-bit instruction set; on x86 you get access to the extra registers and so on, but you don't pay the price of wider pointers stored in your data structures.
OTOH, if you use shared libraries, they all have to be compiled for that mode too. How hard that is to deal with also depends on whether you have a hybrid system or a full 32-bit userspace.
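On x86-64 this exists as the x32 ABI. A minimal sketch, assuming a gcc and libraries built with x32 support (actually running the x32 binary additionally needs a kernel with CONFIG_X86_X32):

    /* ptr_width.c - same 64-bit instruction set, different pointer width.
     *   gcc -m64  ptr_width.c && ./a.out   ->  sizeof(void*) = 8
     *   gcc -mx32 ptr_width.c && ./a.out   ->  sizeof(void*) = 4
     */
    #include <stdio.h>

    int main(void) {
        printf("sizeof(void*) = %zu, sizeof(long) = %zu\n",
               sizeof(void *), sizeof(long));
        return 0;
    }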
Yes, that's an alternate ABI with 32-bit addressing targeting the same 64-bit instruction set. But it's not trivial, as you described. You also need some kernel support, and there might not be enough demand to maintain that support.
Under Linux, I have no idea, but 32-bit ARM has a wide variety of use cases. 64-bit means more transistors and wider registers and datapaths, and the people who implement Arm's designs would like to save money on all of that. An embedded or limited-use device may not need 64-bit, and when you're mass-manufacturing, every $0.01 counts.
Just a side note: I'm always intrigued by the 16-bit Thumb / A32 "interworking" on AArch32, where the LSB of a branch-target address loaded into the PC (the EIP/RIP-equivalent register) tells the CPU whether the destination is Thumb or A32 code, letting the two instruction sets mix within the same program.
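You can see that convention from C, assuming an AArch32 toolchain building with -mthumb: the address of a Thumb function carries bit 0 set, and a BX/BLX to it switches the CPU into Thumb state.

    /* thumb_bit.c - on 32-bit ARM, a Thumb function's address has the
     * interworking bit set. On non-ARM targets this just prints 0. */
    #include <stdint.h>
    #include <stdio.h>

    static void f(void) {}

    int main(void) {
        uintptr_t addr = (uintptr_t)&f;
        printf("f at %#lx, interworking bit: %lu\n",
               (unsigned long)addr, (unsigned long)(addr & 1));
        return 0;
    }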
The main advantages of 64-bit are a wider integer range and a larger address space, which most applications don't need, so trading those for portability is probably a good deal. I've also heard that many people writing ARM assembly prefer A32 NEON over A64 NEON.
Depends what you want to be portable to -- now that 64-bit-only Arm CPUs are becoming more common, building for 32-bit means you can't run on those systems.
Some legacy applications were designed around a 32-bit word size for pointers, so calculations of offsets into memory layouts that contain pointers depend on that pointer size.
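A tiny illustration of how that breaks: the same struct has a different size and field offsets under 32-bit (ILP32) and 64-bit (LP64) builds, so hard-coded offsets stop matching.

    /* layout.c - struct layout depends on pointer width.
     * Typical output: sizeof = 8, offsetof(next) = 4 on a 32-bit build;
     * sizeof = 16, offsetof(next) = 8 on a 64-bit one (the pointer doubles
     * and alignment padding appears after `value`). */
    #include <stddef.h>
    #include <stdio.h>

    struct node {
        int          value;
        struct node *next;
    };

    int main(void) {
        printf("sizeof = %zu, offsetof(next) = %zu\n",
               sizeof(struct node), offsetof(struct node, next));
        return 0;
    }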
1) System calls have to switch into kernel mode and back, which can be a massive performance hit.
2) This is especially bad for precise time measurements, where a chunk of the measured time is now spent just calling `gettimeofday`. The performance impact (benchmarked and shown in the article) is substantial.
3) Linux (the kernel) at some point added a "vDSO", essentially a shared library automatically mapped into every process. Libc can call the vDSO version of `gettimeofday` and fall back to the real system call if it's missing (see the sketch after this list).
4) A 64-bit kernel that can run 32-bit programs therefore needs both a 64-bit and a 32-bit vDSO.
5) On x86, the kernel build can "simply" use the same compiler to produce 32-bit code for the 32-bit vDSO. GCC targeting 64-bit ARM can't: AArch64 and AArch32 are separate backends in GCC, unlike x86, where the 32-bit and 64-bit targets share one compiler (-m32). So you need two toolchains, a 64-bit one and a 32-bit one.
6) If you don't know that, the kernel ends up with only a 64-bit vDSO; 32-bit programs will still run, but silently use the system call instead of the vDSO, causing unexpected performance issues.
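A minimal sketch of points 3 and 6, assuming glibc on Linux: the kernel advertises the vDSO through the aux vector, libc's `gettimeofday` uses it when present, and a raw syscall() always takes the slow kernel-entry path. A 32-bit build of this running on a 64-bit kernel without the compat vDSO would report address 0, and both loops would run at syscall speed.

    /* vdso_check.c - is a vDSO mapped, and what does bypassing it cost?
     * Time the two loops separately (e.g. with perf or clock_gettime)
     * to see the gap the article benchmarks. */
    #include <stdio.h>
    #include <sys/auxv.h>      /* getauxval, AT_SYSINFO_EHDR */
    #include <sys/syscall.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void) {
        /* 0 means no vDSO was mapped into this process. */
        printf("vDSO at %#lx\n", getauxval(AT_SYSINFO_EHDR));

        struct timeval tv;
        for (int i = 0; i < 1000000; i++)
            gettimeofday(&tv, NULL);               /* vDSO fast path, if present */

        for (int i = 0; i < 1000000; i++)
            syscall(SYS_gettimeofday, &tv, NULL);  /* forced kernel entry */
        return 0;
    }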