While conventional linkers work at the compilation-unit level (one source file, usually), placing that whole source file's functions adjacently in memory [1], an atom-based linker is able to take the smallest linkable units (individual functions, each static/global variable...), and arrange those optimally.
As I recall, the OS X ld is based on this model. However, it remains more limited: it doesn't support GNU ld's linker scripts and accepts only a limited set of command-line parameters, so it doesn't expose all the flexibility the model would provide.
As far as I know, AtomLLD remains an experimental project with only one or two people working on it part-time.
[1] although modern linkers also add LTO (Link-Time Optimization) to rearrange things after everything has been integrated.
> While conventional linkers work at the compilation-unit level (one source file, usually), placing that whole source file's functions adjacently in memory [1], an atom-based linker is able to take the smallest linkable units (individual functions, each static/global variable...), and arrange those optimally.
This isn't quite right. It's just that, traditionally (i.e. when not using -ffunction-sections/-fdata-sections), compilers emitting an ELF relocatable object group all the functions into a single "smallest linkable unit" (a single .text section), so the linker can't actually do any reordering: the information that the functions are distinct has been lost.
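To make that concrete, here is a minimal sketch (the file and symbol names are invented):

    /* foo.c -- two unrelated functions in one translation unit */
    int used(void)   { return 1; }
    int unused(void) { return 2; }

Compiled with plain "cc -c foo.c", both functions land in the same .text section, so the linker has to keep, drop, and place them as a unit. Compiled with "cc -c -ffunction-sections foo.c", they come out as separate .text.used and .text.unused sections, and a linker run with --gc-sections can then drop or reorder each one independently.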
"""
the ELF and COFF notion of "section" is a strict superset of its [the Atom LLD's] core "atom" abstraction (an indivisible chunk of data with a 1:1 correspondence with a symbol name). Therefore the entire design was impossible to use for those formats.
In ELF and COFF, a section is decoupled from symbols, and there can be arbitrarily many symbols pointing to arbitrary parts of a section (whereas in MachO, "section" means something different; the analogous concept to an ELF/COFF section is basically "the thing that you get when you split a MachO section on symbol boundaries, assuming that the compiler/assembler hasn't relaxed relocations across those boundaries" which is not as powerful as ELF/COFF's notion).
This resulted in severe contortions that ultimately made it untenable to keep working in that codebase.
"""
Heh, speaking of linking individual functions, GHC Haskell has a flag to emit one object file per function and do that thing with a traditional linker.
It's horribly slow.
But it produces smaller binaries; I've gotten 2x smaller binaries on my projects with that option.