This has various names: double-double, quad-double (etc.) arithmetic; floating-point expansions. It's definitely the best way to do arithmetic at precision up to a couple of hundred digits on modern hardware, though traditional arbitrary-precision arithmetic wins at higher precision (and in any case becomes necessary, due to the exponent range).
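The core trick behind these expansions is the "error-free transformation": the rounding error of a floating-point addition is itself exactly representable as a float. A minimal sketch in Python (whose `float` is an IEEE binary64 double), using Knuth's TwoSum and a simplified double-double add; the function names are just illustrative:

```python
def two_sum(a, b):
    # Knuth's TwoSum: returns (s, e) where s = fl(a + b)
    # and a + b == s + e exactly (no rounding error lost).
    s = a + b
    bp = s - a
    e = (a - (s - bp)) + (b - bp)
    return s, e

def dd_add(x_hi, x_lo, y_hi, y_lo):
    # Add two double-doubles (a simple variant; production
    # libraries use a more careful renormalization sequence).
    s, e = two_sum(x_hi, y_hi)
    e += x_lo + y_lo
    hi, lo = two_sum(s, e)  # renormalize so |lo| <= ulp(hi)/2
    return hi, lo

# 1 + 1e-20 is not representable in a single double, but the
# pair (hi, lo) carries it exactly:
hi, lo = two_sum(1.0, 1e-20)
```

A double-double stores a value as an unevaluated sum `hi + lo` of two doubles, giving roughly 32 significant digits; quad-double extends the same idea to four components.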
It's unfortunate that this isn't something that was standardized and more widely available a long time ago. By contrast, IEEE binary128 and binary256 are practically useless due to lack of hardware support.
Yeah, I think only POWER has hardware 128-bit floating point, but that's not the same as double-double: double-double's exponent range is the same as double's.
I agree this package isn't new conceptually, but it has convenient metaprogramming to generate arithmetic code for any [hardware IEEE number type] × N.
It also lists the complexity as a function of N: most operations scale as N^3, so yes, for larger N it may well stop being fast.