My bad, I was too focused on that class in general, imagining “lz4 and friends”.
Zstd does reach LZMA compression ratios at its highest levels, but compression speed also drops to LZMA levels there. That was evidently planned from the start: cover both high-speed online applications and slow offline compression (unlike, say, brotli). The official cap on levels can likewise be explained by the absence of further gains on most inputs in development tests.
Distribution packages contain binary and mixed data, which may be less compressible. For text and mostly-text inputs, I suppose some old-style LZ-based tools can still produce an archive roughly 5% smaller (and still unpack fast); other compression algorithms can certainly squeeze it much further, but their time requirements are symmetric. I was worried about the latter kind being introduced as a replacement.
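To make the "symmetric time requirements" point concrete, a stdlib-only sketch (again with a placeholder input file) that times compression against decompression. LZ-family decoders stay cheap no matter how hard the encoder worked, while BWT-based bzip2 is noticeably closer to symmetric; context-mixing compressors, which are not in the stdlib, are fully symmetric, and that is what makes them unattractive for package distribution.

```python
# Time compression vs. decompression for an LZ-family codec (xz/LZMA)
# and a BWT-based one (bzip2). "corpus.txt" is a placeholder input.
import bz2
import lzma
import time
from pathlib import Path

data = Path("corpus.txt").read_bytes()

def timed(fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

for name, comp, decomp in [
    ("xz -9", lambda d: lzma.compress(d, preset=9), lzma.decompress),
    ("bzip2 -9", lambda d: bz2.compress(d, 9), bz2.decompress),
]:
    packed, c_time = timed(comp, data)
    _, d_time = timed(decomp, packed)
    print(f"{name}: compress {c_time:.2f}s, decompress {d_time:.2f}s, "
          f"ratio {len(data) / len(packed):.2f}")
```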