That's data size, not code. There's no fundamental reason that a program that can smoothly render Unicode at 4K needs a gigabyte download when kilobytes could suffice.
We tried that in the Windows 9x days. We called that "DLL hell".
The idea was that programs would share libraries, and so why have a dozen identical frameworks on the same system? Install your libraries into system32. If it's already there but an earlier version, deploy your packaged one on top.
Turns out nobody writes good installers, binary-level dependencies require too much discipline, and dependencies are a pain for users to deal with.
So shove the entire thing into a huge package, and things actually work at a cost of some disk space and memory.
> and things actually work at a cost of some disk space and memory.
I have ~10,000 .exe files on this machine. If none of them shared code and/or data (or if they were all written in a "modern" language with 50+ MB hello worlds), they would not fit on my 1 TB disk.
True, but I personally discovered this has limits.
What if you're working on something reasonably novel, like, say, open-source VR? Well, it turns out you may want a quite eclectic mix of dependencies. Some you need the latest version of, because it's state-of-the-art stuff. Some are old because the new version is too incompatible. Some are dead.
Getting our work into a Linux distro is on my list, but even if dealing with all the dependencies works out, there's the issue that we sometimes need to make protocol changes and upgrade on our own schedule, rather than whenever the new distro is released.
Distros are great for things that are supposed to integrate together. They're less ideal when you're working on something that is its own separate thing, like a game.
So for the time being, shoving it all into an AppImage it is.
You're presuming one option, when the other option is a bundled but smaller renderer. The TrueType renderer my terminal uses is about 700 lines of code. The C it's a translation of is about 1,500. There's a sweet spot that might well be a bit higher, e.g. to handle ligatures etc., but the payoff from going from that to some huge monstrosity is very small.
As somebody who actually works on a pretty large program, no, I'm absolutely not going to use your 700 LOC TTF renderer. I'm going to use the 128K LOC FreeType.
Why? Well, because it's the one everyone else uses. It's what comes with everyone's Linux distro. Therefore, if there's something wrong with it, it's pretty much guaranteed it'll break other stuff and somebody else is going to have to fix that. Also it probably supports everything anyone might ever want.
If your 700 LOC TTF renderer doesn't perform as it should, it might become my problem to figure out why, and I don't really want that.
I'm not suggesting you should. I'm pointing out that these things can be done with a whole lot less code, and often so much less that it's less of a liability to learn the smaller option.

Put another way: I've had to dig into large font renderers before to figure out problems, because they didn't work as expected and it became my problem, and I'd much prefer that to happen with 700 LOC I can be intimately familiar with than with a large project. (I'm old enough to have had to figure out why Adobe's Type1 font renderer was an awful bloated mess, and in retrospect I should have just rewritten it from scratch; that it was used by others did not help us at all.)
I ended up with this one in large part because it took less time to rewrite libschrift (the C option I mentioned) and trim it down for my use than to figure out how to make FreeType work for me. I now have a codebase that's trivially understandable in an hour or two of reading. That's what compact code buys you.
No, it won't do everything. That's fine. If I need FreeType for something where it actually saves me effort, I'll use FreeType. It's not about blindly rewriting things for the sake of it, but about not lazily defaulting to big, complex options whether or not they're the appropriate choice.
A lot of the time people pick the complex option because they assume their problem is complex, or because it's "the default", not on the merits.
There are tradeoffs, and plenty of times where the large, complex component is right, but far too often it is picked out of laziness and becomes a huge liability.
You say that as if it were some kind of failed one-off experiment of the '90s. We tried it in the Multics days; it caught on, and the design philosophy is still popular to this day. It works quite well in systems with centrally managed software repositories, even if it doesn't in a system where software is typically distributed on a third-party shareware collection CD or via download.com.