It’s no different from ecosystem/library fragmentation within a language. For example, there are at least six approaches to accelerating numerical code in Python (weave, plain C + ctypes, Cython, f2py, numexpr, numba), each with its own cognitive load, interop, and debugging problems (Julia advocates rejoice, but it’s coming for them too).
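For a sense of what the "plain C + ctypes" route involves, here's a minimal sketch (it assumes a Unix-like system where `ctypes.util.find_library("m")` can locate the C math library):

```python
import ctypes
import ctypes.util

# Load libm and declare cos()'s signature so ctypes marshals
# doubles correctly instead of defaulting to int.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

Each of the six approaches trades off this kind of boilerplate differently; ctypes keeps you in pure Python but makes you spell out every C signature by hand.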
It seems like a community needs huge incentives to avoid churn and fragmentation, e.g. (a) strong backwards compatibility commitments, (b) a big backing company to do the endless boring stuff, (c) a strong benevolent dictator, ...
The same is true for languages, although there’s no outer “scope”, apart from platforms like iOS that can dictate toolchains, or a company where a CTO might make such choices.
But trying to avoid fragmentation among hackers seems like barking up the wrong tree.
"Julia advocates rejoice, but it’s coming for them too"
Why do you say that? The whole point of Julia is to create a language that's similar to Python in ease of development but is natively fast. Do you think it fails at this?
I think Julia was well designed to stay fast for a long time, for many tasks. But anything that isn’t universally and optimally addressable by a Lispish front end to an LLVM JIT is going to grow multiple approaches that aren’t fully compatible, in the same way that Python, not built for speed of execution, grew multiple approaches to fast code (just realized I forgot PyPy in my list above). So I expect there to be multiple, eventually incompatible, approaches to AOT compilation, web frameworks, GUIs, etc. Julia’s youth restricts divergence in the short run, but in the long run I think divergence is a healthy part of any ecosystem, and not to be disparaged as in the post I originally replied to.
In the general case that can't really be true, because the Lua interpreter is written in C.
With regard to GC languages in general, if you spend a lot of time working around the GC with techniques like object pooling (which is really just reinventing manual memory allocation), you can get close to a non-GC language in terms of performance.
GC languages are obviously fine for plenty of use cases, and for some code snippets they can be faster, but there is no way to make a GC free: there’s going to be some overhead no matter what you do.
The point of a tracing JIT is that it runs code in an interpreter, then generates machine code for loops and hot spots. By doing this at runtime you can take advantage of knowledge that a C compiler doesn't have. This is why LuaJIT is often faster than C.
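As a toy illustration of that runtime-knowledge advantage (this is not how LuaJIT works internally, just the principle): once a value is observed to be constant at runtime, code can be generated with it baked in, which an ahead-of-time C compiler cannot do for a value it only ever sees as a variable.

```python
def specialize_scale(factor):
    # A tracing JIT observes the actual runtime value of `factor` and can
    # emit machine code with it hard-wired (enabling constant folding,
    # strength reduction, etc.). This toy version "compiles" a Python
    # function with the constant baked into its source text.
    ns = {}
    exec(f"def scaled(xs):\n    return [x * {factor} for x in xs]", ns)
    return ns["scaled"]

scale_by_3 = specialize_scale(3)
print(scale_by_3([1, 2, 4]))  # [3, 6, 12]
```

A real tracing JIT also inserts guards so the specialized code is thrown away if the assumption (here, `factor == 3`) ever stops holding.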
LuaJIT can be faster than C for some code. Just like C can be faster than someone's naive hand coded assembly.
That doesn't change the fact that in the general case C is still faster, and there are classes of critical high-performance code that have to be written in C (or assembly, Rust, or even Fortran). Sometimes manual memory management is necessary to get acceptable performance (and determinism is occasionally required as well).
All else being equal, GC is always going to be slower than non-GC because a GC introduces unavoidable overhead.
I've worked in this space btw and I've never seen any evidence that LuaJIT is actually faster than C for anything outside of very specific micro-benchmarks.
I have yet to see multiple large programs written in LuaJIT that have better performance than the same programs written in optimized C.
The vast majority of benchmarks I've seen are down to LuaJIT performing specific optimizations out of the box that the C compiler used in the comparison can perform but doesn't.
In particular, the last time I looked at LuaJIT vs C++ benchmarks, the C++ compiler flags weren't set to allow the use of SIMD instructions, but LuaJIT uses them by default.
There was another recent example I saw where LuaJIT was calling C functions faster than C in a benchmark. Then someone pointed out what the LuaJIT interpreter was actually doing, and how to implement the same speed-up in C.
Java people made the same arguments years ago: "Java is just as fast or faster than C++". You'll notice that after 20 years of comparisons, no one who writes high performance code for a living makes that claim.
Java is fast enough that the increased programmer productivity of the GC and other features wins out in many cases. People aren't choosing Java over C++ because it results in generally more performant code.