I haven't looked into CLR in a long time, but it feels like it has a fraction of JVM's adoption and community size. Microsoft also seems to be prioritizing Typescript and Node internally with its recent moves.
They scrapped the old CLR and started over with a new "CLR Core".
Then they ported the new CLR Core, along with the framework's BCL, to WebAssembly, so the CLR now runs in the browser; Blazor WebAssembly is built on top of that.
So you can now target the CLR and have your code run in the browser.
And now they are making progress on Blazor United, where components can start off as exclusively server-side rendered and then transparently and automatically move to WebAssembly rendering within the same application or page. It really is crazy stuff.
Typescript is not the only thing going on in MS Engineering.
To be exact, they have three active CLRs: CoreCLR, the (Framework) CLR and MonoVM. WebAssembly, Android and iOS apps use MonoVM because it is optimized for AOT compilation.
At the risk of starting a flame war, the CLI and CoreCLR are a far superior VM and platform from a technical standpoint. A lot of the features scheduled for Java 21 / Project Valhalla are basically just catching up to modern VM design.
Of course there is more to the choice of a platform than just the technical differences.
There would be a case for this argument if the CLR had anything even remotely resembling HotSpot's runtime optimization.
What you describe is the result of different philosophies/priorities. The CLR focuses on static compile-time optimization, while the JVM is a highly dynamic construct with unmatched runtime analysis. In the 90s, there was a hope that with sufficient escape analysis, the need for user-defined primitives would vanish, which is why value types were not done earlier.
By itself, accessing values on the stack rather than by reference is technically trivial. The problem lies in retrofitting that kind of thing onto an existing platform.
> What you describe is the result of different philosophies/priorities
Maybe, but it might also be the result of bad design decisions.
> highly dynamic construct with unmatched runtime analysis
As someone who has spent quite a bit of time working on custom optimization around HotSpot, I fail to see how anyone can describe the current state of the JVM (J9 is a bit better) as unmatched. V8 and, to some extent, Julia have much stronger dynamic analysis.
> Maybe, but it might also be the result of bad design decisions.
Most assuredly not. Back in the 90s, the cost of loading memory and performing a CPU instruction was essentially equal. Today, fetching data from RAM takes 100x longer than a CPU instruction. This makes locality of data absolutely crucial and is a consequence of computing throughput increasing, but latency remaining stagnant (think of it like a database transaction).
With the focus on garbage collection, it made sense to throw everything onto the heap and use runtime analysis to inline as much as possible. Nobody foresaw how these hardware fundamentals would change over the following decades.
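The locality cost of heap-of-pointers layouts is easy to see on the JVM today. A minimal sketch (the class and names are mine, not from the thread): the same sum computed over a primitive `long[]`, which is one contiguous block of memory, and over a boxed `Long[]`, where every element is a separate heap object reached through a pointer.

```java
// Minimal sketch: primitive vs boxed array layout on the JVM.
public class LocalityDemo {
    static long sumPrimitive(long[] xs) {
        long s = 0;
        for (long x : xs) s += x;   // sequential reads of contiguous memory
        return s;
    }

    static long sumBoxed(Long[] xs) {
        long s = 0;
        for (Long x : xs) s += x;   // one pointer chase (plus unbox) per element
        return s;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long[] primitive = new long[n];
        Long[] boxed = new Long[n];
        for (int i = 0; i < n; i++) {
            primitive[i] = i;
            boxed[i] = (long) i;    // each value becomes its own heap object
        }
        // Same result, very different memory traffic: prints true
        System.out.println(sumPrimitive(primitive) == sumBoxed(boxed));
    }
}
```

When RAM latency is ~100x an instruction, the extra dereference per element in the boxed version is exactly the cost being described.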
As far as I'm concerned, the proposed JVM spec for value types is the most promising model I have seen anywhere. Instead of a binary choice between entities in the heap and values on the stack, you have a more granular control with incremental benefits and constraints.
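For reference, a rough sketch of what that proposed model looks like in the Valhalla drafts (JEP 401 and related; the syntax is still in flux and this will not compile on a current JDK):

```java
// Draft Valhalla syntax -- subject to change, not valid on today's JDKs.
// A value class gives up identity (==, synchronization, mutation) and in
// exchange the JVM is free to flatten it into arrays and enclosing objects.
value class Point {
    int x;
    int y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

// Point[] pts = new Point[1024];  // may be laid out as flat x,y pairs,
//                                 // not as 1024 pointers to heap objects
```

The "granular control" is that flattening is something the JVM may apply where the constraints allow, rather than a hard stack-vs-heap split the programmer must commit to.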
> As someone who has spent quite a bit of time working on custom optimization around HotSpot, I fail to see how anyone can describe the current state of the JVM (J9 is a bit better) as unmatched. V8 and, to some extent, Julia have much stronger dynamic analysis.
Didn't know that, probably worth looking into. Though I wonder how much you can compare V8 and the JVM, given the fundamental difference between a statically and a dynamically typed language.
> CLR focuses on static compile-time optimization
I thought that was common knowledge. I mean, does the latest CLR perform any significant amount of runtime optimization? From what I've read, the CLR makes use of CPU-specific instructions such as SIMD, but does no cache/layout optimization or inlining at runtime.
The JVM was originally designed with a very dynamic language in mind, e.g. Java's support for dynamic loading, dynamic binding, reflection etc. The influences at the time were Smalltalk and Objective-C.
As Java has evolved to be a much more statically typed language (especially Java 5), the JVM has somewhat struggled to exploit this while maintaining backwards compatibility. It's nowhere near as bad as the Python situation though. I believe the CLR was built with things like parametric specialisation in mind from the beginning.
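The specialisation gap is visible directly: Java generics are erased to a single runtime class with `Object`-typed storage (so a `List<Integer>` boxes its elements), whereas the CLR stamps out a specialised `List<int>` at runtime. A minimal illustration on the Java side (class name is mine):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: every instantiation of ArrayList shares one runtime
// class, so the JVM cannot specialise layout per type argument the way
// the CLR's reified generics can.
public class ErasureDemo {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        List<String> strings = new ArrayList<>();
        // Same runtime class for both instantiations: prints true
        System.out.println(ints.getClass() == strings.getClass());
        ints.add(42);   // the int is boxed into an Integer heap object
    }
}
```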
No opposition intended; I was just trying to add some historical context to your comment. I agree the JVM is poorly suited to modern Java, and Oracle knows this too, hence the development of GraalVM.
I do not use C# (in fact, I do my best to avoid Microsoft products). But I can tell you that there are obvious workarounds C# employs to work around its lackluster GC. Stack allocation and spans immediately come to mind.
As far as I know, Java offers no way to mark objects as stack-allocated, but C# does. Spans in C# allow programmers to produce subarrays without copying. Enums in C# are stack-allocated, unlike Java. So on, so forth. None of this is a huge deal for Java since some of the best GCs in the world are implemented atop the JVM. But I do think C# offers its workarounds when GC performance gets in the way.
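Java does have a narrower analogue for the no-copy subarray case: a `ByteBuffer` slice is a view over the same backing memory, though it is nowhere near as general as C#'s `Span<T>`. A sketch (class name is mine):

```java
import java.nio.ByteBuffer;

// Sketch: ByteBuffer.slice() yields a zero-copy view over a sub-range of
// the same backing array -- roughly the subarray role spans play in C#,
// though far less general (no equivalent over arbitrary element types).
public class SliceDemo {
    public static void main(String[] args) {
        ByteBuffer whole = ByteBuffer.wrap(new byte[]{10, 20, 30, 40, 50});
        whole.position(1);
        whole.limit(4);
        ByteBuffer view = whole.slice();   // covers bytes 20, 30, 40; no copy
        view.put(0, (byte) 99);            // writes through to the backing array
        System.out.println(whole.array()[1]); // prints 99
    }
}
```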
But these are all tradeoffs that make C# almost as complex as C++. Sure, there are cases where these low-level optimizations allow for better performance, but don't forget that the more we specify, the less freedom the runtime has. SQL is a good example: it specifies the what, not the how, and that makes a good database very hard to beat on complex queries.
The way I have seen it described somewhere: C# has a slightly higher performance ceiling, but a naive application may very well run faster in Java.