Fortran isn't faster than Julia. Most comparisons I've seen are ties, or Julia wins, as long as both implementations are reasonably competent.
Fortran actually makes it fairly hard to write fast code, since it is missing some features. For example, I don't believe there is any way to write a Fortran program using BFloat16 numbers. You also can't easily write programs that mix strict IEEE semantics and fast-math semantics: you have to choose one as a compile-time flag.
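For contrast, here's roughly what that per-scope control looks like in Julia (function names are mine; this uses base Julia only — BFloat16 itself comes from the registered BFloat16s.jl package, not shown here):

```julia
# Strict IEEE semantics by default...
strict_sum(xs) = sum(xs)

# ...and fast-math semantics opted into per block, not per compilation unit.
function fast_sum(xs)
    s = zero(eltype(xs))
    @fastmath for x in xs   # reassociation/FMA allowed inside this loop only
        s += x
    end
    return s
end
```

The rest of the program keeps strict IEEE behavior; only the annotated loop gets the relaxed semantics.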
If you are ignoring the cost of the compiler, and a whole host of other things, sure. But the same can be said of almost any modern programming language. A lot of Julia's public benchmarks are not idiomatic Julia, or packages were created to elide how non-idiomatic they are. Julia isn't a slouch after precompilation, but the time to burn in code can exceed both the runtime and the compilation time of the same code in other languages by orders of magnitude. It's great for academic benchmarks, though! Huge pain for CI and development.
Are you not talking about Julia v1.9? Packages precompile to binaries (.so/.dll) in this release, and will support direct calls pretty soon. It sounds like you're talking about a much older Julia.
> A lot of Julia's public benchmarks are not idiomatic Julia or packages were created to elide how nonidiomatic they are.
I do enjoy programming in Fortran, but let's at least keep it concrete and grounded in reality. The older Fortran versions do have a small set of optimizations that are hard to perform in other languages, because the lack of aliasing makes otherwise-difficult-to-prove optimizations possible. But the newer Fortran versions don't optimize as well without forcing things like ivdep, much as in Julia, which is why you tend to get the same or similar machine code from LFortran and Julia (since both use the same backend, LLVM).
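To make the ivdep comparison concrete, here's a minimal sketch (my own example, not from any benchmark) of Julia spelling out the no-aliasing promise that Fortran's argument rules give the compiler by default:

```julia
# `ivdep` asserts the loop iterations are independent (no aliasing between
# y and x), which is roughly the guarantee Fortran hands the compiler for free.
function scale!(y, x, a)
    @inbounds @simd ivdep for i in eachindex(y, x)
        y[i] = a * x[i]
    end
    return y
end
```

Without the annotation the compiler has to prove `y` and `x` don't overlap before vectorizing aggressively, which is the situation both modern Fortran and Julia find themselves in.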
If by much older you mean the current stable Julia release, then sure... As of today Julia 1.8.5 is the stable release. So this post is traditional Julia community stuff... "it works and everything is perfect, you're the problem. We didn't create our own message board and stop posting in other places to try to control the narrative". Gaslight city.
I wasn't referring to package benchmarks, my apologies if that was unclear.
I'm glad I'm not the only one that feels this way. I love Julia as a language (not as an implementation, which is a pain-in-the-ass to work with), but there has long been a pattern of responses which in practice amount to gaslighting. Afaik it's only a small set of people who do that, and charitably they're likely just being less-than-thoughtful about generalizing their "works for my project", "works on my 24-core computer", "works when I know half the people developing packages" experiences to everyone. But to an outsider, it looks like the answer is always "the problems are all fixed now, you're just out of date", and most of the time that turns out not to be true.
More than half the community feels that way; the other half are either sycophants or don't care. It's not like it's one person, either; it's the language maintainers' general attitude. It's been called out in the past, but it sure hasn't changed, because there's a product to sell.
Your benchmarks are not concrete and realistic. I have never seen any language but (unsurprisingly) Julia compare its performance with that of other languages by calling those languages from within the host language. Definitely, the 15000x speedup over MATLAB claimed by JuliaComputing can be achieved with such seemingly concrete and realistic benchmarks.
LLVM has benefited and learned so much from Fortran and its compilers. But free food and service are always undervalued.
> Definitely 15000 times faster than MATLAB as claimed by JuliaComputing can be achieved with such seemingly concrete and realistic benchmarks.
I believe you're talking about NASA Launch Services engineers claiming Julia's ModelingToolkit simulations outperformed Simulink by 15,000x? That claim was of course not made by Julia Computing or anyone affiliated with Julia Computing, which is pretty clear because the person who makes the claim very clearly describes his affiliation at the beginning of the video. The source is here: https://www.youtube.com/watch?v=tQpqsmwlfY0, at 12:55. You did watch the whole video to understand the application and the caveats etc. instead of just reading the headline and immediately coming to a conclusion, right?
It is not just about call overhead. It is about a whole suite of aggressive optimizations that are only possible for a whole program. Point to one person or entity in the world who calls the SUNDIALS Julia wrapper to bind their C production code to SUNDIALS. If you cannot, you have two options: 1. make your Julia benchmarks concrete and realistic, or 2. cease and desist from pointless advocacy of your employer (JuliaComputing) and its benchmarks in public forums.
Sure, you can keep moving the goalposts. Of course it doesn't make sense to bind C production code to a C package (SUNDIALS) through Julia. But if you're asking who is using Julia bindings to SUNDIALS as part of a real case, one case that comes to mind is the Sienna power systems dynamics stuff out of NREL (https://www.nrel.gov/analysis/sienna.html). If you look inside the dynamics part of Sienna you can clearly see IDA being used (https://github.com/NREL-Sienna/PowerSimulationsDynamics.jl). IIRC at a recent Julia meetup in the Benelux region, kite model simulations also used it for the same reasons (https://github.com/aenarete/KiteSimulators.jl), which of course points to the open source code organization for Aenarete (http://aenarete.eu/).
The way to find other use cases is to look through the citations. Generally there will be a pattern to it. For cases which reduce to (mass matrix) ODEs FBDF generally (but not always) outperforms CVODE's BDF these days, so those cases have mostly converted over to using the pure Julia solvers. This includes not just ODEs but also other DAEs which are defined through ModelingToolkit, as the index reduction process generates ODEs and generally the ODE form ends up more efficient than using the original DAE form (though not always of course). It's in the fully implicit DAE form that the documentation (as of May 1st 2023, starting somewhere back in 2017 according to the historical docs) recommends using Sundials' IDA as the most efficient method for that case (https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/) (yes, the docs recommend non-Julia solvers when appropriate. There's more than a few of such recommendations in the documentation). Power systems is such a case with Index-1 DAEs written in the fully implicit form which are difficult in many instances to write in mass matrix form and not already written in ModelingToolkit, hence its use of IDA here. By the same reasoning you can also search around in the citations for other use cases of IDA.
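For readers unfamiliar with the distinction, a sketch of the fully implicit DAE form in question, solved with Sundials' IDA through the Julia wrapper (a toy residual of my own, assuming the Sundials.jl/DiffEqBase interface as documented — not code from Sienna):

```julia
using Sundials  # wraps SUNDIALS' IDA and re-exports the DAEProblem/solve interface

# Fully implicit form: residual!(out, du, u, p, t) must satisfy out == 0.
# Here the toy system is u' = 1.01u written implicitly.
residual!(out, du, u, p, t) = (out[1] = du[1] - 1.01 * u[1])

u0, du0 = [0.5], [0.505]                  # consistent initial conditions
prob = DAEProblem(residual!, du0, u0, (0.0, 10.0); differential_vars = [true])
sol = solve(prob, IDA())                  # IDA handles the implicit form directly
```

The mass-matrix ODE form that FBDF targets requires algebraically separating the `du` terms, which is exactly the step that's hard for the power systems models described above.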
Our discussion will continue as long as one side believes fair, realistic benchmarks are merely moving the goalposts. Your benchmark has a severe fundamental flaw, especially given the tiny reported runtimes. I hope you realize and fix it before other critics (perhaps more credible than an unknown forum contributor) begin to question your programming knowledge or fairness.
To address the matter, you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations. If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does. Apples to Apples.
What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biological models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models? There's lots of benchmarks on that site which show things from small to large. And small models do matter too...
> If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.
It tests with and without BLAS/LAPACK (which isn't always helpful, as you'd of course see from the benchmarks if you read them). One of the key differences though is that there are some pure Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalent in many scenarios, and that's one noted factor for the performance boost (and is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases, for example https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead showing the SciPyDiffEq handling with the Julia JIT optimizations gives a lower overhead than direct SciPy+Numba, so we use the lower overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....
> you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations
You do realize that a .so can have lower call overhead from a JIT-compiled language than from a statically compiled language like C, because some of the binding can be optimized away at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia come out faster than C and Fortran there. This shouldn't be surprising, because it's pretty clear how that works?
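The mechanism is easy to see in Julia: `ccall` lowers to a direct native call into the shared library, with no interpreter trampoline in between. A toy example calling libc's `strlen`:

```julia
# `ccall` compiles to a plain native call instruction into libc, so the
# per-call overhead is comparable to a C-to-C call.
len = ccall(:strlen, Csize_t, (Cstring,), "hello")
println(Int(len))  # prints 5
```

Since the JIT knows the concrete argument types at compile time, the usual dynamic-language FFI marshalling cost disappears.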
I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto updating tons and tons of ODE benchmarks with ODE systems ranging from size 2 to the thousands, with as many things as we can wrap in as many scenarios as we can wrap. And we don't even "win" all of our benchmarks because unlike for you, these benchmarks aren't for winning but for tracking development (somehow for Hacker News folks they ignore the utility part and go straight to language wars...).
If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.
They are one-time per instance, which isn't the same thing as one-time. Julia sysimages are huge and take a long time to generate even on decent hardware. Last I checked, that whole process was very janky, poorly documented, and under heavy revision (as it had been for years prior).
it's not. once you generate it somewhere you can just copy the files to anywhere that has the same architecture.
sysimages are huge (but they've gotten a decent bit smaller recently). notably, 1.8 added some features that let you make them a bunch smaller for deployment. you can now remove the metadata (i.e. source code text), which saves about 20%, and you can also generate the sysimage from a Julia launched with -g0 to remove debug info (Julia, unlike C, includes debug info by default because stack traces are nice). we also recently fixed a really dumb bug that was causing libraries to be duplicated in sysimages, so that will sometimes save a few dozen MB. (who knew that tar duplicates symlinks?)
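for reference, a hedged sketch of that workflow (the package name `Example` is a stand-in, and PackageCompiler.jl must already be installed):

```shell
# build a sysimage from a Julia launched with -g0 so debug info is left out,
# then launch Julia against the resulting image
julia -g0 -e 'using PackageCompiler; create_sysimage([:Example]; sysimage_path="sys_example.so")'
julia --sysimage sys_example.so -e 'using Example'
```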
When did you last check? it's now pretty dejankified and has been for about a year. the docs aren't perfect, but I think they're relatively good.
People have been saying "it's better" for 3 years now. One thing's for sure: I'm not going to go recheck this again anytime soon. Anything I say at this point will be met with "it's better now", and then I'll have to go find out it's really not, in classic Julia fashion. The docs for PackageCompiler were awful for about 2 years. See my above comment, and other users jumping in, as to how annoying it is.
Fun Julia story. I remember one time someone, I believe from Julia Computing (iirc), was telling me how much better Julia had gotten at something. They sent me links to academic flag-plant repositories that had no code in them. Literally empty packages, no branches even, with statement-of-purpose READMEs. I offered to work on it and was met with... academic competition about how I shouldn't do that because a package already existed for it, and how I should try to work with the author on theirs. Meanwhile I already had code for it; it just never went into the ecosystem. I'm highly unlikely to start investigating Julia again in the short term. Maybe in five years.
The reason people have been saying it's getting better for 3 years is because it has been. PackageCompiler 1.0, released in 2020, made it possible to distribute Julia programs as self-contained apps. Julia 1.6, released in 2021, added parallel precompilation, which made loading a lot faster. Julia 1.8, released in 2022, improved precompilation a bunch, and Julia 1.9, which will be released in 1 to 2 weeks, makes precompilation cache native code, which significantly improves things again.
Deployment is a fundamentally hard problem for dynamically typed languages. Shipping a Julia .so will probably never be as easy as shipping a .jar file in java. However, Julia has gotten a lot more deployable over the past 3 or so years and work on that front continues. Julia 1.10 already has a bunch of compiler speedups that make things a bunch faster than 1.9 (I expect 1.10 to ship late 2023 or early 2024)
Your parent comment wrote "it's better", you said "it's getting better". This is a common Motte-and-bailey argument in Julia discussions:
The "it's better [now]" is most often given as a response to someone expressing a problem they've had, and in context it's presented in a way that suggests the problem is fixed.
"It's getting better" is a far more reasonable response, if it also comes with a caveat about how much better it's gotten and how usable for purpose it is. A lot of the time Julians seem to conflate between "it's a reliable usable feature" and "a pull request vaguely related has been merged and will be available some time in the future, which fixes maybe 10% of the issue".
If you reread this comment thread while asking yourself "hm, how is the Julia community?", I think you'd find it very enlightening. You might even find things you could improve. If only that were the goal.