I'm excited to see `--trim` finally make it, but it only works when all code reachable from the entrypoints is statically inferable. In any non-toy Julia program, that's not going to be the case. Julia sorely needs a static mode and a static analyzer that can check for correctness. It also needs better sum type support and better error messages (static and runtime).
In 2020, I thought Julia would be _the_ language to use in 2025. Today I think that won't happen until 2030, if even then. The community is growing too slowly, core packages have extremely few maintainers, and Python and Rust are sucking the air out of the room. This talk at JuliaCon was a good summary of how developers using Rust are so much more productive in Rust than in Julia that they switched away from Julia:

https://www.youtube.com/watch?v=gspuMS1hSQo

Which is pretty telling. It takes overcoming a certain inertia to move away from any language.

Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
I don't think Julia was designed for projects that demand zero overhead in memory-constrained environments, or for squeezing out that last 2% of hardware performance to cut costs, the way C++, Rust, or Zig were.
Julia is the language to use in 2025 if what you’re looking for is a JIT-compiled, multiple-dispatch language that lets you write high-performance technical computing code to run on a cluster or on your laptop for quick experimentation, while also being metaprogrammable and highly interactive, whether for modelling, simulation, optimisation, image processing etc.
Actually, I think it sort of was. I remember Berkeley squeezing a ton of perf out of their Cray for a crazy task, because it was easy to specialize some wild semi-sparse matrix computations onto an architecture with strange memory/cache bottlenecks while being guaranteed that the results were still okay.
Telling what? Did you actually listen to the talk that you linked to, or read the top comment there by Chris Rackauckas?
> Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
See his last slide. And no, they didn't replace their Julia use in its entirety with Rust, despite his organization being a Rust shop. Considering Rust as a replacement for Julia makes as much sense to me as considering C as a replacement for Mathematica; Julia and Mathematica are domain-specific (scientific computation) languages, not general systems programming languages.
Neither Julia nor Mathematica is a good fit for embedded device programming.
I also find it amusing how you criticize Julia while praising Python (which was originally a "toy" scripting language succeeding ABC, but found some accidental "gaps" to fit in historically) within the narrative that you built.
> In any non-toy Julia program that's not going to be the case.
> Telling what? Did you actually listen to the talk that you linked to, or read the top comment there by Chris Rackauckas?
To clarify exactly where I'm coming from, I'm going to expand on my thoughts here.
What is Julia's central conceit? It aims to solve the "two-language problem": prototyping and rapid development happen in a dynamic, interactive language like Python or MATLAB, and the code is then moved to a faster, less flexible language like Rust or C++ for production.
This is exactly what the speaker in the talk addresses. They are still using Julia for prototyping, but their production use of Julia was replaced with Rust. I've heard several more anecdotal stories of exactly the same thing occurring. Here's another high-profile instance of Julia not making it to production:
Julia as a community has to start thinking about what makes a language successful in production.
Quote from the talk:
> "(developers) really love writing Rust ... and I get where they are coming from, especially around the tooling."
Julia's tooling is ... just not good. Try working on a several-hundred-thousand-line project in Julia; it is painful for so many reasons.
If you don't have a REPL open all the time, with the state of your program loaded both in the REPL and in your head, Julia becomes painful to work in. The language server crashes all the time, completion is slow, linting has so many false positives, TDD is barebones, etc. It's far too easy to write type-unstable code. And the worst part is that you can write code you think is type stable, and with a minor refactor your performance can just completely tank. Optimizing for maintaining Julia code over a long period of time with a team just feels futile.
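To make that concrete, here is a hypothetical sketch (made-up names) of the kind of refactor I mean, where a typed constant becomes an untyped global and a loop that used to infer cleanly goes dynamic:

    const THRESHOLD = 0.5            # typed constant: `total` below infers Float64

    function total(xs::Vector{Float64})
        s = 0.0
        for x in xs
            s += x > THRESHOLD ? x : 0.0
        end
        return s
    end

    # After a "minor refactor" the cutoff becomes configurable at runtime:
    threshold = 0.5                  # non-const global: its type is invisible to inference

    function total2(xs::Vector{Float64})
        s = 0.0
        for x in xs
            s += x > threshold ? x : 0.0   # now a dynamic dispatch on every iteration
        end
        return s
    end

    # `@code_warntype total2(rand(100))` highlights the resulting `Any`s in red.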
That said, is Python perfect? Absolutely not. There are so many things I wish were different.
But Python was designed (or at the very least evolved) to be a glue language. Being able to write user friendly interfaces to performant C or C++ code was the reason the language took off the way it did.
And the Python language keeps evolving to make it easier to write correct Python code. Type hinting is awesome, and Python has much better error messages (static and runtime). I'm far more productive prototyping in Python, even if executing code is slower. When I want to make it fast, it is almost trivial to use PyO3 with Rust to make what I want run fast. Rust is starting to build up packages used for scientific computing. There's also Numba and Cython, which are pretty awesome and have gotten me out of a pickle more than once.
As a glue language Python is amazing. And jumping into a million-line project still feels practical (Julia's `include` feature alone would make this untenable). The community is still growing, and projects like `uv` and `ty` are only going to make Python proliferate more.
I do think Julia is ideal for an individual researcher, where one person can keep every line of code in their head, and for code that is written to be thrown away. But I'm certainly not betting the near future on this language.
Python has a useful and rich ecosystem that grows every day. Julia is mostly a pile of broken promises (it neither reads like Python nor runs like C, at least not without the significant effort required to produce curated benchmarks) and desperate hype generators.
Since you have a rosy picture of Python, I assume you're young. Python was mostly a fringe/toy language for two decades, until around ~2010, when a Python fad started, not too different from the Rust fad of today; at some point Google started using it seriously and thought they could fix Python, but eventually gave up. The fad lived on, kept evolving, and somehow found popularity with SciPy and then ML. I used it in the 90s a little, and I found the language bad for anything other than replacing simple bash scripts, simple desktop applications, or a desktop calculator, and I still think it is (but sure, there are people who disagree and think it is a good language). It was slow and didn't have a type system, you didn't know whether your code would crash until you ran that line of code, and the correctness of your program depended on invisible characters.
"Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing, which is unsurprising because that's not what it was designed for.
It is ironic you bring up hype to criticize Julia while praising Python which found popularity thanks to hype rather than technical merit.
What promise are you referring to? Who promised you what? It's a programming language.
> "Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing
Doesn't matter. Languages do not matter, ecosystems do, for they determine what is practically achievable.
And it doesn't matter that the Python ecosystem relies on huge amounts of C/C++ code. Python people made the effort to wrap this code, document it, and maintain those wrappers. Other people use such code through Python APIs. Yes, every language with FFI could do the same. For some reason, none achieved it.
Even people using Julia use PythonCall.jl; that's how unsuitable Python is.
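For those unfamiliar, a minimal sketch of what that looks like (assuming PythonCall.jl is installed and numpy is available in the linked Python environment):

    using PythonCall

    np = pyimport("numpy")                      # load a Python module from Julia
    a  = np.linspace(0, 2pi, 100)               # `a` stays a Python object
    s  = pyconvert(Vector{Float64}, np.sin(a))  # convert the result to a Julia array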
> What promise are you referring to? Who promised you what? It's a programming language.
Acting dumb is a poor rhetorical strategy, and it ignores such nice rhetorical advice as the principle of charity; it is quite obvious that I didn't mean that a programming language made any promise. Making a promise is something that only people can do. And Julia's creators and the people promoting it made quite bombastic claims throughout the years that turned out not to have much support in reality.
I leave your assumptions about my age or other properties to you.
Ecosystems matter, but runtimes do as well. Take Java, for instance. It didn't have to wrap C/C++ libraries, yet it became synonymous with anything data-intensive, from Apache Hadoop to Flink, from Kafka to Pulsar. Sure, this is mostly ETL, streaming, and databases rather than numeric or scientific computing, but it shows that a language plus a strong ecosystem can drive a movement.
This is why I see Julia as the Java for technical computing. It’s tackling a domain that’s more numeric and math-heavy, not your average data pipeline, and while it hasn’t yet reached the same breadth as Python, the potential is there. Hopefully, over time, its ecosystem will blossom in the same way.
If what determines the value of a language is its libraries (which makes no sense to me at all, but let's play your game), then it is one more argument against Python.
You don't need FFI to use a Fortran library from Fortran, and I (and many physicists) have found Fortran better suited to HPC than Python since... the day Python came into existence. And no, many other scripting languages have wrappers, and no, scientific computing is not restricted to ML, which is the only area where Python can be argued to have the most wrapper libraries for external code.
Language matters, and the two-language problem is a real problem; you can't make it go away by closing your ears and chanting "doesn't matter! doesn't matter!"
Julia is a real step toward solving this problem, and it allows you to interact with libraries/packages in ways that are not possible in Python + Fortran + C/C++ + others. You are free to keep pretending the problem doesn't exist.
You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for a source, you cry foul and sadly try to appear smart by saying "you're acting dumb". You should take your own advice and, instead of "acting dumb", explicitly cite what "promises" or "bombastic claims" you are referring to. This is what I asked you to do, but instead of doing it, you are doing what you are doing, which is interesting.
> If what determines the value of a language is its libraries (which makes no sense to me at all, but let's play your game), then it is one more argument against Python
The fact that you can use those nice numerical and scientific libraries from a language that also has a tremendous amount of nice libraries from other domains, wide and good IDE support, thorough documentation, and countless tutorials and books available... is an argument against that language? Because you can easily use Fortran code in Fortran?
Nice.
> You don't need FFI to use a Fortran library from Fortran
Wow. Didn't know that.
> And no, many other scripting languages have wrappers,
Always less complete, less documented, with fewer teaching materials available, etc.
But sure, many other languages have wrappers. Julia for example wraps Python API.
> and no, scientific computing is not restricted to ML
Never said it is. I don't do ML, by the way.
> You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for source, you cry foul
Yeah, yeah. My claims about marketing like "Julia writes like Python, runs like C" are hyperbolic and require explicit citation, even though everyone who has had any exposure to this language knows such and similar catchphrases.
Look, you like Julia, good for you. Have fun with it.
In the early aughts, educators loved the shit out of Python because "it forced kids to organize their code with indentation". This was about a decade before formatting linters became low-key required for languages.
These are exactly the feelings I came away from the community with in ~2021 (along with the AD story, which never really materialized _within_ Julia - Enzyme had to come from outside Julia to "save it" - or materialized in a way (Zygote) whose compilation times were absolutely unacceptable compared to competitors like JAX).
More and more over time, I’ve begun to think that the method JIT architecture is a mistake, that subtyping is a mistake.
Subtyping makes abundant sense when paired with multiple dispatch — so perhaps my qualms are not precise there … but it also seems like several designs for static interfaces have sort of bounced off the type system. Not sure, and can’t defend my claims very well.
Julia has much right, but a few things feel wrong in ways that spiral up to the limitations in features like this one.
Anyways, excited to check back next year to see myself proven wrong.
I basically agree about subtyping (but not about multiple dispatch). More importantly, I think it's important to recognize that Julia has a niche in which literally no one can compete with it: interactive, dynamic, and high-performance.
Like, what exactly is the alternative? Python? Too slow. Static languages? Unusable for interactive exploration and data science.
That leaves you with hybrids, like Python/Cython, Python/Rust, or Numba, but taken on their own terms, these are absolutely terrible languages. Python/Rust is not safe (due to FFI) and certainly not pleasant to develop in, and no matter how you cut your code between the languages, you always lose. You always want your Python part to be in Rust so you get static analysis, safety, and speed. You always want your Rust part to be in Python, so you can experiment with it more easily and introspect it.
To clarify my comment: I agree that multiple dispatch is a very good language feature. I enjoy it, and I’m well-versed in the expression problem, yada yada.
That’s not what I meant by “method JIT architecture” — I meant calling back into the compiler at runtime to specialize code when the types are known.
I think multiple dispatch (useful as it is) is a little overrated. There's a significant portion of the time where I know I have a closed set of cases to cover, and an enum type with a match-like syntax would have worked better for that. For interfaces, multiple dispatch is good but again I would have preferred a trait based approach with static type checking.
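For what it's worth, the closest idiom in Julia today is a Union of structs with one method per case. A hypothetical sketch of why that is weaker than an enum plus match:

    struct Circle; r::Float64; end
    struct Square; side::Float64; end
    const Shape = Union{Circle, Square}   # "closed" by convention only

    area(c::Circle) = pi * c.r^2
    area(sq::Square) = sq.side^2

    # Nothing checks that the cases stay covered: add a Triangle to Shape and
    # forget its `area` method, and you get a MethodError at runtime, where an
    # enum + match would have failed to compile.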
I largely think multiple dispatch works well in Julia, and it enables writing performant code in an elegant manner. I mostly have smaller gripes about subtyping and the patterns it encourages with multiple dispatch in Julia, and larger gripes about the lack of tooling in Julia.
But multiple dispatch is also a hammer that makes every problem in Julia look like a nail. And there isn't enough discussion, official or community driven, that expands on this. In my experience, the average developer new to Julia tends to reach for multiple dispatch without understanding why, mostly because people keep saying it is the best thing since sliced bread.
Wrt hybrid languages, honestly, I think Python/Cython is extremely underrated. Sure, you can design an entirely new language like Mojo or Julia, but imo that offers only incremental value over Python/Cython. I would love to peek into another universe where all that money, time, and effort for Mojo and Julia went to Cython instead.
And I personally don't think Python/Rust is that bad. With a little discipline (and some tests), you can ensure your boundary is safe, for you and your team. Rust offers so much value that I would take on the pain of going through FFI. PyO3 simplifies this significantly. The development of `polars` is a good case study in how Rust empowers Python.
I think the Julia community could use some reflection on why it hasn't produced the next `polars`. My personal experience with Julia developers (both in person and online) is that they often believe multiple dispatch is so compelling that any person who "saw the light" would naturally flock to Julia.
Instead, I think the real challenge is meeting users where they are and addressing their needs directly. The fastest way to grow Julia as a language is to tag along with Python's success.
Would I prefer a single language that solves all my problems? Yes. But that single language is not Julia, yet, for me.
Mojo has a different scope than Julia and Python; it targets inference workloads.
Polars is a dataframe library.
Yes, it features vectorized operations, but it is focused on columnar data manipulation, not numerical algorithm development. I might say that this is narrow framing: people are looking at Julia through the lens of a data scientist and not that of an engineer or computational scientist.
Most of my gripes are when trying to use Julia the way a software engineer would use a programming language.
Most "data scientist" code is exploratory in nature (a prototype, or a script for a one-off exploration). And my main gripe is that making that code production ready and maintainable over a long period of time is so difficult that I would switch to Rust instead. And if I were going to switch to Rust anyway, I might as well start with Python.
There is no "_the_ language to use"; I always pick the language based on the project delivery requirements, like which language has tier 1 support in a specific SDK, rather than picking the project based on the language.
I would really like to see more people who have never written C++ before port a Rust program to C++. One can argue it may be easy to port initially, but in my opinion it is an order of magnitude more complex to maintain.
Whereas the other way around, porting a C++ program to Rust without knowing Rust is challenging initially (you have to understand the borrow checker) but orders of magnitude easier to maintain.
Couple that with easily being able to `cargo add` dependencies and good language server features, and the developer experience in Rust blows C++ out of the water.
I will grant that change is hard for people. But when working on a team, Rust is such a productivity enhancer that it should be a no-brainer for anyone considering this decision.
I've been a developer for 30 years. I program C#, Rust, Java, some TS, etc. I can probably go to most repositories on GitHub and at least clone and build them. I have failed, repeatedly, to build even small C++ libraries despite reasonable effort. And that's not even _writing any C++_. Just installing the tooling around CMake etc. is completely Kafkaesque.
The funniest thing happened when I needed to compile a C file as part of a little Rust project, and it turned out one of the _easiest_ ways I've experienced of compiling a tiny bit of C (on Windows) was to put it inside my Rust crate and have cargo do it via a C compiler crate.
I work on large C++ projects with 1-2 dozen third party C and C++ library dependencies, and they're all built from source (git submodules) as part of one CMake build.
> Couple that with easily being able to `cargo add` dependencies and good language server features, and the developer experience in Rust blows C++ out of the water.
Exactly this. Regardless of safety, expressiveness, control, whatever argument someone pulls from their hat to defend C++ the simple fact of a solid dependency manager cannot be overstated.
I'm happy to see static compilation in the works! If static compilation takes off, and people are able to build Python packages that are secretly just compiled Julia code, I can see a world where more people opt to use Julia over C or C++. Though writing Rust libraries that work as Python packages is a joy, so Julia would still have competition.
Julia's biggest problem at the moment is growth. Julia has suffered from not having exponential growth; it has either maintained small linear growth or fallen in popularity. Search YouTube for tutorials, Twitch for WatchPeopleCode, or GitHub for benchmarks, and Julia is not even in the room where the conversation is happening; there just isn't any mindshare.
And for good reason. There are so many ergonomic challenges when using Julia in a large codebase and in a large team. Julia has no formal interfaces, LSP suggestions are often just wrong, and there are no option types. This just makes writing Julia code a drag. And it makes it quite difficult to advocate for Julia to developers experienced with languages that offer these features.
Additionally, the core conceit pushed by Julia advocates is that the language is fast. This is true in controlled benchmarks, but in real-world scenarios it is a real pain to write and maintain code that stays fast on a high-velocity team, because doing so requires a lot of discipline and a strong understanding of memory allocation and of the assumptions Julia can and cannot make. You can write code that is blazingly fast, and then you make a change somewhere else in your program and suddenly your code grinds to a halt. We've had test code go from taking 10 minutes to run to over 2 hours because of type instability in a single line of code. Finding this was non-trivial. For reference, had this gone uncaught, our production run would have gone from 8 hours to 4 days.
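A hypothetical sketch of that failure mode, where a field type gets loosened far from the hot loop and every read of it starts boxing:

    struct Params
        coeffs::Vector{Float64}   # concrete element type: the sum below is tight
    end

    sumsq(p::Params) = sum(c^2 for c in p.coeffs)

    # Someone later "generalizes" the field, three files away from the hot path:
    #
    #     coeffs::Vector{Real}    # abstract element type: every element is boxed,
    #                             # and `sumsq` now allocates and dispatches per item
    #
    # Same code at the call site, orders of magnitude slower.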
The lack of growth really hurts the language. Search for pretty much any topic under the sun and you'll find a Python package and possibly even a Rust crate. In Julia, you are usually writing one from scratch. Packages essential to data processing are starved for contributors. If you have a somewhat unpopular open source package you rely on that doesn't quite work the way you want it to, you might think "I'll just submit a PR", but it can languish for months to a year.
The Julia community needs to look at what other programming languages are offering that Julia developers want and will benefit from. The software world is changing very quickly, and Julia needs to change too to keep up.
> We've had test code that goes from taking 10 minutes to run to over 2 hours because of type instability in a single line of code.
For those who might not be familiar, tooling can sometimes help a lot here. The ProfileView.jl package (or just calling the @profview macro in VSCode) will show an interactive flamegraph. Type instabilities are highlighted in red and memory allocations are highlighted in yellow. This will help to identify the exact line where the type instability or allocation occurs. Also, to prevent regressions, I really like using the JET.jl static analysis package in my unit tests.
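A sketch of the JET part, with a made-up function standing in for the hot path:

    using Test, JET

    simulate(n) = sum(sqrt(i) for i in 1:n)   # stand-in for whatever you ship

    @testset "inference regressions" begin
        # @test_opt fails the test if JET finds type instability or dynamic
        # dispatch reachable from this call, so regressions get caught in CI.
        @test_opt simulate(1_000)
    end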
If they are so easy to identify, why not just make them JIT errors? Manually inspecting all of this sounds awful. I'd rather my compiler just do it for me.
Dynamic behaviors can be a nice default for all the spots where performance is not critical. So Julia lets you code like Python in places where performance doesn't matter, and then code like C++ or Fortran in places where it does.
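The usual way to mix the two is a function barrier: keep the top level dynamic, and push the hot loop behind a function call so it compiles against concrete types. A minimal sketch:

    function run_job(config::Dict)           # dynamic, Python-style glue
        data = rand(config["n"])             # inferred as `Any` here, which is fine
        return kernel(data)                  # the barrier: one dynamic dispatch
    end

    function kernel(data::Vector{Float64})   # static, Fortran-style hot loop
        s = 0.0
        @inbounds for x in data
            s += x * x
        end
        return s
    end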
I really like Nim, but this identifier resolution seems to be the only topic that keeps coming up against Nim, and I really doubt this "feature" is worth the consternation when pitching the language to someone new. I wish the Nim developers would just admit that it is not worth the effort and make it opt-in instead of opt-out.
Personally, I agree, but the Nim lead dev looked into it and declared it "unreasonably hard to implement with unknown effects for template instantiation"
I wish he had detailed the problems more as I think opt-in to insensitivity at import time with some kind of compile-time error for ambiguity is The Better Answer. { For then Nim would not be taking away ident choice from (many) creators in order to support the ident-pickiness of (many) consumers. It's hard to measure/know/forecast, of course, but I suspect the 2nd "(many)" (the number of such very picky consumers) is low in absolute terms in spite of usually high name-consumer/name-producer ratios. A huge fraction of people just go along with whatever the name-producer did, a small fraction complain loudly, and an even smaller fraction would care enough to opt-in to insensitivity if such were available and, say, sometimes caused trouble. There is also a "broader system" aspect of the producer-consumer ratio changing as a prog.lang goes from dark horse/obscure/niche to popular (with a lot more consumers). }
All this said, I would reinforce what others here have said - the property is not actually as onerous in practice as you might imagine. So, it is really worth suppressing knee-jerk reactions to this one feature and giving the language a try, just with `nim c/cpp/js --styleCheck:usages` or whatnot.
> One way data binding that is immutable and has explicit functions for each state transition is a major feature.
> <Tag a="b" x={y+z}>{valid JS expression}</Tag>
> is
> React.createElement(Tag, { a: "b", x: y + z }, [<valid JS expression>])
If you take the main reasons React is criticized and claim they are features, surely you have to refute the criticism more thoroughly than "This seems so simple, yet do not underestimate it." or "That is powerful. Do not underestimate this."
Modern frameworks (Svelte / Vue / Astro) are about using the platform. They are performant, efficient, easier to read, easier to write, and easier to understand.
I don't see any reason I would pick React for a greenfield project anymore.
I get that if YOU don't want to use a modern framework and want to stick with what you know, sure, by all means, pick React. But writing even a semi-complex application in both React and Svelte should make it immediately obvious that React is antiquated, if you give both frameworks a fair shake.
>They are performant, efficient, easier to read, easier to write, and easier to understand.
Proof? Source?
>I don't see any reason I would pick React for a greenfield project anymore.
I don't see why you wouldn't. It's stable, performs well, works in every browser, it's easy to find answers for problems you run into, and almost everyone knows it (or should; it's 2024, you don't have an excuse anymore).
>I get that if YOU don't want to use a modern framework
React is the modern framework. It's nimble, concise. The other frameworks are regressive -- they make mistakes that older frameworks already highlighted as being problematic over time.
You should read this article; it was quite good, even if you do continue to use React afterward. It's good to understand the alternatives, even if you never use them.
It's been a few months since I read it, but I recall the main thing I took away was this: React isn't necessarily the best choice; other frameworks provide better performance, development experience, and tooling, and they should also be considered.
Of course these are just opinions. Everyone should consider all the facts and come to their own conclusions about what they use and don't use.
Companies will continue to use React of course. But I'm not sure if I would use vanilla React for anything I have complete control of.
I've done exactly this. Can confirm you are correct. Svelte is also much more performant in the client. But React is great in its own ways, and I particularly like the way it tends to point developers toward composition of small components.
I remember thinking I'd like to pick up React for some changes we were planning at work. I spent a weekend with Road to React making a simple web app. I was amazed how overly complicated it seemed, and that I was being told "there's no DSL" when JSX sure seemed like one. I'm a moron and not some special coder, so maybe that's why. I was also using Rails 6 at work at the time, but 7 seems to have eliminated any need we thought we were going to have for React, so that's been nice.
Each person that I’ve come across who likes react is super smart and I can’t follow what they are trying to say is so great. So maybe don’t listen to me anyway.
I think YouTube will "win" this war, unfortunately. Even people who "join" individual channels cannot watch content from those channels if they use an ad blocker. At some point these people are either going to buy premium or turn off the ad-blocker.
It boggles my mind that engineers and managers that have kids and are working on the YouTube team think this is okay and this is what the future of the platform should look like.
It's the beginning of the end of an era, and I'm immensely sad to see what the internet has become and where it is heading.
> At some point these people are either going to buy premium or turn off the ad-blocker.
Or they stop going to YouTube. In the past 12 months my use of these platforms has dropped a huge amount. I no longer visit Twitter or Reddit because they blocked 3rd party apps, forcing me to view adverts; I would rather not be a user on a user-hostile platform.
Deep down, most of us know these are huge time sinks providing very little real value to our lives, so it doesn't take much friction added by the platform to make people turn away from their bad habits.
YT is too big for people who hate ads not to invest a massive effort. The ad-blocker endgame will be YT videos being "pre-watched" by a headless browser running in the background and the video data grabbed from there (either recorded or captured directly). You then watch the ad-less video in a custom frontend. If implemented correctly, there's pretty much nothing YT could hope to do about this.
Sounds like how DVRs worked on cable TV back in the day. Some were even sophisticated enough to use the typical video cutout before commercials to auto skip them on playback.
Now, of course, we have community efforts like SponsorBlock that could easily identify ad locations, or some form of auto-detection based on analyzing the video if they insert the ads at random locations in the video stream itself.
I'm certain this is coming. There's very little on YT I need to watch right now, and having a bunch of videos already downloaded and de-ad-ified would suffice. It would prevent the mindless watching anyway.
I'd even be cool with just blanking my screen and muting the sound when an ad is playing. Given my usual YouTube watching mindset, a brief moment to just breathe would be good for me.
The error handling in Go is SO verbose.
When reading my code (or even reviewing other people's code) to understand at a high level what is going on, I feel like I'm squinting through a wire-mesh window.
    // `?` returns early with the error if the call fails; otherwise it unwraps the value.
    let lat_long = fetch_lat_long(&params.city).await?;
    let weather = fetch_weather(lat_long).await?;
    let display = WeatherDisplay::new(params.city, weather);
Maybe at first glance the Rust code can seem alien (what is that `?` doing there, what is actually going on with `.await`, etc.), but when you are writing a 100k-line application in Rust, you learn the patterns and want to be able to see the domain logic clearly. And there are no hidden errors or exceptions: when this code fails, you will be able to clearly identify what happened and which line the error occurred on.
Prototyping even small applications in Go is verbose. And worse still, error prone. It's easy to be lazy and not check for errors, and, oops, 3 months in, your code fails catastrophically.
I know a lot of people like Go on here but in my mind Go only makes sense as a replacement for Python (static compilation, better error handling than Python, faster etc). If you don't know exactly what you want to build, maybe it is faster to prototype it in Go? I still would reach for Rust in those instances but that's just me. For large applications there's no question in my mind that Rust is a better choice.
Edit: people have pointed out that I'm not comparing the same thing, which is true; I apologize for the confusion. But even code where Go propagates the errors is much more verbose (and my point still stands):
    err := db.Get(&latLong, "SELECT lat, long FROM cities WHERE name = $1", name)
    if err == nil {
        return latLong, nil
    }
    latLong, err = fetchLatLong(name)
    if err != nil {
        return nil, err
    }
    err = insertCity(db, name, *latLong)
    if err != nil {
        return nil, err
    }
And this is extremely common. More discussion in the thread below.
Go is a great replacement for Python as a web backend language (which Python really is not). I'm not sold on Rust as a web backend language, though: it ends up being a little too hard to work with (hello `async`) in that application, and you need to import a lot of very opinionated 3rd party dependencies. That stuff, plus the complexities of working with the borrow checker and async, adds a lot of complexity to your large, long-running applications that you don't have to manage in Go.
I think Rust is a fantastic systems language that is misapplied to the web. I think Python was a fantastic scripting language that is misapplied to the web, too, so you can put that in context.
I agree that Go's web backend features make it fun to prototype an application. But the moment I want to do anything more complicated, I'm not sure.
I counted the number of lines in my work projects, and I have $WORK projects that are 100k lines of code. Maintaining that in Go would seem like a nightmare to me, but in Rust it is so much nicer. My personal projects range from 10k to 35k lines, and across all of those, for similar complexity, I much prefer the ones where I'm writing and maintaining Rust over Go.
It sounds like you have a strong personal preference for Rust, which is fine. I'm pretty sure nobody loves Go as much as many people love Rust.
Even 100k LOC is pretty small for a software project, and likely doesn't need more than a few engineers. The advantage of the simplicity of Go shows up when you have to coordinate across >100 people, many of whom are kind of mediocre, and you need all of them to ship features. If everyone in the world were a genius obsessed with writing clean code, Rust would be a fantastic language to work in at that scale, but they are not.
For clarification, these are 100k LOC projects where I'm the only software engineer. I've worked on larger projects in C++ with other engineers, and would absolutely continue to prefer Rust as the size of the codebase increases. I guess my point is that Rust scales in a way that few languages do. Go comes close though :)
This has been my primary objection with Go, as well. I wonder if it's just a lack of practice and that I'd eventually git gud, but I find it so hard to flow through code to get a general idea of what's going on. It's basically impossible to use code "paragraphs" to separate logical groupings of functionality because of the `if err != nil` blocks, and leads to a very choppy reading experience. With any non-trivial logic, I've found Go to be detrimental to my understanding of what's going on.
Sure, but this code propagates the errors and that has the same problem:
    err := db.Get(&latLong, "SELECT lat, long FROM cities WHERE name = $1", name)
    if err == nil {
        return latLong, nil
    }
    latLong, err = fetchLatLong(name)
    if err != nil {
        return nil, err
    }
    err = insertCity(db, name, *latLong)
    if err != nil {
        return nil, err
    }
In Rust, propagating errors is a lot more succinct and easy to do. It is usually what you want to do as well (you can think of Python and C++ exceptions as essentially propagating errors), and the special case can be handled explicitly. In Go, you have to handle everything explicitly, and if you don't, you can fail catastrophically.
I guess it comes down to what features the language provides that make it easy to do "the right thing" (where "the right thing" may depend on where your values lie; for example, I value correctness, readability of domain logic, easy debugging, etc.). And in my opinion, it's easy to do what I consider bad software engineering in a language like Go.
The point of verbosity in Go error handling is context. In Go, you rarely just propagate errors, you prepend them with context information:
    val, err := someOperation(arg)
    if err != nil {
        // Using %w instead of the final %v would additionally wrap err,
        // so callers could still match it with errors.Is / errors.As.
        return nil, fmt.Errorf("some operation with arg %v: %v", arg, err)
    }
It really sucks when you're debugging an application, and the only information you have is "read failed" because you just propagated errors. Where did the read happen? In what context?
Go errors usually contain enough context that they're good enough to print to the console in CLI applications. Docker does exactly that; you've seen one if you've ever tried executing it as a user that isn't in the "docker" group:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: dial unix /var/run/docker.sock: connect: permission denied. See 'docker run --help'.
If your error is 5 or 10 levels deep, do you prepend contextual information every time? External libraries typically have good error messages already; why do I have to prepend basically useless information in front of them?
Not to pick on any of these projects, but this pattern is way too common to not have some sugar:
Wait, that's interesting, and I hadn't formulated it this way before.
It reminds me of A Philosophy of Software Design:
The utility of a module (or any form of encapsulation, such as a function) is greater the smaller its interface is relative to its implementation. Shallow vs. deep modules.
Error handling might be a proxy for determining the semantic depth of a module.
Another perspective: (real) errors typically occur when a program interacts with the outside world, AKA side effects etc.
Perhaps it’s more useful to lift effects up instead of burying them. That will automatically put errors up the stack, where you actually want to handle them.
> Perhaps it’s more useful to lift effects up instead of burying them. That will automatically put errors up the stack, where you actually want to handle them.
This perspective also made me think of another advantage of Go error handling over traditional try/catch logging:
In other programming languages, when doing asynchronous work in some background worker or whatever, exceptions become useless as error-reporting mechanisms, because stack traces don't correspond to the logical flow of the program. I remember Java's Spring being exceptionally painful in this regard.
In Go, since errors are values and you're expected to add context information whenever appropriate, you can shift and shuffle those errors between stacks, and still keep useful context information when you eventually read those errors.
It does the same thing as the Go version. The Go version requires you to pass the data to be sent back to the client into the context, while the Rust version uses the function's return value as the data to be sent back. The framework then serializes it appropriately.
The only issue there is if you want to return the error with code 200 (which you shouldn't, but it's been known to happen). In that case the Go code and the Rust code will look a bit closer to each other because then you can't use `?` this way (without writing some more boilerplate elsewhere).
Funny, I have the exact opposite reaction to those examples. I look at the Rust code and think "what happens when something goes wrong?" There's no way to tell from the code you gave. The error handling is somewhere else. Whereas, I can see exactly how the Go code is going to behave.
I like that errors are in your face too, but with the caveat that they should only be in your face when they matter. And in Go, the lazy thing will result in a bad time. You can always bet on people being lazy.
Like take a look at this pattern:
    err := db.Get(&latLong, "SELECT lat, long FROM cities WHERE name = $1", name)
    if err == nil {
        return latLong, nil
    }
    latLong, err = fetchLatLong(name)
    if err != nil {
        return nil, err
    }
    err = insertCity(db, name, *latLong)
    if err != nil {
        return nil, err
    }
Is it really necessary to have the error handling explicit in this case? Go through any of your Go code. I find this kind of error handling is 90% of the error handling I write.
If those calls can cause errors, then yes, it's necessary to handle them. Maybe you're content to have a contextless stack trace printed whenever things fail, but I like a little order.
1k, 100k, or 10M LOC does not change anything, because no project depends on all the LOC as a single unit; everything is split into modules/packages/classes/functions.
Kubernetes is over 1.5M LOC, and I've not seen a problem with its error handling.
This talk on YouTube seems to suggest that not everything is hunky-dory with Zig as a C/C++ compiler (with reference to libc), but I cannot remember any details, since it has been almost a year since the talk was published.
I wonder if things have changed? Does anyone know?
No, this talk is about something entirely different: the idea of giving a libc interface to Zig's standard library, in order to use it whenever a C project needs a libc dependency and wants static linking. Currently Zig uses musl for that, but in the future it could just use its own stdlib instead.
As an addendum, using Zig to build libc is going to be amazing. It'd be like a compile-time LD_PRELOAD, so it's less magical and you have much more control over the scope.
People who say this is SEO haven't, in my opinion, read or used RealPython. It is genuinely one of the best Python learning websites out there. Its articles are written by people knowledgeable in the field, and they are usually comprehensive and thorough while still being easy to understand.
I also think Google's ranking is fair in this case, assuming you want the highest-quality information about "how to iterate through a dictionary". I certainly would prefer this kind of RealPython article, with a table of contents, so that I can find exactly what I am looking for.
I do think Google (and the web in general) has started to suck because of SEO spam, and I generally agree with the author's premise. Searching for recipes sucks, searching for reviews sucks, searching for places and events also sucks.
I just think this particular instance is a bad example. RealPython values your time, and I wish there were a lot more websites that valued my time too.
When looking for answers to Python questions, it honestly ranks above any random StackOverflow link or the docs for me. The detailed table of contents with links, which the author labels "…so…much…nonsense", is actually exactly the kind of precise "take me to exactly what I want to know, quickly" solution they seem to yearn for.
Right, I’m genuinely confused about what the author thinks would be better than that table of contents.
I guess they wanted an answer to their specific question and absolutely nothing else? That might work for something as simple as “iterate through a dictionary” but in many other cases the full context is useful so you can figure out for yourself whether the immediate answer actually solves your problem.
If the author wanted to write "How to do absolutely anything with a dictionary" they should have titled the article that.
Were the first 20 paragraphs on what a dictionary is really useful? No, they're unnecessary verbiage that anyone clicking on an article with that title already knows.
I had no idea RealPython was so well respected, but the example article really is quite egregious. I think the problem is that the title was written for experts, but the contents were written for beginners. Content mismatch.