Well that's good, since Go was specifically designed for juniors.
From Rob Pike himself: "It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."
However, the main design goal was to reduce build times at Google. This is why unused dependencies are a compile-time error. Source: https://go.dev/talks/2012/splash.article#TOC_6.
> There are two reasons for having no warnings. First, if it’s worth complaining about, it’s worth fixing in the code. (Conversely, if it’s not worth fixing, it’s not worth mentioning.) Second, having the compiler generate warnings encourages the implementation to warn about weak cases that can make compilation noisy, masking real errors that should be fixed.
I believe this was a mistake (one that sadly Zig also follows). In practice there are too many things that wouldn't make sense as compiler errors, so you need to run a linter anyway. When you need to comment out or remove some code temporarily, the program won't even build, and you have to remove a chain of unused vars/imports until the compiler lets you. It's just annoying.
Meanwhile, unlinted Go programs are full of little bugs, e.g. unchecked errors or err-var misuse. If only there were warnings...
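For anyone who hasn't felt the pain, here's a minimal sketch of the dance. Comment out the one line that uses a variable and its import, and the program stops compiling until you either delete them or park them behind the blank identifier:

```go
package main

import "fmt"

func compute() int { return 42 }

func main() {
	result := compute()

	// Temporarily disabled while debugging:
	// fmt.Println("result:", result)

	// Without the next two lines, the build fails with errors along the
	// lines of `declared and not used: result` and `"fmt" imported and
	// not used` — so you silence them with the blank identifier instead:
	_ = result
	var _ = fmt.Println
}
```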
Yeah, but just going back to warnings would be a regression.
I believe the correct approach is to offer two build modes: release and debug.
Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.
Release is the default, is strict and runs fast.
That way you can mess about in development all you want, but need to clean up before releasing. It would also take the pressure off having release builds compile fast, allowing for more optimisation passes.
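For what it's worth, today's Go toolchain already has some of these knobs, just not bundled into modes, and none of them relax the unused-variable rule. A rough approximation of the split might look like:

```sh
# "Debug": optimizations and inlining off, race detector on.
go build -gcflags='all=-N -l' -race ./...

# "Release": the default optimized build, plus static checks up front.
go vet ./... && go build ./...
```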
> Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.
At least in the golang / unused-vars at Google case, allowing unused vars is explicitly one of the things that makes compilation slower.
In that case it's not "faster compilation as in less optimization". It's "faster compilation as in don't have to chase down and potentially compile more parts of a 5,000,000,000 line codebase because an unused var isn't bringing in a dependency that gets immediately dropped on the floor".
Accidentally pulling in an unused dependency during development is, if not a purely hypothetical scenario, at least an extreme edge case. During debugging, most of the time you have already built those 5,000,000,000 lines while trying to reproduce a problem on the original version of the code. Since that didn't help, you now want to try commenting out one function call. Beep! Unused var.
Right, I meant that the binary should run slowly on purpose, so that people don't end up defaulting to just using the debug build. A nice way of doing so without just putting `sleep()`s everywhere would be to enable extra safety checks.
I feel like people always take the "designed for juniors" thing the wrong way, implying that features or ideas beneficial to general software engineering were left out as a trade-off to make the language easier to learn, at the cost of what the language could be for a senior. I don't think the Go designers see these as opposing trade-offs.
What's good for the junior can be good for the senior. I think PL values have leaned a little too hard towards complexity and abstract 'purity'; Go was a break away from that, one that has proved successful but controversial.
But it also has the advantage that you can read a lot of code from other devs without twisting your eyes sideways, because there's no room for everybody to have their own style.
"This is Go. You write it this way. Not that way. Write it this way and everyone can understand it."
I wish I was better at writing Go, because I'm in the middle of writing a massive and complex project in Go with a lot of difficult network stuff. But you know what they say: if you want to eat a whole cow, you just have to pick an end and start eating.
Yep... it's like people never read some of the main devs' motivations. The ability for people to read each other's code was a main point.
I don't know, but from what I've seen a lot of attacks on Go come from non-Go developers, VERY often Rust devs. When I started Go, it was always Rust devs in /r/programming pushing Rust as the next best thing, the whole "rewrite everything in Rust"...
About 10 years ago I learned Rust, and these days I can barely read the code anymore with the tons of new syntax that got added. It's like they forgot the lessons from C++...
> I don't know, but from what I've seen a lot of attacks on Go come from non-Go developers, VERY often Rust devs.
I see it as a bit like Python and Perl. I used to use both but ended up mostly using Python. They're different languages, for sure, but they work in similar ways and have similar goals. One isn't "better" than the other. You hardly ever see Perl now, I guess in the same way there's a lot of technology that used to be everywhere but is now mostly gone.
I wanted to pick a not-C language to write a thing to deal with a complex but well-documented protocol (GD92, and we'll see how many people here know what that is) that only has proprietary software implementing it, and I asked if Go or Rust would be a good fit. Someone told me that Go is great for concurrent programming particularly to do with networks, and Rust is also great for concurrent processing and takes type safety very seriously. Well then, I guess I want to pick apart network packets where I need to play fast and loose with ints and strings a bit, so maybe I'll use Go and tread carefully. A year later, I have a functional prototype, maybe close to MVP, written in Go (and a bit of Lua, because why not).
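Go's standard library does make the packet-picking side pleasant. Here's a tiny illustrative sketch; the framing and field names are made up for illustration, not the actual GD92 layout:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Pretend this arrived off the wire: a 2-byte big-endian message
	// type followed by an ASCII body. (Purely hypothetical framing.)
	packet := []byte{0x00, 0x2A, 'O', 'K'}

	msgType := binary.BigEndian.Uint16(packet[:2]) // the "ints" part
	body := string(packet[2:])                     // the "strings" part

	fmt.Printf("type=%d body=%q\n", msgType, body)
}
```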
The Go folks seem to be a lot more fun to be around than the Rust folks.
But at least they're nothing like the Ruby on Rails folks.
Just because it was a design goal doesn't mean it succeeded ;)
From Russ Cox this time: "Q. What language do you think Go is trying to displace? ... One of the surprises for me has been the variety of languages that new Go programmers used to use. When we launched, we were trying to explain Go to C++ programmers, but many of the programmers Go has attracted have come from more dynamic languages like Python or Ruby."
It's interesting that I've also heard the same from people involved in Rust: expecting more interest from C++ programmers and being surprised by the number of Ruby/Python programmers interested.
I wonder if it's that Ruby/Python programmers were interested in using these kinds of languages but were being pushed away by C/C++.
The people writing C++ either don't need much convincing to switch, because they see the value, or are unlikely to give it up anytime soon, because they don't see anything Rust does as useful to them; there's very little middle ground. People coming from higher-level languages, on the other hand, see in Rust a way to break into a space they would otherwise not attempt because it would take too long to reach proficiency. The hard part of Rust is trying to simultaneously have hard-to-misuse APIs and no additional performance penalty (however small). If you relax either of those goals (is it really a problem if you call that method through a v-table?), then Rust becomes much easier to write. I think a GC'd Rust would already be a nice language that I'd love, like a less convoluted Scala; it just wouldn't have fit into the free square that ensured a niche for it to exist and grow, and it would likely have died on the vine.
I think on average C++ programmers are more interested in Rust than in Go. But C programmers are on average probably not interested in either. I do agree that the accessible nature of the two languages (or at least the perception thereof) compared to C and C++ is probably why there are more people coming from higher-level languages interested in the benefits of static typing and better performance.
Not the person you replied to, but they're probably talking about Log4j. It's a Java logging library that had a "helpful" feature where logging a special format string (the infamous `${jndi:ldap://...}` lookup) would pull code from a remote URL and execute it. So anywhere you can get a Java server to log attacker-controlled input, you can run arbitrary code (e.g. by setting a malicious User-Agent). Estimates say 93% of enterprise cloud environments were affected.
I suppose Stuxnet could also count, where the initial infection depended on the human curiosity of plugging an unknown USB drive into an air-gapped system.
I design solutions so that there is a distinct frontend and backend. Instead of reuse, common code gets shared. Yes, it does take a bit of fiddling to get the architecture to support that. But once you get it bedded down, it becomes just a matter of conventions.
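If the backend is Go, one common way that convention shakes out is a layout along these lines (directory names are made up for illustration):

```
myproject/
  cmd/frontend/    # frontend entry point
  cmd/backend/     # backend entry point
  internal/common/ # shared types and helpers, imported by both
```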
You’re talking about static typing, the opposite of which is dynamic typing. User hamstergene is talking about weak vs. strong typing, which is another thing entirely. Python has always been strongly typed, while JavaScript is weakly typed. Many early languages with dynamic types also experimented with weak typing, but this is now, as hamstergene points out, considered a bad idea, and virtually all modern languages, including Python, are strongly typed.
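A quick illustration of that distinction: in Python, `"1" + 1` raises a `TypeError`, while in JavaScript it silently coerces and evaluates to `"11"`. Both are dynamically typed; only one is weakly typed.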
I personally thought it was a very interesting analogy.
Let's take a made-up example of a structural engineer designing a building. As another comment mentioned, in theory the design space here is enormous. Just the concrete mix can be endlessly optimized. But here there's a large monetary cost tradeoff that's obvious to everyone involved. Say an overenthusiastic junior proposes attempting to rediscover the ancient Roman concrete mix for this project. Everyone from the other engineers to management can call that out as absurd.
In theory you can make the building out of anything. But in practice the economics of producing real world components only allows for a few choices in each stage of the design.
Meanwhile, software components are essentially free, especially if they're open source. Just clone the repo, hook up the code and you're done, right? So surprisingly often, the overenthusiastic junior can convince the whole company to build the new feature with some obscure framework that'll be unsupported in a few years. And no one can reliably call them out, because there's no easy, objective way to measure something like tech debt. (Another compounding problem is the rapid growth of the software field. Juniors are minted faster than they can be trained.)
To make the software example more concrete: there's an internal config language "blub" in one of the FAANGs that someone designed in two months. It looked simple, gained a lot of adoption quickly. Fast forward 5 years, and it turns out using blub was a big mistake. It scales terribly, the semantics were badly thought out and cause subtle bugs everywhere. But here's the kicker: there are now millions of lines of blub. Over the next decade, the company makes several very expensive attempts to replace blub and fails. Blub continues to underpin their production infra to this day.
So to sum up, I think maybe the core issue is not so much "3d space versus hyperbolic space". If you view the space as the tree of decisions you need to make to reach the goal, all engineers are working in hyperbolic space.
However, when designing real world things, economics rapidly culls the design tree. Buying materials requires cash up front. But in software it's often the opposite: you get the libraries "for free", and the unmeasurable tech debt accrues over time.
> However, when designing real world things, economics rapidly culls the design tree
Exactly like sticking with blub, which was only chosen by historical accident; they're stuck with it because it would cost too much to stop all progress while refactoring everything to work around it.
You're arguing a matter of degree while asserting a matter of kind.
And the point is still ridiculous with the explicit comparison to maths, which is somehow different because there exist wrong proofs.