Reading this, it’s not clear how well integrated Swift Build is with Swift’s tooling and language server. I know the language server has been open source for a while now. Having them be separate seems like it would create issues with duplicated code.
If you read the article (which I know no one does anymore)
>OpenAI and its partner Microsoft investigated accounts believed to be DeepSeek’s last year that were using OpenAI’s application programming interface (API) and blocked their access on suspicion of distillation that violated the terms of service, another person with direct knowledge said. These investigations were first reported by Bloomberg.
There were different narratives for different people. When I heard about r1, my first response was to dig into their paper and its references to figure out how they did it.
There seem to be two kinda incompatible things in this article:
1. R1 is a distillation of o1. This is against its terms of service and possibly some form of IP theft.
2. R1 was leveraging GPT-4 to make its output seem more human. This is very common; most universities and startups do it, and it's impossible to prevent.
When you take both of these points and put them back to back, a natural answer seems to suggest itself, which I'm not sure the authors intended to imply: R1 attempted to use o1 to make its answers seem more human, and as a result it accidentally picked up most of its reasoning capabilities in the process. Is my reading totally off?
Yes, but I think most of the rout is caused by the fact that there really isn't anything protecting AI from being disrupted by a new player - these models are fairly simple technology compared to some of the other things tech companies build. That means OpenAI really doesn't have much ability to protect its market-leader status.
I don't really understand why the stock market has decided this affects nvidia's stock price though.
...how do you think you got your job? You ever see those old movies with rows of people with calculators manually balancing spreadsheets with pen and paper? We are the automators. We replaced thousands of formerly good paying jobs with computers to increase profits, just like replacing horses with cars or blacksmiths with factories.
The reality of AI, if AI succeeds in replacing programmers (and there's reason to be skeptical of that) is that it will simply be a "move up the value chain". Former programmers instead of developing highly technical skills will have new skills - either helping to make models that meet new goals or guiding those models to produce things that meet requirements. It will not mean all programmers are automatically unemployable - but we will need to change.
A few questions popped into my head. Can you retain the knowledge to evaluate model output required to effectively help and guide models to do something if you do not do it yourself anymore? For humans to flourish, does it mean simply “do as little as possible”? Once you've automated everything, where would one find meaningful activity that makes one feel needed by other humans? By definition automation is about scaling, and the higher up the chain you go the fewer people are needed to manage the bots; what do you do with the rest? (Do you believe the people who run the models for profit and benefit the most would volunteer to redistribute their wealth and enact some sort of post-scarcity communist-like equality?)
> Can you retain the knowledge to evaluate model output required to effectively help and guide models to do something if you do not do it yourself anymore?
I mean, education will have to change. In the early years of computer science, the focus was on building entire systems from scratch. Now programming is mainly about developing glue between different libraries to suit our particular use case. This means that we need to understand far less about the theoretical underpinnings of computing (hence all the griping that programmers never need to write their own sorting algorithms, so why does every interview ask for one).
It's not gone as a skill, it's just different.
>For humans to flourish, does it mean simply “do as little as possible”? Once you've automated everything, where would one find meaningful activity that makes one feel needed by other humans?
So I had a eureka moment with AI programming a few weeks ago. In it, I described a basic domain problem in plain English. It was revealing not just because of all the time it saved, but because it fundamentally changed how programming worked for me: instead of writing code, I was able to focus my mind completely on one single problem. Now my experiences with AI programming have been much worse since then, but I think it highlights how AI has the potential to remove drudgery from our work - tasks that are easy to automate are, almost by definition, rote. I instead get to focus on the more fun parts. The fulfilling parts.
>By definition automation is about scaling, and the higher up the chain you go the fewer people are needed to manage the bots; what do you do with the rest? (Do you believe the people who run the models for profit and benefit the most would volunteer to redistribute their wealth and enact some sort of post-scarcity communist-like equality?)
I think the best precedent here is the start of the 20th century. In this period, elites were absolutely entrenched against the idea of things like increasing worker pay or granting their workers more rights or raising taxes. However, I believe one of the major turning points in this struggle worldwide was the revolution in Russia. Not because of the communist ideals it espoused, but because of the violence and chaos it caused. People, including economic elites, aren't marxist-style unthinking bots - they could tell that if they didn't do something about the desperation and poverty they had created, they would be next. So due to a combination of self interest, and yes, their own moral compasses, they made compromises with the radicals to improve the standard of living for the poor and common workers, who were mostly happy to accept those compromises.
Now, it's MUCH more complicated than I've laid out here. The shift away from the Gilded Age had been happening for nearly twenty years at that point. But I think it illustrates that concentrating economic power that doesn't trickle down is dangerous - elites who create constant social destruction with nothing in return will eventually destroy themselves. And they will be smart enough to realize this.
> AI has the potential to remove drudgery from our work - tasks that are easy to automate are, almost by definition, rote.
I like to think that the best kind of automation when it comes to writing code is writing less code, but instead writing it with strategic abstractions embodying your best understanding of subject matter and architectural vision.
There's nothing really wrong with it, but its core design is pretty outdated in ways that are difficult to fix without putting together a totally new language. A couple of examples:
- Support for nullable values. Swift and Kotlin have first-class support for these, which is meant to minimize the number of null-pointer errors in your code. You can mostly approximate this in Java with annotations, but those require your team to use them very consistently, and they are not supported by most Java libraries.
- Its approach to concurrency. Java's built-in synchronization primitive is based on an older model of concurrency (monitors) in which every object can act as its own lock to maintain the internal consistency of its own state. No one uses it like this anymore; most code instead creates a dedicated `Object lock` as a synchronization primitive.
- Serialization. Java has built-in binary serialization support that ended up being a massive security hole. Most people are now forced to use some JSON serialization library instead, but the old serialization format is still lurking in the background to ensnare less knowledgeable programmers.
- Generics. Smarter people than me can probably give you more detail on this, but generics were grafted onto the language long after it was introduced, and it shows. No generic type information is kept at runtime (type erasure), meaning it is technically possible to break the generic typing of an object.
None of these, it should be noted, are deal-breakers or reasons why you shouldn't use the language. Almost every single one has some form of workaround. But if you're not aware of them (or are stuck with an older legacy codebase like a lot of people are), they can be major headaches that could simply be avoided by using a more modern language.
I've written a bit of code in Go, and the problem I have with it is primarily that it feels really outdated for a "modern" language - every time I use it I feel like I'm dealing with something written by someone who really hated Java in 2005. There are features that could be added to the language that would make it more readable and less error-prone without compromising the simplicity of the core language. Generics are the famous example, but the one that really gets me is the lack of nullable type signatures. This is a great way to avoid an entire class of bugs, and nearly every modern language I've used has evolved a solution for it - except Go.
Another issue I have is the reliance on reflection. In general, if you have to rely on reflection to do something, that usually means you're working around some inherent limitation of the language - and the resulting code is often far less readable than it would be in a more expressive language. Lots of Go libraries and frameworks are forced to use it because there's just no other way to express some really basic things.
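To make that concrete, here's roughly the kind of thing codecs and ORMs are forced into (the `db` tags and the User type below are made up for illustration):

package main

import (
    "fmt"
    "reflect"
)

type User struct {
    Name string `db:"name"`
    Age  int    `db:"age"`
}

func main() {
    // walk the fields at runtime, because the type system can't express this
    t := reflect.TypeOf(User{})
    for i := 0; i < t.NumField(); i++ {
        f := t.Field(i)
        fmt.Printf("%s -> column %q\n", f.Name, f.Tag.Get("db"))
    }
}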
I really want to like Go. There's a lot I like - the "only one way to do something" approach means that code always feels consistent. Errors as values is a far superior approach to exceptions. I had to write some Go for a job interview project a while back and it felt really refreshing, but every time I try to use it for a personal project, I don't feel like I'm getting anything out of it that I couldn't get out of, say, Rust, or modern, typed Python.
If you’re the type of engineer who prides themselves on the raw amount of code they write, then Go is for you. If you’d rather focus on solving problems creatively and expressively, Go is not your tool.
I don't mean this as a slight against those people who really enjoy writing lots of (Go) code. It’s just my observation after being in a few different contexts where Go was the language of choice. Personally, Go is too verbose for me, and this is especially painful/apparent when you get to writing the multitude of tests required to ensure your code works, since Go’s type system/compiler doesn't lend you much in the way of helping ensure code correctness.
>If you’d rather focus on solving problems creatively and expressively, Go is not your tool.
>It’s just my observation
Well, my observation after dealing with JS and Ruby projects (Ruby was my main language a few years ago): those creative and expressive languages lead to fewer lines, sure, but in the end you are dealing with a pile of crap nobody wants to touch, because some of the developers were so creative and wanted to express themselves more than they wanted to solve the problem.
Go is a tool for when you actually want to solve the problem and make sure the people after you can quickly understand how and why you solved it, rather than stand in awe of your creation.
> Go is a tool for when you actually want to solve the problem and make sure the people after you can quickly understand how and why you solved it, rather than stand in awe of your creation.
I disagree with that.
The problem in those cases is not expressivity. It's complex features that are very easy to misuse and have no alternatives within the language.
And Go is littered with those. It allows for lots of "cleverness" around Reflection and Empty Interfaces, which effectively turn it into a poorly-specified ad-hoc dynamic language. Same for templates. This is the same problem of Meta-programming in Ruby, and the same problem with complex OOP architecture that plagues some Java projects. It's all avoidable, of course.
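To make that concrete, a minimal sketch (Describe is made up) of how an empty interface pushes type checking from the compiler into a runtime switch:

package main

import "fmt"

func Describe(v interface{}) string {
    switch x := v.(type) {
    case int:
        return fmt.Sprintf("int: %d", x)
    case string:
        return fmt.Sprintf("string: %q", x)
    default:
        // mistakes only surface here, at runtime
        return fmt.Sprintf("unhandled type %T", x)
    }
}

func main() {
    fmt.Println(Describe(42))
    fmt.Println(Describe("hi"))
    fmt.Println(Describe(3.14)) // compiles fine; semantically unhandled
}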
Those features are not "expressive", quite the contrary, but they lead to the same problem of "pile of crap nobody wants to touch because some of the developers were so creative and wanted to express themselves".
It takes as much discipline within a team to avoid this in Golang (or any other "non-expressive" language) as in more expressive languages.
On the other hand, lots of new ES6 features give it more expressivity without necessarily adding complexity and in fact making the code easier to read and more reliable.
>And Go is littered with those. It allows for lots of "cleverness" around Reflection and Empty Interfaces, which effectively turn it into a poorly-specified ad-hoc dynamic language.
I strongly disagree with that. You can do it; you can write clever code through reflection. But it is actively discouraged unless it's required to solve the problem, e.g. automatic JSON marshalling/unmarshalling.
No professional Go dev is going to immediately reach for empty interfaces or reflection without seeing what the solution looks like with verbose type safe code.
Maybe in your experience this doesn't happen, or maybe we have different thresholds for what we consider abuse, but it's definitely a thing for a lot of people.
Also, overuse of reflection doesn't happen overnight, or only because of inexperienced programmers.
In most complex projects it happens because someone wants to add more complex abstractions that exist in other languages to help with the work and to reduce programmer error and the only way to do it is via Reflection and Empty Interfaces, or maybe via templates.
All of the Go projects where I work have a very strong "empty interfaces are bad" culture (noobs who don't know better will get called out in code review), and the only reflection I've ever run across is for marshaling/unmarshaling. I really don't see these things being overused at all. When I first started using Go, I did perpetrate some empty interface crimes, but it really only takes one experience getting bitten by thwarting the static typing before you learn to avoid that.
I'm glad you are having a good experience, but people work on different projects with different requirements and different constraints.
In Go, empty interfaces and reflection, or templates, are for some cases the only possible way to solve a large class of complex problems. If you don't encounter those problems then it's all good, but some people do.
You also haven't taken into account the possibility of people having to maintain pieces of Go code acquired or forked from somewhere else. Maybe my team was the one that inherited the code from someone who "perpetrated some empty interface crimes".
Not all Go development is greenfield development.
This is one of the things I dislike the most about the Go community. Every single criticism or suggestion to the language is dismissed as being user error, without even taking into consideration the use case or the different experiences one might have, or if the codebase was gotten from somewhere else, or even acknowledging that there are other programming styles. It is a giant echo chamber with people constantly saying "works on my computer".
I dunno, that seems a little unfair. I’m not saying empty interfaces aren’t a problem just because I’ve managed to avoid them, I’m just saying it is quite possible to write reasonable Go by knowing what language features are misfeatures. Obviously if you’re often running across and having to deal with code laden with empty interfaces, that’s a major problem and put in that situation, I’d probably have the impression that the language encourages bad behavior. I’m just saying that so far, I haven’t encountered it all that much so I don’t really perceive it as a problem. If you want to paraphrase this as “works for me,” so be it, but it seems like a particularly uncharitable interpretation.
I do think it’s a crime that the standard library contains some empty interfaces in critical packages that seem egregious. The fact that you can accidentally pass an rsa.PublicKey (instead of a pointer to an rsa.PublicKey) to a function that takes a crypto.PublicKey interface and not find out until runtime is hard to forgive.
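A minimal sketch of that failure mode (verify is made up, but it mirrors the type assertion standard-library code performs internally):

package main

import (
    "crypto"
    "crypto/rand"
    "crypto/rsa"
    "fmt"
)

// crypto.PublicKey is an empty interface, so both calls below compile
func verify(pub crypto.PublicKey) error {
    if _, ok := pub.(*rsa.PublicKey); !ok {
        return fmt.Errorf("unsupported key type %T", pub)
    }
    return nil
}

func main() {
    key, _ := rsa.GenerateKey(rand.Reader, 2048)
    fmt.Println(verify(&key.PublicKey)) // <nil>
    fmt.Println(verify(key.PublicKey))  // error, discovered only at runtime
}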
Anyway, I’m not just saying “works for me” but I am saying that I’m not going to let what dumb things someone else might do with the language change my enjoyment of it. This view is likely influenced by the fact that my particular projects aren’t allowed to pull in 3rd party dependencies without a thorough review, so the problem of dependency fan-out that may pull in some unfortunate code is reduced significantly.
And TypeScript and Flow these days allow you to properly type the vast majority of JavaScript “dynamic typing” patterns, making reflection usage in Go and Java look ridiculous for supposedly statically typed languages.
>It takes as much discipline within a team to avoid this in Golang (or any other "non-expressive" language) as in more expressive languages.
I think this will differ from team to team. I've been working within two different companies as a Go dev so far and haven't seen any Reflection misuse issues.
The difference I see here: while Go does indeed have those features-about-to-turn-problems, you will be called out from every corner if you use them too much, or at all. They are there as a necessary evil.
At the same time, meta-programming and everything that comes with Ruby's dynamic expressiveness is usually one of the selling points.
Go has its flaws and tradeoffs, and indeed things that can and will be misused, but you don't see articles that promote them as something that should win you over from some other language.
> Go has its flaws and tradeoffs, and indeed things that can and will be misused, but you don't see articles that promote them as something that should win you over from some other language.
I disagree there. I can think of two examples of things I consider very easy to misuse in Go, but are promoted by the community as being the superior solution to problems in articles and posts all the time: Go Error Handling and Go Code Generation.
Go error handling has its downsides, but how can you misuse it? If your code may generate an error, return an error to handle elsewhere. If the code you are calling returns an error, handle it.
I also shy away from sloppily "expressive" scripting languages like JS and Ruby. When I talk about expressive I'm referring to Lisp, Scheme, Rust, Haskell, Scala, Kotlin, etc. I do agree Go has a specific use as a technology that can be deployed in high-turnover environments to mitigate devs whose attention span lasts until they have to write tests and deploy their code. I mean, this is exactly why Google likes Go. It's a lowest common denominator that can be picked up easily by anyone as engineers whirl around in their machine. You're not wrong.

I, however, like to enjoy writing code, and I understand I have to test it and maintain it throughout its lifetime (or don't have the luxury of handing it off to a salivating crew of new grads looking for promotion-worthy work once I get bored), so I gravitate toward languages that are fun to write and easy to maintain. For me, those tend to be languages that offer formal macros/meta-programming, sound type systems, support for higher-order programming, and preferably enforced memory safety.

It doesn't mean my code is littered with that "look how clever I am, let's admire my code greatness" stuff. I prefer to keep most of it boring too, focusing more on whether it's visually readable and logically easy to understand. But it's nice to have powerful features when they really matter, and it's nice to know you can count on a compiler to help ensure memory safety and type correctness.
Rust has to be one of the least expressive languages ever made, so that’s a pretty weird one to group along with other actual expressive languages. That’s not a full-on dismissal of Rust, since expressivity isn’t its main goal.
Also, type system ‘soundness’ is a pretty empty desire. It’s easy to create a type system that is formally sound yet not useful. This is the same argument as ‘type correctness’: Java programs were ‘type correct’ before Java had generics, and the type system was extremely limited.
So types are an inherently meaningless goal, because types can mean so many things.
Rust is very expressive. It has a useful macro system, powerful generics, and generalized traits which means if you implement your types canonically you're using them via essentially the same patterns almost everywhere.
Don't confuse not-expressive with articulate. Rust requires you to be articulate about what you're doing, so you don't gloss over ownership details with fancy one-liners like is common in other languages. You can be both expressive and articulate. Not worrying about memory ownership, while possible in other languages, is not the hallmark of expressiveness. For example, imagine a C analog to a Rust program that implements all the same checks and memory discipline that Rust does: Rust is way more expressive, relatively.
I see what you mean with Ruby; it encourages you to be clever, and I had some exposure working on somebody's "clever" code.
I'm not as familiar with JS, but it feels similar to me as well.
What makes things worse is that they are both dynamic languages. But there's a middle ground.
Rust, for example (I'm still planning to learn it), looks like it has the right amount of functionality. Or Java, or even C. I think what you experienced is the extreme case of it.
This is just the old conflict between ops and dev. A software dev wants a little code to express a lot, while maintaining some central guarantees, and doesn't really care about edge-cases beyond not having to code them explicitly. Needless to say, a few edge cases will turn out not to be what the dev intended, no matter how smart they are. Tests focus on the central guarantees and nothing else.
Ops is the next guy who comes in, who is only ever supposed to keep everything running. Changing things is second priority, if it's anywhere at all on the priority list. They will HATE any edge case where they don't know exactly what happens, because it can be bad, and it's the source of all their work. They want to manually code every edge case, and have that covered by a test.
C/C++ ("smart" C++ specifically), Haskell, Lisp, ... are for the software devs.
Go, Java, C#, ... are for the software ops folks.
A large bank should probably be using Java. Someone trying to start up a new bank may prefer Haskell.
I don't see this conflict resolve any time soon. Personally I find the large bank situation somewhere between soul-crushing and suicide-inducing. But I've met plenty of people very happy in such situations, and great for them. I may even consult for them, because for 1-2 months or so software ops can be quite interesting. Figuring out how to introduce changes in a software ops org is ... usually a challenge in addition to figuring out how the software system should change.
You always come out looking good, because there are always large improvements that can be made, because nobody's making them. They're busy triple-checking that the next parent-teacher conference of employee 271 doesn't interfere with the on-call schedule.
And frankly, I could (and probably still can) use a few lessons in "this edge-case fuckup is not acceptable and you missed it!". I've been humbled several times by steering a large problem past all the security measures of a software-ops organization in the past.
>Ops is the next guy who comes in, who is only ever supposed to keep everything running.
>Go, Java, C#, ... are for the software ops folks.
Can't say anything about Java or C#, but one of the reasons we use Go is that the requirements change often and you can adapt your code quickly.
At the same time our code (my main field is systems integration) is required to keep running obviously. New requirements should not change this fact.
So on a spectrum I'd say Go is 7/10 OPS. Subjectively.
Obviously Go is probably not the type of language you should choose for your MVP or prototyping. Unless you are sure you already have the full picture in your head.
If I was trying to justify Go's verbosity in this case, I think I would say "a property check on the dict incurs a cost; by making it somewhat more verbose you encourage people not to check more times than needed".
At least that's the kind of thing I've heard about other verbose things you need to do in Go.
There are a lot of things wrong with this. First, this has nothing to do with the actual reason for the verbosity. The actual reason is that a value may or may not be present in a map; that’s a universal problem. Most languages just return nil or an Option value, but Go returns an error and forces you to handle it. No comment from me on whether that’s good or bad, but that’s the reason the code is like that.
The real problem is you’ve just made up a reason that the code is that way, and the reason is also crazy. No language purposefully makes something verbose simply to discourage doing it because of a performance concern. Even if they did, key lookups are constant-time complexity and are among the most efficient things you can do in all of programming. So that wouldn’t even make sense as something you’d want to discourage.
Err... Go does exactly what "most languages" do too! You can write:
value, _ := map[key]
and if map[key] doesn't exist, value will be set to the "zero value" of the map's value type. So Go doesn't force you to handle the error. The actual reason for the "verbosity" is just that you can't simply write
if(map[key]) { ...
because map[key] returns two values (and "if" wants a boolean expression). But if you really have to do this so often that it's bothering you, you should ask yourself why...
and if map[key] doesn't exist, value will (still) be set to the "zero value" of the map's value type.
The actual reason you cannot simply write:
if(map[key]) { ...
aside from the extraneous parens, is because Go syntax does not permit implicit default values/truthiness in its 'if' statements like that (as you say "if wants to have a boolean expression"). That's the real reason for the verbosity here.
But you certainly can write:
if map[key] != "" { ...
if the map contained strings, or:
if map[key] { ...
if it contained booleans, etc. (For what that may be worth: checking for default value isn't the same as checking membership in all instances, of course).
Because map[key] can in fact return one — or two — values, depending on its usage context.
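For reference, the whole comma-ok idiom in one runnable sketch:

package main

import "fmt"

func main() {
    m := map[string]int{"a": 0}

    // two-value form: ok distinguishes "present with zero value" from "absent"
    v, ok := m["a"]
    fmt.Println(v, ok) // 0 true
    v, ok = m["b"]
    fmt.Println(v, ok) // 0 false

    // one-value form: an absent key silently yields the zero value
    fmt.Println(m["b"]) // 0
}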
Last time I wrote Java was before it had nice things, so I may be slightly out of date in my assessment of its verbosity. I really wish it had delegation syntax like Kotlin's:
import java.util.LinkedList

class Foo(private val delegate: MutableCollection<Int>) : MutableCollection<Int> by delegate {
    override val size: Int get() = delegate.size + 42
}

val foo = Foo(LinkedList())
foo.add(1)
foo.add(2)
for (e in foo) {
    print(e)
}
foo.clear()
foo.size // = 42
I think a lot of the verbosity I remember was due to the sheer size of the interfaces you'd encounter in practice and the inability to easily compose them without implementing your own set of forwarding decorators (io stream style). That, and getters/setters, or rather the lack of any language-level support for properties.
Do you really want to inherit all the functions of Collection? Isn't this the anti-pattern everyone complains about on inheritance vs composition? This just looks like it's confused about whether it is a collection or has a collection.
Personally, I'd just rather take a collection by composition and only expose the small part of the API I actually want.
I guess though this is the standard decorator pattern you'd normally see from GoF.
I'm a C++ developer who has been writing more and more Go code over the last year, and Go's iota enums (alongside the lack of generics: having to write a search method for every different slice type gets grating after a while) are my biggest gripe with Go. They really aren't any better than plain global constants.
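A small example of the complaint (Color is made up): an iota enum is just a named integer, with no validation or exhaustiveness checking attached.

package main

import "fmt"

type Color int

const (
    Red Color = iota // 0
    Green            // 1
    Blue             // 2
)

func main() {
    c := Color(42)           // nothing stops an arbitrary int from becoming a "Color"
    fmt.Println(c == Red, c) // false 42
}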
This feels largely like a mischaracterization, and does not align with my experience. I'd change it to say, if you are focused on solving the problems creatively rather than using the language creatively, Go is a viable option. But if you're interested in creative usage of the language itself, Go is not right for you.
I get what you're poking at, and I think we're mostly saying the same thing. However, I want to point out that at the level of the stack we're discussing, the application of the chosen language is essentially: plumbing. The user-facing "creative" bits I think you're referring to (those in the system at large, which may or may not use Golang middleware) are most likely a react or mobile app.

I'm not trying to devalue the self worth of individuals who write Go code by suggesting that the problems their products are solving aren't creative and fun. Not at all. Rather, my point is that the application of their Go code to build yet another solution for processing requests and serving responses ends up as pretty mundane and verbose imperative program logic that requires non-trivial amounts of effort to properly test and verify.

Creativity in the "plumbing" slice of the stack to me looks like concise, readable, expressive code that leverages a type system to model a problem domain and then incorporates expressive logic governing how the types interact, the correctness of which is mostly enforced by the compiler itself and where testing is only required to ensure properties the type system can't capture or to make sure traits of a given type are implemented correctly. Go just simply isn't a language that allows for that level of "creativity" in building what would otherwise just be pretty mundane service plumbing.
I agree with this. Go works like my brain, and I don't mind that. Sure you can do X in some fancy way in other languages, but that just leads to feature creep. I've never read Go code I didn't understand at first glance. I cannot say that about a single other language.
And this is totally fine if you're okay reading and writing noticeably _more_ code to solve the same problems. There's nothing wrong with favoring verbosity over expressiveness, it's just not my style.
I disagree that expressiveness lends to feature creep, I've personally never seen that happen, but that's another topic.
For me, the verbosity of Go would be a lot more justified if it also had some sort of memory-safety system in place (similar to Rust) that could help prevent concurrency-related errors. I understand that you're supposed to use channels for everything in Go, but the last time I wrote a channel-heavy concurrent server the compiler did nothing to prevent me from mutating shared state across different goroutines, and I was unable to land on a pure channel-based implementation (I needed a run-once to initialize some shared state and a WaitGroup to track active listeners, and this was the case in all the other impls I audited/referenced as well). I just don't feel like I'm getting a lot of value in exchange for the verbosity when writing Go.
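For reference, the run-once-plus-WaitGroup shape described above looks roughly like this (a sketch with the actual listener work elided); note the compiler would be equally happy if `shared` were mutated with no synchronization at all:

package main

import (
    "fmt"
    "sync"
)

var (
    initOnce sync.Once
    shared   map[string]int // nothing guards this except programmer discipline
)

func listener(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    initOnce.Do(func() { shared = make(map[string]int) }) // run-once init
    fmt.Println("listener", id, "done")
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go listener(i, &wg)
    }
    wg.Wait() // track active listeners
}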
That's a completely fair assessment. I don't fault 'you guys' at all because I don't expect your brain works like mine. Read: I don't think you're wrong and I'm right...never.
What I fear myself is feature creep. A design by committee that ruins the language for me, and is never good enough for you.
If you're sharing state, you're much better off using locks and testing with Go's race detector. There'll be subtle bugs or contention though, unless you don't share state.
If not, you might try 0mq, immutable lists or any other facility. Channels are just the in-process idiomatic way to communicate state, but not required at all.
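A minimal sketch of that advice (a toy counter): guard shared state with a mutex and run with the race detector, which flags unsynchronized access at runtime.

package main

import (
    "fmt"
    "sync"
)

type Counter struct {
    mu sync.Mutex
    n  int
}

func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.n++
}

func main() {
    var c Counter
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c.Inc()
        }()
    }
    wg.Wait()
    fmt.Println(c.n) // 100; drop the mutex and `go run -race` reports a data race
}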
In my somewhat brief experience with Go I realized that one had better avoid channels even for supposedly idiomatic cases. For example, try to implement a priority queue with channels. And why is it not possible to use the select statement against a file descriptor? Another problem is the proliferation of unnecessary threads, which are entirely avoidable when using mutexes.
Channels are also hardly foolproof. You need to internalize the (non-obvious) channel axioms[1], and there are many other pitfalls and annoyances like:
- Needing to determine ownership of closable objects passed across channels.
- Defining what special token (or OOB signaling mechanism) is used for marking end of data.
- Synchronizing who is closing the channel
Go channels are quite painful to use and often a simple lock would do better. This write-up[2] summarizes the issues with them quite nicely:
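As an illustration of the ownership and closing points above, here is the usual (entirely unenforced) convention: the sender owns the channel and is the only party that closes it, and receivers range until it is closed.

package main

import "fmt"

func produce(out chan<- int) {
    defer close(out) // sender closes; a receiver closing would risk a panic
    for i := 0; i < 3; i++ {
        out <- i
    }
}

func main() {
    ch := make(chan int)
    go produce(ch)
    for v := range ch { // range exits when the producer closes ch
        fmt.Println(v)
    }
}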
Can't really fault you on that either. Features/looks etc. It really comes down to whether it matches how you think. I can make big claims about functional languages being superior for x, y, and z, but if I'm being really honest with myself, it's more about the fact that they fit how my brain already models problems.
Please omit the last line. Go is actually far more verbose than modern Java when you are comparing anything bigger than a single snippet of code. Modern functional Java is fairly concise; Go is not, though maybe this will change with generics and functional packages after Go 2.0.
Go's weak type system encourages the use of dynamic type operations (such as casting to interface{}), and once you've done that the compiler is out of the loop, so it can't help.
Explicitly, I believe you, but I'd bet you've often implicitly cast to interface{} (even some very legitimate cases like json.Marshal() are still technically casts to interface{}).
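For example, this ordinary call implicitly converts the struct to interface{}, and the concrete type is recovered via reflection inside encoding/json:

package main

import (
    "encoding/json"
    "fmt"
)

type User struct {
    Name string `json:"name"`
}

func main() {
    b, err := json.Marshal(User{Name: "gopher"}) // implicit cast to interface{}
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b)) // {"name":"gopher"}
}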
>> the problem I have with it is primarily that it feels really outdated for a "modern" language
It's mostly not about the language. An exception is of course when moving from a dynamically typed interpreted language to a statically typed compiled language.
The success of projects depends much more on other things than the programming language. It's about
- Processes and standards, like following a well defined structure, testing, documentation, ...
- Maintainability of code. It must be easy to read, to understand syntactically and to build a mental model of the code
- Long term reliability and stability of the eco system
- Easy and reliable tooling
- Developer efficiency, e.g. compile times
Go shines in many of these aspects, especially in maturity, stability, its amazing standard lib and tooling. As you mention Rust: this is exactly where Rust falls short. Rust is a great language with amazing people behind it, but there are reasons why its adoption in the broad real world is very, very small. The reasons are not the language. So I always feel it's a bit invasive when Rust promoters enter Go or Java threads to tell everyone how much better Rust is as a language.
In this example Go has the additional benefits of being statically typed and compiled, very fast, and having built-in concurrency support.
Go isn't for programmers, it's for managers. Because of the simplicity of the language, it's easy to get people onboarded, and it's fairly difficult for a single team member to go astray and move the codebase in a direction which will be unmaintainable. It's a kind of lowest-common-denominator language which is easy to read, debug, and maintain, even for a mediocre programmer.
That said, I think you are right about explicit nullability. A language like Go with this feature, as well as named arguments and limited ADT's could be very compelling for Go's use case.
And yet it's a top-5 most-loved language among programmers and there's no real evidence that it's popular among managers (discussed in detail here: https://news.ycombinator.com/item?id=27175297). Moreover, lots of open-source software exists in Go--if only managers like the language, what's the theory for why all of this software (including major software, like Kubernetes) exists? Is it managers building it in their spare time? Even then, those managers are acting as programmers (not managers) in that capacity.
With respect to the implication that only managers care about the pragmatic aspects of software development (collaboration, onboarding, readability, debugability, maintainability, etc), where did this meme come from? I get the meme that there are many programmers who value abstraction above all else, but I'm not familiar with the meme that all programmers are abstraction maximalists.
I completely disagree with this statement. Some of the highest-performance modern platform-level code is currently written in Go.
It was a Google presentation examining their effort to convert their download site to Go (from C/C++) that got my attention. The Go version is easier to read, has a simpler mental model, and is faster than its C/C++ cousin.
I'm not sure that the points you raise contradict my thesis. Could it not be the case that a language designed primarily for making large software teams easier to manage would also lead to high-quality software at large organizations?
If the thesis is that "Go is for managers", then yes. I would wholeheartedly agree to something like "Go is not for developers who want to write clever code that others (or they themselves in a few months' time) may have trouble understanding". But Go also works for large teams (communities) of open source developers who don't have what you would usually call "management", so saying it's only for managers is definitely too narrow a statement...
I basically agree; I think claiming "go is for managers" was a bit flip of me to say, but what I generally mean is that Go is more optimized for operational concerns than for the joy of programming. It's also not a pejorative: there's a lot of good reasons you might choose Go and why you might like it as a programmer.
Doesn't Java fit the same bill? (although it was kinda modern at the time)
> I think you are right about explicit nullability
Sadly it's really hard to shoe-horn this onto Go, now that the std lib is widely used. Same for proper sum types (which can then be used for multiple return values).
But the continuation of the nullability mistake puzzles me the most... and that while Go was designed by big-name language gurus: what were they thinking?
> Doesn't Java fit the same bill? (although it was kinda modern at the time)
Yes, I think Java was an earlier attempt at this.
> Sadly it's really hard to shoe-horn this onto Go, now that the std lib is widely used. Same for proper sum types (which can then be used for multiple return values).
I agree, I think these types of features are much easier to design around than to add into a mature ecosystem later
> But the continuation of the nullability mistake puzzles me the most... and that while Go was designed by big-name language gurus: what were they thinking?
Is it just a timing thing? It seems like explicit nullability came onto the mainstream a few years after Go debuted, but maybe it was already discussed in academic circles before then.
> Is it just a timing thing? It seems like explicit nullability came onto the mainstream a few years after Go debuted, but maybe it was already discussed in academic circles before then.
I know OCaml[1] has it ('93) and Haskell[2] ('90). Also the claim that implicit nullability is a mistake was made in 2009[3] (around the same time Go was released).
Given the seniority of the designers of Go, I expect they had knowledge of this.
... if "it" is even a thing. I would say that it's not, rather implicit nullability is the thing. It's just not something you'd do in a typed language unless you really specifically chose it. That's how Hoare could claim it was his billion dollar mistake. Implicit nullability was a thing he (and subsequent language designers) opted in to.
> Errors as values is a far superior approach to exceptions.
Why is that? I have never seen a cogent explanation for why this is the case.
I can tell you why exceptions (as implemented in Java) are cool: You can write code as if every function call is successful, as opposed to adding a line (or more) of error handling code after every function call, which makes it harder to follow the logic.
I write Java, but I prefer errors as values exactly because you can't only consider the happy path. It really makes you think about the appropriate response to this specific failure. You can do that with exceptions, but in practice it's exactly like you say: all error handling is delegated to some generic catch-all, which in a web app usually just gives a generic 500 Internal Server Error.
If I encode my errors as values (usually with Either), I have to decide how to gracefully fall back if a failure occurs before I'm even allowed to use the successful result. Maybe I just hide the part of the view that needed that data. Maybe I show an appropriate error message. Maybe I email operations to let them know about the problem. Whatever I do, I have to actually think about it and not just assume that the error will be caught somewhere by someone. The result is usually a dramatically improved user experience when failures inevitably occur.
Exceptions tend to pass the buck so far down the line that there's no context to make an appropriate decision. Values tend to force a decision early, when you still have enough context. (Obviously both can be used against the grain, but the question is which pattern is easier.)
> It really makes you think about the appropriate response to this specific failure.
I beg to differ. It forces you to think about whether the function has failed, but not why. The "why" is embedded in the type of the error (or exception) itself, but Go does not force you to examine the type of the error; indeed, sheer muscle memory compels you most of the time to just write if err != nil again and again.
> I can decide how to gracefully fall back if a failure occurs before I'm even allowed to use the successful result.
You could wrap every line of Java code in a try/catch if you wanted to (it wouldn't be idiomatic, but it's definitely possible). You're just not forced to.
For what it's worth, you also shouldn't email your ops team directly from production code. It doesn't scale. You should log the error, and your monitoring stack should handle alerting the relevant team (full disclosure: I work for such a monitoring-stack company). It's very rare that you actually want to recover from errors, as that's typically a pattern that leads to silent failures and difficult-to-diagnose issues.
Yeah, I'm not endorsing Go's approach to errors, just the idea of errors as values. I can't speak for Go, but other languages make it very obvious that to handle an error you have to inspect its type, and thereby get at the "why".
The lack of forcing to handle (checked) exceptions is exactly why I dislike Java's model. Until I've checked for an error state, I want to have an Either that may or may not have the data I'm looking for (and if it doesn't, has an explanation). In a truly exceptional situation I can crash and give a 500 error, but checked exceptions are by definition supposed to be recoverable, and in a production codebase I don't want to be able to lazily avoid recovering from them.
You're differentiating between Java the language and Java the ecosystem. Java's tooling is so strong that you can use static analysis tooling to fail builds that throw checked exceptions, if you want. See e.g. https://rules.sonarsource.com/java/RSPEC-1162
That… sounds like exactly the opposite of what GP wants? If you can’t throw checked exceptions then you can only throw unchecked exceptions, whose catching and checking is even less enforced by the compiler.
GP is differentiating between library code and application code. You wouldn't turn on the static analysis to fail the build on checked exceptions in library code, since you want to force the application to keep track of known error modes in libraries and decide where it's appropriate to handle them. But you would turn them on in application code in order to prevent application code from using exceptions-as-control-flow, which is an anti-pattern and one of the reasons why people who value Go's error handling sometimes hold their opinions because of scarring they suffered from anti-patterns that were in Java codebases they worked with.
I'm not sure where all these comments which specifically mention Java are coming from. In Java you MUST catch or forward all exceptions besides those derived from RuntimeException, and you MUST specify them in the type signature. RuntimeExceptions are equivalent to Go panics, so you don't have to catch them.
Java never lets you ignore a non-panicking exception. Unlike Go, where you CAN ignore the error value.
Besides that, Java forces you to explicitly state what type of exceptions each function can throw.
Java exceptions are stricter than Go's errors, not vice versa.
Now, if we were talking about .NET or JS or Python or almost any other language with exceptions, I'd get this criticism, but it is patently false when it comes to Java.
I think Java in particular sticks in a lot of people's minds because, while the ideas are there, the execution is not - the standard library sets a poor example that is generally followed elsewhere.
An attempt to read from a Reader, for example, can fail with a java.io.IOException. The javadoc for that lists 31 direct known subclasses, including CharConversionException and JMXServerErrorException; and there is always the possibility of a custom subclass from somewhere else in your application or a 3rd party library.
You can't do anything sensible with such a broad error (like decide if retrying might be sensible), so you end up either propagating it in your type signature, wrapping it in a RuntimeException, or ignoring it.
For what it's worth, I believe many I/O exception sub-classes came from before the time wrapping exceptions was established as a best practice in Java (as it has been in Go recently, with the introduction of fmt.Errorf("%w")).
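For reference, wrapping with %w preserves the underlying cause much like exception chaining does (loadConfig is a made-up example):

package main

import (
    "errors"
    "fmt"
    "io/fs"
    "os"
)

func loadConfig(path string) error {
    if _, err := os.ReadFile(path); err != nil {
        // %w wraps the cause instead of flattening it into a string
        return fmt.Errorf("loading config %q: %w", path, err)
    }
    return nil
}

func main() {
    err := loadConfig("/no/such/file")
    fmt.Println(err)
    fmt.Println(errors.Is(err, fs.ErrNotExist)) // true: the cause survives
}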
I agree that the Java standard library (especially the older parts of it) is quite bad. The Go standard library, whatever issues it has[1], is still pretty solid.
Unfortunately, the questionable quality of the Java standard library and the Java EE libraries back in the day, have led to some of the bad patterns we see nowadays in Java, that are not necessitated by the language, e.g. gross abuse of inheritance.
I still want to point out here that error values are not superior to exceptions, since:
1. Java shows exception handling can be forced to be explicit as well.
2. Handling Go errors is NOT forced. In fact, you can always ignore a function's return value or assign the error part of it to a `_`. This is far less explicit than an empty catch block.
Can you please elaborate how the pattern that you describe doesn't pass the buck to a portion of code that has less context for the cause of the failure?
> You can write code as if every function call is successful
How does writing code as if every function call is successful (when you know that some functions will fail) not lead to some other bit of code further up the call stack having to make a decision about an exception that it doesn't have the context for? A bit of code which probably was written by a different developer who didn't anticipate what you were going to do?
A library knows what happened, but not what to do about it. It doesn't know who the caller is. It could be an interactive user or a microservice RPC or a Spark job or a debugger.
There have been frameworks like the Common Lisp Condition System to let a caller dictate a recovery/retry policy, but they never caught on. In practice "do no harm, give up, and report the error" is what almost everyone wanted, and most languages support it without punitive effort.
> Can you please elaborate how the pattern that you describe doesn't pass the buck to a portion of code that has less context for the cause of the failure?
Copy/pasting another person's answer:
You could wrap every line of Java code in a try/catch if you wanted to (it wouldn't be idiomatic, but it's definitely possible). You're just not forced to.
You can handle some of the exceptions in the same function, without going to the extreme of wrapping every line of code in a try/catch block.
> You could wrap every line of Java code in a try/catch if you wanted to (it wouldn't be idiomatic, but it's definitely possible). You're just not forced to.
Yes you ARE forced to. Unless the exception derives from RuntimeException.
Look at any piece of Java code and try to guess its cyclomatic complexity. It's not simple, because every line and every function call can fail, and every such failure is one more place where the execution tree of your code branches. You can't see it unless you check every method call in your code.
In Go -- every error is obvious, and you get a sense of a cyclomatic complexity of any piece of code by just going through it quickly.
So, that's it for me. The complexity of the code is visible.
I think it's a little bit complicated to explain, but mostly it boils down to this: errors are real. Java methods kind of let you ignore them by declaring exceptions, with the idea that, well, somebody else will deal with it. Go functions make errors feel more present. They force you to think about how you're going to handle the errors up front, and to question whether or not you even should error in a particular instance. It's actually helped change the way I think about functions. There are many behaviors that in Java I would not have even considered making idempotent, but in Go making them idempotent is both easier and, as it turns out, more robust.
The error-handling patterns Go has introduced are admittedly verbose, but they do lend a certain confidence that once the code is written, the errors are handled. Of course a programmer can ignore errors explicitly, but doing so is different from forgetting to catch a thrown exception, because the programmer must go out of their way to write code ignoring the error. It feels like there's more agency around the decision.
The difference is that with exceptions sloppy code is the default.
As you write code in a language with explicit errors, the language makes you acknowledge that the code you call can error out. This makes you stop and think what to do about the error. You can choose to ignore it, but that's a conscious decision that the language forces you to make.
With exceptions, there's no such feedback mechanism from the language/compiler. In order to write robust code you yourself must have the discipline to add exception handlers around the appropriate calls.
In short, defaults matter. It's simply easier to write correct, robust code when you don't have to go out of your way to do it.
> It’s really hard to write good exception-based code since you have to check every single line of code (indeed, every sub-expression) and think about what exceptions it might raise and how your code will react to it.
Only checked exceptions. And the integration of checked exceptions is so bad, and the split between checked and unchecked so arbitrary, that most codebases have sworn off of them.
The fundamental tenet in Go is that every error should be handled. Here is an excerpt from Dave Cheney that clarifies this:
"For the truly exceptional cases, the ones that represent either unrecoverable programming mistakes, like index out of bounds, or unrecoverable environmental problem, like running out of stack, we have panic.
For all of the remaining cases, any error conditions that you will encounter in a Go program, are by definition not exceptional — you expect them because regardless of returning a boolean, an error, or panicking, it is the result of a test in your code"
I think the point is that in Go, you expect failure. Failure is not exceptional. It's a first class part of the logic.
Java conditions you to view failure as an exception to the rule, and the happy path as the real code (which is the attitude you express in your first comment). Lots of us have since observed that this approach to failure leads to errors because programmers ignore failures, allowing them to be handled by a catch-all at the top of the stack, which typically just dumps a stacktrace and calls it a day.
The paradigm espoused by Go sees errors as just another program state, and one whose implications are just as important to consider as the desired behavior. This forces programmers to consider all the implications of a failure, rather than just adding another `throws` clause.
> Lots of us have since observed that this approach to failure leads to errors because programmers ignore failures, allowing them to be handled by a catch-all at the top of the stack, which typically just dumps a stacktrace and calls it a day.
That's not typical Java code. You could do that in quick & dirty code, but I haven't seen such code in production code.
> That's not typical Java code. You could do that in quick & dirty code, but I haven't seen such code in production code.
It sounds to me like you're an extremely lucky person. I've seen too much of that sort of thing in production code, including a catch-all for Throwable at the top that didn't even dump a stacktrace.
Anecdata: I see this pattern all the time in production code. It’s not limited to Java: I’ve seen it in TS/JS, Python, and C++. I haven’t (yet) seen it in our Go codebases.
I'm working with some Python that follows this errors-as-values convention and it's awful. The code likes to catch exceptions as soon as possible and replace them with strings like "not found". Exceptions are useful things; they tell me what broke and where. Instead I get a "not found" error value, and I find out that happened because the endpoint called a service that returned an "item not found" error value, and then, a few layers of obscure string error values later, I find out some logic did division by zero or something.
In other words, I also don't understand the hype around error values.
So, back to Python, what error value is suitable for replacing an exception? I mean, Python is flexible enough to do Go-style error values, but what does a good error value look like? Maybe I just haven't seen good error values?
If an error happens way down in the weeds and I initially return a very specific error value, do I just pass it up the stack forever? If so, how is that different from traditional exceptions? Do I replace the specific error value with more general error values as I move up the stack to higher-level code? If so, then I am throwing valuable information away and am likely to end up with something like "not found" as my error value at the top - not very useful.
In Go, I would do one of:
* Pass the error value up unchanged
* Wrap the error inside another error
* Pass a new error
* Swallow the error and do something else to cope with failure
I would probably also do some error logging, capturing the state of relevant local variables, especially if it is one of those cases where "see an error" is unexpected.
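A compact sketch of two of those options (fetch, load, and ErrNotFound are made up for illustration):

package main

import (
    "errors"
    "fmt"
)

var ErrNotFound = errors.New("not found")

func fetch(id int) (string, error) {
    return "", ErrNotFound // some low-level failure
}

func load(id int) (string, error) {
    v, err := fetch(id)
    if err != nil {
        // wrap the error inside another error, adding context
        return "", fmt.Errorf("load %d: %w", id, err)
    }
    return v, nil
}

func main() {
    v, err := load(7)
    if errors.Is(err, ErrNotFound) {
        v = "default" // swallow the error and cope with the failure
    }
    fmt.Println(v)
}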
Thanks for the answer. It sounds like pretty much the same thing good exception handling would do.
I suppose they're two sides of the same coin. Both can be done well or poorly.
I do resent the idea that error values are automatically better or easier to understand, because I'm dealing with some poorly done error values now and wishing for traditional exception handling.
I guess the differences I see are between "with exceptions, you can accidentally forget to handle the errors in the right place, and you end up handling them elsewhere" vs "with error values, if you forget to handle the errors, you don't handle them at all" on one side. And "with exceptions, you don't necessarily have the immediate feedback that one may come" vs "with error values, you need to choose between sticking them in an error variable or explicitly ignoring them" (and with Go's fondness for "you assign to a variable that is then never used? That's a compile error!", it essentially means you either handle the error or you intentionally disregard it).
Either is fine, I think. But, don't mix both in the same codebase. And with exceptions, I would really like Common Lisp-style restartable errors, to get the flexibility I feel I have with error values.
> I've written a bit of code in Go, and the problem I have with it is primarily that it feels really outdated for a "modern" language
Go is a language to get the job done. It's for the masses. It's the type of language where you write a blog post on how you did this and that and why, instead of a scientific paper. It's really the boring mortar.
Exactly this. Go is the language you use when you need to get shit done, reasonably performant and easily distributed to different operating systems.
The language actively hinders any attempts to be fancy or "expressive". You write (or copy/paste) the same pattern(s) over and over again.
On the other hand, you can pick up pretty much anyone's Go codebase and quickly see how it works. No need to figure out what flavour of metaprogramming or other fancy crap the original author was a fan of at the time of writing.
It's boring, it pays the bills. You can be expressive with other languages on your own time.
> Errors as values is a far superior approach to exceptions
No, it's not. Unhandled errors that go unnoticed just lead to undefined/nondeterministic behavior and some really nasty bugs. And moreover, I literally cannot understand how it is possible for any programmer to not realize this (compare: Elixir's approach, which instead embraces errors and restarts processes instantly).
It feels like Option types could be done quite easily - basically slices with a hardcoded capacity of 1. Could compile down to a pointer type. But it would make linting way easier. Of course, the real power of Option comes from Map.
Give Go a map and option and I'll be a happy gopher.
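With generics this is expressible today; a hypothetical sketch along those "slice of capacity 1" lines (Option, Some, None, and Map are made-up names, not a real library), where Map has to be a free function because Go methods can't take extra type parameters:

package main

import "fmt"

type Option[T any] struct {
    val []T // empty = None, one element = Some
}

func Some[T any](v T) Option[T] { return Option[T]{val: []T{v}} }
func None[T any]() Option[T]    { return Option[T]{} }

func Map[T, U any](o Option[T], f func(T) U) Option[U] {
    if len(o.val) == 0 {
        return None[U]()
    }
    return Some(f(o.val[0]))
}

func main() {
    n := Some(21)
    doubled := Map(n, func(x int) int { return x * 2 })
    fmt.Println(doubled.val) // [42]
}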
At that point, why not use something like F# or Rust? Both can provide comparable or better runtimes, very decent ecosystems, and they are already much better languages.
Because Go as an ecosystem has some really attractive properties that few other languages offer in the same combination: portability, stability and a strong compatibility guarantee, quick onboarding for new learners, a robust and scalable runtime with a GC that rarely gets in your way, a minuscule startup footprint, reasonably direct control of memory layout when you really need it, a fairly large developer base, and corporate backing/funding that's not going away any time soon.
So I have to repeat the question: why not F# or Rust then?
None of that is too uncommon, with the notable exception that Go is better at the "quick onboarding" part, since it goes out of its way to offer no new concepts or syntax that have to be learned - something that the comment I was replying to would like to change by introducing Optionals and Functors.
Yeah, I have mixed feelings about it as well. There are some good things, but there are also ugly warts.
For example, if you try to use numeric types other than int, int32, or float64 (say, float32 or uint16), you'll be in a lot of pain due to constant casting back and forth. Yes, generics could solve this one too.
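For instance, every mixed-width expression needs explicit conversions:

package main

import "fmt"

func main() {
    var a float32 = 1.5
    var b float64 = 2.5
    // sum := a + b // compile error: mismatched types float32 and float64
    sum := float64(a) + b // cast back and forth at every boundary
    fmt.Println(sum) // 4

    var u uint16 = 300
    var i int = 5
    // u + i is likewise a compile error
    fmt.Println(int(u) + i) // 305
}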
I also couldn't make myself enjoy programming in it. I think it's because it tries to be simple, so anyone can learn it. Because of that, it's rare that you get to come up with a creative way of solving a specific problem.
That's very different from, for example, C, to which Go (at least early on) was often compared.