Not to devalue the author, or their findings/learnings... but I could tell this was a JavaScript/TypeScript developer (very likely self-taught) learning about typing paradigms.
A lot of people, especially people from the same background, would benefit from branching out and learning some other programming languages/paradigms.
Some... statically typed and actually compiled languages. Maybe take an entry-level course in CS.
It's very popular to hate on formal education, especially in software, but all these lessons would have been learned in the first semester or two.
That's interesting to hear. I started out with a formal CS education learning Java & C in school. I've found that traditional CS education doesn't really take this approach: a lot of what I was exposed to was very OOP-heavy practice that emphasized data modeling via class hierarchies. To me, the expressiveness of the TypeScript type system (being able to do things like sum types or branded types) is what unlocked a lot of potential, despite it not being a compiled language.
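For anyone who hasn't seen them, here's a minimal sketch of the two features mentioned (all names below are made up for illustration):

```typescript
// A sum type: a value is exactly one of these variants, and the compiler
// forces every case to be handled.
type PaymentResult =
  | { kind: "success"; receiptId: string }
  | { kind: "declined"; reason: string };

function describe(result: PaymentResult): string {
  switch (result.kind) {
    case "success":
      return `paid, receipt ${result.receiptId}`;
    case "declined":
      return `declined: ${result.reason}`;
  }
}

// A branded type: structurally just a string, but the phantom brand keeps
// arbitrary strings from being passed where a UserId is required.
type UserId = string & { readonly __brand: "UserId" };

function makeUserId(raw: string): UserId {
  return raw as UserId; // one audited cast at the boundary
}

function loadUser(id: UserId): string {
  return `user:${id}`;
}

console.log(describe({ kind: "declined", reason: "expired card" }));
console.log(loadUser(makeUserId("u_42")));
```

The brand exists only at compile time; it erases to a plain string at runtime, so there's no cost to using it everywhere.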
After using languages like Scala, Java (I use 17) feels like a joke in terms of type system expressiveness and ability to use functional patterns. I currently have to switch between the two, and the kindest thing I can say about Java is that it's getting better (very slowly).
Even though the language is getting less painful, frameworks like Spring that do things at runtime instead of compile time (including rewriting bytecode on startup to inject code) make the ecosystem quite hostile to folks who want to work in a stricter, safer manner that's easier to reason about: constraints expressed in the language itself, not in some annotation-based metalanguage with no principles and whose implementation changes randomly.
We need to stop defending Java and move on to something actually modern and good. Scala has fallen from favor, so maybe Rust is the next thing I’ll try.
A lot of the runtime fiddling is indeed a plague (the limited reflection is one of my favorite parts of Go; it means I can trust function call boundaries FAR more), but Java does do some nice things. E.g. I wish every language had as powerful a compile-time system as Java does - annotation processors and compile-time byte-code weaving enable magic "best of all worlds" stuff like Lombok, and it integrates with IDEs transparently. And hprof -> MAT is absolutely incredible compared to the memory-profiling capabilities of most languages.
The debugging and profiling features are definitely better than most, but other languages running on the JVM benefit from that too.
I think most of what people use Lombok for though are features that should be part of the core language by now, or would be better as library methods instead of annotations. Like generating constructors, equals, and hashCode methods - case classes and data classes in Scala and Kotlin respectively handled that within the language spec many years ago. I need to try Java’s new Records, perhaps they handle that stuff now. Lombok and friends also include features that change language semantics like @SneakyThrows.
Byte code injection sometimes also changes language semantics. Early in my career I spent a few hours perplexed by why my code was encountering null when the code path I was examining used only non-nullable primitives. Turned out injection and rewriting had turned my primitive long into a nullable Long. I don’t like not being able to understand my code from just reading the code. The magic means I have to be aware of spooky action at a distance mechanisms and review their documentation. I also need to open the debugger more regularly to inspect what’s actually happening at runtime instead of just mentally compiling my code.
A lot of the really functional stuff has landed from Java 17 onwards (often arriving first as preview features).
E.g. sealed classes are effectively sum types for Java; records are effectively product types; and switch expressions can now do pattern matching at the record-field level, with added guards. Streams let you express what you'd otherwise write as tail recursion without blowing the stack (the JVM still has no real tail-call optimisation), etc. There's a nice little section on dev.java called "moving to functional" (or something like that).
The irony is that before types became popular with interpreted languages and modern languages like golang or rust, untyped languages became MORE popular because of formal education.
The reason is that most formal education curriculums teach C++, which ironically is more error-prone, with failure modes far harder to debug than those of untyped interpreted languages like Python, JavaScript, or Ruby, which were coming into popularity at the time. This is of course despite the fact that C++ has a type system with generics.
Because of this, a lot of people tended to associate typing with something more error-prone and harder to work with. It wasn't until the advent of TypeScript, Go, and Rust that people started to get the difference.
Is it? Then why aren't parameters checked by default when passed to a function? Anything passes through with zero runtime checks. Any type checks you need, you have to implement yourself.
Because Python isn't a statically typed language. Many will argue that this is a huge benefit of Python, since you don't have to declare types. There are newer developments like type annotations and checkers such as mypy, but the types you declare are only checked statically; they are not enforced at runtime.
Right, and I asked you a question that wasn't answered: if it's dynamically typed, how come I don't get type checking at runtime for functions I defined?
> A lot of people, especially people from the same background, would benefit from branching out and learning some other programming languages/paradigms.
I completely agree. I started reading the article expecting to read something interesting or smart about functional programming, but it turns out the blogger is just very vocal about telling the world their excitement over reinventing the wheel, while being completely oblivious to what are actually very basic things in any intro to software engineering course.
> What? I have never heard of a school teaching the importance of static typing esp when it comes to engineering practices
The blogger is not really talking about static typing. The blogger is waxing lyrical over designing a domain model and then writing an application around it. You know, what others call basic software architecture.
Wait until the blogger learns of the existence of Domain-Driven design.
100% this. For a C++ or Rust programmer this reads so weird.
Don't get me wrong, I'm not hating on JS here, and I have lots of beef with C++, but I fully agree with your take that TS barely scratches the surface of the statically typed world.
Based on my small amount of work done in TS, it seemed like one of the more advanced type systems out there. To its detriment, even. The language was just huge, and that was years ago.
And it’s so advanced because it was/is designed to represent the types of real world dynamic JavaScript. More often than not, when people complain about the complexity of the types they encounter in the TS type system, they’re really complaining about the types of the underlying JS (which are the same whether they’re expressed statically or not).
There's a cultural problem in the TypeScript ecosystem, I find, where people are impressed (with both themselves and others) when complex interfaces can be expressed in the type system, and tend to embrace that instead of settling for simpler (and often admittedly more verbose) ones. Maybe that's because they're an ex-JS programmer who wants to use the exact same interface they'd use in JS with no compromise, or maybe it's just because they think it's cool. Either way I think it's really detrimental to TypeScript as a whole.
> Maybe that's because they're an ex-JS programmer who wants to use the exact same interface they'd use in JS with no compromise, or maybe it's just because they think it's cool
That sounds a little reductive and gate-keepy. Maybe an advanced type system allowing for complex types to be expressed easily actually allows you to write simpler, more effective code.
Most cases I've seen of more complex interfaces are due to the fact that that's what the interface truly expects. Usually making it simpler tends to mean it's actually wrong or incomplete.
This is hand-wavey, but that can't be true: less complex type systems manage to express all kinds of interfaces correctly all the time (sometimes at the cost of verbosity, but the point is that this is usually a good trade-off).
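To make the verbosity trade-off concrete, here's a hypothetical pair of equivalent APIs, one using type-level computation and one spelled out plainly (all names invented for illustration):

```typescript
// The "clever" version: one entry point whose return type is computed
// from the argument at the type level.
type Defaults = {
  circle: { radius: number };
  square: { side: number };
};

function defaultDims<S extends keyof Defaults>(shape: S): Defaults[S] {
  const all: Defaults = { circle: { radius: 1 }, square: { side: 1 } };
  return all[shape];
}

// The "simple but verbose" version: one plainly-typed function per shape,
// no type-level machinery for a reader to decode.
function defaultCircleDims(): { radius: number } {
  return { radius: 1 };
}
function defaultSquareDims(): { side: number } {
  return { side: 1 };
}

console.log(defaultDims("circle").radius); // precisely typed either way
console.log(defaultSquareDims().side);
```

Callers get equally precise types from both versions; the difference is how much type-level indirection a reader has to unwind to understand the signature.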
You're asking me to tell on my coworkers, and I'm too loyal to throw them under the bus :)
Well, OK, here's one, but I'll keep it as blameless as possible. We had a thing where we wanted to register some event handlers. The primary use of these event handlers was to run a selector, and if the selected data changed, trigger an update, passing the selected data along. The initial implementation used existential types to store a list of callbacks, each returning different selected data. The "driver" then did the equality checking and update triggering. We later changed this, so that the callbacks - as far as the driver was concerned - all returned `void`, eliminating the need for an existential type. We just had to move the equality checking and update triggering to inside the callbacks.
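In sketch form, the "after" version looks something like this (all names and types hypothetical; I'm paraphrasing from memory):

```typescript
type State = { count: number; name: string };

// After the refactor: selection, equality checking, and update triggering
// all live inside the callback, so the driver only ever stores
// void-returning functions and no existential type is needed.
function makeHandler<T>(
  select: (state: State) => T,
  onChange: (selected: T) => void
): (state: State) => void {
  let previous: T | undefined;
  return (state) => {
    const next = select(state);
    if (next !== previous) { // shallow equality; real code might go deeper
      previous = next;
      onChange(next);
    }
  };
}

const log: string[] = [];
const handlers: Array<(state: State) => void> = [
  makeHandler((s) => s.count, (c) => log.push(`count=${c}`)),
  makeHandler((s) => s.name, (n) => log.push(`name=${n}`)),
];

for (const h of handlers) h({ count: 1, name: "a" });
for (const h of handlers) h({ count: 1, name: "b" }); // only `name` changed
console.log(log); // ["count=1", "name=a", "name=b"]
```

Each handler closes over its own `T`, so the heterogeneous list becomes a homogeneous list of `(state) => void` functions.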
Some features are straightforward translations: anywhere you have overloading and/or optional arguments you can (and often should) simplify by refactoring into multiple functions.
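For example (hypothetical names):

```typescript
// Overloaded version: one name, two shapes, and an implementation that
// has to re-discover which one the caller meant.
function fetchUsers(id: string): string;
function fetchUsers(ids: string[]): string[];
function fetchUsers(arg: string | string[]): string | string[] {
  return Array.isArray(arg) ? arg.map((id) => `user:${id}`) : `user:${arg}`;
}

// Refactored: two plainly-typed functions, nothing to re-discover.
function fetchUser(id: string): string {
  return `user:${id}`;
}
function fetchUserList(ids: string[]): string[] {
  return ids.map(fetchUser);
}
```

The refactored pair is slightly more verbose at the call site, but each signature is trivially readable and the implementation no longer branches on argument shape.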
For a concrete, public example...well, I remember the Uppy library had a lot of stuff like this. A lot of work goes into making its "Plugin" interface look the way it does (start at [1] and keep reading, I guess), for instance, and while I haven't sat down and re-engineered it, I don't think it needs to be this way, if you're willing to give up some of the slickness of the interface.
I think there’s a difference between ideal library code and ideal business logic code.
The more you lean into crazy ass generics in your library, the simpler and more error-free the user can make their biz logic code. Really nicely typed libraries almost give you zero chances to fuck things up, it’s amazing.
But then again, most of your devs won't be able to understand all those generics, so you need to keep your biz logic types relatively simple.
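A tiny example of the kind of thing I mean (hypothetical library helper):

```typescript
// A "nicely typed" library helper: the key is checked against the element
// type, and the result type is derived from it, so callers can't typo a
// field name or misuse the result.
function pluck<T, K extends keyof T>(items: T[], key: K): T[K][] {
  return items.map((item) => item[key]);
}

const orders = [
  { id: 1, total: 9.99 },
  { id: 2, total: 24.5 },
];

const totals = pluck(orders, "total"); // inferred as number[]
// pluck(orders, "totla");             // typo: compile error, not a runtime bug
console.log(totals);
```

The generics live once, in the library; the business-logic call sites stay completely plain.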
This isn’t true, is it? I’ve only ever heard that TypeScript has one of the most advanced type systems of any mainstream language... but I don’t have enough experience with other languages to know how true that is.
I didn't get the impression that this was a self-taught or newbie coder. I think their audience is not assumed to be a CS grad though.
I found it a good and well-reasoned explanation of _why_ he enjoys types in a large codebase. He does take time to explain different type concepts, but I assumed that was because he doesn't assume his audience is familiar with all of them. Considering that the opinion "types are good and helpful in a codebase" is not universally held, even by very experienced/productive coders (see https://world.hey.com/dhh/turbo-8-is-dropping-typescript-701... or basically any ruby codebase), I think articles like this have a definite place.
In my experience, university is one of the least efficient ways to learn CS. The actually useful classes are few and far between, dwarfed by useless outdated courses, courses that aren’t very relevant to the job, and classes that are sadly led by incompetent burnouts who don’t know what they’re teaching, come terribly unprepared, and in general seem to hate their job. Most of the people there have only theoretical experience in writing software. But maybe that’s just my shitty university. I dunno. Supposedly one of the better ones.
On the other hand, as an English Lit major who taught himself programming from zero, and worked as a programmer for over a decade, all my experience writing software was practical, and I don't think that's the right way to go either. I wish I'd had any level of theoretical education that might have exposed me to fundamental concepts you (with yer fancy book-learnin') probably take for granted. If someone just learns on the job, or just learns as they go, they don't learn stuff until they need to. They learn it in a hurry, and on a deadline. That's not the best way to get a firm handle on tricky subjects, and maybe as a consequence, I always felt a couple steps behind my peers.
As someone in a similar position, may I take a tangent? I'm curious what you transitioned into out of programming. The stress of feeling "always behind" is taking its toll on me, and I wonder about another career change often.
I joined the software industry at a small consultancy that needed me to do a lot of different things, including both programming and design. So I got experience doing both of those. When I left the consultancy world in 2016, I had to decide whether to sell myself to employers as either a programmer or a designer—normal companies want you to pick a single lane—so I just focused on my design experience, and started doing that as a day job. I went from a fancy title to a much less fancy title for my first job as a designer, but more or less worked back up from there. I think for most programmers, their fork in the road would be to stay as an individual contributor or become a manager, but I don't want to be a manager, and was lucky to have a different path to fall back on.
A computer science degree is for the science of computing, not whatever is the latest in the workplace. You learn that on the job or a boot camp. Computer science is much more than the current popular framework and tools. It's the principles for how software works.