That and types. The only framework that's useful to JS is a better static type check system (and none of this lets-make-the-whole-damn-runtime-slow to support X feature, looking at you TypeScript).
We use a lot of typescript with a very opinionated setup on coding style and conventions, but that only goes so far when you’re dealing directly with the dom.
Because the dom is notoriously hard to work with. The internet is full of blog posts and articles talking about how slow it is, but in reality adding or removing a dom node is swapping a pointer, which is extremely fast. What is slow is things like “layout”. When Javascript changes something and hands control back to the browser, it invokes its CSS recalc, layout, paint and finally compositing algorithms to redraw the screen. The layout algorithm is quite complex and it's synchronous, which makes it stupidly easy to make things slow.
This is why the virtual dom of react “won”. Not because you can’t fuck up with it, but because it’s much harder to do so.
> When Javascript changes something and hands control back to the browser, it invokes its CSS recalc, layout, paint and finally compositing algorithms to redraw the screen. The layout algorithm is quite complex and it's synchronous, which makes it stupidly easy to make things slow.
Wait, you're saying it's synchronous but what exactly is being blocked here (since you also said the JS hands back control to the browser first)?
I’m not quite sure what it is that you’re asking. When you want to show something in a browser, you go through this process: JavaScript -> style -> layout -> paint -> composite.
The browser will construct a render tree and then calculate the layout position of each element/node, a process where it’s extremely easy to run into performance issues. And since the rest of the render pipeline waits for this to conclude, it’ll lead to very poor performance. You can look into layout thrashing if you’re curious.
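A quick sketch of the pattern (illustrative only; `boxes` is a stand-in for an array of real DOM elements, and `offsetWidth` is one of the properties whose read forces a synchronous layout):

```javascript
// Bad: each style write invalidates layout, and the very next offsetWidth
// read forces the browser to recompute layout synchronously -- once per
// element. This is "layout thrashing".
function thrash(boxes) {
  for (const box of boxes) {
    box.style.width = box.offsetWidth / 2 + "px"; // write, then read again
  }
}

// Better: batch all reads first, then all writes, so the browser only has
// to run layout (at most) once.
function batched(boxes) {
  const widths = boxes.map((b) => b.offsetWidth);          // reads
  boxes.forEach((b, i) => {                                // writes
    b.style.width = widths[i] / 2 + "px";
  });
}
```

Both functions do the same thing; the only difference is read/write ordering, which is exactly what the synchronous layout step punishes.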
My point is more along the lines of how you can ask a lot of frontend engineers what the render pipeline is and how it works, and many won’t be able to answer. Which isn’t a huge issue, because almost everyone who does complicated frontends either uses a virtual dom or exclusively hires people who do know. But for the most part, you won’t be fine with just JavaScript for massive UI projects, as the person I was replying to suggested.
Only if you yield to the layout engine (e.g. `await new Promise(resolve => setTimeout(resolve, 0))`) in between. Which, if you know you want to change two things, why would you?
Not enums. But you don't need a runtime or function mutation for that...
Particularly egregious was (is?) async/await. Upgrade your browser/runtime/don't use it, you say? Sure, but the first two weren't always possible, and the third isn't possible unless you thoroughly vet your dependencies (easier said than done).
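For context, downlevel compilers handle async/await on older targets by rewriting it into generator-driving code along these lines (a simplified sketch of the technique, not TypeScript's actual `__awaiter` helper, and with no error handling):

```javascript
// Minimal promise-driven generator runner: drives a generator function,
// resolving each yielded promise and feeding the result back in. Under this
// scheme, `await expr` in the source becomes `yield expr` in the output.
function run(genFn) {
  var gen = genFn();
  function step(value) {
    var result = gen.next(value);
    if (result.done) return Promise.resolve(result.value);
    return Promise.resolve(result.value).then(step);
  }
  return step();
}

// Equivalent of: async () => { const x = await ...; const y = await ...; }
run(function* () {
  var x = yield Promise.resolve(20);
  var y = yield Promise.resolve(22);
  console.log(x + y); // → 42
});
```

Every "awaited" expression becomes an extra promise allocation and a trip through the microtask queue, which is the kind of overhead the compiled output carries that native async/await doesn't.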
"Compiling to javascript" is all well and good if you actually just compile to normal javascript. As soon as you have any code that simulates other features (classes/objects/what-have-you), you are no longer "compiling to javascript". I mean, yeah, sure, as a sort of intermediary assembly language you are, but the performance is not the same. You have a new language with a runtime overhead, one that now requires you to modify the "core" language to bring in new features, which results in the underlying execution engines (browsers/cpus) becoming more complicated, power hungry, etc....
The performance wins for typescript likely stem from the ability of the runtime to pre-allocate and avoid type checking.
Providing the type checks without using any non-JS features (and possibly providing the runtime some heads up regarding checks to safely drop) is the ideal.
You can disable those fallback implementations if you don't want to use them. Just use the javascript version you have available as the basis for your typescript. The option to look into the future shouldn't be treated as a negative.
And I still don't see how they make "the whole damn runtime" slow. You don't pay the cost for code that isn't using it.
Also I'm pretty sure the class implementation doesn't slow things down. It's a very simple transformation.
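Roughly what that transformation looks like (a sketch of typical ES5 downlevel output; the exact emit varies by compiler version):

```javascript
// Approximate ES5 output for a TypeScript class like:
//   class Point {
//     constructor(public x: number, public y: number) {}
//     dist() { return Math.sqrt(this.x ** 2 + this.y ** 2); }
//   }
// It's just a constructor function plus prototype methods -- plain
// JavaScript, no helper library involved.
var Point = /** @class */ (function () {
  function Point(x, y) {
    this.x = x;
    this.y = y;
  }
  Point.prototype.dist = function () {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  };
  return Point;
}());

console.log(new Point(3, 4).dist()); // → 5
```

This is the same prototype pattern people wrote by hand before `class` syntax existed, which is why there's no meaningful runtime cost to it.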
> You have a new language with a runtime overhead, that now requires you modify the "core" language to bring in new features, which results in the underlying execution engines (browsers/cpus) becoming more complicated, power hungry, etc....
I think you have this backwards. Typescript doesn't implement new Javascript features until their addition to Javascript itself is imminent.
The only feature Typescript wants to push onto Javascript is a syntax for type annotations, because then you can remove the compilation step entirely. At which point there couldn't even be a runtime overhead.
> non-JS features
To first approximation, there aren't any. The main one is the old enum syntax, which is why I brought them up.
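For reference, a sketch of the runtime code an enum compiles to (roughly what tsc emits for a numeric `enum Direction { Up, Down }`; details vary by version):

```javascript
// An IIFE that builds a two-way lookup object at runtime: name -> value
// and value -> name. This is the main place TypeScript emits code with no
// direct JavaScript equivalent.
var Direction;
(function (Direction) {
  Direction[Direction["Up"] = 0] = "Up";
  Direction[Direction["Down"] = 1] = "Down";
})(Direction || (Direction = {}));

console.log(Direction.Up);  // → 0
console.log(Direction[1]);  // → "Down"
```

Note that `const enum`, by contrast, is inlined to bare numbers at compile time and leaves no runtime object at all.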
> Typescript doesn't implement new Javascript features until their addition to Javascript itself is imminent.
I guess we want different things from Type systems.
I want rock solid guarantees that code is correct, so that the only thing left to test as much as possible is the business logic. I don't care about the latest programming fads, and I want stable, performant code.
You seem to just want some boilerplate guarantees and backwards compatibility.
If I were writing/creating TypeScript, I would not be implementing new features before JS upgrades, but long after (possibly as support libraries). I understand the goal of "easing" the transition, but IMO those sorts of "upgrades" should be late, not early, in a tool whose primary goal is static type checking, not JS features.
The things you seem to be worried about are configurable in the tsconfig. You can stay as polyfill free as you would like by instructing the Typescript compiler to error out instead of making the glue for you. Aside from the inescapable quirks of runtime JavaScript, Typescript felt pretty intuitive to me when transitioning to a new job from C# previously. Typescript with ESLint is about as solid as you’re going to get with JavaScript. I know that ideally there’d be something better, but in the real world right now this is the best it gets. At some point reality and business constraints are going to slam into ideations and things are going to get a bit dirty.
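For instance, a minimal tsconfig sketch along those lines (these are real compiler options; the specific target year is just an example) that makes tsc pass modern syntax through as-is rather than generating downlevel glue:

```jsonc
{
  "compilerOptions": {
    "target": "ES2022",        // emit async/await, classes, etc. unchanged -- no helpers
    "lib": ["ES2022", "DOM"],  // only type-check against APIs you actually have
    "strict": true             // full static checking, no runtime cost
  }
}
```

With a target at or above the runtime you deploy to, the emitted JavaScript is essentially your source with the type annotations erased.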
Aside from that, no matter what you pick, standard Typescript configs are absolutely compiling to JavaScript, not any other step in the interpreting process. It doesn’t matter if it’s taking your async/await and polyfilling it to run on an older browser engine… it’s still producing 100% JavaScript.
It goes Typescript -> JavaScript during the build, and the JS is what gets distributed to clients.
The JavaScript produced by TS is sent to the browser which performs the same JavaScript -> abstract syntax tree -> byte code -> execution, as usual
"standard Typescript configs are absolutely compiling to JavaScript"
You are missing the point. I want zero cost abstractions (for some level of abstraction - I accept e.g. there is a CPU and a browser). I don't care what it is compiling to.
Typescript is not a zero cost abstraction. (Zero cost meaning, here, that any overhead is incurred compile time only).
"The things you seem to be worried about are configurable in the tsconfig"
That's terrible. I want the Z.C.A. to be by-default, not "configure the heck out of the language to make it so".