The first and most important distinction is obviously which ecosystem you are more familiar with and invested in (webpack vs. Vite). It does make sense for projects deeply coupled to webpack to consider Rspack first.
Putting that aside:
- Vite+ is a commercial offering with a company that can provide paid support. Rstack is a big-corp by-product whose primary customers are internal teams.
- The Vite ecosystem provides way more options in the choice of meta frameworks (Nuxt, Astro, React Router, TanStack Start, SvelteKit, SolidStart...) and third-party tooling integrations.
- While both are written in Rust, our tools in general perform significantly better than Rstack's. With the upcoming full bundle mode, Vite 8 will be at least 2x faster than Rsbuild across all categories (dev server startup, HMR, production build).
- Vitest and Oxlint are mature and widely used in production. Rstest and Rslint are both quite new and not even feature complete yet.
A rugpull means taking back something that was given.
Before Vite+, we already maintained Vite, Rolldown, and Oxc, all of which are open source and widely used. These remain open source - nothing changes for existing projects.
Vite+ is an entirely new product built on top of our own open source, with additional features that are entirely new. You don't need to use Vite+. You can keep using all the open source that we already provide.
The revenue generated from Vite+ flows back into the development of both its proprietary features and the underlying OSS. So if you are a user of our OSS, you'd benefit from Vite+ even if you never use it, because it allows us to keep improving the OSS you rely on.
Vite is now open core, where whatever you think belongs in the commercial part is not welcome as open source. Rugpull means building a community around an open source project, and then trying to transfer that community to a non-open source product.
Wrong - Vite is not open core, Vite+ is. This differentiation is important because even if a feature benefits Vite+, if it needs to be shipped via Vite then it has to be open source.
Companies willing to pay for Vite+ help sustain and improve the open source parts powering it, including Vite. Even if you only use Vite and not Vite+, you’d benefit from the success of Vite+, not the other way around.
I don’t really find anything inherently wrong with your definition of “rugpull”. If some people in the community are happy to pay for it and the rest also benefit because of it, that’s a win-win in my book.
it is up to the vite team to decide between "open source" or "open core". from my perspective, there is no data to say otherwise, so your "from outside" translates to "In my opinion, but I take no responsibility for it"
they certainly get to decide which label to pick for their projects (vite and vite+). you should complain if they do not act according to their chosen labels. but there is nothing being done at the moment to complain about.
You don't need to be arguing with users here - you have a business and an ecosystem to support, and people to pay. Decisions need to be made to achieve that. I appreciate all of your work, Evan, and I genuinely wish you good luck.
The point is if we want to compare bundler performance, we should keep all the non-architectural variables consistent across all implementations. Otherwise we are not comparing apples to apples.
For esbuild: We know other teams that have attempted to improve code splitting based on esbuild and found it very difficult. A big part of it is that in order to be fast, esbuild applies multiple features (bundling, treeshaking, transforms) in as few AST visits as possible, but that comes at the cost of the logic of different features not being layered / decoupled cleanly. It is difficult for anyone other than Evan Wallace himself to add non-trivial new mechanisms to esbuild, and although we didn't talk to Evan directly about this, we felt it would be too much to ask for a substantial refactor of esbuild in order to get what we need. In addition, the people interested in making Rolldown happen have much more experience in Rust than in Go - and there is a lot more to leverage (e.g. napi-rs & Oxc) in the Rust-for-JS ecosystem.
For Rollup: the Rollup team itself has been trying to incrementally improve Rollup's performance, e.g. by swapping acorn for a Rust-based parser. But there's only so much you can gain starting from a pure JavaScript base, especially considering multicore utilization. Another performance cost is the back-and-forth between Rollup (on the JS side) and native transforms (swc, esbuild) - there is a lot of overhead in repeatedly parsing / serializing ASTs and passing strings across the JS/native boundary. By building on top of Oxc (which will ship transforms in the future), we hope to stay on the native side as much as possible to avoid such overhead.
Thanks for the detailed explanation. It’s really interesting how much of an impact language choice has on outcomes. Given your description of esbuild, it almost feels like there is space for a bundler compiler that can generate an optimized bundler from a pipeline.
Hi! First up — not that it matters, but since people will wonder — our design wasn't informed by the Reactivity Transform. We evaluated something close to 50 designs, some of them extremely wacky, before settling on runes, and it wasn't until after that time that the Reactivity Transform was brought to our attention.
Nevertheless, it's interesting and validating that we landed on such similar approaches. While the Reactivity Transform failed, it did so for reasons that I don't think apply to us:
- $state and $ref are quite different. $state doesn't give you access to the underlying object, so there's no conversion necessary between reactive variables and ref objects (either in your head or in code).
- There's strict read/write separation. Anyone with access to a ref has the ability to change its value, which definitely causes problems at scale. It's something that React, Solid and Svelte 5 get right.
- Reactivity Transform introduces things like $() for magic destructuring and $$() for preserving reactivity across boundaries. We're instead encouraging people to use familiar JavaScript concepts like functions and accessors.
- There are already a lot of different ways to work with Vue — SFCs vs non-SFCs, template syntax vs JSX, composition API vs options API, `<script>` vs `<script setup>`... on top of that, adding a new compiler mode that needs to interoperate with everything else is inevitably going to be a challenge. This isn't the case with Svelte. While both runes and non-runes mode will be supported for the next two major versions, meaning there will be some short term fragmentation, we've made it clear that runes are the future of Svelte.
I wouldn't say they are "different" - they are fundamentally the same thing: compiler-enabled reactive variables backed by runtime signals! But yes, Vue already exposes the underlying concept of refs, so for users it's two layers of abstraction. This is something that Svelte doesn't suffer from at the moment, but I suspect you will soon see users reinventing the same primitive in userland.
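To make "compiler-enabled reactive variables backed by runtime signals" concrete, here is a toy sketch in plain JavaScript. The `createSignal` container is hypothetical - it is not Vue's or Svelte's actual internals, just an illustration of the shared primitive both designs sit on top of:

```javascript
// Toy signal container -- illustrative only, not Vue's or Svelte's internals.
function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get() { return value; },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(next)); // notify dependents
    },
    subscribe(fn) { subscribers.add(fn); },
  };
}

// Vue-style ref: the container object itself is handed to userland, so
// reads and writes go through the wrapper (in Vue, `.value`).
const count = createSignal(0);
count.set(count.get() + 1);

// Rune-style: the compiler rewrites `let n = $state(0); n += 1;` into calls
// on a hidden signal -- the container never surfaces in user code.
const n = createSignal(0);
n.set(n.get() + 1); // roughly what `n += 1` compiles to
```

Both styles end up at the same runtime primitive; the difference is whether the wrapper object is part of the user-facing API.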
> There's strict read/write separation
I'd argue this is made out to be more important than it actually is - we've hardly seen real issues caused by it in real-world cases, and you can enforce separation if you want to.
> We're instead encouraging people to use familiar JavaScript concepts like functions and accessors
This is good (despite making exposing state much more verbose). In Vue, we had to introduce destructuring macros because we wanted the transform to use the same return shape as all the existing composable functions like VueUse.
> There are already a lot of different ways to work with Vue
This is largely because Vue has a longer history and more legacy users that we have to cater to, so it's much harder to shed old cruft. We also support cases that Svelte doesn't account for, e.g. use without a compile step. That said, the default way with a compile step is now clearly Composition API + <script setup>. Reactivity Transform only really applied in that case anyway, so the point you raised is kinda moot.
Separate from the above points, the main reason Reactivity Transform wasn't accepted still applies to runes: compiler magic now invades normal JS/TS files and alters vanilla semantics. Variable assignments can potentially be reactive, with no obvious indication other than the declaration site. We had users try Reactivity Transform on large production codebases, and they found their large composition functions harder to grasp for exactly this reason (and not any of the points raised above).
In Svelte 4, the let counter = 0 syntax is already reactive by default, a feature enabled by the compiler. This had been the status quo for Svelte prior to the rune change. The introduction of the $state(0) rune actually provides more hints about its reactivity than before, and restores the original meaning of let counter = 0 (in runes mode). While it's true that the compiler's "invasion" into JS/TS syntax has been a point of discussion, this invasion has been happening for a while, and the initial shock wave has been well absorbed by the community.
Interestingly, the new changes could be seen as a retreat from that initial invasion, likely triggering a different response from the community. In fact, the resistance I've seen (and felt myself) has been in the opposite direction - resenting this retreat, complaining that Svelte is becoming less "magical" and closer to regular-joe JavaScript.
I'm specifically talking about non-component context, i.e. plain JS/TS files.
Previously, Svelte got a pass on this because the magic only happened in .svelte files. But going forward, any JS/TS file in a runes-enabled Svelte project will technically be "SvelteScript". That has never been the case before, and I doubt the community has already "absorbed" how significant this change is.
This is a really great point. I think something like a `.svelte.js` file extension is warranted here. It would cue tooling in to when runes need to be interpreted, and make it clear which files in a codebase require the Svelte compiler. These files clearly aren't just JS/TS at this point, but I think that's fine as long as they're marked as such. Custom React hooks, for instance, aren't usable outside the runtime, but they can be transpiled without issues by esbuild/tsc and interpreted correctly by a JS/TS language server.
As long as it's marked separately from JS/TS, I don't think it's a huge issue though. Svelte files already have script tags that aren't completely vanilla JS/TS.
Hi Evan You! (Hi from wentin) It's great to see you here in this thread. It's such a wonderful gesture for the open-source community to collaborate in this manner, by sharing valuable lessons learned the hard way.
It's intriguing to see where the Svelte exploration will lead. Will it face disapproval or achieve success? While I agree that both implementations arrived at the same place (almost serendipitously), their origins differ. For Vue, it was about adding syntactic sugar by dropping .value, making things less explicit and more magical. In contrast, Svelte made the change to make things more explicit, reduce the "black magicness" of the Svelte compiler, and bring it closer to plain JavaScript. This difference might trigger totally different reactions; time will tell.
Michel and I have been internet acquaintances for years, and we've even talked about this stuff IRL. MobX certainly isn't something we just somehow never learned about!
But anyway: it's absurd to compare this with React+MobX. MobX replaces useState, sure, but you're still re-rendering entire components on each change (which is why MobX explicitly recommends that you break your app apart into many small components, regardless of whether that's a boon to readability and maintainability).
By contrast, Svelte (and Solid, and other frameworks) understand signals on a much deeper and more optimal level. They're really not the same thing at all.
> MobX replaces useState, sure, but you're still re-rendering entire components on each change (which is why MobX explicitly recommends that you break your app apart into many small components, regardless of whether that's a boon to readability and maintainability).
This is not true. MobX has had an `<Observer>` component for years now. You can be as fine-grained as you wish with re-renders in a larger component.
Unlike Redux at that time (~2016), it was the first approach where minimal re-rendering happened effortlessly.
You can have a list of components where each component references a bit of global state and re-renders only if that bit changes, even if the list has grown or other elements have changed.
During the time I used it (2016-2020), they used all the tricks of the trade - getters/setters on object attributes, then dropped decorators - and it still worked the same.
Notice that you just said "You can have a list of components _where each component_ references a bit of global state". In other words, in order to avoid re-rendering everything, you need to have a component for each item in the list.
In React, the component is the unit of re-rendering. MobX can't change that fact. The only thing you can do is work around it with hacks that imperatively update the DOM.
Yes, this clarifies what you've meant and I can see the case that is not covered with MobX+React but I assume it is covered by Svelte's runes.
You're saying that I can have the whole app written in a single file without any separation and the updates will still happen only in the place that needs it.
That makes sense. With MobX, this could be done but not with React and not without a bunch of boilerplate that obtains HTML elements that are referencing the state.
>With MobX, this could be done but not with React and not without a bunch of boilerplate that obtains HTML elements that are referencing the state.
It's trivial with MobX. The Observer component essentially gives you an "inline" component wherever you need reactivity, without the need to actually componentize.
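The idea can be sketched without React at all. Below is a toy dependency tracker - a hypothetical, greatly simplified illustration of the mechanism, not mobx-react's actual implementation:

```javascript
// Toy sketch of the idea behind an inline Observer -- hypothetical and
// greatly simplified, not MobX's real implementation.
let activeObserver = null;

function observable(initial) {
  let value = initial;
  const observers = new Set();
  return {
    get() {
      // Track whoever is currently rendering as a dependent of this value.
      if (activeObserver) observers.add(activeObserver);
      return value;
    },
    set(next) {
      value = next;
      // Re-run only the renders that actually read this value.
      observers.forEach((run) => run());
    },
  };
}

// An "inline component": a render function that re-runs by itself,
// without splitting the surrounding UI into smaller components.
function Observer(render) {
  const run = () => {
    activeObserver = run;
    try {
      render();
    } finally {
      activeObserver = null;
    }
  };
  run();
}

const renders = [];
const price = observable(10);
Observer(() => renders.push(`price: ${price.get()}`));
price.set(12);
// renders is now ["price: 10", "price: 12"]
```

In real MobX the tracking is handled by its reaction system, and `<Observer>` wraps this in a React component so each re-run maps to a localized re-render.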
Cool, did not know that - the last time I seriously used MobX was with the 2016-era primitives. But I see it works similarly to before: all of the accesses are figured out during the first render.
The Observer component is quite simple - just a deferred function invocation. It could have been done with the 2016 primitives too.
I guess we are all waiting for Rich Harris to step in and comment on this one. I'm sure he has followed this Vue experimentation and has a clear argument to make. At least I hope so.
This analogy is plain wrong. The blog post lists multiple non-Vue frameworks/tools using Vite as their default build tool. Compare that to the number of non-JS languages (excluding ones that compile to JS) using NPM as their default package manager (0).
I do know Henry personally and Henry actually consulted me when he was debating whether he should quit his job to work on Babel full time. We also occasionally talk about the burdens of OSS maintenance so I know first hand how hard he's been trying to keep Babel afloat.
The linked comment above is one that interested people should read, especially those who only read the relatively inflammatory and context-free initial tweet.