I suspect that the Webpack team thinks that developers' favourite job is to upgrade tool configurations. I mean, it's great that improvements are being made, but breaking changes are not.
Remember that the Webpack team has shown itself to be actively hostile to people who just want to keep their 2-year-old codebases working without doing needless upgrade work. They removed the webpack 1 docs [1], so if you have a few-years-old codebase that you want to get up and running, and somehow you get a webpack error, the message from Webpack is "screw you, upgrade to webpack 3 first and only then will we allow you to read the docs".
I have no reason not to expect the same to happen with Webpack 4, and the inevitable Webpack 5, 7 months from now. Webpack 2/3 docs will be subtly removed, retired and forgotten. We're all forced to keep doing work to make something work that already worked perfectly fine.
Now, I'm well aware that an angry rant on the internet about the voluntary work from open source contributors is not particularly constructive. I've singled out the Webpack team here, but the reality is that a big part of the web dev ecosystem, across languages, does not care much about backward compatibility. I'd love for that attitude to change.
We strive to be as backwards compatible as possible. But we also hold ourselves accountable to keeping pace with the ecosystem.
Most breaking changes are to accommodate the number one ask of our users: faster, smaller builds.
We asked what time frame people wanted for releases. Out of a couple thousand responses, 6 months came out as the sweet spot. Appreciate the candid feedback though.
In regards to the v1 docs: we wanted to have a fresh start. I don't imagine our docs pages changing now for a while. With each major breaking change we ship a migration guide so that people can upgrade.
Sean, you don't know this but for quite a while now I've been amazed about your work responding to everybody complaining about Webpack on the internet, anywhere. There's absolutely no requirement for you to be so nice to users and yet, here you are again. Hats off, you're making the world a little bit prettier.
This time I'm the one complaining, and I must admit that I'm curious about the exact wording of that question about release time frames. If you asked "how often do you want to make config changes just so you can update Webpack to the latest version", I bet many people would answer "never" instead of "6 months" :-)
But hey, we established that we have different ideas about what the compat sweet spot is and you and yours are the one doing all the selfless work and I'm the guy complaining on the internet, so take everything I say with a bag of salt.
All that said, any chance you could make a teeny tiny little link from the Webpack 1 "THESE DOCS ARE GONE" pages to http://devdocs.io/webpack~1/ or some place like that? There are people who inherit old codebases and it would be very nice for them to not _have_ to update Webpack just to find answers to their problems.
When I first started using Webpack, 4 years ago, it had the most finicky, difficult-to-configure (perhaps even magic) API I can think of. It worked really well once you got it configured, and wSokra was incredibly helpful and supportive in the chat room; but if you didn't have a known-working config and/or access to wSokra, well, good luck.
I'm highly supportive of anything they can do to make it easier to configure and maintain, even if the cost is breaking the old APIs and making folks migrate to newer ones.
Hopefully as someone works on a library and learns a problem space more thoroughly, they have better knowledge of what an ideal API would look like. I know migration can be painful, but I'm very supportive of long-run improvements to a project.
Ah yes, we removed the page, but it's all still under our wiki on webpack/webpack. Mind submitting an issue so we could surface a little note on our README?
> In regards to the v1 docs: we wanted to have a fresh start.
Those who inherit an old codebase normally don't have the luxury to start fresh, or sometimes even to upgrade. The work and effort you put into webpack is selfless, which is great, and you should be thanked by users! But removing the v1 docs was selfish, and unfortunate.
Thanks! FWIW I think your other comments were plenty legitimate, and definitely don't deserve to be downvoted so low (below that inane "wtf emojis??" comment, of all things).
Although with that said, I doubt webpack can do anything about what you're saying except to try to be conscientious. Obviously breaking too often is bad, but naturally that doesn't mean never ever breaking is necessarily preferable.
I agree with the nuance you add, but also note that the last major version was released July 2017. This is not "never ever breaking", this is breaking all the time :-) There's plenty of middle ground.
A lot of the breaking changes could be solved by versioned config formats, for example. Maybe not all, but then maybe the remaining things that are now breaking would be worth keeping alive, albeit as a piece of legacy code that nobody wants to work on improving anymore. I don't know enough about webpack internals to go deep here, but I'm convinced that if there's a strong desire to stay backward compatible it's possible. The webpack team has no such desire, which is their prerogative but I'm still sad about it.
I'm not sure you're being fair with that last bit. Granted, migrating webpack 1->2 was a nightmare even without the docs issue. But in contrast I moved from 2->3 without even touching my (fairly nontrivial) config and everything worked. Presumably there were breaking changes somewhere, but I got the impression they only really affected plugin authors.
So I'm not sure it's fair to decide the devs don't care about breaking changes. It seems more reasonable to assume that some wrong choices were made early on (including about how to handle docs and major releases), and they've been learning and getting better since. After all, if like most things it started out as someone's weekend hack, with no thought of growing into the powerhouse it is now, some early stumbles are fair enough, right?
I'm currently upgrading a repo that is maintained internally and uses grunt+webpack+karma+jasmine and I strongly agree with this comment. The tool has been running webpack 1 and it works but now that webpack 1 docs have been removed we want to upgrade to webpack 2. It's been a host of issues, with the API changes being the least of it.
webpack 2 introduced ES6 syntax in its plugins, which grunt mangles when they're passed through its config (https://github.com/gruntjs/grunt/issues/1595). Admittedly that's a grunt issue, but seeing how many teams use this tool and have already configured their applications to build via Gruntfiles, we can't move things overnight. I had issues with loaders (now rules) running at the appropriate time for istanbul as well, and had to go through two or three now-defunct packages and piece together how to make it work with our rather unique setup. Honestly, in this adventure I think our issue is grunt and the underlying bespoke hacks people have designed over the years, but in Enterprise this just keeps me from being out of a job.
I personally think you're overstating the amount of work required to upgrade between Webpack versions.
You generally should not be ignoring your tech stack for 2+ years in the first place either - nobody likes spending time staying on top of updates, but that's the world of software.
As for your complaint about backwards compatibility, I'd agree, but there's only so far you can go back before it becomes pretty ridiculous to support ecosystems so far behind the curve.
Well, you wanted a capitalist world, didn't you?
Webpack was forced to become a zero-config tool because of Parcel's threat. They couldn't let the others take that place.
Now there you have it, progress without end. Developers constantly upgrading, till death do them part!
Do you remember the time when you just copied your file over to the server, and magically everything was deployed? I do. It was a good time. Yes, I am old.
In search for something that makes me work like the good old times, but with modern javascript solutions, I stumbled upon mod_pagespeed[1]: it minifies your javascript and html, optimizes images, and more or less does what webpack does (minus the typescript -> js compilation).
The good old times are back for me: deploying is now an scp away.
For important projects I copy the static assets to the server, let jenkins run integration tests on staging and if everything works as expected just copy it over to production. One less thing to worry about for me.
I do remember. I've been doing web stuff for a little over 20 years. It was not a good time.
1. If a file upload failed your site was broken until you managed to upload it again.
2. Deploying something that required a database migration had to be done out of hours, or the site had to be taken offline to do it.
3. Recovering from a disaster was hard, and pretty much always resulted in significant data loss.
4. Moving a site from one server to another was awful, because it meant SSH'ing (or telnet'ing...) on to the box and compiling things from source. If you got anything in the environment wrong the site didn't work and you had to start again.
5. If you worked on anything that ran on multiple servers you pretty much had to script it all anyway. Essentially there were thousands of different (much worse) build-and-deploy processes people had stuck together with various different tools. Going to a new company meant learning a completely new set of processes. Today's processes are just an industry-standard best-in-class approach to doing the same thing.
I'll take Webpack to compile a site and terraform/compose/puppet to deploy the infrastructure over old way of doing things every time.
I don't think an scp ever went wrong for me. But maybe your LAN is different.
When I started doing web things rcp (or scp, or rsync, etc) access was quite unusual. You had FTP and that was it. It failed a lot.
Does webpack handle db migrations too? I didn't know. I was talking about js/html/css assets. Not the backend.
Fair enough. I don't really see the front end asset pipeline as being separate to the backend. It's all just stuff you build and deploy.
That's the nice thing about mod_pagespeed: you don't compile anything.
Well, you do, but you're happy just accepting that the mod_pagespeed defaults are doing a great job (which they are). I want a bit more control over things, and I want to do things that mod_pagespeed can't do, which is why I use Webpack instead. A trivial example: mod_pagespeed doesn't do subresource integrity hashes. Webpack has webpack-subresource-integrity as a plugin that can automatically write hashes to your html. That's useful to me. It's good that there are different options for different things, and better that the industry is coalescing around a small number of different tools so there aren't too many different ways of doing things.
What's wrong with scripting everything? I like automation.
Nothing at all. Automate all the things! My point is that back in the olden days of making web things we didn't automate things, so it was much, much harder. Today's tools (eg things like Webpack) make automating your build process pretty trivial. It's far better now.
There was also a risk, with things like mod_php and the like, that a user could hit your app in the middle of an upgrade. Commonly in those days (for many, not all), the app was tied rather tightly to the webserver. That is no longer the norm, it's more typical to reverse proxy to an app server or something like that.
And sure, staging then copying everything would be faster and safer than deploying straight on top of the old version, but that's still a not-insignificant amount of time when the app is in a transient, partially-upgraded state. And that would only get worse as your app grew.
I suppose the way around that would be to deploy twice and use a load balancer that you could temporarily disable: upgrade one deployment, point the load balancer only there, upgrade the other one, then re-enable load balancing between them.
Man, that's a lot of work, though.
I'll agree that maybe it's a relatively minor complaint, because there are likely plenty of solutions I just never needed to find, but I really do like avoiding anything that can and should be avoided. I'd rather have a "Please wait while we upgrade" page than risking something going south due to some kind of race condition.
The r-services were used a lot on and between Unices.
FTP did fail a lot indeed. That is why the scene (FTP / FXP) used .rar and .sfv, and Usenet used .rar and .par(2). Download clients had resume support but did not attempt verification. That's why checking checksums is important (.sfv is just CRC32 and Par is Reed-Solomon; other examples are MD5 and SHA1).
Later on the scene used so-called auto traders, which utilized FXP and automatically verified checksums while anticipating errors (such as dupes). The one I knew was called Preee (Windows only, worked in Wine). Later protocols like BitTorrent implemented checksum verification within the protocol itself.
> it minifies your javascript and html, optimizes images, and more or less does what webpack does (minus the typescript -> js compilation).
The main point of webpack isn't minification and so on, it's wrangling dependencies. It lets you define a JS module, which can declare dependencies on other modules or assets, and so on recursively, and webpack parses everything out and bundles the necessary pieces together such that they can all see each other.
If your JS isn't in modules, but just files that can be uploaded and minified, why were you using webpack in the first place?
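A minimal sketch of the dependency wrangling described above. The module names and the toy `load` resolver here are purely illustrative; webpack resolves this graph statically at build time and emits a single bundle containing every reachable module, rather than resolving at runtime like this stub does.

```javascript
// A toy "bundle": each entry maps a module id to a factory, the way
// webpack's generated output wraps each source file.
const modules = {
  // "./math.js" — a leaf module with no dependencies
  "./math.js": () => ({ square: (n) => n * n }),
  // "./app.js" — the entry point, declaring a dependency on ./math.js
  "./app.js": (require) => {
    const { square } = require("./math.js");
    return { main: () => square(7) };
  },
};

// Toy resolver standing in for webpack's runtime: look up a module id
// and instantiate it, handing it the resolver so it can pull in its
// own dependencies recursively.
function load(id) {
  return modules[id](load);
}

console.log(load("./app.js").main()); // → 49
```

The point is that the author only writes `import`/`require` statements; webpack discovers the whole graph from the entry point and bundles exactly the pieces that are reachable.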
I do; then I worked on single page apps with dozens if not hundreds of screens, dozens of developers, and tens of thousands of lines of code; modern JavaScript standards and new transpile-to-JavaScript languages that all had to work down to IE 9; front-end application frameworks and 3rd party libraries that helped us structure the application and not reinvent the wheel, without having to manually sift through all the files at every release (thrice or more a day) to filter out what we needed.
Uploading some files is fine for simple websites with no availability guarantees or security requirements. Mod_pagespeed is fine for most websites too, I should install it for my wordpress instance sometime. It's not effective for single page apps though.
Yes, it makes me laugh how the complexity of current build processes is actually making development harder. With that said, I've come to a pretty decent setup that's pretty much upload-and-reload style.
- parcel bundler (real fast; cached compilation is under 1 second): bundles all my TypeScript into one JS file that works in the browser, and my Stylus into one CSS file, along with dependencies such as jQuery. I no longer have to place script tags for those external dependencies; they're bundled, together with their own dependencies, from the npm-installed copies.
- ts-node-dev: reloads the node server automatically in about a second as I update my server-side TypeScript, and it doesn't even require a compilation step to spit out .js files.
- monit: monitors those 2 processes to make sure they're alive and auto-restarted in case a process quits for some reason.
The point is, setting them up is very easy. I don't even have a config file for the former 2, and an easily readable one for monit.
I don't like client-side transpilation and bundling, as it requires every machine, even ones under a different OS, to be set up exactly the same.
Now I no longer do any manual builds or wait on each incremental change, but the "good old days" are back with the added benefit of using TypeScript and the npm ecosystem in the browser with no effort. I'm impressed.
> Do you remember the time when you just copied your file over to the server, and magically everything was deployed? I do. It was a good time. Yes, I am old.
Yep.
That in a sense is webpack's job: it allows you to boil down a huge list of interconnected, complex dependencies into a neat, portable, highly deployable package.
I'm still doing it that way for a big 150kloc SPA. It's the way ExtJS3 apps were put together and it works fine. Everything is neatly namespaced in a global namespace and always accessible. At some point we even stopped minifying JS code, because it's gzipped/cached most of the time anyway.
The build tool - with this way of organizing things - can be `cat`, a simple 20-line PHP script to bundle icons together with `montage` that comes with imagemagick, and a direct invocation of `sassc`. Build times are 100ms. Simplicity is priceless. There's never any migration, no new features, no deprecations, no maintenance issues, no instability, because everything is so stupidly basic. You can jump 10 years forward, and the app will still be buildable from git with only the tools available in your OS distro.
You're missing out on lots of features like easily managing many entry points, live reload / hot reload, code splitting, async chunk loading, tree shaking, proper namespacing, transpilers so you can write ES8 but support IE, and tons of other great features.
I only miss not being able to use some ES6 features, because of IE11 requirement. We can use closure compiler to get ES6 syntax support at this point, because browsers usable for development support ES6+, so it wouldn't need to be run during development, only for deploy. Webpack is not necessary for that though.
Live reload with ExtJS app would be a nightmare even with webpack. On-demand code loading optimizations are not necessary. Tree shaking would not help much at this point, because I've removed unused components manually. Proper namespacing (if you mean ES6 imports) would not help much. The app is self-contained and uses only a few external dependencies.
Ok, so it doesn't lazy load CSS and images, just js?
If so, what does this improve? Time to first paint? Time to first interaction? Time to DOM ready? I looked at their page, and it doesn't seem to answer questions like this simply. It just says when a user requires the code it loads it.... huh?
How much code are people putting into their apps that this would ever be needed? (serious question)
When I do site analyzing, 99/100 the images/css are the biggest contributors to site size, have JS libraries gone nuts?
You can use it to lazy load CSS/images/anything you have a webpack loader for. Some of the loaders, like CSS, can't help but be side-effect-ful, so lazy loading them isn't always the best plan, but it certainly is possible.
> If so, what does this improve? Time to first paint? Time to first interaction? Time to DOM ready?
Could be any or all of those, yes, depending on architecture. Webpack bundles CSS, images, etc into combined JS files ("chunks"), tries to get the important hot path stuff into the first chunk, and then load other chunks as needed. With dynamic loading you can further make sure that additional chunks load out of the hot path, after first paint/first interaction/DOM ready/whatever else.
I am in a couple threads discussing this, so I apologize if this is doubling things up.
To be redundant (for absolute clarity) you know how to use Webpack, you do use it, it offers value to you, but not for building websites? Can I assume it only has value (for you) with building complex webapps? (can I also assume this is for Node.js?)
At this point I've used it for web pages, web apps, Electron builds, and haven't needed to but could have also used it for Node.JS server apps as well. Webpack is certainly flexible enough for all of the above.
Electron embeds Node.js, of course, but from a platform standpoint feels sufficiently different from a Node.js server application (the more traditional use of Node.js) or console application (a growing usage of Node.js) that it can be useful to make a distinction. Especially, because the bulk of an Electron app in my experience is running in the Chromium web renderer process(es) ("client side") than in a more traditional Node.js process.
One technique I used to build an electron app was developing in the browser, and then using a webpack plugin to copy the generated bundles from my web app into a shell app & trigger the electron build. In this way I wrote one version of the code which ran in both browser & electron app. I have used the same technique for other platforms like Cordova
> Did you also have to tell your users to press Ctrl+F5 to get the updated assets?
Actually I did!
What I do now is this: there is a websocket connection always open with the connected browsers, and when there are some assets that reallyreallyreally should be refreshed, I send a command through the websocket and the browser (after a few client-side checks, like making sure the user is not in the middle of filling out a form) executes `window.location.reload(true);`
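A rough sketch of what that client-side handler could look like. The socket URL, the "force-reload" message format, and the `userIsEditingForm` check are all assumptions; the reload action is injected so the decision logic can run outside a browser.

```javascript
// Build a websocket message handler that reloads the page on a
// server-sent command, unless the user is mid-form.
function makeReloadHandler({ userIsEditingForm, reload }) {
  return function onMessage(event) {
    if (event.data !== "force-reload") return false; // ignore other messages
    if (userIsEditingForm()) return false;           // don't interrupt the user
    reload();                                        // in the browser: window.location.reload(true)
    return true;
  };
}

// Hypothetical browser wiring:
// const ws = new WebSocket("wss://example.com/refresh");
// ws.onmessage = makeReloadHandler({
//   userIsEditingForm: () => document.activeElement.tagName === "INPUT",
//   reload: () => window.location.reload(true),
// });
```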
I think Webpack is actually slowly moving into that direction.
For example, with version 4 there are opinionated defaults and for some cases all you need to do is `webpack`, you don't even have to have webpack configuration file if you stick to the defaults.
Now the other problem, however, is that there are no defaults on the web: one person uses React with Less, another Angular with Sass, etc.
For anyone else who was confused by the minimal explanation of the "sideEffects: false" feature, apparently this targets cases where (1) module A exports some code, (2) module B imports from A and exports end points, some of which depend on A, and (3) module C imports code from B that doesn't reference code from A.
In this situation by default webpack assumes that the code from A might have side effects inside B, but if A declares "sideEffects: false" in its package.json, the tree-shaker will assume it's safe to omit module A from the final bundle.
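That A/B/C situation can be sketched as follows (the package and module names are hypothetical; in a real setup each comment header would be its own file):

```javascript
// module-a/index.js — its package.json declares:
//   { "name": "module-a", "sideEffects": false }
export const helperFromA = () => "from A";

// module-b/index.js — re-exports A's helper next to its own code
export { helperFromA } from "module-a";
export const usedThing = () => "from B";

// app.js (module C) — only touches B's own export
import { usedThing } from "module-b";

// With "sideEffects": false declared by module-a, the tree-shaker may
// drop module-a from the final bundle entirely, since nothing reachable
// uses it. Without the flag, webpack has to keep it, in case merely
// importing it did something observable (polyfills, global CSS,
// monkey-patching).
```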
Incidentally the end user can also declare in their config that module B should use sideEffects:false, even if the module doesn't declare it. Details here: https://github.com/webpack/webpack/issues/6065
Sure, but if you author a package and put sideEffects: false in your package.json without actually following through on that in your code, that's a bug in your library. The easiest fix is a quick PR to remove sideEffects: false from package.json, with a stopgap that if I find a library with that sort of bug I can tell webpack sideEffects: true to override the behavior until the bug is fixed.
One of the things I absolutely despise about the JS community is this obsession with build tools. JS is meant to be a runtime language, and the amount of transpilers, minifiers, uglifiers, obfusicationifiers is absolutely endemic to the messy state of change the language is undergoing.
However with this release I'm excited to start deleting a bunch of ridiculous build code to make Webpack work. Really a build tool is a tool, it should never be in the way of developing features. I haven't enjoyed using webpack until now. No sensible defaults, poor documentation, and no standard conventions. This release changes all of those complaints. A boost for productivity is a win in my book.
Not all web developers build complex web apps out of sport; it is demanded of us to deliver that. JS as a runtime language initially had poor support for managing that complexity. We are lucky that some well-meaning folks decided to offer tools to help us with that, and the ecosystem has been maturing.
Probably the reason you perceive this obsession with build tools is that incrementally each one had the goal to tackle a set of problems and it did, but then new pain points appeared and other tools were developed to help with that.
That being said, we seem to have reached a rather stable period, without major ruptures in the landscape for a couple of years. Maybe the growing pains are subsiding.
Funny how you rarely see the same complaint about C++. You've got CMake, Make, Conan, vcpkg, and a bunch of other build tools that all require a hugely complicated setup process to get right. None of these ship with sensible defaults out of the box, and the documentation for them (especially CMake) is absolutely horrible, worse than any JS build tool.
Transpilers are nasty alright, but I like being able to write in TypeScript and LESS, and to have my src split into multiple files but my distribution as a single JS file.
>One of the things I absolutely despise about the JS community is this obsession with build tools. JS is meant to be a runtime language, and the amount of transpilers, minifiers, uglifiers, obfusicationifiers is absolutely endemic to the messy state of change the language is undergoing.
All of it is ultimately to turn JS into a good language. Considering the state where JS started, it's been a heroic amount of effort.
- Lack of native module support.
- Lack of standard libraries.
- Need to run in the most hostile environments, AKA web browsers, and on Windows/Linux/macOS as Node.js.
Webpack, minifiers, transpilers etc. are working (although not pretty) solutions to the aforementioned problems.
As a web dev I am extremely happy Webpack team changed their stance from "webpack config is the way it is because everyone's needs are different and also because that's just how Javascript works" to "yes, we can provide sane defaults out of the box".
Aaand that their communication has improved a thousand-fold.
Upgraded from v2 to v3 just fine; the only issues I've had are with developers using esoteric plugins, which isn't a fault I can attribute to webpack itself.
Two years ago I would have invested some effort to read this thoroughly and try to stay on top of the API changes. I'm sure it's helpful to a lot of folks.
For sure I do care about my build toolchain. The nice thing about being a React developer today is that, thanks to create-react-app - more specifically react-scripts [1] - I'm thankful I get to choose not to worry about it. Just like I didn't have to worry about 2.0 -> 3.0 last time.
Cheers to the CRA maintainers. Thanks for giving me some time back!
If your existing large app is a React app, the naive thing to try would be migrating the src/ folder into a new CRA app and see what happens.
If you get a lot of runtime errors, you'll have an easier time migrating to CRA if you eject a new CRA app and gently/iteratively adjust the webpack config on your existing project to match CRA's.
Is there really any point to webpack now that we have http2? I'm looking for a way to code split all the things and webpack is making that difficult. I want cache performance over download speed now that I've gone mobile only on data.
There is still a protocol overhead for each request compared to a single concatenated file.
The compression of the single large file is better than many small files.
Servers are slower serving many small files than a single large file.
I'm really less interested in speed though. I want to optimize for file transfer bandwidth. Going all mobile data for a week has shown me two problems. JS has severely bloated websites, and staying up to date with software patches is expensive. In both cases, the amount of new material being transferred is very low. A point update on a JS library or patch is a few lines change, but we must download the entire library again. Compounding this for JS is every site using the same libs are sending the same data, because of browser origin policies. One way around that is CDNs, but then you are submitting to a centralized point of failure/spying.
Sending the whole file again is the easiest route for developers, but the most expensive for mobile only cord cutters.
Webpack does that with chunk hash. Common chunks go into common.js, and my app goes into app.js, if I change my app's JS, only app.js gets downloaded which is probably 50kb whereas common.js is maybe 300kb.
The point is there is a tradeoff between speed & efficiency. If every line of code were its own file, you'd have a high amount of efficiency when only 1 line changes, but loading that would be a nightmare on mobile.
I asked this same question and got downvoted for it. After probing some more, though, I found out it's only of any use for people that have many MBs of JS files. Which I can only assume comes from piles of npm dependencies getting sucked into their projects.
Also, it's not for websites with some interactivity, it's for big giant web apps... apparently.
Yes, sorry, I cannot find the link, but they are going to get rid of it from what I read. There is no more tightly integrated solution in the pipeline; you'll have to roll your own with Express or something (or, more likely, a third party will come up with a solution that will become the norm).
A few years ago I was asked by a frontend team to investigate their Jenkins build. What used to take 90 seconds went to 15 minutes and it only started happening when the team switched to Webpack. I recall Webpack being hideously complicated with a lot of the documentation either missing or just plain wrong - no wonder they managed to make a mess of things so easily. Has any of that changed?
The documentation got substantially better in 2.0 (imo), and simplifying the configuration overall was a major initiative for 4.0. You can now start with zero config if you want to follow their defaults of putting your source in /src and built output in /dist, and they also added production/development "modes" that enable a bunch of sane defaults for each. You can still get into really complicated configuration cases depending on your project but it's definitely been on an upwards trajectory in that regard.
Webpack is fast. Tools like babel which webpack has to call into are slow.
Using webpack+babel will be faster than gulp/grunt+babel since webpack can track which file(s) changed & rebuild only those files. Even ignoring that, it's still faster because you don't have to kill it & cold restart it when switching branches or adding new files, like with a gulp build which typically globs for files only at startup.
I don't use Webpack, would it help with regular website dev? Or is this just for web apps?
Also, does it really make things faster? What about HTTP2/SPDY, do we really care about making everything one file anymore? This seems like a paradigm that is no longer valid (like using tables instead of grid for website layout).
You pretty much will always need a bundler, unless you write everything from scratch, and don't support legacy browsers. Webpack actually does do things like code splitting for you, so no, not everything goes into one file.
Can you explain if Webpack would help with Wordpress sites? Or is Webpack really only for webapps?
What is the benefit gained from "code splitting"? Would it make a blog run faster? What if the largest amount of JS on your site is 5k? Why would there be a need to code split this?
Webpack could help with Wordpress sites. Whether or not it is worth the effort is another question. Most WP sites might not benefit from the effort.
Also, yes, HTTP2 throws in additional considerations, though the surprise for me is that I've found even with direct filesystem access, like in an Electron app, with no "connection overhead" like HTTP1.1, webpack bundles can be quite effective for performance gains. Especially with modern ES Modules, it can still be extremely effective to load a sequence of larger tree-shaken JS files all at once rather than a large number of tiny files.
I'm not the parent, but you don't need webpack for something like a WP blog.
Code splitting is great on bigger webapps such as GMail or Facebook. For instance, when you want to see your contacts on gmail, it can load the required JS file of that module on demand. Same thing on FB: the chat widget can be loaded separately, and only if needed.
Thank you, this explains some basic things that seem to be assumed by so many web devs/blogs/etc... :P
A while back I went through the process of evaluating a lot of build tools, and I simply couldn't see any benefits of replacing my bash scripts with these tools, and you cleared it up nicely.
Maybe there are some legitimate cases in which you would prefer Webpack, but I really don't understand why EVERYBODY has migrated to Webpack, with all its bloat and breaking changes and build errors and boilerplate.
After reading the title this was exactly my first thought.
Polluting the titles with emojis feels unnecessary and childish to me. I find them a mental burden that makes it difficult to read the important information.
I cannot imagine a new car announcement page filled with emojis, and I don't know why it should be different for software.
With emojis in the headlines, I look at a tiny picture for 0.2s (usually related to the headline) and then read the section as normal. How can that be a mental burden?
Indeed, it does also feel like the JavaScript community, or rather JavaScript-related projects, are more attracted to emojifying blog posts, release notifications, etc. While perhaps a bit informal, it's probably not the end of the world; it is, however, perhaps indicative of a combination of the developers' excitement and a slight lack of maturity.
I totally agree. At the end of the day, people shouldn't kid themselves in this industry. The whole thing is an evolving hack fest which changes each week. We are far closer to script kiddies on the web than we are to software professionals, but that hurts some people's feelings.
Something that is also worrying is this culture of competition: showing the "best" contributors right at the start of the post, as if lines of code were a measure of quality development.
I remember some tools that I appreciate that credit all contributors in a non-hierarchical way: even if you contributed something minor, you are listed alongside the "main" contributors.
This isn't our intention; rather, it's to recognize those who helped make webpack what it is today. I wouldn't have been able to fit everyone in a single image otherwise :)
Something of an off-topic conversation but I think it's quite interesting how modern language is evolving to include pictograms that are common between different languages, and not childish at all. Why must it be a problem?
We just find that it reaches people better!!! Sometimes in the JavaScript ecosystem it takes a lot for a 6-year-old tool to stay on top of people's focus and attention. But it got yours, right?
I think generally only the creators of something get to use the term 'shameless promotion' :)
We (the Rollup team) would never criticise webpack in a forum like this in order to promote our own alternative (except maybe on Twitter, because trolling Sean is just too tempting) — it's an outstanding project, and we all talk to each other and learn from each other. Rising tide lifts all boats, etc
Isn't Rollup used mainly for libraries? The big selling point of Webpack is the plugin system to transform and store non-JS files in your bundle, which Rollup doesn't have, as far as I can tell.
Rollup has a powerful plugin interface that lets you transform and load files of any kind — see here for a (possibly non-exhaustive) list: https://github.com/rollup/rollup/wiki/Plugins
Historically Rollup has indeed been more widely adopted among library authors (React, Vue, D3, Three.js, Leaflet, Moment, etc), though there's no reason it can't be used for apps — lots of us are doing just that.
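For illustration, a minimal Rollup config using one such plugin (the plugin and file names are assumptions for the sketch, not from the thread) — each plugin teaches Rollup to load or transform a non-JS file type:

```javascript
// rollup.config.js — sketch only. rollup-plugin-json is one example of
// the plugin interface: it transforms .json files into importable
// ES modules, so `import data from './data.json'` works in the bundle.
import json from 'rollup-plugin-json';

export default {
  input: 'src/main.js', // hypothetical entry point
  output: {
    file: 'dist/bundle.js',
    format: 'esm',
  },
  plugins: [json()],
};
```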
I actually must use Parcel these days to avoid being left out. Joke aside, Parcel is extremely easy to use: I spent 10 minutes reading the instructions and one article about how to use it, and I have been using it ever since.
In semver, every major version is a tragedy.
[1] https://webpack.github.io/docs/using-plugins