Game developers know how to make smooth and performant UI, to say nothing of the rest of what goes into writing a game engine, particularly a fast GPU-accelerated engine. I’m starting to think it’s primarily a cultural thing, where it’s just become acceptable in the web dev and Electron app world to ship sluggish, resource-intensive apps. I also feel like more corners are cut and performance issues swept under the rug when devs are not staring down the barrel of the hardware on a daily basis.
I used to write 4K demos and the like in assembly, and I wrote a 3D engine in the era where you still thought hard about whether to make something a function call or not because... you know... those fractions of a microsecond add up, and next thing you know you've blown your 16.6ms frame time budget!
These days I see people casually adding network hops to web applications like it's nothing. These actually take multiple milliseconds in common scenarios such as cloud hosting on a PaaS. (I measured. Have you?)
At that point it's not even relevant how fast your CPUs are, you're blowing your "time budget" in just a handful of remote function calls.
If you stop and think about it, the "modern" default protocol stack for a simple function call consists of:
- Creating an object graph scattered randomly on the heap
- Serialising it with dynamic reflection
...to a *text* format!
...written into a dynamically resizing buffer
- Gzip compressing it to another resizing buffer
- Encrypting it to stop the spies in the data centre
- Buffering
- Kernel transition
- Buffering again in the NIC
- Router(s)
- Firewall(s)
- Load balancer
and then the reverse of the above for the data to be received!
...and then the whole stack again, forwards and backwards, for the response.
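To put rough numbers on the cheap end of that list, here's a minimal sketch (Node.js/TypeScript; the payload shape is invented for illustration) that times a plain in-process call against just the CPU-side portion of the stack: serialise to JSON text, gzip, gunzip, parse. Everything else in the list (TLS, kernel transitions, the NIC, routers, firewalls, load balancer, and the return trip) only adds to whatever this prints.

```typescript
import { gzipSync, gunzipSync } from "node:zlib";
import { performance } from "node:perf_hooks";

// Invented payload: some object graph scattered on the heap.
const payload = { user: { id: 42, name: "Ada" }, items: [...Array(10_000).keys()] };

function directCall(p: typeof payload): number {
  return p.items.length; // the actual "function" being called
}

function viaDefaultStack(p: typeof payload): number {
  const wire = gzipSync(Buffer.from(JSON.stringify(p)));   // sender: serialise to text, compress
  const decoded = JSON.parse(gunzipSync(wire).toString()); // receiver: decompress, parse
  return decoded.items.length;
}

for (const [name, fn] of [["direct", directCall], ["default stack", viaDefaultStack]] as const) {
  const t0 = performance.now();
  for (let i = 0; i < 100; i++) fn(payload);
  console.log(`${name}: ${((performance.now() - t0) / 100).toFixed(3)} ms per call`);
}
```

The point isn't the exact numbers (they'll vary by machine), just that every remote call pays this serialisation toll twice, once in each direction, before you even add the wire time and all the middleboxes from the list above.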
You're missing the point. You're talking about the fast part, which in any well-optimized application is never going to be slow enough to matter. The problems start when you sprinkle 0.5MB libraries all over your code base and you start doing an excessive number of HTTP calls.
What you are doing is like a machinist complaining about a carpenter not measuring everything in thousandths of an inch or micrometers. The reality is that wood is soft and can shrink or grow. It's maybe not the best material, but it's good enough for the job and cheap enough that you can actually afford it.
The problem with this analogy is that it makes sense to work with lower-quality materials in real life, because the cost savings scale with the number of units you produce.
With web content it’s the exact opposite. Every time you are a bit lazy, and add another mushy, poorly optimized dependency, the cost is paid by every one of your users.
The better analogy is that the web is like an assembly line that serves content. Do you want wooden equipment with poor tolerances making up that assembly line which takes twice as long and occasionally dumps parts on the ground, or do you want a well-optimized system working at peak efficiency?
A lot of the problems with web development have nothing to do with time to market. There's no technical reason you could not have a toolset which is just as easy to use, but far more performant.
That would explain why they continue to be used after initial adoption. It doesn't explain why they were initially chosen if there were better options using something that already existed.
History and inertia also are nearly synonymous with "easier to use" in this context.
You're pointing the blame at a source of EVEN WORSE performance issues, but it doesn't remove the slowdown described.
Plain HTML renders several orders of magnitude faster than post-load JS rendering, and yes, it is noticeable, especially if you account for variable connection speeds.
Most web devs develop on localhost and test on some of the best connections you can get today, leaving network performance testing as an afterthought at best... and it shows.
> Plain HTML renders several orders of magnitude faster than post-load JS rendering
Well, "several orders of magnitude" is a bit much, but the point stands.
However, that's only during the initial load. After that, JS can just keep modifying the DOM based on the data retrieved from the API, and never download HTML and construct a new DOM again. If done properly (and that's a big if!), and where appropriate, this can be much faster.
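As a hedged sketch of what "done properly" can look like (the endpoint and element id are made up): fetch a small JSON payload and patch the DOM you already have, instead of pulling down and re-parsing a whole new page.

```typescript
// Hypothetical endpoint and element id, purely for illustration.
interface Item { id: number; label: string }

async function refreshList(): Promise<void> {
  const res = await fetch("/api/items");   // small JSON payload, not a full page
  const items: Item[] = await res.json();

  const list = document.querySelector<HTMLUListElement>("#items");
  if (!list) return;

  // Reuse the existing DOM subtree instead of constructing a new document.
  list.replaceChildren(
    ...items.map((it) => {
      const li = document.createElement("li");
      li.textContent = it.label;
      return li;
    })
  );
}
```

The big "if" is doing this without pulling in half a megabyte of framework to manage it.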
> Most web devs develop on localhost and test on some of the best connections you can get today, leaving network performance testing as an afterthought at best... and it shows.
Very true! And on beefier CPUs/GPUs, with more RAM, faster storage, etc.
For the last couple of years, I've been careful to develop on "midrange" hardware, exactly so I can spot performance problems earlier.
And not only is the stack you describe full of delays, several of the layers are outside of the control of the software in question and can just… fail! Sure, there are cases where I need my software to communicate with the outside world, but I get furious when some page with text on it dies because somewhere in a datacenter some NIC failed and thus the shitty webapp I was viewing fell over.
Developers use what is available off the shelf. If there is no easy and straightforward way to send data with client code over the wire, they will send “async function onload() { unjson(await xhr(endpoint, tojson(data))) }”. Blame should go to stupid runtimes, not developers.
You were motivated by submitting a cool demo, they are motivated by not being fired after deadlines. An additional network hop is nothing compared to not shipping.
Or there's nobody to blame and we're stuck in a very shitty local maximum. Developers want to deploy to every device on the globe instantaneously, users want to get their software without having to fight with the IT department, and while everybody was looking at the JVM as the runtime to beat, the browser was picking up features like some demented katamari.
When I look at the massive backlog of requests from my users, not a single one is "speed."
I was referring to API calls between server components of what is essentially a monolithic application.
I've recently come across several such applications that were "split up" for no good reason. Just because it's the current fad to do the microservices thing. Someone liked that fad and decided that over-architecting everything is going to keep them employed.
To clarify: This was strictly worse in every possible way. No shortcuts were taken. No time was saved. Significant time and effort was invested into making the final product much worse.
Not any more; these days I do various kinds of systems integration work, and I still dabble in development, but mostly with high-level languages like C#.
It just grinds my gears that we have all these wonderfully fast computers and we're just throwing the performance away.
My analogy to customers where I consult is this: What you're doing is like buying a dozen sticks of RAM, and then throwing ten of them into the trash. It's like pouring superglue into all but a couple of the switch ports. It's like buying a 64-core CPU and disabling 63 of those cores. It's like putting some of the servers on the Moon instead of next to each other in the same rack.
Said like that, modern development practices and infrastructure architectures suddenly sound as insane as they truly are.
I totally agree. I think about it like, you spend $3000 on a computer. $100 goes into actually doing your computing. The rest is thrown away by lazy programmers who can’t be bothered to learn how a profiler works. Most software is written the same way a lazy college student treats their dorm room - all available resources (surfaces) are filled before anything gets cleaned up. Getting a bigger room provides temporary relief before they just make more mess to fill the space.
To be fair, profiling is way more difficult than it was in the days of single-core local applications. A single-threaded single-machine application means you can get a very clear and simple tree-chart of where your program's time is spent, and the places to optimize are dead obvious.
Even if you're using async/await and are dutifully releasing the thread while you await the response, the end-user experience of that time is the same - they don't give a crap that you're being thoughtful to the processor if it's still 0.5s of file IO before they can do anything. But now the profiler is lying to you, saying "nope, the processor isn't spending any time in that wait, your program is fast!"
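A small sketch of that effect (Node.js; the 500 ms sleep just stands in for file or network IO): the CPU numbers a profiler cares about stay near zero while the user still eats the full wall-clock delay.

```typescript
import { setTimeout as sleep } from "node:timers/promises";

async function loadFile(): Promise<void> {
  await sleep(500); // stand-in for ~0.5 s of file or network IO
}

async function main(): Promise<void> {
  const wall0 = performance.now();
  const cpu0 = process.cpuUsage();

  await loadFile();

  const cpu = process.cpuUsage(cpu0);
  const wallMs = performance.now() - wall0;
  const cpuMs = (cpu.user + cpu.system) / 1000;

  // Roughly 500 ms of wall clock, a fraction of a millisecond of CPU:
  // exactly the kind of time a CPU profiler will happily report as "not a problem".
  console.log(`wall clock: ${wallMs.toFixed(0)} ms, CPU: ${cpuMs.toFixed(2)} ms`);
}

main();
```

Which is why, for user-facing latency, wall-clock measurements around whole operations tell you more than a CPU profile alone.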
From MCU programmers I know that you can make even a microcontroller run rings around a Xeon, if you know how to squeeze out every cycle of performance and exploit tasks that are particularly hard for a big CPU to optimise.
Write a riddle for the CPU: a 100% cache miss rate, a confused prefetcher clogging the memory bus, and enforced synchronous memory access. Such a thing is very likely to run at literally MCU speeds on an x86 PC CPU.
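A hedged sketch of that "riddle", runnable in Node.js (sizes picked arbitrarily): walk the same array once in order, where the prefetcher can stream it, and once as a dependent random chase, where nearly every load misses cache and each load must finish before the next can start.

```typescript
// ~16M entries (~64 MB), comfortably bigger than any cache level.
const N = 1 << 24;
const next = new Uint32Array(N);

// Build a single random cycle so every load in the chase depends on the previous one.
const perm = Array.from({ length: N }, (_, i) => i);
for (let i = N - 1; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [perm[i], perm[j]] = [perm[j], perm[i]];
}
for (let i = 0; i < N; i++) next[perm[i]] = perm[(i + 1) % N];

function walk(kind: "sequential" | "chase"): void {
  let idx = 0;
  let sum = 0;
  const t0 = performance.now();
  for (let i = 0; i < N; i++) {
    idx = kind === "sequential" ? next[i] : next[idx]; // same loads, different access pattern
    sum += idx;
  }
  console.log(`${kind}: ${(performance.now() - t0).toFixed(1)} ms (checksum ${sum % 997})`);
}

walk("sequential");
walk("chase");
```

The exact ratio depends on the CPU and the JIT, but the dependent chase is typically many times slower even though both loops execute the same number of loads.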
Well, yes and no. Ideally you are not “throwing” that RAM away; you are paying for more flexible software that can be more easily changed in the future, or to be able to pay much less for your developers, and often both.
Nobody wants slow software, it's just cheaper, in upfront and maintenance costs. Going with analogies, it's like a race car mechanic complaining that a car is using like 3 cylinders where it could have 8. Sure, but some people have other priorities, I guess.
> you are paying for more flexible software that can be more easily changed in the future
In theory yes; in practice this almost never happens. 95% of the teams just quickly mash the product together and peace out before anyone notices what a mess they made. And then you have some poor Indian / African / Eastern European team trying to untangle and improve it.
Seen it literally tens of times over the course of a 19-year career.
> Nobody wants slow software, it's just cheaper, in upfront and maintenance costs
That is true. But nowadays it's more like taking a loan from the bank and running away to an uninhabited island to avoid paying it off.
> In theory yes; in practice this almost never happens. 95% of the teams just quickly mash the product together and peace out before anyone notices what a mess they made.
Much of my work is in highly parallelized computing (think Spark across thousands of nodes) processing 10s or 100s of TiB at a time with declarative syntax. It's super cool. Until someone decides they're going to use this one line expression to process data because it's just so easy to write. But it turns out doing that absolutely destroys your performance because the query optimizer now has a black box in the middle of your job graph that it can't reason about.
Bad practices like that occur over and over again, and everyone just figures, "Well, we have a lot of hardware. If the job takes an extra half hour, NBD." Soon, you have scores of jobs that take eight hours to run and everyone starts to become a little uneasy because the infrastructure is starting to fail jobs on account of bad data skew and vertexes exceeding the predefined limits.
How did we get here? We severely over-optimized for engineer time to the detriment of CPU time. There is certainly a balance to strike. But when writing one line of code versus six (and I'm not being hyperbolic here) becomes preferable to really understanding what your system is doing, you reap what you sow.
On the plus side, I get to come in and make things run 5x, 10x, maybe even 20x faster with very little work. It sometimes feels magical, but it would be preferable if we had some appreciation for not letting our code slowly descend into gross inefficiency.
Maybe it didn't really come across, but I am totally in the performance camp, and I love being able to craft a beautiful, lean and responsive UI, if for nothing else than seeing the joy on users' faces when they are delighted (amazed!) that what they wanted done happened so fast.
But time and time again I see that projects with fast "enough" interfaces and flexible systems win out over more specialized, faster ones. And I hate that, but here we are. Sometimes we see a really performant piece of software hit the sweet spot of functionality for a while (for example Sublime Text), but then get overtaken by a fast-enough but more flexible alternative (VS Code).
As an Eastern European programmer, I agree. A lot of us are called in to fix messes left by prima donna devs (who are taking home $200K a year for the privilege of making other people's lives a living nightmare).
To be fair, most of those "prima donna devs", as you call them, would much prefer to write well-designed, cleanly coded programs, but are given completely unreasonable timeframes and staffing, told to create an MVP, and then made to turn it over to an offshore team.
Very few people enjoy producing junk, but management (and customers) often demand junk today rather than quality tomorrow.
>> most of those "prima donna devs", as you call them, would much prefer to write well-designed, cleanly coded programs
Most of them - yes.
But there's a non-negligible chunk of them who are too careless or incompetent to care about quality; they've been around long enough to gain knowledge about the project and get a Vice-President title (inflated ego included).
It is especially visible in big banks (and I suppose it's typical of other big non-tech corps as well), where the tech culture is generally on the poor side.
Obviously neither you nor I can generalize: both extremes exist.
Given the chance I'd likely collect a fat paycheck and bail out at the end of the contract as those other people did. But that attitude is responsible for the increasingly awful mess that modern software is becoming.
Almost everyone is at fault, me included. The perverted incentives of today's world are only making things worse.
Hah true dat. Been my life for the last couple of years :-D Managed to pull through a project that “failed” two times and was 2.5 years behind schedule...
Given the state and culture of web development, it's honestly a travesty that most software is consumed via the web currently.
I mean, the web stack itself was never designed per se. HTML is essentially a text annotation format which has been abused to support the needs of arbitrary layouts. The weakness of CSS is evident from how difficult it has been to properly center something within a container until relatively recently. And JavaScript was famously designed in about ten days.
And then in terms of deploying web content, you have this situation where you have multiple browsers which are moving targets, so you can't even really just target raw HTML+CSS+JS if you want to deploy something - you need a tool like webpack to take care of all the compatibility issues, and translate a tool which is actually usable like React into an artifact which will behave predictably across all environments. I don't blame web developers for abusing libraries, because it's almost impossible to strip it all down and work with the raw interfaces.
The whole thing is an enormous hack. If you view your job as a programmer as writing code to drive computer hardware - which is what the true reality of programming is - then web development is so far divorced from that. I think it's a huge problem.
What about those weirdos who deliberately choose to use the abomination that the web stack is for desktop apps? To me it feels like they're trying to write real GUI apps in Word macros. I don't think I'll ever understand why.
Or... and I know this is just crazy-talk... there is properly separating your platform-independent business logic from the minimal platform-specific UI layer. A lost art these days it seems.
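For what it's worth, here's a minimal sketch of that separation (all names invented): the core is plain logic with no UI or platform imports, and each platform only has to implement a tiny view interface.

```typescript
// The "minimal platform-specific UI layer": each platform implements this.
export interface InvoiceView {
  showTotal(amountCents: number): void;
  showError(message: string): void;
}

export interface InvoiceLine {
  qty: number;
  priceCents: number;
}

// Platform-independent business logic: no DOM, no AppKit, no Win32. Testable anywhere.
export function computeInvoiceTotal(lines: InvoiceLine[]): number {
  return lines.reduce((sum, l) => sum + l.qty * l.priceCents, 0);
}

export function presentInvoice(lines: InvoiceLine[], view: InvoiceView): void {
  if (lines.length === 0) {
    view.showError("Empty invoice");
    return;
  }
  view.showTotal(computeInvoiceTotal(lines));
}

// A web adapter would implement InvoiceView with DOM calls, a desktop one with
// its native toolkit; the core above never changes.
```

Whether the UI layer ends up being Electron, AppKit or Win32 then becomes a much smaller, swappable decision.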
If CorelDRAW were installed on every phone and given the same privileges, they'd use that. A new type of browser is like a social network: relatively easy to build one, insanely hard to get it adopted by everyone. The alternative is building for at least 4 different platforms, whose common denominator is usually either a non-barking dog or a vendor-locked monstrosity not even worth considering. And existing web browsers and committees are digging in their heels to defend the status quo.
I've met plenty of people that prefer to write GUIs in Excel macros. If all you know about is a hammer...
I only have a problem with the ones among those hammer-only people who are proud of not knowing anything else and proclaim everybody not using a hammer for everything to be stupid, because "look at all those perfected hammers we created! Your choice doesn't have such nice ones".
In some ways I can understand it, because if you want to deploy a GUI application which mostly consists of text and pictures across multiple platforms, this is probably the most viable option in a lot of cases. But the fact that this is the case is a failure of the market and the industry.
Yep. Native software development houses never invested enough in making a cross platform app toolkit as good as the web. There’s no technical reason why we don’t have something like electron, but lightweight and without javascript. But native-feeling cross platform UI is really hard (like $100M+ hard) and no individual company cares enough to make it happen. I’m sure it would be a great investment for the industry as a whole, but every actor with the resources is incentivised to solve their problems using different approaches. It’s pretty disappointing.
I don't think it's at all possible to make cross-platform GUIs that feel native. It's of course fine to share the core of your application across platforms, but you have to make the UI part separately for each platform for a truly nice result. There's no escaping that. And it's not like companies like Slack and Discord lack the resources to do so — they absolutely deliberately continue stubbornly ignoring the fact that, setting aside excessive resource usage, no one likes UIs that look and feel out of place in their OS. They totally have the resources necessary to rewrite their apps to use native UI toolkits on all supported systems.
I don't know the engineers in there, but I am willing to bet $100 that some of them really want to make native OS UIs. It's just that the business will never green-light that as a priority.
Although I'm not a huge fan of it, you could argue that Flutter is trying to solve this problem in some ways and has the right backing to be able to pull it off. It unfortunately doesn't feel native though (apart from on Android).
Qt still feels not quite right on macOS — because it draws the controls itself instead of using the native ones. wxWidgets is the best of the bunch, because it apparently does wrap AppKit into itself, but then again, the layouts apps use give away that it's a cross-platform thing.
> The weakness of CSS is evident by how difficult it has been to properly center something within a container until relatively recently … you can't even really just target raw HTML+CSS+JS if you want to deploy something - you need a tool like webpack
This stuff was fixed at least 5 years ago. If you can drop support for IE11 (released in 2013 and no longer supported by Office 365), you’ll find that framework-free web development has improved massively since React was first released. And if you keep it simple and rely on what browsers support natively, you can achieve great performance.
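A hedged sketch of what "framework-free" can look like these days (the element name is invented): custom elements are supported natively by every evergreen, post-IE11 browser, so a small component needs neither React nor webpack.

```typescript
// A framework-free component using only native browser APIs.
// (Escaping is omitted for brevity; don't interpolate untrusted input like this.)
class UserCard extends HTMLElement {
  static get observedAttributes(): string[] {
    return ["name"];
  }

  attributeChangedCallback(): void {
    this.render();
  }

  connectedCallback(): void {
    this.render();
  }

  private render(): void {
    const name = this.getAttribute("name") ?? "anonymous";
    this.innerHTML = `<strong>${name}</strong>`;
  }
}

customElements.define("user-card", UserCard);
// Usage in plain HTML: <user-card name="Ada"></user-card>
```

Whether that scales to a complex app is debatable, but for the text-and-pictures case it removes most of the tooling being complained about upthread.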
You'd be surprised how many games up until recently used Flash (Scaleform GFx), and now in some cases HTML5 (edit: Coherent GT/Hummingbird/Gameface) content for game UI.
Rendering hundreds or thousands of meshes and doing complicated 3D math for physics is no problem; UI is still extremely hard and complex, especially if you are supporting multiple arbitrary resolutions, for example.
Godot, for example, has a full UI toolkit built in (the Godot editor was made using Godot components). However, actually getting it to work the way you want is in most cases a horrendous struggle: a struggle with ratios, screen sizes, minimum and maximum UI control sizes, size/growth flags. Before it gets any more complicated, please just throw me a Tailwind flex/grid box model instead, because HTML/CSS has already solved these problems over and over.
I've started noticing a weird counter effect. If you make a web app that is snappy and responsive, people just assume your app is trivial. Users have effectively been trained into thinking things like list pagination are "difficult" operations.
Maybe that's like the tech equivalent of enjoying a loud vehicle because it sounds more powerful than a quieter one. (In reality, the quieter one is more efficient than the louder one.)
VS Code uses Electron and I can't say I've noticed any performance problems with it - indeed it is quite a bit faster for me than its native-code relative Visual Studio.
So responsive Electron apps are certainly possible.
I'm very interested in the general perception of VS Code being fast, because for me it's slow enough that it's the main reason I use other editors.
The examples that bother me revolve around opening files for the first time, and I suspect a lot of the slowness I perceive is because I open a lot of different projects and source code files when I'm working, and this is a bad use of VS Code.
In practice, VS Code behaves more like a multi-language IDE than a text editor. Slow startup times are generally acceptable in IDEs because you're exchanging speed for power. A programmer should ideally be proficient in both an IDE and a text editor, because they're tools applicable to different problems. E.g., VS Code is a terrible choice for things like analyzing log output, formatting large files, testing isolated snippets of code, or working on source code files that aren't part of the same project. I find this to be a shame because VS Code is flexible enough that it would otherwise be excellent for all of these tasks if it were just more performant for some operations that it struggles with now.
Out of interest, do you mean starting a new instance of VS Code for those things, or using an existing one?
I would agree that VS Code isn't the fastest thing when the editor is starting up, though I find it fine when started. I pretty much always have VS Code running so I don't find this a problem.
A lot of the overhead seems to come from making a new window (even though the app itself is already running), although notably most of the time spent in the Prettier example seems to be spent syntax highlighting the JavaScript. If you want to try a direct comparison of opening a file vs. a window, you can see the difference between opening a new file in an existing window (on Mac, `⌘N` / `File > New File`) or new window (on Mac, `⌥⌘N` / `File > New Window`). For me the latter is far slower than the former.
That being said, I immediately notice when switching from Sublime to VS Code. It’s something in the key presses...
I think it’s only noticeable if you’ve used a native application for a while. It’s not enough to go from VSC to Sublime and back to VSC again for five minutes. Make an effort to use a native app for a week or a month and then switch back.
I noticed this a bunch when I moved from emacs to Jupyter notebook.
Emacs will sometimes become slower (especially remote emacs), but it will always buffer your keypresses and do them in the correct order.
Jupyter (for whatever reason) doesn't do this, with the result that I'd want to create a new code block, but that keypress would get lost and I'd end up ruining my original code block.
I 100% noticed the difference, and it was super frustrating (fortunately I left that job, and have managed to avoid Jupyter in the new gig).
I am using Spacemacs and have spent days trying to make it work faster (I am on macOS). Took a while and some effort but with a few strange tweaks I managed to make it more responsive.
Emacs/Spacemacs can still be weirdly slow sometimes but UI responsiveness is generally miles ahead of all Electron-based software still.
Which makes it even funnier. Emacs is decades old and still uses quite a few ancient techniques that are only hampering it. Even with that, it's still so much better in terms of speed! Funny.
Atom (https://atom.io/) is another Electron-based text editor, released by GitHub (before it was acquired by Microsoft). I think it predated VSCode. It certainly had more mindshare in the early days. But whereas VSCode has always been quite snappy, Atom acquired a reputation for poor performance.
Yes, and no. They have a really interesting tale of convergent evolution.
Atom was the original Electron app (as pointed out Electron was even originally named "atom-shell"), so it predates VSCode as an Electron app. But the extremely performant "Monaco code editor" that VSCode was built on top of (that forms the heart of VSCode) was started at Microsoft years before to be a code editor in parts of the Azure Portal, and also it was the code editor in IE/Edge dev tools from as far back as IE 9 or 10 I think it was (up until the Chromium Edge). It wasn't packaged into an Electron app until after Atom, but it has an interesting heritage that predates Atom and was built for some of the same reasons that GitHub wanted to build Atom.
(ETA: Monaco's experience, especially in the IE dev tools with the wild west of minified JS dumps it had to work with from day one, is where a lot of its performance came from, and it's what let VSCode jump past Atom on performance right out of the gate.)
Given that GitHub Codespaces uses VSCode rather than Atom, that writing is definitely on the wall, it seems. (Arguably the feature was built for Azure and then rehomed to GitHub, where it seems to fit better, but it's still a stronger indicator brand-wise than most of the other comparative statistics in the Atom versus VSCode commit histories and GitHub/Microsoft employee contributions thereto, which also seem to indicate that Atom is in maintenance mode.)
Pretty much like that. I tried Atom once (when I found platform.io and wanted to have a look) and it was just wild how slow it felt. On the upside, it made using those crappy Eclipse forks MCU manufacturers release (like CCC, Dave, etc.) feel a lot less painful.
I feel like "fiasco" might be overstating it a little, but basically Atom is incredibly slow, and this is probably the main reason it never overtook Sublime and friends in the way that VS Code did.
Depends on which languages you work with. Many language servers are written in their own languages so it is possible to work with a lot of native code when using VS Code day to day even if most of VS Code itself isn't native code.
VS Code also used to have far more native code earlier on in its development life, but seems to be transitioning a lot of it to WASM (paralleling the Node ecosystem as a whole moving a lot of performance-heavy stuff from native NAPI plugins to WASM boxes; as one example, the major source maps support library moved from plain JS to Rust to WASM compiled from Rust, IIRC).