Huh. That's my transcribed/edited version of Prof Wirth's paper from my blog.
Thank you, @thunderbong. I just thought it might be more readable than the various PDFs floating about. I have had emails from blind readers of El Reg saying that some PDF versions are inaccessible.
Not my own writing, obviously. All I did was transcription.
Very few people ever even attempt to write "lean software". The definition is a little nebulous, but "software without bloat" is a good one. This means that you have few dependencies, and the ones you have themselves have few dependencies, for a start.

But right there you have the first problem: dependencies grow stronger with the number of users, and the number of users grows with applicability. Hence the rise of dependencies that no one uses more than 20% of, but everyone uses a different 20%. Even if you do tree-shaking or something similar, so that you at least don't pass on the complexity you're not using, it still doesn't feel lean, because you're cognitively bloated, so to speak.

Then, if you rewrite the library (because extracting your 20% is going to be very painful), you can make it simple and small, but no one is going to use it but you, and so it will remain weak, tightly coupled to the one project that uses it, because it doesn't offer the breadth of features people need.
The other reason software bloat persists is that knowing enough about the runtime of your software to even be offended by the presence of bloat is exceedingly rare. Repetitious and pointless tasks abound in modern software, where work is done at one layer and then thrown away and redone by another, repeated N times. But to even know about this requires that you understand your runtime and what it's doing in your case, and in our industry, ignorance is (economic) bliss, where "go along to get along" is richly rewarded and any aesthetic or moral objection to how things are done is harshly punished.
Strong and real social and economic forces drive bloat and punish the individual that would combat it. Which is why those who make the (at some level, doomed) attempt to fight bloat deserve at least our respect and some honor, since they will not only not receive thanks, they will be actively attacked by those who (correctly) perceive their work as a challenge to the status quo, and hence dangerous, unwanted, and a target.
It's more nuanced than that. A large share of software bloat is tied to awkward combinations of device convergence and modularity. I'll give an analogy with pens:
* Dip pens and brushes, the original inking tools, rely on the user having a pot of ink available and some fine-pointed object capable of holding the ink. It's very simple, but this two-piece design limits situational use and requires the tip to be scrubbed clean like used silverware. But you are relatively unlimited in what you put in the ink and in the style of the nib, making it one of the best artist's tools even today.
* Fountain pens innovated by creating a reservoir inside the pen, and a feed mechanism. This creates a modular pen body, allowing the user to customize a body with different inks, reservoir mechanisms (cartridge, piston, etc.) and nibs. However, the ink has to be liquid enough to gravity-feed, and cleaning the pen and fixing feed issues is a more involved process, making fountain pens reputed to be a temperamental platform with many tradeoffs for portability.
* Disposable ballpoints and markers devised a one-size-fits-all solution: make the refill the "whole pen", comprising tip and reservoir, and leave the body as just a shell. This has allowed the designs to become very intensively engineered, with customized gel ink formulas, rubberized marker tips, etc. However, most of these designs are not made for longevity in some dimension - gel refills dry up, oil-based refills fade, marker tips get smashed. As well, there is a limited range of line styles achievable from a ball or marker.
* Lastly, there is a recurring Kickstarter scam in which a "printer pen" is demonstrated, with selectable RGB color. Besides the fact that this product doesn't really exist, it would make the pen into something even more complex than a fountain pen.
So in solving technical problems, I think computing necessarily converges on answers similar to how pens are engineered: if you want it to be very simple and exacting, it's a dip pen, and you have to "plug it in" each time. If you want it to be easy to use, to last a while, and to get out of the way, you want a pack of Bic ballpoints. If you want some mix of those things, you end up in the space of markers or fountain pens.
But software lets us make the RGB pen, actually have it work, and have it be used by people. And that is a problem, because now our expectation is that nobody should have to use a dip pen ever again, when a large part of the population was never even exposed to them and lacks the agency to choose.
I write high-performance business/enterprise software every day and get rewarded for it. Keeping the architecture simple but fast by design makes everything so much easier and more productive.
I really wish developers would take responsibility for their own low performance code instead of blaming everybody/everything else.
We need something that dictates how commercial software can be distributed, installed, and what it can automate and do without your consent.
Software increasingly adds itself to "run at startup" automatically. Nobody needs to have a bunch of startup user interface apps. It is completely unnecessary and has little purpose other than data collection.
A work VPN I have to use requires a startup background service, even when I'm not using it; it does not need the service at boot to operate. The Steam service does this too, as do many other applications.
Microsoft will remove my group policy that disables their useless, performance-destroying antivirus - software that is among the top ten causes of Windows laptops being trash. FYI, AVs are laughably easy to bypass; ad blockers and browser settings are more effective.
On macOS I can just disable these easily. On Windows, I need Autoruns to keep them off, and then either the application breaks (Steam) or the entry turns itself back on.
"A plea for Windows' demise". Preferably the rest of Microsoft with it.
I've been thinking about this recently, and I wonder if part of this is that OS developers have stopped providing common services as part of the OS, and part of it is that many apps aren't coded to the native services the OS does provide in the first place.

Most apps that throw a startup process in the background are probably aiming for (in part) an auto-update service. Why don't OSes provide a service that an app can register an update-check URL with? Yes, the App Store does this for apps installed there, but outside the App Store there's no way to do that, and to my knowledge nothing like that exists for Linux or Windows (barring the few things that seem to be available in Windows Update).

The other part, of course, is that even if the OSes did provide this, who would write code for it? Maybe Electron would add a function to register with the OS-native endpoint, but until/unless they did, would any Electron devs code for it, or would they keep writing their own services?
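To make the idea concrete, here's a minimal sketch of what such a registration might look like. Everything here is hypothetical (no OS exposes this API today; the `UpdateManifest` shape and `checkForUpdate` are my invention): the app hands over a manifest URL once, and one shared system daemon polls all registered URLs instead of each app shipping a resident updater.

```typescript
// Hypothetical sketch only: no real OS exposes this API.
// An app registers a manifest URL once; a single system daemon
// polls all registered URLs instead of N per-app background services.

interface UpdateManifest {
  version: string;      // e.g. "2.4.1"
  downloadUrl: string;  // package to fetch if newer than what's installed
  sha256: string;       // integrity check for the download
}

// What the shared daemon would run per registered app, on its own schedule.
async function checkForUpdate(
  manifestUrl: string,
  installedVersion: string,
): Promise<UpdateManifest | null> {
  const res = await fetch(manifestUrl);
  if (!res.ok) return null; // treat unreachable as "no update"
  const manifest = (await res.json()) as UpdateManifest;
  return manifest.version !== installedVersion ? manifest : null;
}
```

The point being that one daemon polling N URLs replaces N resident updater processes.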
This 1000x. Every developer should get a Pentium 3 500 MHz and 256 MB of RAM and figure it out from there. And yes, I dogfood: I have a small 15-year-old laptop with a Core 2 Duo CPU and an old Via C3 500 MHz industrial PC that I test my stuff on.
Honestly, even just an i5 with 16GB RAM with YouTube running in the background, an iPhone 11 if you're making an iPhone app, and a Galaxy S10 if you're making an Android app.
I've come to suspect the normal way large companies are organised makes performant software almost impossible.
Features are celebrated and make money - whereas performance can be gradually salami-sliced away, with nobody advocating for it.
If you can add a feature and it adds just 50ms to a 300ms pageload - it's unlikely anyone is going to block its release on performance grounds. And once the feature is released, there's no removing it.
This means the product gradually ratchets slower and slower - until it becomes intolerable. And by then, different performance issues will be everywhere in the codebase, so fixing them will seem an insurmountable challenge.
Far cheaper to ignore the problem. "Jira is only slow because customers insist on creating tickets"
Development in any corp today, even middle-sized ones, is a big red-tape festival. You have a project divided into sprints that are divided into (often meaningless) tasks, then you have a completely bloated CI/CD system, and then things get deployed to three or four environments where people pretend to test everything. Things that might be developed by a journeyman developer in five days take six weeks to be done by a team of eight developers.
The whole process today exists to support a big class of managerial people (including scrum masters, POs and PMs) whose salary depends on not understanding that the process is completely broken.
> The whole process today exists to support a big class of managerial people (including scrum masters, POs and PMs) whose salary depends on not understanding that the process is completely broken.
I think it's even worse than that - those unnecessary processes only exist to try to help management attempt to manage content they don't understand in the first place.
And I say this as someone who regularly implements said delivery processes for customers.
Wasn't "Agile" development supposed to fix this, by keeping team sizes small and workflows as simple and iterative as possible? AIUI, the fabled "two-pizzas team" size was intended as an upper bound for the most complex projects only, not as the new normal.
(The "test everything" culture was also AIUI in the service of quick iteration; the first versions of Agile are from the early 2000s when dynamic languages were starting to get popular and there wasn't really any kind of static analysis.)
The problem is not with the Agile methodology itself. One of its core tenets is valuing "individuals and interactions over processes and tools".
The problem is that companies don't understand what this means, and find it simpler to pay an "expert" who will bastardize this definition, and implement processes and tools that make them appear useful to the company. There's an entire industry built on this model.
How large are those two pizzas? I can eat a small 8" alone, but a slice from a jumbo 18" is enough... Also, if someone eats an 18" pizza alone, is he a 10x programmer?
That's why software development teams are bigger in Chicago than in Rome or Naples. The crust is thicker in Chicago-style pizzas, whereas Italian pizza has a thinner crust.
There's a lot of overhead and inefficiencies in how large companies do anything. It's hard to organize thousands of people well. It's the social equivalent of the old 'Command-line Tools can be 235x Faster than your Hadoop Cluster'[1] conclusion.
You put a hundred developers on a project and you'll generously get twice the meaningful output of ten developers (but 50 times the LOC, as demanded by Conway's law).
The upside is that this makes it all the more feasible for a small group or even a single individual to be surprisingly competitive. Peak productivity per person is probably found in a self-organized 1-3 person team.
> You put a hundred developers on a project and you'll generously get twice the meaningful output of ten developers (but 50 times the LOC, as demanded by Conway's law).
Based on my job experience, this is rather an artifact of how software projects are (badly?) managed.
Give each of the developers some small "hotspot" to work on, i.e. a critical code fragment that:

- is hard to implement correctly and fast,
- is critical to implement very fast and correctly,
- has a correct and intended implementation that will likely consist of a few highly sophisticated lines of code,

and the scaling will be much better. This is a little bit like how theses are assigned to students: each thesis is one isolated, non-trivial scientific challenge (often an open research problem) that is to be solved by the student.
I guess the "management incentives" nevertheless prevent such a method: managers prefer/are incentivised to "tell stories" instead of explaining the deep, sophisticated challenges and solutions in making great implementations of these "hotspots".
This assumes you know what you need to build. 90% of the challenge in large scale development is nailing down requirements, especially when the problem space is too big for any individual to fully understand.
This is what, in my opinion, senior programmers or project managers are for. If they are not capable of handling this task decently, they simply are not (yet) ready for the role.
Young engineer here, in charge of a project and feeling quite out of my depth, and I agree with this. There's currently no mentorship, and it will take a couple of months for a senior hire. Do you have any advice? What does a senior engineer love/hate to see when they come onto a project started by engineers earlier in their career? How can I be most helpful?
My personal opinion on this topic is a little bit "postmodernist": we don't yet have a "scientific theory" of software engineering, of management, or of managing software projects (despite the fact that there do exist people who claim otherwise). So there exists a multitude of "schools" of thought on how to handle these topics (with often quite conflicting opinions on what is "good" and "bad").
Having this consideration in the back of your mind:
What might be helpful concerning your questions is to consider that many programming languages suggest a specific style of approaching a programming project, structuring the code, and often managing the team. There are good reasons why one talks of "Java shops", "C# shops", "Python shops", and so on: these programming languages often imply very different company cultures. Note that very prevalent programming languages can also have various "subcultures".
So what will likely be valued is to have a good understanding of the "desired programming style", "desired problem solving approach", "desired management style", ... that is encouraged by the "programming ecosystem" in which the company is placed, and going by it.
This will likely yield a decent, conservative code base that can more easily be handed over to a more experienced programmer as soon as one becomes available. Especially in such a situation, it is in my opinion much more important not to make huge (including architectural) mistakes than to deliver the most fancy/ideal code. (But note that if you work at a startup that has an "all or nothing" approach (i.e. if the product won't become great, the startup has failed), or one in fear of running out of money, the priorities might differ quite a lot.)
Thank you for the response, it is helpful. I am not used to Python and know that I am not using the language well. So I think it is worth focusing on this.
> not to make huge (also architectural) mistakes
I'm noticing it's the first time I'm even having to make significant architectural decisions, which is difficult because I don't have much experience to draw from, so even the smallest decisions often require a lot of research.
I think big projects are worse simply in proportion to their size. I once spent months with a client adding requirements when we were supposed to be doing UAT!
People take holidays, get sick, and change jobs - having a 1-3 person team means you'll hit a brick wall that hurts sooner or later, more so if at least one of the devs was really good.
It is better to sometimes be hurt than always be hurting. Organizations are so afraid of only 1 person understanding something that they create processes so that 0 persons understand it.
This is a real concern if you're looking at it as someone who is managing people... but a 1-3 person team does not need a manager to tell them what to do, and generally is not hired but spontaneously organized.
It's an inherently more unpredictable way of working, but that definitely cuts both ways. There are definitely more risks, but also far greater payoffs in terms of what you can do working in this fashion.
The principle of ensuring predictable velocity across a team comes at the cost of limiting the maximal productivity of talented individuals.
The fun part, in the context of big corporations, is when performance becomes awful: the business will call in some guys from outside, because clearly the people currently working there don't know how to make it perform better.

The business skips the part where, for years, it insisted on pushing features out, when there are most likely tons of easy fixes that would speed things up if you gave the team time to focus on them.

Going back to the "outside help": you get people who don't understand the current system and propose some silver bullet, like throwing NoSQL at it because that "will fix your issues" - but they don't say the cost is basically redoing the system. The project goes under, the third-party guys count their money and put another success story on the CV, because no one will validate it anyway…
> If you can add a feature and it adds just 50ms to a 300ms pageload - it's unlikely anyone is going to block its release on performance grounds.
I worked on a team that did block features for this reason. It was easy to justify, because the benefits of a feature were almost always counterbalanced by reduced usage of the product, and we had the numbers to back it up.
> Features are celebrated and make money - whereas performance can be gradually salami-sliced away, with nobody advocating for it.
- Optimists tend to be promoted, so the higher up in the organization you are, the more optimistic you tend to be. If one manager says "I can do that in 4 months", and another only promises it in 6 months, the 4-month guy gets the job. When the software is 4 months late, the overall system complexity makes it easy to assign blame elsewhere, so there's no way to judge mis-management when it's time for promotions.
- There's a disconnect between engineering and marketing. It's not surprising -- marketing wants all the whiz-bang features, it wants to run in 16 megabytes, and it wants it yesterday. Although engineering would like the same things, it is faced with the reality of time limits, fixed costs, and the laws of nature.
- The complexity of our system software has surpassed the ability of average SGI programmers to understand it. And perhaps not just average programmers. Get a room full of 10 of our best software people, and you'll get 10 different opinions of what's causing the lousy performance and bloat. What's wrong is that the software has simply become too complicated for anyone to understand.
- There was never an overall software architect.
- We should sell 'bloat credits', the way the government sells pollution credits.
- SGI software has a cracked engine block, and we're trying to fix it with a tune-up.
Get 10 different opinions, then have them actually implement them 10 different ways and pick your favorite. Software development is already eye-wateringly inefficient; getting people out of each other's way at the cost of ten different implementations may just come out ahead.
I like your take, but I'm not 100% sure this is accurate.
Obviously to some extent features make money, and "featureless" software by definition couldn't exist. But I think a lot of software contains features that don't drive demand at all - the worst offender I can think of is [Microsoft Teams' Together Mode](https://www.microsoft.com/en-us/microsoft-teams/teams-togeth...), which I doubt has led to a single sale of the product worldwide, but probably adds some maintenance burden.
I think a bigger part is, like you mentioned, the organisation of companies. Engineers like building things, and although maintaining software and improving performance can be rewarding, you'd struggle to find developers that are happy to only do that. Features also lead to promotions more than maintenance, so there's a strong personal incentive to create features even if they don't have a business benefit.
At Google they argued over microseconds, but there, microseconds are money. If every software engineer working on search or ads added 10 microseconds to load times, Google would become unbearably slow extremely quickly and would no longer make money.
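Back-of-the-envelope (my numbers, purely illustrative, not Google's headcount): if 10,000 engineers each add 10 µs to the hot path, that's 10,000 × 10 µs = 100 ms tacked onto every single query, which at that query volume is an eternity.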
>I've come to suspect the normal way large companies are organised makes performant software almost impossible.
I think this is a bit of an over-generalisation. There were (and still are) many large companies producing performant and reliable software.
I suspect when things just work (think of the global phone system) people don't realise how much software is involved.
When was the last time you picked up a fixed-line handset and there was a software bug preventing you from calling any other handset on the planet - without varying delays or dropped packets?
>If you can add a feature and it adds just 50ms to a 300ms pageload - it's unlikely anyone is going to block its release on performance grounds. And once the feature is released, there's no removing it.
Unfortunately this is happening because compute power is cheap these days. It allows programmers to be lazy, take shortcuts, and not think about good software architecture. There is no economic incentive to optimise, because human perception won't register the difference, and thus nobody complains. It's wild to think that we have what are basically supercomputers in our homes, and yet developers have managed to bring them to a crawl with things like Electron apps.
> compute power is cheap these days. It allows programmers to be economical, take shortcuts and select the appropriate software architecture for the business problem
Just an alternative take that I don't entirely agree with, but I often find that rephrasing a problem in an alternative tone can help me understand it in non-adversarial terms.
In this case I see strong parallels with the auto industry and food distribution
> It allows programmers to be lazy, take short cuts and not think about good software architecture.
I think it has little to do with laziness, and more to do with the taboo of reinventing wheels. If there's a very generic and very broad solution to your very specific and very narrow problem, you're encouraged to use (or invent!) the very generic and very broad solution.
This leads to everything being abstracted to the point where most of your application does absolutely nothing. Just look at the call stack of a modern java web application. The depth of the layers of abstractions that you need to navigate to reach anything that performs any real work is vertigo inducing and nauseating.
It's far from ideal, and I agree that economics is behind it, but I strongly oppose the wording that it "allows" programmers to be lazy.
I do believe laziness is a big part of it. Because laziness IS about economics - in this case, personal economics. Why expend effort when you can get away with less and get rewarded the same?
That 1995 text editor didn't handle Unicode. Didn't edit all the languages of the world. Didn't handle emoji. Didn't do auto-complete. Didn't replace colors in CSS with their actual color and pop up an inline editor to edit them. It didn't edit remotely (and editing remotely is not the same as tmux + vim). VSCode not only edits the files: when you're in a terminal on the remote machine and type 'code somefile', somefile opens on your local machine. When you start a web server in the VSCode terminal, VSCode auto-forwards it to your local machine.

I'm not saying old editors weren't more efficient, but the stuff editors handle today has become more complex. LSP servers do way more analysis than any 1995 editor, and they do it in an editor-agnostic way. It costs more memory, but it also lets us all jump into the future faster, rather than every editor having to implement its own analysis for every language.
I know that this is becoming a trope, but Smalltalk and Lisp Machines did all those things well before 1995. Similarly, GNU Emacs today is capable of all of the above, and has been managing it for multiple decades at this point in a more modern take on the world...
Remote editing back in the 1980s was such a common thing on the Smalltalk and Lisp Machines that all system code was on another machine, more times than not you wouldn't even notice that it was a remote file!
One could do "emoji" just fine as well, and files would have WYSIWYG like look to them using "fat strings" -- that is 1980s technology. There is a dungeon crawler map using that feature to render the map as graphics, it is how you would implement chess pieces, or other "picture" like stuff.
Auto-complete was already standard, similar look up of "who calls" / "who uses" functionality to figure out where things are used, online documentation, etc etc etc...
So all this was perfectly possible, and already used and abused in 1995 -- VSCode isn't doing anything new in that regard.
None of what you describe requires a lot of resources. Remote editing stubs are decades older than VS Code, but also, many of us used X - for many years I did all my work over the network because there was no reason not to.
A color dialog was tens of KB of code in the 1980s.
My own editor handles Unicode well enough for most users in a few dozen lines of code. RTL would take a bit more, but not much. LSP servers, if anything, reduce the need for the editor's resource use to grow.

It's not that these features justify no extra resource use, because they do; but they don't need to significantly increase it.

A lot of apps get away with huge resource use simply because people aren't used to paying attention to it any more: for most, each app in isolation affects them little enough that, when it matters, addressing the resource use of any one app hardly makes a difference.
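For a sense of scale, here is roughly what "a few dozen lines" of Unicode handling can look like: a plain UTF-8-to-code-point decoder (my own sketch, not the parent's actual editor code; it skips overlong/surrogate validation and does nothing for shaping or grapheme clustering):

```typescript
// Minimal UTF-8 -> code point decoder. Invalid bytes become U+FFFD.
// Overlong/surrogate rejection is omitted for brevity.
function decodeUtf8(bytes: Uint8Array): number[] {
  const out: number[] = [];
  let i = 0;
  while (i < bytes.length) {
    const b = bytes[i++];
    let cp: number;
    let extra: number;
    if (b < 0x80)      { cp = b;        extra = 0; } // ASCII fast path
    else if (b < 0xc0) { cp = 0xfffd;   extra = 0; } // stray continuation byte
    else if (b < 0xe0) { cp = b & 0x1f; extra = 1; } // 2-byte sequence
    else if (b < 0xf0) { cp = b & 0x0f; extra = 2; } // 3-byte sequence
    else               { cp = b & 0x07; extra = 3; } // 4-byte sequence
    while (extra-- > 0) {
      // On a malformed continuation, emit U+FFFD and resync at this byte.
      if (i >= bytes.length || (bytes[i] & 0xc0) !== 0x80) { cp = 0xfffd; break; }
      cp = (cp << 6) | (bytes[i++] & 0x3f);
    }
    out.push(cp);
  }
  return out;
}

// "héllo" as UTF-8 bytes: 0x68 0xc3 0xa9 0x6c 0x6c 0x6f
console.log(decodeUtf8(new Uint8Array([0x68, 0xc3, 0xa9, 0x6c, 0x6c, 0x6f])));
// -> [104, 233, 108, 108, 111]
```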
It's a bit theoretical, because no editors exist that are smaller and do all of what VS Code does. And a lot of what it does relies heavily on the fact that it's running in a browser engine, so just tossing that out won't fly, since you kind of need it for at least some of the features.
It's only when you subjectively remove all the features that you don't care about that it becomes doable to make smaller editors. And that's fine. But you can't have your cake and eat it.
The reason most people don't care is because it simply doesn't matter. Not even a little bit. Laptops are cheap. Memory is cheap. CPU is cheap. Your time is not. And it takes investing your time to make this stuff more optimal and faster. And VS Code just does a lot of nice things that make you more productive. I use IntelliJ myself, which uses even more resources. But it's a bit smarter and saves me even more time. The point with both is that you lose more than you gain by replacing them with something faster. It's not worth it.
My first computer was a Commodore 64, so I'm well aware what that thing could do (and couldn't do). I'm writing this on an M1 MacBook. Orders of magnitude faster, doing things I could not imagine back when I had a Commodore 64, etc. You can have one second-hand / refurbished for next to nothing - basically below my day rate when I'm consulting.
Back in 2014 my company switched from Skype to this hot new tool called Slack for messaging. On my £10,000 workstation with dual xeon processors, 64GB memory and a 1TB SSD, you know what was the second most resource intensive app after my c++ compiler, and above my IDE? Slack. We used to close our chat program to compile to save the 1GB memory it was using.
> It's a bit theoretical because no editors exist that are smaller that do all of what vs code does
You can't ever compare two things if you look for all features to match. Sublime is a pretty good comparison - it's wicked fast and has a bunch of the same features and language extensions. Emacs handles Unicode just fine and has a huge extension surface area.
> The reason most people don't care is because it simply doesn't matter
Hard disagree here - the reason people don't care is because features sell, and as you said, the alternative option isn't there. I work in Unreal Engine most of the time, and about 3-4 years ago, there was an almost overnight exodus of game programmers who would live and die by Visual Studio who switched to Rider, primarily because it was faster than VS+VAX.
> Laptops are cheap. Memory is cheap. CPU is cheap. Your time is not
This only applies with one application. Now add Slack/Teams, Postman, Outlook, FF/Chrome, Spotify into the mix, and all of a sudden I'm running six full web browsers, duplicated with all their resources isolated, using more memory and CPU than IntelliJ does. I'm fine with IntelliJ pegging my 32-core Threadripper to index millions of lines of code. I'm less fine with Postman using more CPU than IntelliJ to display a JSON document.
> I'm writing this on an M1 MacBook
Depending on what software you're working on, your users aren't using M1 MacBooks. My partner's work machine is a 5-year-old i3 with 8GB of RAM. It's borderline unusable with Teams and Outlook running, IMO. But the person who benchmarks Teams is doing so on the M1 MacBook.
We're talking about developer tools here. Editors are aimed at developers. You can expect developers to have reasonably decent hardware. If you are working with Unreal like you say you do, you presumably aren't using a ten-year-old MacBook Air to do your work. That would be madness.
Anyway, end users care even less. The paying-user variety typically has a newish computer (of the last five years or so). The rest are not a great revenue stream. Of course, if you develop for users stuck on really old, crappy laptops, you are going to invest your precious time in making sure they get a great experience, and make all sorts of compromises to ensure they do. But for the rest of the users, good enough is good enough. You'll see from your revenue/usage statistics what that is.
I find the people that whine the most about this topic are exactly those people you should expect to have decent hardware (i.e. developers). Either way, use things that are useful to you.
Spotify and Slack, Teams, etc. seem to be doing OK with user popularity, for example, and don't seem to be getting a lot of churn over their application performance. And of course a lot of this stuff is used on mobile as well. I've used both for the last ten years without much issue on modestly sized laptops. 16GB is more than enough for me running stuff like that, VS Code, IntelliJ, a bunch of Docker things, and a few other bits and bobs.
People using MS Windows seem to get a particularly rough experience. That's why lots of developers prefer mac or linux based machines.
> Slack/Teams, Postman, Outlook, FF/Chrome, Spotify in the mix, and all of a sudden I'm running 6 full web browsers duplicated with all their resources isolated
If those apps were PWAs instead, it would mean no extra browser copies are running. In my experience this only really accounts for 70-100 MB per app for the browser copy. No reason slack couldn't be a PWA, same with Spotify.
I'm not really sure how slack and others use so much RAM. I've built quite functional, complicated, and non trivial web apps. Mine typically use <50 MB with some coming in at 20-25 MB. When I'm deploying in electron I'm still in the 80-150 range.
The biggest performance questions for me are network latency vs local data and figuring out ways to mitigate network latency. The difference between 200 ms navigation and 5 ms navigation is pretty stark. Even if most people don't flinch at 200 ms.
Since your time is so valuable and you are obviously very upset about this, your company should pay Spotify to write a more efficient app.
Or your company should buy you a new 96 core Threadripper 1 TB RAM system so that when you use Spotify/Slack/Postman it doesn't impact your productivity.
I was just reflecting your thinking - somehow you feel that Postman/Slack/.... owe something to you. Pay them to do what you want, or stop using them.
You feel entitled to use a $10K machine to compensate for slow IntelliJ for maximum productivity and convenience, yet deny others (Postman/Slack/...) using the most productive and convenient technology for them (Electron). And while continuing to use their convenient products, you say they are bad. Use IRC, use curl instead of Postman.
The Postman programmers say the same thing: our users have $3K+ machines, no point in optimizing code to be fast, instead lets add more features since it's clearly working and our users are not switching. Obviously they love the iteration speed that Electron gives us.
> Pay them to do what you want, or stop using them.
I've been a paying Slack customer for a decade at this point. I pulled up my email; my support ticket for "Slack is using more RAM than Visual Studio" was from February 2015. I don't have the political sway over Salesforce to make them make these sorts of decisions.
> You feel entitled to use a $10K machine to compensate for slow...
You're doing it again. I don't feel entitled. I don't have a choice in my chat app; my employer forces it on me. And even if I did, Slack is on the whole the least-worst option. As for Postman: I did the same thing. I was a paying customer, I submitted support tickets, provided traces when asked, and ultimately I did decide to change tools.
> while continuing to use their convenient products, you say they are bad.
Am I not allowed to have an opinion just because I have a fast machine? Am I not allowed to want my software to be better?
> The Postman programmers say the same thing:
No they say "performance is a top priority for us, we're sorry you're not happy with it. Please send us your hardware specs" and the ticket gets auto closed after 2 weeks.
> Obviously they love the iteration speed that Electron gives us.
It's not just Electron - snappy Electron apps exist. Startup time aside, VSCode is pretty damn good. Figma is an excellent example of how good it can be (and if you want to compare what it looks like when a company cares vs. when a company doesn't, see Figma and Miro).
> I've been a paying slack customer for a decade at this point
$10/month is not what I meant by "paying them". I worked at a company where clients would routinely pay us $200K to implement a particular niche feature which was not on the roadmap. If they asked for a non-roadmap feature, yes, the ticket would be closed "not-planned".
You seem to be choosing to engage with your own least charitable inferences rather than what reflects your counterpart's actual position. Viz:
> the alternative option isn't there
> I don't feel entitled. I don't have a choice in my chat app
Your responses are predicated on the option being there and the person you're responding to is just not taking it. This despite the fact that his or her responses strongly suggest they would take it if it were there, but it's simply not an option.
There is always an option. Threaten your employer you'll leave if they make you use Slack, quit programming and become a farmer who touches grass every day.
All this Electron app complaining reads like First-World Problems(TM).
Overly-reductionist arguments are not helpful. Suggesting that I quit my job because I disagree with the tech stack of a billion dollar company might be one of the dumbest things I've seen on this site in the 15 years or so I've been here.
You know we can read back the comments that were posted in this thread and check your response against the context, right? You just moved the goalposts from being willing to pay for the product that would need to be changed to address the complaints, to refusing to use the software complained about.
I'm not moving anything. Parent obviously doesn't want to pay the millions Slack would probably ask to make it "efficient" (whatever that means), you say parent has no alternative, I'm providing alternatives.
Or one can go back to complaining about "how the world is cruel, people are mean and greedy, and I'm a good and misunderstood person who writes the most efficient and user-considerate software, unlike the evil people at Slack".
For me that wouldn't work, because the impact Slack has is not measured in time lost directly (or at least not only that, since Slack is truly a laggy piece of crap), but in annoyance and feeling bad about running what is basically spyware on my system. Each interaction adds a bit of pain and the question of why I am even doing this shit.
Not GP, but they could buy me a 1024-core monster if it existed; it would still not solve the problem of Slack.
I have been running localslackirc (it's in debian) to access slack from IRC.
I still have to open it in the browser every once in a while, to search old threads or other stuff that is not supported. But day to day I can do everything in irc. There is also a weechat plugin afaik.
It's a bit annoying to configure the access but it seems the tokens never expire (or have not yet expired) so it shouldn't be too frequent for you either.
Next to nothing of what VS Code does depends on it running in a browser, other than to the extent VS Code has made it so.

It's not special. If anything, it's one of the most clunky editors I've used, because it tries to shoehorn everything into a convoluted UI. It's precisely because my time matters to me that I avoid VS Code as much as possible.

The problem with VS Code is not that it's too slow, or too memory hungry. It could use far less, sure. And it could do so without losing any of the things about it that make me dislike it.
Markdown and html, image, and other previews, documentation, connectivity, it's a lot more than you think.
If you don't like it, use something else of course. But there are valid reasons for it being browser based and a lot of people choose to use it at this point.
None of which requires VS Code itself to be browser-based, and most of which do not benefit much from using a browser.

A lot of people choosing to use it is beside the point being made, which is not that people won't use it, but that it could be a lot leaner without sacrificing functionality.
There are other advantages to running in the browser. The fact that the editor is written in JavaScript/TypeScript and HTML/CSS means it runs in any browser. It's why there's been an explosion of online IDEs like codesandbox.io, StackBlitz, GitHub Codespaces, Google's cloud code editors, repl.it and hundreds of others.
My time is free. I'm not going to see a dime for any time I save by using this tool or that tool on my computer. Hardware, on the other hand, is not free, so I prefer to sacrifice time for being able to use less expensive hardware (within reason).
Of course, Sublime Text exists and does everything I want from VSCode at a fraction of the hardware usage. So I don't have to choose one or the other, because actual good software exists.
Consider an economy of time, where you have a finite amount of time to spend, so that time spent on one thing is time not spent on another thing. If you can spend money (or maybe earn less money) to avoid spending time doing things that are uninteresting, boring or unproductive, you are allowing yourself to spend more time on things that are interesting, fun or productive.
People drastically underestimate the cost of things we now expect.
A Unicode font is easily 15 MB in size. Let alone that you'll have several of those. And then there's the code and memory it takes to do all the magic of rendering it: hinting and subpixel antialiasing.
Then there's that a 4K framebuffer is 32MB in size.
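(The arithmetic: 3840 × 2160 pixels × 4 bytes per pixel ≈ 33 MB, i.e. 31.6 MiB, which is where the ~32 MB figure comes from, and double or triple buffering multiplies that again.)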
Smooth compositing requires every running program to have a buffer it draws into. So there goes a couple hundred MB more just to make sure you don't see the screen repaint like in Windows 3.1.
Yeah, you can have compact software where your only requirement is that it uses ASCII, does it at 80x25 and doesn't do anything more fancy than editing text.
That's data size, not code. There's no fundamental reason that a program that can smoothly render Unicode at 4K needs a GB download when kilobytes of code could suffice.
We tried that in the Windows 9x days. We called that "DLL hell".
The idea was that programs would share libraries, and so why have a dozen identical frameworks on the same system? Install your libraries into system32. If it's already there but an earlier version, deploy your packaged one on top.
Turns out that nobody writes good installers, and binary level dependency requires too much discipline, and dependencies are a pain for users to deal with.
So shove the entire thing into a huge package, and things actually work at a cost of some disk space and memory.
> and things actually work at a cost of some disk space and memory.
I have ~10 000 .exe files on this machine; if none of them shared code and/or data (or if they were written in a "modern" language with 50+ MB hello worlds), they would not fit on my 1TB disk.
True, but I personally discovered this has limits.
What if you're working on something reasonably novel, like say, open source VR? Well, turns out you may want a quite eclectic mix of dependencies. Some you need the latest version, because it's state of the art stuff. Some is old because the new version is too incompatible. Some is dead.
Getting our work into a Linux distro is on my list, but even if dealing with all the dependencies works out, there's the issue of that we sometimes need to do protocol changes and upgrade on our own schedule, rather than whenever the new distro is released.
Distros are great for things that are supposed to integrate all together. They're less ideal for when you're working on something that is its own, separate thing like a game.
So for the time being, shoving it all into an AppImage it is.
You presume one option, when the other option is a bundled but smaller renderer. The TrueType renderer my terminal uses is about 700 lines of code; the C it's a translation of is about 1,500. There's a sweet spot that might well be a bit higher, e.g. to handle ligatures, but the payoff from going from that to some huge monstrosity is very small.
As somebody who actually works on a pretty large program, no, I'm absolutely not going to use your 700 LOC TTF renderer. I'm going to use the 128K LOC FreeType.
Why? Well, because it's the one everyone else uses. It's what comes with everyone's Linux distro. Therefore, if there's something wrong with it, it's pretty much guaranteed it'll break other stuff and somebody else is going to have to fix that. Also it probably supports everything anyone might ever want.
If your 700 LOC TTF renderer doesn't perform as it should, it might become my problem to figure out why, and I don't really want that.
I'm not suggesting you should. I'm pointing out that these things can be done with a whole lot less code. And a lot of the time so much less code that it is less of a liability to learn a smaller option. Put another way, I've had to dig into large font renderers to figure out problems before because they didn't work as expected and it became my problem, and I'd much prefer that to be the case with 700LOC I can be intimately familiar with than a large project. (I'm old enough to have had to figure out why Adobe's Type1 font renderer was an awful bloated mess, and in retrospect I should have just rewritten it from scratch, because it was shit; that it was used by others did not help us at all)
I ended up with this one in large part because it took less time to rewrite libschrift (the C option I mentioned) and trim it down for my use than to figure out how to make FreeType work for me. I now have a codebase that's trivially understandable in an hour or two of reading. That's what compact code buys you.

No, it won't do everything. That's fine. If I need FreeType for something where it actually saves me effort, I'll use FreeType. It's not about blindly rewriting things for the sake of it, but about not lazily defaulting to big, complex options whether or not they're the appropriate choice.
A lot of the time people pick the complex option because they assume their problem is complex, or because it's "the default", not on the merits.
There are tradeoffs, and plenty of times where the large, complex component is right, but far too often it is picked out of laziness and becomes a huge liability.
You say that as if it was some kind of failed one-off experiment of the 90s. We tried it in the Multics days, it caught on and the design philosophy is still popular to this day. It works quite well in systems with centrally managed software repositories, even if it doesn't in a system where software is typically distributed on a 3rd party shareware collection CD or download.com.
Behold! The peak of technological prowess! So many poor souls of the past died in misery and 4 MB of RAM. They could not taste those sweet fruits of progress.
There is waste that is fine, and there's waste that doesn't really come with an upside.
E.g. in "waste" that is fine, I'd categorise AmigaE's choice to read the entire source file into memory, instead of doing IO character by character, or line by line from small buffers. It was a recognition that there was no compelling reason to not sacrifice that small amount of RAM for simplicity and speed. What you gain can differ, but as long as the benefit is proportionally good enough relative to the cost, that's fine.
But so much modern software pulls in huge dependencies for very little benefit, or try to be far too much, instead of being focused tools that interoperate well.
It's not so much that the new generation is stupid, as that a lot of people (of any generation) always choose the easy option instead of stopping to think. Sometimes that's the right tradeoff, often it's not.
And hardware advances mean you can get away with more and more. Sometimes that justifies more extravagant resource use. Often it doesn't.
> in "waste" that is fine, I'd categorise AmigaE's choice to read the entire source file into memory, instead of doing IO
This is only an issue if your OS doesn't have virtual memory and mmap. Modern OS's automatically prefetch files into free RAM (so there's no such thing as "free RAM is wasted RAM" either). I think newer versions of Amiga OS were supposed to be getting virtual memory support at some point, too.
Yes, but because it once was an issue, even now, decades later, a lot of compilers still use buffered file IO instead of just reading the file in one go, even though you gain benefits from the latter (e.g. no "ungetting" and no building a separate token buffer; you just keep the indices of a token's start and end). I'm guilty of that myself.
It was an inspired choice, and about 30 years on it's still underutilized, on machines that typically have 3-4 orders of magnitude more RAM.
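A sketch of the pattern, for the curious (mine, not AmigaE's actual code): with the whole source in memory, a token can be just a start/end index pair into the buffer, so there is no ungetc and no copying into a separate token buffer.

```typescript
// Whole-file lexing sketch: tokens are (start, end) index pairs into the
// source string, so there is no ungetc and no per-token string copying.
type Token = { kind: "ident" | "number"; start: number; end: number };

function lex(src: string): Token[] {
  const tokens: Token[] = [];
  let i = 0;
  while (i < src.length) {
    const c = src[i];
    if (/\s/.test(c)) { i++; continue; }            // skip whitespace
    const start = i;
    if (/[0-9]/.test(c)) {
      while (i < src.length && /[0-9]/.test(src[i])) i++;
      tokens.push({ kind: "number", start, end: i });
    } else if (/[A-Za-z_]/.test(c)) {
      while (i < src.length && /[A-Za-z0-9_]/.test(src[i])) i++;
      tokens.push({ kind: "ident", start, end: i });
    } else {
      i++; // skip anything else in this toy sketch
    }
  }
  return tokens;
}

// A token's text is recovered lazily, only when actually needed:
const src = "count 42 total";
const toks = lex(src);
console.log(toks.map(t => src.slice(t.start, t.end))); // ["count","42","total"]
```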
>Things never change, the old generation fights the new one and calls it stupid.
I was with you until here, which I think is the wrong take. That is, this gets it exactly backwards. It's not just that every generation gets upset at the previous generation so let's all shrug and move on, it's that this is really a thing that is unfolding from one generation to the next.
It seems like the reflex of oh well the previous generation said it so let's ignore it comes up a lot, to the point that I have this go to example that I use every time it does. I'm a baseball fan. And one thing you used to hear in the '80s, with a guy like, say, Rob Deer or Steve Balboni, was that they tried too hard to hit home runs and they struck out too much. Then you heard that in the '90s as well. Then you heard that in the 2000s, especially with money ball and guys like Jack Cust. Then it just kept getting even more extreme with guys Carlos Pena and now Joey Gallo.
So one thing you could say is: well, every generation says there were fewer strikeouts in their day. But there's actually data on this and... it's true! Almost every decade, from the 1800s through every decade of the 1900s through now, strikeouts really have been going up year over year. And so that intergenerational commentary, well, it's describing a real thing that really is happening.
The same can be said of other things, like people saying they remember the environment being better in the past, or people saying attention spans are getting shorter. But they are.
The instinct here I think is to dismiss these since every generation says it. But I think the conclusion should be opposite, that these are real things unfolding on a multi-generational level. So if you see it happening with software, maybe that's because there's really something to it.
There is a real thing happening. But the dissatisfaction with that thing happening is what the criticism is all about. And those are two separate things. The changes may or may not be real, but the dissatisfaction of the older generation has been constant for thousands of years.
>But the dissatisfaction from that thing happening is what's the criticism is all about.
The argument seems to imply that the dissatisfaction would be there regardless of the circumstances. But the point I'm making is that the dissatisfaction can be understood as meaningful, and not just as a generalized disposition that's a natural consequence of getting old.

And whether or not I'm right about that, the devil is in the details, and it's going to depend on a case-by-case basis. But it won't do to just say the older generation is going to complain no matter what, because that doesn't credit them with the possibility of complaining for a legitimate reason.
> But the dissatisfaction of older generation is constant for thousands of years.
And is there no dissatisfaction from the younger generation? Is there no "this is the old way, let's do it differently, because we are more clever" from the young generation?
You are focusing on the generational trends, I was focusing on the individual.
Tell a kid learning to program today "you should program in assembly because it's efficient, like I did back in my days".
Kid looks around and sees it would take him 3 days to implement a hello world in assembly, but only 3 seconds to do it in Python. He has a 16 core computer with 64 GB of RAM. Both hello worlds run instantly. So how does that advice make sense? Kid calls you a crazy old man out of touch with the times. Kid goes on running locally a 50 GB LLM to make it do a hello world and feels very excited about the future of programming.
I'm not sure I would agree that the point I'm making cleanly transposes onto the details that you've selected.
For one, I think the accepted premise in this conversation up till now was that there is a real issue with software bloat. And you've switched that detail out for a different one where we assume no discernible bloat or difference in performance as time passes.
I also don't think I understand what's going on in the pivot from generational examples to individual ones. The comment I replied to was pretty clearly about generational trends. But on another level, I think the upshot is the same regardless of whether you survey the disagreement at the general level or in its equivalent manifestation at the individual level.

I think the upshot would be the same in each case as long as you keep all the details the same; and somewhere in the transposition from the general to the individual, the agreed assumption about the bloat and underperformance of software, as well as some implications about what that means for prevailing assumptions and practices in software development, got lost in translation.
> So one thing you could say is: well, every generation says there were fewer strikeouts in their day. But there's actually data on this and... it's true! Almost every decade, from the 1800s through every decade of the 1900s through now, strikeouts really have been going up year over year. And so that intergenerational commentary, well, it's describing a real thing that really is happening.
I agree with the factual observations in your post, but there's an additional bit here, and that there's qualitative value being assigned to what The Youths don't mind and The Olds protest. In baseball, the guys who strike out a lot but hit a ton of home runs create more runs, and therefore create more wins, than most base-hit machines (obvious outliers exist, but you get the idea). On my computer, VS Code does more things that benefit me than vim does (and the outlier here, I guess, would be "a lovingly crafted vim monstrosity that uses all the LSPs etc. designed for VS Code et al in the first place"--doable but not the happy path, etc.).
There's also (and IMO this shows up more in code than in baseball) some kind of bizarre moral valence assigned, which I don't even pretend to understand, but that's a different story.
>I agree with the factual observations in your post, but there's an additional bit here, and that there's qualitative value being assigned to what The Youths don't mind and The Olds protest.
A few things here. I want the main center of gravity in the point that I'm making to be a way of approaching intergenerational reports of a given phenomenon, namely that they shouldn't just be dismissed as a function of old age or a function of changing perspective. After that point, pretty much any point you want to make is fair game as far as I'm concerned. In the case of baseball, there are positives and negatives. It clearly seems to be a positive trade-off for hitters who are choosing which style to take. I suppose there's another consideration at a higher level as to whether it benefits the game itself. So that can go either way in my opinion depending on what's important.
I tried at the end to throw in some other examples: the shortening of attention spans, and environmental degradation. I think in those cases it's clear that there's something negative going on. But in general we don't have to agree with the value judgment if it's negative; the positive or negative value judgment is an independent thing from the phenomenon of multiple generations attesting to something happening.
>Now we do the same, but we look at the text editors of 1995 which used 4 MB of RAM as incredibly efficient and well made, paragons of craftsmanship.
That's because the text editors don't exist in a vacuum; the 4MB-RAM text editor would be slow on a 1995 computer but blazingly-fast on a 2024 computer.
VS Code is slow and annoying to use, and RAM is just a more measurable symptom of that.
I don't care about memory usage for editors as much as I care about input latency and responsiveness.
JetBrains (IntelliJ IDEA, PyCharm, etc.) put a lot of effort into making their IDEs low latency, as it was getting to the point of being almost ridiculous. Their editor is built in Java, and they ship their own runtime, since they have so many hacks and tweaks to make it work well as a desktop app (font rendering, etc.).
1995 text editors didn't use a "layout engine" that executes JavaScript, attempting to JIT fragments of code. They were an event loop that processes native OS events and responds with repainting areas of the window. They also weren't able to automatically recognise language syntax and didn't have Git integration :-)
One of my favourite editors from the 1990s was FrexxEd, co-authored by the author of curl, whose main event loop processed ARexx commands and events for every internal command, so you could rebind every event to a script in its own scripting language (FPL; C-like) and access every internal function from it. (It incidentally also came with FPL scripts that provided some degree of syntax highlighting for a few languages, and you could add more, though not as expansive as most modern editors.)
Running a heavily scriptable editor with a GUI was entirely viable on a 7.16MHz Amiga with 1MB RAM, and more responsive than many modern editors.
Integration with other tooling, like RCS, compilers, or linkers was a given for Amiga apps at that time.
(FrexxEd was also notable for exposing internal buffers as files in a special filesystem, so you could e.g. lint or compile or call your revision control - limited as they were - directly on the buffers without any custom integration)
CSS grid exists now, so it should be easy enough to achieve the same "repaint small areas of the window, don't do global layout" workflow in a web-based app.
BBEdit was first released in 1992; I'm not sure what that version was like, but I'm still using it now. That said, the version on my disk right now (30.3 MB) wouldn't have fitted into the RAM of the Performa 5200 I had in the 90s (8 MB initially; I can't remember what I upgraded it to in the end, 24 or 32…)
A lot of current-day application software is of course very much bloated. But on the other hand, things at the system level have gotten so much more complex and complicated that some of the "bloat" is actually "necessary infrastructure". This was brought home to me long ago when I compared Marc Rochkind's Advanced Unix Programming, 1st vs. 2nd edition; you really get to see the evolution of the OS and why it has gotten so big. With modern cloud-based distributed systems, things have gotten much worse in both the "essential" and "accidental" dimensions of complexity.
I would argue that the bloat comes when the performance impact is not perceivable compared to the development time.
The easiest optimization strategy is just to load it all up into memory. Compare that to another strategy, like caching partial file data, and it's obvious why the simpler solution is often chosen.
Another example that comes to mind is vector art vs baked art. You can render nice icons as vector art. Or you could ship perfect baked icons for all possible size variations. There are clear trade-offs here. One of them wastes more CPU cycles, and another one wastes storage space and download time.
Load what all up into memory? That doesn't solve the N+1 problem, or chained async methods. It causes caching issues (what if something else changes the files you're working with?).
I've been using PCs for nearly 30 years, and they are faster today than they've ever been. Of course, they don't feel thousands of times faster; just slightly faster. The biggest kick was SSDs.
My computer could be way faster but it's fast enough that I don't care to remove from autostart the software that doesn't need to be there.
I think that's the reason software hasn't kept pace with hardware in terms of speed. It's just fast enough, and if you make it faster, people just pile up stuff until it's as slow as is tolerable for them.
If software getting larger is a sign of "progress", then someone could write a 2TB text editor right now and call it the most advanced editor ever written. Because inevitably, some day, editors will reach 2TB, right?
In the future, with an Apple Vision Max 16, we achieved full and perfect immersion. Text editors now come with 3D environments you can "sit in" while writing text on your virtual, haptic-feedback keyboard. One extreme-definition 3D environment is 1 TB in size. As a minigame, the jungle one comes with a realistic tiger that you can tame, like a Tamagotchi.

If you ignore the tiger, the object containing its affection values will remain null, eventually causing a null pointer exception (and crashing the 3D environment) as the text editor tries to check whether you fulfilled the requirements for the tiger-stripe font you get if your affection levels with the tiger are high enough.
> Walled gardens like Apple’s App Store could charge app developers based on resource usage.
They're taking enough heat charging for (what they claim are) the costs of the services they do provide. The absolute hurricane of doomsaying that would be sparked by charging for the resources an app uses on the user's machine would probably take down the internet for a day or two.
Please stop blaming the tools, companies, users, God, your neighbour, politicians, capitalism, or whatever else developers traditionally like to blame for low performance software.
The truth is that developers are responsible for developing slow software. Nobody else.
So if your code is slow, take responsibility for your own work and fix it! Learn how to make your code fast. Listen to developers who know how to write fast software (game developers for example). You might learn something.
But most importantly pick architectures that are fast by design. Most popular architectures are not and never will be.
A single modern server is unbelievably powerful and can do an enormous amount of work if you code it well.
I blame Firefox and other web browsers which deny loading JS files if your page was loaded from file://. If you could simply run an HTML file from your hard drive and have a fully functional web application, you wouldn't need Electron.
Files opened from disk can run inline JS. But module scripts (<script type="module" src="otherfile.js">) and fetch/XHR are blocked with cross-origin errors, even for files in the same directory. So any kind of modern code organization is prohibited.