The Decline of Usability (2020) (datagubbe.se)
287 points by mmphosis on Sept 10, 2023 | 231 comments


For the past decade or so, I've found myself in a constant battle with UI/UX team members.

A big part of the battle seems to be a shift from power/tool user to consumer design paradigms. 20 years ago, most software was tools for professionals, now most software (on a usage level at least) is consumer software (literally consumers of video, social media etc.) Designers can't seem to switch modes and keep trying to apply "content consumption" rules and techniques to "jobs to be done" projects.

A favourite is replacing visible tools with 3-dot menus in the name of a "cleaner" or "more intuitive" user interface, turning one-click actions into two clicks: 2 clicks, hundreds of times a day, for the tool users. Tables that show 10 items with 4 details each in the same screen real estate that used to show 20 items with 6-8 details. Dashboards that show 4 graphs instead of 10. Menus that take up the top 10-20% of the app. Massive H1 titles.

One of my aunts changes around her furniture constantly, not because she has an idea of a better layout but because she's bored with the current layout. This seems to be a common personality trait in PMs and/or UX people. Adding a 4th table view to the product? ... "we should use this entirely different style" not because it's "better". Since COVID, I've run a weekly Google meet with friends ... it changes in some way pretty much every time we get together, most recently the request to join process has changed, again, and not for the better.


The best is that if you (junior or senior) bring this up, you're either shut down by management because you don't have the right expertise (how dare you, especially if, like me, you mostly work in backend), or by the UX guy saying 1) it's a best practice, and 2) that when they ran research or a survey, it made "almost no difference" - the major pain points are xyz (something far worse that requires immediate work).

At a certain point you just accept the way it is, and wait until the better way of doing things becomes the new best practice...


Not just a front-end problem. As a customer of Facebook, Netflix, HBO, Prime, Disney+, etc., it continually amazes me that these expensive devs can't make the backend remember what I have and haven't watched. Combined with an "engaging" front end that hides what I'm interested in and shifts the interface randomly, the experience is pretty low.


I think they do it on purpose just to give you the feeling of having a lot of content. Hate it, too.


They intentionally change cover art for exactly this purpose


Yup, it's another great example of the sort of "late stage" problems you run into with capitalism, professionalization of business management (MBA degrees), and fundamentally just about any system w/ rules and "(semi-)autonomous agents" trying to maximize (or at least perform well [enough]) the value of some metric / function. Here, profit and/or even staying in business.

Capitalism and other features of the system currently used in many of the most developed countries are great during the development of markets. The performance is excellent, which makes perfect sense.* You run into problems when markets reach saturation. And, markets have never been remotely as saturated as now. Now, you're literally talking about how finely you can slice the sum total of ATTENTION, period. Think "TikTok".

Basically, you see a lot more "zero sum" behavior at such points. I.e., it's much more just Darwinian** ... some winning, some losing.

One example of this type of strategy that I find distasteful is the "retail reset". Every 6 months or so, the retail reset fairy visits the supermarket I shop at most frequently, forcing me to re-find at least some of what I want to buy. It rarely causes me to buy anything unusual or new to me, but, I understand why they do it. It's not "terrible", but there's a feel to it of a certain ... lack of morals / ethics... feeds into a pervasiveness to that feel ... I believe it to be subtly but moderately bad for societies overall...

Couldn't find a more authoritative source / less markety site quickly, but here's at least some of the description:

https://canadasbeststorefixtures.com/why-a-store-reset-boost...

Of course, this kind of thinking is taught and exchanged among the business-oriented. Makes "the disease" worse (more pervasive) ...

* "Command economy" makes very little if you actually care about overall performance. It's the problem anyone faces when trying to model [e.g., "data assimilation" techniques, very powerful but enough data is essential] - not enough data! Of course, in today's world, it might be increasingly possible to make a more "command" style work, from the data side ... and, if you want to go for that "panopticon look" ... China has headed back that way ... again, in some sense. But, it's really still not good ... you're up against all sorts of other human factors, even if you HAD the data.

** Edit: or noticeably Darwinian (in a more common usage sense). This behavior of systems regarding "niches" and competition applies at all times. It's when the "empty space(s)" or "low density space(s)" are occupied that the more directly / actively competitive behavior w/ more clear negatives for various 'participants' tends to emerge.


Almost every company internally works as a command economy. I doubt that it can scale any more than that.


Netflix used to work like that, once upon a time. They intentionally changed it.


I think what happened was that a lot of UIs were really bad in the 90s. A bunch of books were researched and written pointing out the problems, people made an effort to fix them, and UX became a skillset.

But it seemed like the trends that were intended to fix the original problems just kept getting followed and amplified, with nobody really noticing they had overshot. Or, based on some comments here, new designers without the context of the old UIs just reapplying the same fixes to already-fixed UIs?

Instead of the pendulum coming back - it kinda reached escape velocity.


UX was often pretty poor in the 90s too. There's some rose tinted glasses here. This was the era in which app theming was a huge fad. Relatively few companies attempted to create consistent and usability tested approaches to their UI, and when they did, it was always operating system companies that were writing their own UI toolkits and lots of little bundled applets. They wanted these to be consistent, and they wanted to understand how to make things like file or hardware management easy.

This then dovetailed with some other trends: the software industry was moving to GUIs very fast, competitive advantage often came from how quickly you could release a Windows port back then, and the industry was relatively poor. Many big name software firms were like 20 devs and a few admin assistant types at most. So they all used the built-in APIs because that was the only way to actually get a decent GUI to market fast enough, and it outsourced all the thinking and design leadership to successful companies. If you tried to reinvent all that stuff, not only would you arrive late to the GUI party but your app would probably not fit on floppy disks anymore.

Towards the end of the 90s the internet started getting good, but OS makers were also getting fat and lazy. They dropped the ball on internet distribution and connectivity, then dropped it on managed code as well. People started writing these new fangled "DHTML Web Apps" along with Java applets to obtain a solution, and also perhaps one day to escape from the Microsoft hegemony. But the web was extremely under-designed. It barely had GUI widgets at all, let alone the large usability tested sets of UX conventions that Windows and macOS had. Java had those widgets but it came from a server company that was just aping whatever the desktop companies had done, they didn't really "get" it and their tech back then wasn't good enough to make it fly. For a while people tried to emulate desktop widgets in JS and DHTML but the results always felt second class.

But the combination of easy distribution, a mainframe style app model, open standards and free-beer end user clients was more valuable to people than the classical Windows/Mac OS offering. Not to everyone by any means, and there was much wailing and gnashing of teeth as developers started abandoning native apps. But distribution dominates everything and OS vendors didn't respond, so, we got the web.

With the web most of the old Xerox PARC desktop idioms died due to lack of proper browser support. Menu bars, context menus, files, folders, customizable toolbars, multiple windows ... none of these worked properly on the web. So people learned to make UI within those constraints. Then mobile came along and with it, a new generation of native app idioms. OS leadership was re-established and people started following that, but at the cost of it being inappropriate for productivity apps.


> UX was often pretty poor in the 90s too. There's some rose tinted glasses here.

Agreed. The thing that is new to me, though, is PMs and UX/UI designers pushing for things that are so obviously out of whack with what the actual users are asking for, AND that aren't being driven by external forces. Literally no one is asking for all this white space; it's coming from the team responsible for designing the app.


This tracks, thanks for a really interesting take.


> removing visible tools with 3-dot menus in the name of "cleaner" or "more intuitive" user interface

It just came to me what this truly means: professional designers tasked with designing the aesthetics of man-machine interfaces inherently develop an allergic fear response through overexposure to user interface elements such as buttons and scrollbars. It is annoying and too obvious to them that EACH and every BUTTON and SCROLLBAR tells them ALL the TIME on screen CONSTANTLY what COULD be DONE but is NOT being DONE; at some point enough is enough, to them personally, so they must remove them all.

I wonder at what point the UI trends would return to the amounts of skeuomorphism required by average users, not designers - apparently the web link paradigm is just too difficult for older and/or less experienced users, so sooner or later it'll start showing up in data no one can get around.


Thanks for putting into words exactly how so many of us have been feeling for the past few years. Last week Slack's UI changed, again, and I lost precious time having to find the buttons I'm used to, again. It's not better or worse from my point of view, it's just different, and that's very annoying.

> This seems to be a common personality trait in PMs and/or UX people.

They also are paid to do a job, and their job is to change the UI/UX... if you were paid to maintain a piece of software that was pretty much done, you'd likely spend your time refactoring and fixing minor issues. Except not everything we do is visible to the user.


One of the most wonderful developments of the last decade is the "Command Palette" UX pattern. You can have a tool driven predominantly by keyboard and also make all the keyboard shortcuts discoverable (display them next to the command in the dropdown). Maybe just let the UX guys do whatever they want, just so long as I can press Ctrl-K and type the first letters of the command I want to do.
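
As a rough illustration (mine, not the commenter's), the filtering half of a command palette is tiny; the command names and shortcuts below are made up. The point is that every command stays keyboard-reachable and its shortcut is shown next to the match, so the palette doubles as shortcut documentation:

```typescript
interface Command {
  name: string;
  shortcut?: string;        // shown next to the match, e.g. "Ctrl+B"
  run: () => void;
}

// Rank commands whose words start with what the user has typed so far.
function filterCommands(commands: Command[], query: string): Command[] {
  const q = query.trim().toLowerCase();
  if (!q) return commands;
  return commands
    .filter(c => c.name.toLowerCase().split(/\s+/).some(w => w.startsWith(q)))
    .sort((a, b) => a.name.length - b.name.length); // shorter names first
}

// Hypothetical palette: typing "to" surfaces "Go to File" and "Toggle Sidebar",
// each with its shortcut visible.
const palette: Command[] = [
  { name: "Toggle Sidebar",  shortcut: "Ctrl+B", run: () => {} },
  { name: "Go to File",      shortcut: "Ctrl+P", run: () => {} },
  { name: "Format Document", run: () => {} },
];
console.log(filterCommands(palette, "to").map(c => `${c.name}  ${c.shortcut ?? ""}`));
```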


I use Excel's "/, Q, <search>" frequently, because I'm often searching for a feature whose name I remember, but whose place or existence in some menu or panel of the Ribbon is ephemeral or not memorable, often changing with window size.

If only activating the search box weren't so slow!

It feels like I'm starting a moderately large app and waiting for it to load (while still being faster than clicking through Ribbons and opening various subpanels by clicking on teensy "arrow in a corner" widgets).

I'd expect it to be a tiny array of text held in memory for instant access, with no perceived delay between when I finish typing "/Q" and the search interface appearing.

But then, I'd also expect the first search result to be selectable by just hitting <Enter>. Having to use an arrow key to navigate down one, then press <Enter> is unfortunate, especially when using an MS Surface's keyboard's half-size up/down arrows.


UX designer here who has done nothing but design web, mobile and related UI since about 2000, mostly working for or at large consumer brands (and now thinking of retiring).

I'd definitely agree with the idea that change for change's sake is responsible for a lot of usability problems.

But a bigger problem is the near extinction of any professional discussion or capable analysis of design by the people who are doing it.

Most designers I work with don't have the vocabulary or knowledge about what makes something "usable" in the wider sense, let alone the ability to actually solve subtle interaction design or information display problems. With no knowledge or interest in basic heuristics, they literally don't know where to start. So they just live in Figma playing with colours, shapes and effects, pausing only to show some variations of the same UI for "user testing". It's become a wilfully ignorant, "non-technical" cult of innovation that's completely lost sight of its original purpose.

To say I'm ashamed of my profession is no exaggeration. We've failed so badly.


> Most designers I work with don't have the vocabulary or knowledge about what makes something "usable" in the wider sense, let alone the ability to actually solve subtle interaction design or information display problems.

That's wild. We _had_ to read Design of Everyday Things as part of an HCI course in my Software Engineering curriculum. You couldn't get your degree without having some working knowledge of affordances and signifiers. I was under the impression, at the time, that we were dipping our toes into a world which UI/UX designers were intimately familiar with. Perhaps not.


As someone who works as a service designer and design strategist, I agree, 100%. I’ve seen a general decline in rigor in the field. Many are back to making things “pretty” with poorly done research (if that) asking small-minded questions rather than thinking about context, ecosystem and systemic effects of their design decisions.


I recently took part in a panel discussion about design strategy. I kept on being dragged into discussions about project management and process. In the end, I just gave up trying to explain and they all triumphantly celebrated a "good design strategy" as being how you intend to build the thing. The question of why, or on what basis, just seemed irrelevant.


So, that kills a small part of me to hear - but also, I think it reflects this larger pattern of the UI/UX world's seeming need to reinvent not just the wheel but large chunks of the history of the discipline of design, rather than building off of the thinking about systems and systems of systems by people like Jay Doblin and others.


As a frontend dev, what are good resources to learn about principles and heuristics that I can apply in my work?

I have some knowledge, albeit very primitive, that I picked up from an HCI introduction and some good designers, but nothing too technical / formal yet.


I'm not a UI/UX designer, and am equally frustrated by the failures described in the article and other comments.

One item I can bring to the table from the world of designing cockpits for racecars and airplanes is that the primary principle to follow is:

Reduce Driver/Pilot Workload.

Does whatever you are doing around the workspace increase or decrease the work that the driver or pilot must do? Can the status of [thing] be discovered (read/heard/felt) with minimum time and effort? Does [thing] create an otherwise unnecessary need to take even a quick glance at something? Is something in the way of taking an action, requiring an extra motion?

Of course, there are some things for which it IS desirable to add an extra bit of work, e.g., a switch that kills critical [thing] should maybe have a cover over it, requiring you to flip up the cover and then hit the switch; an extra moment of thought.

This one principle translates well to software interfaces. Screw whether it "looks clean" or not — does it reduce or increase the user's workload? Even a tiny change can make a huge difference, because many actions are repeated.

I hope this helps


Also, frequently needed actions should be doable mindlessly. With hotkeys, for example. The example from the world of physical UIs would be that one Tesla car that does everything — including shifting gears and adjusting AC — through a huge touchscreen. You need to actually take your eyes off the road and dashboard to operate it because you can't do it without receiving visual feedback. If it had a physical lever for gears and knobs for the AC, you would've been able to control these things using just your muscle memory.


A few key items to consider in reducing workload:

Reduce the thinking load

Implement things that actually reduce the need for the driver/pilot/user to think, allowing just trained reactions.

Be careful with this sort of "automation" — it has to be absolutely reliable, i.e., the unthinking response needs to be correct every time, and the feature must not lead the driver/pilot/user into errors — occasional errors will force the driver/pilot/user to think MORE, e.g., "wait, will this lead me to an error this time?!?", making everything actually slower. OTOH, if you can truly reliably automatically manage some part of the driver/pilot/user workload, that frees up mental space for every other part of the workload.

Speed Is Life

The interface MUST be faster than the fastest driver/pilot/user. ANY perceptual delay causes multiples of that delay in the response input from the driver/pilot/user, or creates a bad feedback loop (start correcting, response data is delayed, leading to overcorrection, then bigger input the other direction, which data is also delayed, bigger over-reverse correction... crash). In software UI/UX there may not be a risk of a physical crash, but the effects of even microscopic delay are insanely corrosive on productivity; think 1-2 orders of magnitude.
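
A minimal sketch (my addition, not the parent's) of why that feedback loop goes bad: the "user" below applies a proportional correction based on the position they perceive, but perception lags by a few steps. With zero delay the error dies out; with a small delay the same correction gain overshoots, reverses, and oscillates.

```typescript
// target = 0; start 10 units off. Each step the user corrects proportionally
// to the error they *see*, which is `delay` steps old.
function simulate(delay: number, gain = 0.8, steps = 20): number[] {
  const positions: number[] = [10];
  for (let t = 0; t < steps; t++) {
    const seen = positions[Math.max(0, positions.length - 1 - delay)];
    positions.push(positions[positions.length - 1] - gain * seen);
  }
  return positions;
}

console.log(simulate(0).map(x => x.toFixed(1)).join(" ")); // error shrinks every step
console.log(simulate(3).map(x => x.toFixed(1)).join(" ")); // overshoot, growing oscillation
```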


> the primary principle to follow

Yes. I spend a lot of effort with people at all levels trying to get them to agree on what, fundamentally, the system should achieve for the user. Sometimes it's speed, sometimes entertainment, trust or some other outcome. But that principle needs to be expressed and agreed by those executing the design, if only to prioritise features. I'm always astonished that they seldom want to talk about that, as if it's mindlessly theoretical or something.

> a switch that kills critical [thing] maybe should have a cover over it requiring to flip up the cover then hit the switch; an extra moment of thought.

As an aside, in the digital world a better pattern is usually to have an undo (probably executed as a delayed trigger allowing you to undo it for a time). This allows you to be confident in using the system in the knowledge that if you do something you don't intend, you can just take it back. Not a thing for cars or planes though, I admit.


> I'm not a UI/UX designer,

> One item I can bring to the table from the world of designing cockpits for racecars and airplanes

Sounds like you are most definitely a UI/UX designer. Those are interfaces. Those are experiences.


I wish somebody would take that approach with Visual Studio Code, which is -- I think -- a particularly egregious offender. I would dearly love it if somebody would take a 21st-century approach to time-and-motion studies for programming in Visual Studio Code. I think Visual Studio actually did something along the lines of time-and-motion studies at some point in its long history. But I'm increasingly thinking that Visual Studio Code is having a 3 or 4x impact on my productivity.

Things seem to start out well with blank projects; but eventually, somewhere around the mid-scale 30k+ line point, all the advantages of all those incredibly great features in VS Code seem to be offset by scaling issues.

There was a point recently at which I seriously thought I was losing my programming edge. But a recent adventure with a non-VSCode project, and a recent discovery with respect to the VSCode debugger is making me wonder whether it's my tools that have lost their edge.

There are a bunch of things going on in that user interface of Visual Studio code that are completely tanking my productivity.

A telling example: When using the Visual Studio Code GDB debugger, Visual Studio Code fetches the local variables of ALL stack frames on ALL threads every time you single-step. This happens even when the thread views are collapsed. As a consequence, it takes between 10 and 30 seconds (depending on how thread-heavy your app is) to single-step once. I was pushed over the edge when I dropped into a new project and Visual Studio Code was taking three minutes to single-step. (Not a memory issue, not a disk issue, not an inadequate machine issue).

So I did some research. This has been an issue in Visual Studio Code since 2016! The solution (according to a message posted in 2016 somewhere on the internet): switch to the CodeLLDB debugger extension.

The result: on the 3-minute code base, single steps take distinctly less than a third of a second.

It's like the frog in a boiling pot of water: you don't realize the toxic effect that kind of latency has on your debugging productivity until it instantly goes away. 3-minute single-stepping is obviously impossible to work with; but the cumulative effect of taking 10 seconds to single-step is definitely large. I can debug things orders of magnitude faster than I could before. I don't lose my train of thought. I can set breakpoints fearlessly, and single-step through dozens of lines of code -- all things I couldn't do with the default Visual Studio Code debugger.

The only peculiar side effect of CodeLLDB: the debugger expression evaluators use Rust expression syntax even for C++ code. Which isn't actually terrible. It's just very strange.

But there are other things as well: the latency on Intellisense updates, where you have to wait 10 or 20 or 30 seconds for the error squigglies to update. Every edit becomes: type a few characters, wait for 30 seconds to see if Intellisense likes it, type a few characters.... And WHATEVER you do, don't unbalance parens or curlies, which will pin all available CPUs for minutes at a time! (Huge recent productivity improvement -- set the number of Intellisense threads to the number of CPUs minus 1!).

In the old days, on much less capable hardware, a compile would fail in 3 or 4 seconds; so you could just "type a few characters; press F7, wait 4(!) seconds, and see if the squiggly went away. Although typically, you would fix a few things, wait 4(!) seconds and see if the squigglies went away. (More-or-less instantaneous checks were run in the editor to check for paren/brace balancing).

Instead, VSCode accumulates rat's nests of red squigglies that won't go away until background Intellisense completes (which could be 90 seconds or more), or until a compile completes, which runs the constant risk of facing a list of "12,000+"(!) errors in the error window, because you have an unbalanced paren or brace, which can take 10 or 15 minutes to straighten out. And the three or four G++ errors that are legitimate get buried in thousands of Intellisense errors that won't go away (that's a recent regression; you used to be able to temporarily filter out Intellisense errors).

I'm increasingly thinking that Intellisense for C++ is dramatically tanking my productivity as well! I mean it does actually work on toolable languages, like C#+Visual Studio, or Java+Android Studio, where the tooling update time is sub-10-seconds. But for C++ it seems to be a disaster. (Wondering whether it actually does work, though).

Build turnaround: there is NO obvious point in the Visual Studio UI to indicate that a build has completed. A tiny status message buried in the status bar. For some reason, my build output window always seems to come unglued from auto-scroll. And VSCode auto-tabs away from the Errors Window to the Build Window without un-de-autoscrolling. So you hit the F7 key, your eyes glaze over while a toxically overlong build runs for a minute past the point where the error occurred, and you snap to, 3 minutes later, realizing that the build finished with errors, ages ago.

Mouse-keyboard context changes. Switching your left hand from mouse to keyboard and back is an expensive operation relatively speaking. By my best estimate, I spend about 40% of my editing time switching between mouse and cursor keys. Whereas... on Visual Studio, major editing operations can be performed entirely from the keyboard. (ctrl+space, cursor cursor, carriage-return is burned into my muscle memory for some reason, even though it's been years since I used Visual Studio -- or is that Android Studio?). There's something seriously wrong there too in the arrangement of short-cut keys, mouse and cursor movements.

Ctrl+Left/Right Arrow! There's a simple thing that is bizarrely broken, that has a dramatic effect on my productivity. It does something seriously wrong, I'm not sure what, but it's not productive! It consistently takes you to the wrong edge of identifiers and operators, so you end up pounding on unmodified left or right arrow to get to the right place. I'm DEAD certain Visual Studio does it differently; and it's not a problem I've noticed at all in Android Studio.

I'm convinced that a little time spent on time and motion studies could increase the productivity of average developers by dramatically large amounts. Something in the order of 2 or 3- or 4-times more productive. Most especially for Visual Studio Code, which -- for reasons I still don't completely understand -- seems to be extra-double-plus toxic for productivity.


The Design of Everyday Things, by Don Norman is a good place to start.

https://www.amazon.com/Design-Everyday-Things-Revised-Expand...


It's more about trying to work out what motivates people to do things while using some reasonably simple tools like the Nielsen Norman (NN/g) web heuristics to guide you. More a way of thinking straight about what qualifies as a good design under a certain condition, and what hypotheses to research and test out of that. So it's contextual. It may seem like there should be a more logical formula, but there's really not a lot of technical/formal things (along the lines of, say, accounting or structural engineering) that lead you to the best approach. My frustration is not that people don't necessarily know about these things (although I bet if I mentioned Bruce Tog to anyone at work they'd give me a blank stare), but that they can't marshal them in their work.


I think it's funny you don't see a connection between "Most designers I work with don't have the vocabulary or knowledge about what makes something "usable" in the wider sense" and your inability to answer the question "how do I improve the usability of my work (in the wider sense)." Maybe that's a place to start.


Sounds like the syndrome plaguing GNOME. :)


I've always assumed that GNOME developers don't actually use GNOME.

edit - (Not to disparage a ton of work by a ton of people )


>the near extinction of any professional discussion or capable analysis of design by the people who are doing it.

People don't think through the designs then launch a poorly designed test that can't produce meaningful results. Copy the competition, latest designs, or hope the customer can do it. It's easier to do and defend. Put in the minimal amount of mental labor required to get paid.

Not assigning blame here. These folks all work in an oversubscribed field subject to the whims and opinions of execs focused more on delivering visual appeal or feelings than usability.

It's marketing. It doesn't really matter how usable your product is if people don't want to use it. Many industries make very usable things. Notably the military. They're just ugly.


The author mentions they reserve judgement about Mac OS because of lack of experience. But I would say the same decline is happening here. Especially Apple themselves seem infatuated with hiding things behind slow hover effects and way-too-subtle shades of grey.

This hurts the discoverability of the interface, probably in the name of looking sleek, and once you have learned that something exists somewhere behind that hover animation, you can't easily target it, because the actual target area is not visible and active until after you have moved the mouse into the exact area. Most egregious are the Music app and notification actions.

Edit: Another issue is the lack of a distinct draggable area in window titles; I never understood why one pixel makes the window draggable while the one below it, which looks identical, does not.


Agreed about discoverability issues. Everything is hidden. Hiding power user features is one thing, but you can't hide the basic stuff.

IMHO UI usability hasn't been Apple's strong suit for a long time. In fact, as someone who has to use both Mac and PC, MacOS is just as terrible in many ways, if not worse.

What they do well is device integration usability across their ecosystem.


Agreed. For me personally macOS (or OS X back then) usability peaked at around ~Snow Leopard and it's been _mostly_ though not entirely downhill from there on.

Hard to pick a 'worst offender'; the System Settings panel in Ventura may be a good example. It is a literal struggle every single time I need to change something, typically as a result of some setting mysteriously changing itself back to what I do not want.


This thread correctly identifies "discoverability", perhaps aka "progressive disclosure" or "some form of incrementalism", as the measure by which good user interfaces are evaluated. Operating systems such as Windows, OSX, and Linux -- inevitably -- contain much complexity, and multiple modules, not all of which are necessarily in use by the users at any given time.

Therefore, the problem of GUI design becomes one of navigating the complexity/depth-of-knowledge curve, as much as it does creating a usable set of features at each point along it.

The most egregious offenders are the latest Windows, 10 and later, IMHO. When I was a kid, (coming from the world of RISC OS which had a much shallower GUI) trying to understand the big deep Windows interface, I joined the dots by telling myself "This is how they put it together. This is how they are trying to make the things work consistently and in concert." I rarely broke out of that story-telling loop.

Now, when using Windows 10 or later, a few clicks brings one to a network dialogue box from the Windows 2000 era or before; the "fourth wall", as it were, of GUI design is broken, the story-telling loop above is replaced by a jarring feeling of bewilderment or all-too-familiar frustration, and it's obvious they are shipping the org chart, and doing so over time, as much older GUI components are still available.

At the risk of sounding nihilistic, I fear that both OS X and Windows have reached their respective local maxima of design, and we won't see any changes for the better (by the above metric) any time soon.


>Operating systems such as Windows, OSX, and Linux -- inevitability -- contain much complexity.

Operating systems should be modular, allowing the user to reduce as much bloating complexity as possible and needed.

I do not need unstoppable “gamed” and “studentd” services on my computer. The sealed system implemented by Apple has actually meant the overthrow of the root user by the corporation. Now I can’t reduce complexity, bloatware.

The only solution seems to be praying that BSD or Linux become useable one day


> Operating systems should be modular, allowing the user to reduce as much bloating complexity as possible and needed.

Yup, updated my comment to clarify that point -- thank you. "the overthrow of the root user by the corporation" -- damn. Good turn of phrase :-(


If the sealed system causes performance or security problems, it's a bad design; I'd rather it be optimized than modularized.

Modular systems have more possibilities, which means some of those possibilities might not work, and then you might find yourself unable to use an app without reconfiguring in a way that breaks some other app.

They are great for tinkerers and specialized applications, but for people who want things to Just Work, every time, no matter what, even at the cost of zero hacker friendliness, Android's model seems to completely blow Linux away.

We should still have root(I think lack of that might be part of why Android hasn't started taking over other markets), but there should be obvious and standard ways to do common things.

The commercial OSes aren't perfect, but they are popular with consumers for a reason.

I think NixOS or similar is Linux's only hope of matching that without doing containers for everything, which would likely have many of its own problems.

Either that or we just stop using dynamic linking, which would probably make most of Linux's problems go away instantly.


>The commercial OSes aren't perfect, but they are popular with consumers for a reason.

I don't think it's clear that those reasons have much to do with any property of the OSes though.


From just a UI point of view, my preference is MacOS 8 — and I don't mean Mountain Lion, I mean Classic.

The rest of the OS would be a nightmare today for so many reasons; but the UI was fantastic.


I loved MacOS 8 as well. It was a nice improvement over System 7.6.

The only thing I didn't like about MacOS was the occasional system lock-up and application crashes due to the lack of real multi-threading. Back then, when your Mac crashed, it crashed hard... like your cursor didn't move. ;-)


Ah yes, Mac OS 8, I have very, very limited experience with it, mostly in the form of one- or two-week bursts of taking over from a coworker whenever he went on holiday. It's a loooong time ago, but on the negatives I remember getting very annoyed with this app called Toaster; on the positives I remember thinking how extremely pretty everything looked (still does) and being blown away by little things like being able to set the icon for a folder or document by just copying and pasting one over from another file :)


I am not afraid of the decadence they’ve made of the settings panel because I use Alfred.

I haven’t yet installed Ventura. I had to downgrade from Monterey because the cmd-L keyboard shortcut had stopped working in Contacts.app


You can remap cmd-L in settings globally or for each app, did you try that?


Yeah. I first started using MacOS seriously for a job a few years ago, and I was amazed at how poor the discoverability was, especially given its reputation. Important functions were hidden behind arcane modifier combinations or only available via "intuitive" gestures that you'd never work out on your own. I don't think this was just a case of unfamiliarity - as a usually-Linux user I've been unfamiliar with Windows functions before, but still managed to work them out without resorting to Google, even in the post-8 era

And the window management as a whole seems like a case of form over function, with no way to see what windows are actually open, or switch between several of the same application, without multi-step processes (Expose/MC or cmd-tab cycling)

And also the menu bar is a massive violation of the "controls should be obviously linked to the area they affect" principle, but it seems we're stuck with this space-saving measure from 40 years ago since it's now part of the brand identity


> And also the menu bar is a massive violation of the "controls should be obviously linked to the area they affect" principle,

On the other hand, a menu bar at the edge of the screen has an infinite size if you are using a mouse, a trackball or a touchpad. But its size is finite if you are using a touch display. Let me explain what I mean by infinite size: you can overshoot arbitrarily far when using the mouse and you will still be inside the menu, since the top of the menu is the screen edge.

While writing this, I've noticed that in a maximized Firefox under Gnome this isn't true anymore. There is at least one line of pixels above the menu. When you click on them, nothing happens. Therefore I cannot overshoot anymore. Another failure of modern UIs.
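
The "infinite size" intuition is usually stated via Fitts's law; a sketch of the arithmetic (my addition, with made-up constants) shows why an edge-pinned menu bar stays fast to hit:

```typescript
// Fitts's law (Shannon form): movement time grows with log2(D/W + 1),
// where D is the distance to the target and W its width along the direction
// of travel. a and b are device/user constants (made up here).
function fittsTime(a: number, b: number, distance: number, width: number): number {
  return a + b * Math.log2(distance / width + 1);
}

// A 20px-tall in-window target vs. an edge target you cannot overshoot
// (effectively enormous W in the travel direction):
console.log(fittsTime(0.1, 0.15, 800, 20));    // ~0.9s
console.log(fittsTime(0.1, 0.15, 800, 10000)); // ~0.12s
```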


Somewhat related:

As someone who likes working on a single screen I typically like to open things in full-screen mode by default.

MacOS plays a sluggish animation when switching between screens. You can't turn it off, you can just switch between two similarly slow ones.

I can't imagine a good reason for this. What is their model of a user and how did they derive "slow animation when switching windows is good" from that?


People absolutely LOVE slow animations for general OS use.

Personally, I want for what I asked to happen to actually happen when I press a button, but I am far and away the minority.

What’s even worse is that developers today have taken this “people love animations” stance and decided “hey, we can get away with utterly shit-tier performance by just making an animation!” And so 15-second page loads are now the norm because you are rewarded with a few fade effects now and again.

And now the “fuck performance” has invaded literally everything.

We now even have technology being built for the express purpose of letting developers off the hook for performance characteristics (looking hard at you, DLSS). Somehow, the general public has decided that DLSS is an optimization, and not actually just a way to not optimize at all. In the conversation about Starfield's utterly dogshit optimization, the first "optimization" always brought up is DLSS.

Our industry is completely fucking sideways.


Every application developer or UI designer should be forced to sit for a couple of hours next to a non-technical person who uses a slow machine (partly because it's infested by programs and services that they absolutely don't need) while they browse the web or use common applications.

It breaks my heart how much precious time and nerves of these users we're wasting.

They use outdated devices that should still work perfectly fine, but are ground down by extremely slow programs.

They don't understand many of the fancy UI patterns _at all_. UI elements without text are the worst offender. Animations and hover effects routinely and reliably get in their way.

They get confused all the time because their UI changes: "I could always click the frobniz here, but now it's gone and everything looks different."

Their system is basically stuck in "dark patterns".

Their internet connections are often relatively slow and unreliable, but common software assumes it's always available, going full blast.

To do even basic tasks, they are nudged into registering to this or that, when often a simpler, less invasive solution exists.

They spend ungodly amounts of time navigating stuff, while getting distracted by notifications, misleading information and so on. Often giving up in complete frustration.


>UI elements without text are the worst offender.

Those are only second worst. What's worst is when there's also no border or any other indication that it's interactable rather than just an icon. Actually, that's only second worst: I've seen radio buttons used for multiselection. That goes beyond merely unhelpful to outright trickery.


Honestly, yes. There is an insane amount of computer literacy that is required to use a lot of programs. You have to understand symbols that have no clear meaning apart from the instinct that you get from constantly using a computer. Things that disappear suddenly, like the scroll bars, are totally baffling and disorienting for older users. I mean, even as a technical user I am confused as to why so often you have to dig so hard to find basic settings, or why there are always seemingly 3 different ways to get to settings for something on Windows. And yeah, there should be no reason that every day-to-day program should not be able to run on a machine from several years ago at this point.


My new favorite punching bag for this is Discourse. It's literally just a forum. Some text and maybe a few images. Nothing crazy. But their "new and improved" system has a stupid lazy-load design that takes 7 _seconds_ to load. The root document https transfer completes in 500 ms. What the flying fuck is going on in that remaining 6500 ms?

And it's not just the first time. Every time you refresh the page, 7 seconds. And they have the audacity to call this towering pile of javascript nonsense "simple": https://www.discourse.org/about


Same with the animation when switching to/from full screen. I mostly just want it to happen in an instant.


It's been a staple Human Interface Guideline that applications should re-open in the layout and position they were in when they quit. Personally I barely use full screen at all in Mac OS, but I can see why you'd want it. It is the default in some apps; it should be the default in all apps, but it isn't.


> MacOS plays a sluggish animation when switching between screens...

Perhaps:

System Settings... Accessibility > Display: Reduce Motion


That just switches from one slow animation to another.


And I don't want or need, for my own use, some Accessibility-defined systemwide setting that tells all apps to try not having motion. It sounds nice to have…should I ever need that. But at least from Apple's description it seems invasive and like it might possibly disable untold things that I don't want to mess with.

I want to speed up or eliminate certain decorative things, like this particular animation. Since there's no obvious no-side-effects way, I just grumble and occasionally rediscover UI performance when I use some other system.


Recently I bought an MBP and I'm surprised by how hard it is to use.

Printing on both sides? Not supported; you must do it by hand. And some apps print the pages in reverse order, so you end up having to re-sort the pages.

Use numberpad . as . instead of ,? You have to download an app to use a sane default.

Alt+. in terminal? No luck if you aren't using an English keyboard.

Run a downloaded app? Run it, accept failure, settings, security, look for blocked app, enable it. It is even worse than Windows Vista.

Give some app permissions? Yeah, you must know which permissions, as you cannot search by app...

And so on...


It has been possible to print double-sided since at least Lion/2008…


Only if you have a printer that supports it. Otherwise you have to do it by hand https://www.businessinsider.com/guides/tech/how-to-print-dou...


If the hardware itself doesn't support it, how would the software?


Windows approach: Print even pages first, ask the user to move the printed pages back to the printer and then print the odd pages later. The result is ready to be read.

On a Mac you need to print one side first, move the pages manually, then print the other side. But you have to remember to add an extra page if the page count is odd, and be careful if an app tries to be intelligent and sorts the pages in a different way...

In Windows all those calculations are done by the system. It is easy! Just print and follow the instructions. On a Mac it is painful.
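
A sketch of the bookkeeping being described (my wording, not the commenter's): pad the document to an even page count, print one pass, have the user re-feed the stack, then print the other pass.

```typescript
// Split 1..N into the two passes of a manual duplex job. Which pass prints
// first, and whether it runs in reverse, varies by printer and driver; this
// only shows the padding and even/odd split the parent comment describes.
function manualDuplexPasses(pageCount: number): { firstPass: number[]; secondPass: number[] } {
  const padded = pageCount % 2 === 0 ? pageCount : pageCount + 1; // blank page if odd
  const firstPass: number[] = [];
  const secondPass: number[] = [];
  for (let p = 1; p <= padded; p++) {
    (p % 2 === 0 ? firstPass : secondPass).push(p);
  }
  return { firstPass, secondPass };
}

// 5 logical pages -> pad to 6: print 2, 4, 6 (6 is blank), re-feed the stack,
// then print 1, 3, 5.
console.log(manualDuplexPasses(5));
```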


The issue is that the printer manufacturer needs to provide a printer driver for MacOS that gives MacOS access to that printer capability. (One could argue that Apple should write the driver, and they might for very popular devices.)


The blocked app shit really gets me. Installing compilers by hand is always a fight and a half.


It always reminds me of the shade Apple threw at Windows Vista. Chickens came home to roost. https://youtu.be/8CwoluNRSSc


Or what about the scrollbar hiding automatically, so it's really hard to manually grab the scrollbar handle and scroll with your mouse. Especially in long lists it's super annoying.


There's a setting to always show the scrollbars.


Ah, yes, the "Beware of the Leopard" school of design. Everything is possible, nothing is forbidden, but the defaults are terrible and the controls are hidden as well as they could possibly hide them.



> and way-too-subtle shades of grey

MacOS went downhill when Jonathan Ive decided to throw out all colors in favor of the 'content'. To this day I spend too much time deciphering blue-greyish icon shapes in the Finder's side bar.


I may be misremembering, but I’m pretty sure the monochrome icons began well before Ive had influence on the software. OS X moved towards shape rather than color in eg iconography to support UI scaling (which had been an effort, and a more ambitious one, for years prior); and then eventually to support adaptive theming most recognizable (for its value anyway) in dark mode.


> hiding things behind slow hover effects

I can’t for the life of me understand why it’s a good idea to hide the close button for notifications or virtual desktops behind a long mouseover on any spot that is not the actual location of where the close button pops up.


Perhaps they are creating a problem to push you onto their solution.

When you can't discover things yourself, that annoying AI voice-assistant might become needed.


Somebody whose name I can't remember said something like: Apple designs a UI that people want to buy - it looks cool in the Apple store - not one they want to use.


All true. It goes even farther: hiding the file structure, and even the concept of files. Ask a nontechnical Windows or Mac user where their files are stored - they have no idea. Ask if they are in the cloud - no idea. Worse - just as with UIs - there is no consistency where different apps store their data. We also shouldn't forget the idiocy of hiding file-extensions.

To add insult to injury Windows lies to non-English speakers. In German, for example, it will tell you your home directory is in "Benutzer", but there actually is no such directory, there is only "Users". These attempts to hide complexity wind up creating confusion and more complexity.


> In German, for example, it will tell you your home directory is in "Benutzer", but there actually is no such directory, there is only "Users".

Fantastisch. /s

Reminds me of a different issue we had back at my first permanent iOS role. We wrote about a "button" in the app description, the German translation of which is "Knopf"; this triggered Apple's naughty word filter, presumably because "Knopf" is also a translation of "knob", but "Knopf" doesn't have the penile connotations in German that "knob" has in English.


> Ask a nontechnical Windows or Mac user where their files are stored - they have no idea.

This. The move towards “everything on the cloud” was a double edged sword for non-power users. It’s great that you don’t have to make Grandma plug in a external USB drive and use Time Machine, but yes nobody knows where things are anymore. Does Grandma know what “the cloud” even is?

I feel like Apple is the biggest proponent of moving away from the idea of “files” and having your “content” be something you shuttle around between different apps. But each one has a slightly different “share” and “import” and “export” paradigm as well. And now there is an iCloud Drive “files” app! What does that mean? And how do I know if it’s on my phone or I’m just seeing a representation of what’s on my iCloud Drive? Does the cloud icon mean I’ve downloaded it or not, what if it’s it slightly dark gray or slightly light gray? </rant>


> We also shouldn't forget the idiocy of hiding file-extensions.

That must be the number one reason why .jpg.exe with a jpg-looking icon still works on a surprising number of people. And that's with even the most non-technical users knowing what file extensions are and how they work.
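
For what it's worth, the mechanics of that trick are just the display layer stripping the final extension; a toy sketch (mine, not the commenter's):

```typescript
// With "hide known extensions" on, only the last extension is dropped for
// display, so photo.jpg.exe is shown as photo.jpg while still running as .exe.
function displayName(filename: string, hideKnownExtensions: boolean): string {
  if (!hideKnownExtensions) return filename;
  return filename.replace(/\.[^.]+$/, ""); // drop only the final extension
}

console.log(displayName("photo.jpg.exe", true));  // "photo.jpg"
console.log(displayName("photo.jpg.exe", false)); // "photo.jpg.exe"
```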


And even if you do launch Explorer and start looking for your files, you'll find... what, THREE copies of your home-directory structure, two of which are "forbidden?"

WTF is all that shit, Microsoft?

Oh and another Explorer regression: It used to show little "+" signs next to directories that actually contained things. Microsoft GOT RID OF THOSE.

Oh... unless you happen to roll your cursor through the left pane. Then they inexplicably appear. Or did MS regress even further, from the universally-understood "+" symbol to idiotic triangles?

You can't even keep up with all of the regressions at this point.


It’s all about files. That we have a photos app with a custom database is an obvious failure of the OS file system. Why didn’t the BeOS approach work?


Yeah, people keep complaining that apps are hiding the concept of a hierarchical FS from the user, but honestly I can understand why they do it. The hierarchical model just isn't a very good model for almost any nontrivial (i.e., more than 1-dimensional) logical model of data. For all we rightly complain about how much time is spent reinventing CRUD programs that simply list your files in a relational interface, few people seem to ask why we keep feeling the need to do that.

Proper use of file attributes could help a lot, but while almost all modern OSes and FSes support them, AFAIK only MacOS makes much non-system use of them, and they have poor portability across systems. Even copying a file can lose them if the utility is not aware of them


Seems like it would be easy enough to have first class translations at the file system API levels, so you could make a FileSystemInterface class and choose if you wanted to have standard or translated names, or stay compatible with both.

Or you could have Benutzer be a symlink to Users (which could then be hidden), so that it is still "real" and fully compatible all the time.
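
A sketch of the "translation at the API level" idea (the names and mappings here are hypothetical, not how Windows actually implements its localized shell folders): paths on disk stay canonical, and localized names exist only as a lookup/display layer, so "Benutzer" and "Users" resolve to the same real directory.

```typescript
const displayToCanonical: Record<string, string> = {
  Benutzer: "Users",
  Programme: "Program Files",
};

// Resolve a path typed with localized segment names to the canonical on-disk path.
function toCanonicalPath(displayPath: string): string {
  return displayPath
    .split(/[\\/]+/)
    .map(seg => displayToCanonical[seg] ?? seg)
    .join("/");
}

// Render a canonical path back with localized names for display.
function toDisplayPath(canonicalPath: string): string {
  const canonicalToDisplay = Object.fromEntries(
    Object.entries(displayToCanonical).map(([display, canonical]) => [canonical, display])
  );
  return canonicalPath
    .split("/")
    .map(seg => canonicalToDisplay[seg] ?? seg)
    .join("/");
}

console.log(toCanonicalPath("C:\\Benutzer\\anna")); // "C:/Users/anna"
console.log(toDisplayPath("C:/Users/anna"));        // "C:/Benutzer/anna"
```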


I see people losing files in Windows all the time.


Despite the gutting of desktop usability for the sake of being mobile compatible, many environments still have zero presence on mobile. Can I PLEASE have my fucking scrollbar back now?

I once spent a lot of my time, and the time of a developer, trying to find a setting because there was no indication that a window had more content (a checkbox) to scroll down to. Something that would have been obvious before the onslaught of hidden scrollbars. The trouble is that having my pointer over the navigation pane--practically a guaranteed position--causes the scrollbar on the other pane to be hidden. Without the visual cue of a scrollbar there was no reason to move my pointer over to the other pane to discover there's more. Hell, you might not even know it's a separate pane now that we've gotten rid of every defining border. I shared a screenshot with the developer, assured them that I was using the current version, only to have him say "scroll down." No doubt, I'm the fucking idiot (/s).

Just like on mobile, you're supposed to randomly interact with every UI element in hopes of discovering how it works, only to have that learned skill be unique to one fucking app. Tap it, slow tap it, slow tap it for a different amount of time, tap it faster, spam it... "google it"... oh, this time you're supposed to drag it to something that doesn't even look like a UI element. Stupid grandpas!

"Is the checkbox checked?" was never as ambiguous as "is the slider switch on?" Also, the checkbox uses less screen space! I'd argue that they optimized for neither screen space or user friendliness. It's optimized for a look and you can even make it worse by making it flatter. Go ahead make it look like two squares! Is the darker area the switch part? Who cares! It looks so clean and distraction free! I was so distracted by knowing what state the switch was in.

Sorry, time for my meds. I usually make it half way through the day.


I feel you. There is nothing today quite like _Inside Macintosh Volume I_ laying out in drawings and prose (!) what the elements of a UI are, what they do, and how they work... almost like people had never seen a UI before. People must have had to think about it; they wrote a book, for F sake. /s

They devote ink and paper to declaring that modes are to be avoided, and why. The do's and don'ts of UI on page 70 are worth repeating:

Do:

* Let the user have as much control as possible over the appearance of objects.

* Use verbs for menu commands that perform actions.

* Make alerts self explanatory.

* Use controls and other graphics instead of just menu commands.

Don't:

* Overuse modes (again!).

* Require keyboard / mouse when the operation would be easier with the other.

* Change the way the screen looks unexpectedly, especially scrolling.

* Redraw objects unnecessarily.

* Make up your own menus and give them the same name as standard ones (they define the standard ones in this book, you know: About, File, Edit as well as what goes in them. yes, yes, this is where it all started).

We've meandered into a bullshit local minimum where there is the One True UI and it's different for every app, but the same for every user. Meanwhile in industrial control where a $50,000 piece of equipment has its own app, used by maybe one person or three if it's operated 24 hours a day, the responsive mobile interface is as easy to lay out as slides in a slide deck and takes about as long to do. Hell, a customizable dashboard is a widget.

If the cloud made shoes there would be different shoes for grass and concrete, but they'd all be the same size and you'd have to cut off toes if your feet were too big or stuff them with prostheses if they were too small.


Today is your lucky day. Have a new word; Procrustean.

"forcing strict conformity through disregard of individual differences or special circumstances"

https://en.wiktionary.org/wiki/procrustean

"In Greek mythology, Procrustes ... was a rogue smith and bandit from Attica who attacked [read: killed] people by stretching them or cutting off their legs, so as to force them to fit the size of an iron bed."

"https://en.wikipedia.org/wiki/Procrustes"


Eh, Apple's ideas are not to be taken as gospel. This is the company that fielded UI that's as bad (or worse) than everything we're complaining about here, decades earlier.

For example: secret alternate menus. You can actually press modifier keys on the Mac while a menu's open and sometimes you get totally different menus. These are not indicated anywhere. So theoretically every menu on a Mac may have... let me do the math here... eight sets of contents using Control, Option, Shift and all combos of those. So according to Apple, you should open every menu and mash every combo of modifier key to see what's in each... and memorize them.

Another Apple menu defect is to start every entry with the same word:

VIEW
Show meters
Show clips
Show this
Hide that
Hide the sense
Show WTF the point is

This makes the first word of every entry nearly useless, and massively degrades the usability of the View menu. You have to sit there and parse the first part of every line, which only has two options... both of which are four characters, BTW, and thus visually the same size.

You don't do this; you use CHECKMARKS, which we learned decades ago. Some Mac apps do this, but many Apple ones still have this asinine convention of "show" and "hide" repeated over and over.

VIEW
• Meters
• Clips
  This
• That
• The sense
  WTF the point is

But that brings us to another classic Mac menu defect: The misuse of the Window menu. This menu is supposed to show names of open windows in an MDI-type situation. But Mac apps often bury View options in the Window menu, apparently expecting the user to guess that whatever they're looking for has been implemented as a window. Why would I go into the Window menu to activate audio meters, for example?

And most of the time, whatever the option is has NOT been implemented as a window.


I'm a UI/UX layperson by any measure, but as an avid vim user, I actually wish modes were everywhere and I could use every app with the keyboard alone


The way Microsoft Office handles this is pretty wonderful. Alt reveals the keyboard shortcuts and they work in a very intuitive way. All pretty discoverable once you learn the one "alt" key trick.

(Side note, for mouse-work, They're now starting to f** with their ribbon a bit and the hidden/simplified version is just terrible, but they haven't forced that upon us yet.)

To handle the lack of vim modes in other apps, I just use a keyboard with my own custom QMK firmware that makes my keyboard have modes. That works well enough for 90% of what I'd be doing in VIM.


Microsoft in particular seems hellbent on putting touchscreens in everything. They REALLY want to make it happen but seemingly no one is having it. There's ONE person I know who owns a touchscreen laptop and genuinely touches its screen on purpose.

On your scrolling complaint — well, there's this desktop-specific thing that OP doesn't mention but that becomes very apparent once you start looking for it in old-school desktop UIs. Controls never, ever scroll. Only content does. If controls don't fit into a window, you don't make it scrollable — you split it into tabs or you put the extra controls into a separate window that opens via a button. This appears to be universal at least for Windows and macOS.


I've had touchscreen laptops for about 7 years now. The only time that the touchscreen gets used is when somebody is showing me something, doesn't realize it's a touchscreen, and accidentally borks whatever we were looking at.

It's strange - I love to optimize my flow, but I just haven't yet hit a situation where my hands leaving the keyboard and touching the screen is faster. (I'm a die-hard and proficient TrackPoint user, and my hands never leave the keyboard, so maybe I'm a special weird case?)


Reaching for the screen all the time also sounds like a shoulder RSI just waiting to happen.


> is when somebody is showing me something, doesn't realize it's a touchscreen, and accidentally borks whatever we were looking at.

It's funny -- I've been using my work laptop for over a year and only yesterday discovered that it has a touch screen by doing exactly this.

I haven't bothered to figure out how to disable the touchscreen yet, but it's on my todo list.


> Microsoft in particular seems hellbent on putting touchscreens in everything. They REALLY want to make it happen but seemingly no one is having it. There's ONE person I know who owns a touchscreen laptop and genuinely touches its screen on purpose.

When I was doing a lot of Android development, it was nice to be able to test touch without loading on a phone. It was equally nice to be able to mark up screenshots with the pen. I switched to Linux from Windows a couple years ago and the new laptop doesn't have touch (it does have a giant 17" panel and all day battery), so I keep a tablet that has a stylus in the bag for doing UI markup and bug reports. I may switch to the 16" Gram because it does have touch and stylus... and I won't have to carry the tablet.


I have to admit that using a touchscreen laptop with a decent pen has made my university life a breeze. Ability to whip out a laptop and have all the notes, from all the years, searchable, with drawings just as good as when they were drawn and even better, because I can fix them and annotate them anytime... makes me feel like a terminator machine.

And it is the main reason I am stuck with Windows, despite also enjoying the 'normal' pc experience on linux much more than Windows bloat.


I have a touchscreen laptop for work and the only real use I've found is testing the behavior of touch-only HMIs.


But then you need fixed window sizes, which is pretty damn annoying as well.


That same Microsoft solved that nicely in win32 by using "DLUs", or "dialog units", for layouts. These scale with the font.


DLUs solve the wrong problem.

You actually want autolayout, so that different languages only need translations and otherwise need almost no extra work.

Linux has, I think, proved that fixed window sizes are needless.
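
To make the contrast concrete, here's a minimal AppKit sketch of the autolayout approach (strings and spacing are illustrative): controls size themselves to their localized text, so a longer translation doesn't require anyone to re-draw the dialog in fixed coordinates or dialog units.

    import AppKit

    func makeSettingsRow() -> NSStackView {
        // Both controls report an intrinsic size based on their (localized) text.
        let label = NSTextField(labelWithString:
            NSLocalizedString("Startup disk:", comment: "settings label"))
        let choose = NSButton(title: NSLocalizedString("Choose…", comment: "settings button"),
                              target: nil, action: nil)

        // The stack view lays them out from those sizes, so a longer German or
        // French string simply makes the row wider instead of getting clipped.
        let row = NSStackView(views: [label, choose])
        row.orientation = .horizontal
        row.spacing = 8
        return row
    }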


> You actually want autolayout, so that different languages only need translations and otherwise need almost no extra work.

What kind of extra work DO they need? I can only think of RTL layouts but I don't remember whether win32 "dialogs" loaded from resources automatically mirror the layout for RTL languages.

> Linux has, I think, proved, that fixed window sizes are needless.

Desktop Linux is an unfixable dumpster fire UX-wise. Don't even get me started.

A serious problem with many open-source GUIs, but especially those on Linux, is that they're built backwards: you first write the code, then build the UI. Your UI ends up being shaped by the underlying implementation of the thing it controls. When in reality you want to do it the other way around: you'd formulate user requirements ("they need to be able to do X and Y"), you'd think through all possible scenarios that the UI must accommodate, you'd make a rough outline of what a UI satisfying all these requirements would look like, and only THEN would you start actually writing any code.


> Desktop Linux is an unfixable dumpster fire UX-wise.

This is exactly how I feel about Windows. Different strokes for different folks, I guess.


Slide switches are just awful. A skeuomorphism that doesn't work. Checkboxes are far better, assuming there aren't double negatives in the label text (e.g. a check means something is disabled.... the developers who do this should be shot at dawn).


I was with you, until you got to the slider switch part. Now I'm with you AND my blood is boiling. Nice to know it's not just me though - I find the ambiguous slider switches in way too many apps now!


And what the fuck is with the wording on check boxes these days?

On top of not even knowing if the sliding circle is actually on or off, half the time I cannot figure out which of on or off I actually want. Double negatives all over the place. Weird wording. No indication of actual impact.

It’s all crazy.


> I once spent a lot of my time and the time of a developer trying to find a setting because there was no indication that a window had more content (a checkbox) to scroll down to.

I've done that, man. Quite embarrassing.


Embarrassing for the app/os designers.


I try to write mine, to avoid that kind of thing. It often involves a lot of arguing with the graphic designer.


We've got some innovation from having everyone invent their own user interfaces for every app, but it's come at a very high cost compared to standard OS GUIs.


This feels like a modern rewrite of "Falling Down" :)

* That's not a criticism. I agree with your take on it.


> Google, for example, have gotten increasingly into some kind of A/B testing of late and ...

no AB test ever performed in the history of user research has correctly measured the intense hormonal rage pheromones released by power users when a button is moved for any reason


One of my frequent hobby horses is the area of portable music players (formerly called Walkmans or MP3 players, currently "DAPs"). I simply don't understand who would want to buy any of the currently available players, because they're all so terrible to use. The good ones were killed off by a combination of streaming services and Android devices. There are no more light-weight, simple players with high capacity, physical buttons and good battery life. You know, the kind of thing you could put into your pocket and take on your run without bruising yourself. The kind of thing that has a legible screen, a sane menu layout, that can read tags from your files and lets you simply drag playlists into them. I know that there are people out there who want the same thing, but manufacturers are instead releasing gigantic bricks with protruding knobs and weird edges, huge battery-draining touch screens, running some dreadful Android hack of an OS. "This one has TWO DACs!". Really! Well, I'm not in a sound-proofed listening room with €10000 monitors, I'm on a bus with some IEMs. Reviews rarely discuss the most important aspects of a portable music device, and instead gush like wine critics about the ... I don't know, smell(?) or taste(?) of the music. I'm only half joking. The more prestigious the reviewer the more bollocks they write, and half the time I can't even find mention of the weight or size. Or when they do, they're plain wrong (e.g. "The Shanling Q1 is a light weight portable device"... No it's f*cking not, it's an awful brick with a shitty OS, bad battery life and overly sensitive buttons). I don't care how well it sounds if I can't get playlists into it and don't want to use it because it's a heavy blob with a shitty interface.

13 years ago Rockbox on the Sansa Clip+ was great. If we could simply produce that with today's tech I'd take it over any of these ludicrous hulking house-bricks that cost an arm and a leg.

Am I off-topic?


Your comment sent me on a multi day journey into DAPs.

Indeed, it's heartbreaking how otherwise capable hardware is ruined by bad design decisions (UI, form factor, touchscreens, ...)

I found a workable solution after many hours:

1. Bought a Hifi Walker H2 on Amazon. It suffers from poor UI design, including typos. Otherwise, the hardware is great.

2. Installed RockBox on it. After a little customization, it resolves the crappy UI. The one pain point is RockBox will not support Bluetooth or USB DAC, if you care about these features.


So I have an update! I bought the HiBY R2 II directly from their website; it shipped immediately (on a weekend!) and I had it in my hands (in Ireland) less than a week later. It's basically everything the Shanling Q1 should have been. It's slimmer, almost half the weight, has a sane OS (so there's no need for Rockbox) and sounds great. It also has better battery life than the Q1, and you can play/pause using an inline remote on your wired headphones (and yes, Shanling screwed that up too). Bluetooth is rock solid. All in all I love it. I can even sync my music and playlists to it with MusicBee (on Linux) with very little messing around, and with a huge SD card it holds all my music.

Edit: the walker was my next choice but I thought they were discontinued? Anyways, the HiBY is available.


My own update: decided to dust off my old iPod and repair the faulty LCD, battery and hard drive.

The (minor) downsides: modern features like Bluetooth and USB-C are possible, but only through soldering.

Rockbox unfortunately lacks polish, and for some reason it turns itself off after playing music for a while.


FYI, a few weeks later with the HiBY and I absolutely love it. I'm getting another one to replace my wife's aging Sony Walkman (one of the extremely simple, lightweight ones). She doesn't want me to, because she loves its size and weight. But she regularly complains about syncing music and playlists to it. And with a 512GB SD card you can just tell MusicBee to "sync everything", which is extremely simple compared to removing stuff you don't listen to and hand-picking the newer stuff that will fit. And of course, Wifi, Bluetooth, etc.

In fact I might buy two, because I know they'll discontinue this thing and I don't know if anybody will make anything like it again.


“I NEED TO SAVE SCREEN REAL ESTATE SO I CAN… CHECKS NOTES… PUT A BUTTON THAT TAKES THE ENTIRE SCREEN”

Yeah. This one always gets me with the whole “moving to the title bar saves screen space”.

The other thing that really grinds my gears today is how many applications steal focus multiple times during launch. The number of people that have accidentally posted passwords in team chats because of bullshit focus stealing is too damn high!


Yeah, is there a web page that tells you how to disable this focus-stealing-on-launch in most OSs ? That would be a public service.


When iOS introduced the long press, that pretty much did it for me: I couldn't tell what user interface element did what when I pressed on it, so I tried tapping everything. And now we have to long-press on things to make them do other things. I can't spend time tapping then long-pressing on elements to do what I think or hope the app can do.


Apple should have established long press for the purpose of context menus in iOS, from the start. When I got my first iPhone around 2009, I was rather surprised that this wasn’t a standard convention, like right-click on the desktop. It seemed so obvious that that’s how things should work. Instead we now have long press for random purposes.
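
For what it's worth, iOS 13 did eventually ship a standard long-press context menu API (UIContextMenuInteraction), roughly the touch analogue of desktop right-click. A minimal UIKit sketch, with a made-up view and actions:

    import UIKit

    final class AvatarViewController: UIViewController, UIContextMenuInteractionDelegate {
        let avatarView = UIImageView()

        override func viewDidLoad() {
            super.viewDidLoad()
            // UIImageView ignores touches by default, so opt it in first.
            avatarView.isUserInteractionEnabled = true
            avatarView.addInteraction(UIContextMenuInteraction(delegate: self))
        }

        func contextMenuInteraction(_ interaction: UIContextMenuInteraction,
                                    configurationForMenuAtLocation location: CGPoint) -> UIContextMenuConfiguration? {
            return UIContextMenuConfiguration(identifier: nil, previewProvider: nil) { _ in
                UIMenu(children: [
                    UIAction(title: "Copy") { _ in /* copy the avatar */ },
                    UIAction(title: "Share…") { _ in /* present a share sheet */ }
                ])
            }
        }
    }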


A UI convention for "long-press-able" might be a start.


IMO it should behave like right-click (context menu) on the desktop, which generally doesn’t require an indication.


May I vote for ellipsis overlaid on one corner of the element?

  -------...-
  | Primary |
  -----------
unobtrusive, "..." indicates more was omitted in every context I can think of, and it can be any number of colors because it's so visually distinct (err, of course, except for the same color as the background, but if someone was that evil, we're right back where we started from in the "I dunno, guess" realm)


iOS didn't introduce it. Windows Mobile definitely had it before 2007. Not sure about Palm OS though.


I cannot prove it, but I think some of these UI and UX horrors are caused by the ever present need for software to change in order to demonstrate it is alive.

Imagine you reach peak usability in $YourApp, in the year of our lord 2023. It's maybe not perfect, but there's no conceivable way your app's usability can be improved past this point. Users are reasonably proficient, everything mostly clicks into place past the inherent hurdles of learning to use an app.

Can this situation stand? Well, no. If you keep your UI as is, the software world will often perceive your app to be "stale" or even dead. So you have to change. But if you change anything after you reached peak usability, you will necessarily worsen your UX, sometimes in catastrophic ways.

I cannot prove it, but I think this is one of the reasons (not the only one) that explains why some app's (and desktop environments!) decent UIs get ruined so thoroughly.


Yes, it's that as well. Imagine you've already designed everything that needs designed across all your products, but you still employ an entire department of graphic designers that need something to do. You wouldn't just fire them, right? That's how all those UI redesigns happen. I've seen it myself. "We need to change it because it's been a while since we last changed it."

The omnipresence of high-speed always-on internet connections exacerbates this issue immensely. Before that, software developers were at least forced to have some defined goals and deadlines for their products so that they could put the thing on CDs or floppies or whatever and ship it to stores — with no easy opportunity to release an update. And when they did release an update, it had to be something meaningful and substantial to convince the users to go through the trouble of obtaining the new version and updating. Modern software, on the other hand, is best characterized by this saying we have in my language, roughly translated as "a samurai doesn't have a goal, he only has his path".


I'm sympathetic to developers trying not to get fired -- I'm a developer myself! But from the user's side it's often ridiculous.


As a developer, I'd much rather look for a job doing something meaningful than have to waste my days doing make-work BS.


I don’t even think it’s that calculated. I think it’s just that there are teams of designers employed there that would have nothing else to do once the design is “finished”.

My ideal is to create a product that is well loved and used, automate the hell out of every conceivable customer support and operational concern, and either/both move on to a different product or kick back on the beach.

Nobody wants change. I only ever see hordes of people complaining whenever any UI changes. There is never an equivalent mob of true believers on the other side of the line.


The stages of decline from user-centred design to abuser-centred design:

-1- The Beginning. We want people to like our product. We will care and listen. We will dedicate resources to usability. Happy customers make us happy.

-2- The Middle. Usability is hard. Building ergonomics and affordance is not as easy as we thought. And then there is Internationalisation (i18n). We had to learn most of this ourselves because no one teaches these domains. Contractors are expensive.

-3- The Decline. We want to develop more features. Hire more programmers. Sack the usability team -- whatever we needed to accomplish is achieved. (Sack the documentation team too. Software developers are engineers... they can write FAQs!)

-4- The Bottom of the Barrel. We threw features galore at our customers in a series of brilliant sprints, yet our customers are unhappy. What's wrong with them?


Completely agree: change for the sake of change. I held my breath for a while after Office 2007, hoping the Open/Libre Office developers wouldn't be tripping over themselves to implement the Ribbon, too. That they've held fast all these years is one small bright spot.


The "ribbon" is when I stopped using Office.


You are even more right considering your modern app is subscription-based, so you must justify the monthly fee. Also, you are paying an entire UX team which was useful for your nearly perfect v1; you'd better feed them some work.


I think we need to move past the first-to-market model and assume a best-in-market model. Allow things to reach peak usability, then maintain them. Add security features and new usability improvements as they're discovered, but let a good product make you money.


> I cannot prove it, but I think some of these UI and UX horrors are caused by the ever present need for software to change in order to demonstrate it is alive.

The sad thing is this is largely driven by managers trying to keep their teams looking busy, and not by customer request. Most of the time, users will ask to make something work better or actually work. Make-work requests often cite some internal usability/HIG guidelines without having any user interest at all.


I would say the same logic applies to products/features and influences the design component. You create a product that reaches some local maximum of utility, but there's constant pressure to churn out new features with increasingly small gains in customer utility. Those features need UI elements, items in menu bars, etc., which results in bloated UIs full of esoteric features.


More than that, you will have to find a new product to demonstrate the value of your continued employment at the company. The team is incentivised to keep tinkering.


I think architecture has the same problem. The best way to stand out in the 50s/60s was to break with every good rule of traditional architecture and create the Brutalist style.

They completely ignored the human emotional aspect of buildings and living spaces.


> They completely ignored the human emotional aspect of buildings and living spaces.

But it turns out they didn't, since the majority of today's architectural design is even more alienating. Brutalism often seems downright cozy in comparison.


I for one miss the 3-D buttons of the Windows 95 era. It was obvious what could be clicked and what could not.


Absolutely agree; I think the anti-skeuomorphic "clean" fashion of the last decade has been terrible for discoverability even in mobile apps, and it's somewhat ironic that the same design cues have gone back to WIMP devices, giving us interfaces designed to look like touch screen mobile apps even though the tech is different — and design cues matching a different technology in order to give a sense of familiarity is the actual definition of skeuomorphic.


Tbf, Apple's baroque "skeuomorphic phase" with wood paneling, shiny metal and cloth was absolutely terrible even back then and most likely responsible for the "skeuomorphic backlash" that followed. Subtle 3D hints for interactive elements always made sense, and don't need to look "old fashioned".


This! I hate iOS’ flat design and the trend after that. When that UI first came out, text and buttons looked so similar that I often had to poke to distinguish them. Now iOS is not as bad but I still prefer its original 3D design. I guess Jobs would never have approved the current iOS UI.


Oh man, I can remember when the Windows 9x/Motif/Nextstep look was so modern, we thought the future of UI was going to be more of the same: huge bevels on the buttons with maybe a texture map. Think the buttons from the menus in Super Mario 64, and then go watch some old 90s scifi or other technology-themed movies and see how much their "futuristic" UIs looked like that. The Lawnmower Man and some scenes from Hackers come to mind.


It was so good that Google incorporated this idea in Material Design


And then all but killed it >:( https://m3.material.io/styles/elevation/overview#f9947307-48...

My first smartphone ran KitKat and IMO it was Google's best work to date, design-wise.


Rather spot-on criticism of all the flat-white-space-low-info-density design paradigms

Though

> On the desktop, however, a scroll bar is very useful for determining your current position in the content without having to break away from what you're presently doing and reach for the mouse.

You don't need a scrollbar for that; a tiny, brightly colored (currently non-existent) indicator would suffice.

Along with

> If you're accustomed to a title bar being for handling the window and nothing else, it's very easy to misclick and activate an application feature you didn't intend to.

These are just a bunch of poor design features from before "the decline", since they nudge the poor user to hunt down small slivers of windows (the horizontal/vertical bar, and seriously, those tiny arrows on the scroll bar?) when instead they could use most of the window's area to perform these functions with some modifier, dare I say, the Windows key?


He's right about Gnome for sure. I used to use Ubuntu 14.04-16.04 from 2014 to 2019 along with macOS all these years.

Although macOS has a lot of problems, I returned to Ubuntu (22.04) in July at a new workplace, and it is horrible. This weird empty horizontal top panel and menus behind the hamburger button are just madness.


I can mostly live with GNOME, but making the top bar mostly composed of empty space and omnipresent while apps try as hard as possible to avoid proper menus makes no sense at all. The space for menus is right there!


Sorry for being obnoxious but I think it’s worse than madness, it’s cruelty.

The GUI makes you look for basic stuff every time, such as gas Lightning


Gnome is pathetically bad at this point. The old desktop CDE is almost as good overall despite its many quirks...


Try PopOS if you want a Linux desktop environment that's not painful to use and looks modern.

Gnome 4 just looks alien. They've abandoned every common desktop metaphor and replaced it with a scuffed android launcher.


> Despite being endlessly fawned over by an army of professionals

Yes, but these "professionals" consider everything in the vacuum of whatever software they're working on. Streamlined, cross-program UX is not just not considered, but actively discouraged. Everything needs to be its own island.

This is why as things progress I move more and more things into Emacs, with only the browser and a few websites remaining outside of it at this point. In Emacs you can't isolate your application to the point where I somehow have to learn your paradigm.


Apple?

The people who removed the menu from the window and keep it separate to tease all users? Maximized windows and full-screen are themselves a pain. I cannot use MacOS. Finder is especially bad.

The partially weird idea of GNOME (Hamburger-Menu) is also a copy of CSD-Style. Putting that aside, GNOME did some really good things (keyboard-centric usage, dash and overview instead of a desktop, no desktop-icons and system-tray). Evolution does a great job in this regard; they use menu-bar and CSD-menu at the same time (configurable). And Evolution doesn’t follow the strict GNOME release cycle, which has some bad side-effects (Epiphany and WebKitGtk should also do that).

PS: GNOME has copied a lot of bad ideas from MacOS and at the same time killed a lot of bad patterns from Windows (e.g. the desktop metaphor).


The separated window menu, which is always attached to the top of the screen, is a usability win. It gives you much more leeway to move your mouse and hit the target you're aiming for. See: Fitts's law.
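
(For reference, and not something the thread spells out: the usual statement of Fitts's law is roughly

    T = a + b \log_2(1 + D/W)

where T is the time to hit a target, D the distance to it, W its size along the direction of travel, and a, b empirically fitted constants. A menu pinned to the screen edge effectively gives the target unbounded depth, because the cursor stops at the edge no matter how far you overshoot, which is why edge and corner targets are so fast to hit.)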


It was a win on the original Mac with its tiny screen.

On a 4K 27" display, when not scaling to make it look like 1080p, the menu is nowhere close to what I am working on, and the pointer often has to travel over multiple other applications to reach it.


I agree with you, except personal preferences get in the way. I would prefer the detached menu bar at the top of a monitor, others like the menu bar on every window, others want neither, or a hamburger menu, or a jumble of confusing tabs, or ...

Preferences are my preference. The interface can be switched around and modified, turned off, turned on, completely removed, installed again. Not just the menu bar, but all sorts of interfaces: windows/tabs > checkbox/radio/list/combo/popup-menus and at various levels: virtual system > system > application > document/file > paste/clipboard/shelf > view/drawer. For example, why can't we have different running Apps appearing as tabs within a single window? I think we could have much much richer interfaces.


> The separated window menu which is always attached to the top of the screen, is a usability win.

Not for me, it's not. I curse it heavily.


The decline in usability ("enshittification") is due to a readily evident change in incentives behind the software.

You're no longer selling a shrink-wrapped application in a competitive market, where you compete on usability.

You're giving it away, while figuring out how to extract value out of the user in other ways.

Online services that have captive users (e.g. the one and only government site you have to use for some service) don't have any incentive to improve their user experience. But that is not new. There was no previous version of that which had a better user experience; the previous version was an obstinate bureaucracy driven by paper pushing against a backdrop of rigid protocols. What you see is just the computerization of that.


I once happened to be part of a project where the design was done by a young female friend of mine. After reviewing it (lots of subtle gray tones, including dark gray text on black, and subtle differences between active and inactive elements), I asked her "but..... why?". She honestly answered that she could perfectly tell one from another. Later I learned that grown men have drastically different color perception than younger women.


> What about Apple?

> I can't comment on the current state of MacOS since the time I've spent actually using a Mac during the last 8 years or so probably totals to a few hours. Apple used to be good at this, and I hear they still do a decent job at keeping things sane, even post-Jobs.

The macOS menu bar is still compulsory. First party apps generally still fill it out properly with commands for everything a user can do and put the common ones in their usual places. For third party apps it depends on whether the developer put time into platform norms. Many Electron apps don't: Slack makes many commands available only through UI popup menus; VS Code puts them in the command palette so you can only find them by search. Of course these apps have key shortcuts but you can't learn them from a menu bar item that doesn't exist.

The focused window gets a heavy shadow. I think that's also compulsory.

But first party apps and recently the system frameworks are guilty of cramming toolbars into title bars and leaving a drag area that doesn't span the width of the window. That's permanent. At least the window title is still written and you know you can drag that. A window's file proxy icon is often missing until you hover on the title. The bar at the top of the window may or may not be visually divided from the content area.

First party apps also remove color and/or silhouette distinctiveness from those toolbars' buttons. App icon silhouettes are also gone.

Scroll bars are of course hidden by default but can be always shown.

The result is probably more usable than what's described in this article on other platforms, but off peak for macOS.

Aside: iOS started the hidden-unless-scrolling scroll indicators trend, I think. Platform guidance was that you're supposed to flash them on first appearance of any scroll view whose content doesn't all fit. (To try it out, go to the Home Screen and then back to Safari.) This is the signal to the user that there is more to see. Users don't consciously think about it but they pick up the right expectation. You just call a method[1] after the view transition animation finishes. This has become arcane knowledge in the iOS field: No one remembers this is important to do in the absence of persistent scroll bars, and where it's not automatic due to using system frameworks in a high level way, many apps just don't get this right.

[1]: https://developer.apple.com/documentation/uikit/uiscrollview...
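
A minimal UIKit sketch of the convention described above (the view controller is made up; the call is presumably flashScrollIndicators(), which does exist on UIScrollView): briefly flash the indicators once the view has finished appearing, so the user learns there is more content below the fold even without persistent scroll bars.

    import UIKit

    final class SettingsViewController: UIViewController {
        let scrollView = UIScrollView()

        override func viewDidAppear(_ animated: Bool) {
            super.viewDidAppear(animated)
            // Only meaningful when the content actually overflows the visible area.
            if scrollView.contentSize.height > scrollView.bounds.height {
                scrollView.flashScrollIndicators()
            }
        }
    }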


Why have desktop UI’s gotten so bad lately?

My theory is that before we had UX experts, they had to hire, like, actual graphics designers.


There's also the perverse incentive created by having dedicated UX people. At some point the UI is at a good point and doesn't need significant improvement, but the UX person needs to justify their existence, so changes happen just for the sake of making changes.

To me, Discord's UI - especially on mobile - exemplifies this perfectly. They change the image upload process every few months, alternating between various levels of decent and terrible design.

First it used to be one tap, then they put it in a submenu, and now it's back to one tap, but uploaded images get grouped together and require tapping on them to scroll through, so we're forced to upload them one by one to have them show in the more convenient old way, making it still strictly a step back.

Add in the managers who think that not having large UI changes every few years is a sign of stagnation regardless of reason.


Spotify has made a lot of changes in the last 2-3 years and most of them have been bad.

A couple of instances come to mind. The first was with CarPlay. They went from a list of things (playlists, podcasts, etc.) that you click on to get to a new section, to a home page with a row of useless tiles and a "view all" that is difficult to get to if using something like a scroll wheel in a BMW. Thankfully the feedback was so bad that they reversed the decision. They made the mistake of thinking that everyone with a touchscreen in their car uses it.

The second is everything related to podcasts. Podcasts and music need to be completely isolated from one another because the methods of browsing and maintaining them are completely different. To this day there is no way to simply view a list that has all episodes for podcasts you follow and sort by recently updated. There is a "new episodes" thing that exists, but if you click on something and back out it gets removed from the list. This assumes it is even updated, because clicking into it loads from cache rather than doing a refresh. The only safe way to get an updated list of episodes is by hard closing the app and then re-opening it.


Trying to find a playlist as it magically reorders them in front of my eyes is a ballache


I think this largely explains it, along with a desire for the UX designers doing something new and "modern".

This explains a lot of user-unfriendly changes in other areas, too. I worked for a prominent travel web company and the army of people in marketing were constantly looking for new sources of "incremental revenue" to justify their work. This led to pop-unders and pages packed with more ads than content.


Spotify is the worst ever for this. Major UI changes that are awful every year or so. It was very easy to find anything you wanted 10 years ago. Now there's slideouts and hovers and collapsible garbage hiding everything.


My theory is that UI has the exact same problem as programming:

We don't know what we're doing.

- Corporate interests lead to enshittification and chasing after local maxima.

- We never learn from the past, new blood comes in faster than seniority can deal with.

- Trends and fashion give us comfort and a superficial feeling of (faux) improvement.

- Preferring change over stability.

- Vague methodologies and adages that sound good, instead of dry, actionable quality metrics.

- Proliferation of bureaucracy and marketing.

- Productization instead of standardization leads to a lack of foundations that can be relied upon.


I think a lot of it is the other way around: graphics design driven by branding rather than UX.

I also think the death of native development has a lot to do with it. The web is now the UI layer of desktop, but that means there is no consistent design language across the system.


From my perspective as a dev, this is what happened: UI designers who had been trained on usability, who had a good grip on what was/wasn’t feasible technically, and knew to take content variability into account were almost wholesale replaced with more generalist print/graphic designers who were great at pumping out pretty branded PSDs, but not so great at producing usable UIs.

My guess is that this happened because real UI designers are harder to come by and cost more, while generalists are more common and cost less. I’ve not hired for these positions though so I don’t know for sure.


It can also just be a result of oversupply of certain college degrees. Back in the late 90s to early 2000s there was a big trend of graphic design degrees at universities probably far in excess of available jobs for that specialty. I knew a bunch of these and many went into web design, and now I’m sure are doing UI design now that the web engine is the de facto standard UI engine everywhere but mobile. (Even on mobile it has an increasing presence.)


This is very similar to what Grudin suggests in his HCI history "From tool to partner": Apple used to be very usability and research focussed in the 80s and early 90s, but turned away from it with Jobs and went more look-driven — and had great success with it. Other companies followed. (Which does not mean that companies ignored graphic design before, but it was the more utilitarian parts of graphic design that were important focussing on distinct shapes, readability etc.)


True but I don’t think this gets it quite right.

The big problem with classic Mac was the cooperative multitasking weird OS and the unstylish and clunky hardware. The UI needed polish and modernization but was not fundamentally bad.

The complete loss of OS coherence has a lot more to do with the web than anything else.


In the past, many involved in frontend design might've had a background in human factors or ergonomics; they would also have some knowledge of how the brain perceives what's in front of them. I would view this knowledge as quite fundamental but it's increasingly rare to find such expertise in the field now. In my mind it explains a lot in terms of how many UIs are an incoherent mess and why accessibility is considered an afterthought, if at all. Doesn't matter who developed it either. FAANG are just as adept at flubbing basic design principles as anyone else.


UX replaced HCI. Graphic design replaced science.


User interface design definitely seems to increasingly favour aesthetics over usability. Form over function. It is very frustrating. For example, the Chrome browser on Windows is hard to move around because there is almost no title bar to drag once you have a few tabs open. That seems such a basic thing to get wrong.


I read someone say it here and I really liked that way of framing it: UI elements are the API for interfacing with the human, and changes should be regarded as breaking changes.

Beyond that, I think there is a lot of disdain for users these days and a lot of design that doesn't think about the user, and goes for form over function, and even a lot of deliberate gimping to manipulate the user. UX in the modern day is a complete disaster, and it's not just software, it's everything. Cars, furniture, light switches, light bulbs, stoves, washing machines, you name it the UX is being destroyed. Bad UX used to be in the almost exclusive realm of government services, not anymore. It's like everyone just forgot what the point of all this is.


The problem is that UI/UX people are product and design people whose backgrounds were predominantly in art or similar. 99% of these folks have no background in psychology, human factors, information science or the cognitive sciences.

What can be expected when you hire these people?


Like any other kind of employee, you, the stakeholder, are responsible for getting them trained in the role you delegate to them, or for evaluating that they meet your needs before hiring. Designers I know do art for fun, but at work they design to solve business problems. They have a lot to say about this topic. In any case it isn't due to some character flaw on their part.


100% on target. Windows, which advanced the state of the GUI nicely in the ‘90s, has regressed into an absolutely infuriating, incompetent UI disaster.


I have always felt that the pinnacle of Windows usability was Windows 2000 Professional. Clean, mostly intuitive, few gratuitous animations. It's been downhill ever since.


I didn't have any problem with XP. I can't recall a single major irritant in its UI.


Did you leave the cartoon UI enabled? Because I recall XP not being a terrible release once one put back the "yes, I'm a professional, less crayola, please" UI elements


Oh yeah! I turned that off so fast that I forgot about it. You're absolutely right. I remember deriding that as the Fisher-Price UI.

And I hated, HATED the "my this' and "my that" fad that dominated the early '00s. Yeah, I know it's MY computer, because it's on MY desk full of MY documents. So goddamned infantile.

Sadly, I've seen some resurgence of the "my" BS lately.


I think the idea that UI was consistent between 1992 and 2012 is laughable. It’s nostalgia and rose colored glasses.

Cross platform applications were often based on entirely different codebases and looked nothing like each other or their native host OS. Examples: AOL Instant Messenger, Microsoft Office (persists to this day), Internet Explorer Macintosh compared to Windows. If applications like Slack are bad at being a part of the host OS then we can’t ignore the fact that this isn’t anything new.

Numerous applications completely disregarded their host OS design language. Examples: Every Java application, RealPlayer, WinAmp, iTunes, America Online, Macromedia/Adobe Flash and Shockwave apps, HyperCard, the list goes on and on.


"Inmates are running asylum" [1] - is one of my favourite books, was written in 2004, which describes how bad most of the interfaces are and the reasons behind it.

Design solves problems which are not always obvious to the actual users. Using computers was always a frustrating experience, but it has improved dramatically over the last two decades for sure.

I don't know how anyone couldn't see that.

[1] https://www.amazon.com/Inmates-Are-Running-Asylum-Products/d...


> but now UI is better than ever

Now the question is, for whom? Users are not a uniform block.

I suspect: For those unwilling to learn the tools they use on a regular basis.

Because for those who are willing to put in the work, most general purpose software got worse, in terms of how many layers of menus you have to click through, how well default shortcuts are thought out, etc.

This is the old poweruser vs one-time user tension. Granted, good design can resolve that tension in a way that makes both classes of user happy, but that costs development time and research.

When in doubt, today's software companies often prioritize having an "intuitive UI" over making it efficient. The first onboarded user counts more than the one using the thing a hundred times a day. That is a design decision and it is made for economic reasons. However, if the typical web startup designed a supermarket cash register, we would still be waiting in line because the animations take their time, the multi-step process takes more clicks, etc.


macOS is absolutely and infuriatingly the same as the examples here. I’ve yet to find anyone that actively defends this style - just like I have yet to find people who actively defend open office workspaces. (In Apple's case, both were designed by the same people.)


I actually don't agree much with the article. I think that all modern OSes (macOS, Windows and even Linuxes) are much much better from the usability perspective than years ago.

Some UI elements and patterns are indeed problematic, but it isn't really that big of a deal, and some of them are the result of compromises.

Most of these complaints are because people don't like change, and that is OK.

I recently started using Windows 11 after a decade on macOS (have to port some apps there), and was very surprised by the UI improvements in the core components. I asked a couple of colleagues about those, and most of them hated the change and were not using the new features.


There is a misconception that consistency == usability. It's an important part of usability but there is a lot more to it. Tunnel vision on consistency (or anything really) doesn't lead to good outcomes. The big picture is what matters: can someone do what they need to do?

Articles like these also have lots of opinions, but no research or stats, nor decent explanations of how much impact these "problems" actually have. Granted, I have not seen much research on desktop apps, but in other fields things have measurably improved a lot.

Even going by opinion, my memory from that time was that everything was pretty bad. How often do you lose work because of crashes or not enough undo steps? And could you do the same things you can do now? Also don't forget about all the horrible Java software that didn't use the OS UI.

People romanticize (software of) the past. The stuff I use today is magic compared to the old crap.

Obviously not everything is perfect and I do recognize/understand some of the criticism in the article, but focussing on the little things really misses the big picture that is usability: completing the task that you need to complete, and that is easier than ever.

> You're old and angry! You bet! Now get off my lawn, punk.

Fair enough, I'll see myself out.


> How often do you lose work because of crashes or not enough undo steps?

Crashes and too few undo steps are not a UI/UX problem; they are a software quality/hardware limitations (enough memory or storage to hold all those steps) problem. Software reliability has definitely gotten better in many aspects, as long-lived software applications and operating systems have been through the crucible of time and learned from their reliability mistakes. But UI/UX is in many aspects worse than ever.

> Also don't forget about all the horrible Java software that didn't use the OS UI.

The article actually says the same thing: 'A few rogue applications didn't play by the rules,...'

> completing the task that you need to complete, and that is easier than ever.

Do me a favour: try getting an older person who has not used Zoom before to host a meeting and accept guest join requests. You will really understand the sorry state we are in once you see them struggle to do this.


>are not a UI/UX problem

Never said UI, but they are absolutely usability & user experience problems. Arguably the most important ones! This is exactly what I mean by missing the big picture. What do you think has more impact on completing a task and the experience of completing that task? A working, maybe slightly inconsistent toolbar, or the thing crashing and losing your work?

> The article actually says the same thing: 'A few rogue applications didn't play by the rules,...'

Fair enough, although I would say it was a lot more than a rogue few..

> Do me a favour–try getting an older person who has not used Zoom before

Great example of something that you couldn't do and now can, times are truly great right now.

It's great that you mention old people; I have helped some of them learn to deal with computers, both back then and more recently. It's so much easier now.

Old computers with the software we are talking about were/are really hard for them to learn; now, with iPads and smartphones, they can do so many things. Can't say I have tried Zoom specifically, but Hangouts, FaceTime, WhatsApp, email, even installing games and apps, no problem at all! They can achieve so many more things because we have come so far and things are more usable than ever.


> This is exactly what I mean by missing the big picture.

The point you are making is a different one than the one that the OP is making. Just because you are making a specific (different) point, doesn't make the OP's point invalid.

> Great example of something that you couldn't do and now can, times are truly great right now.

Again a different point. Of course we now have a proliferation of different and powerful software. The point I am making is about the UI/UX of the software, not about the fact that they make something possible that wasn't before.


The article states that usability is declining, the arguments to back that up are: things are somewhat inconsistent and I don't like [some UI patterns].

My point is that the claim is wrong, and the arguments don't support it and have little to do with usability.

This follows from a misconception that usability, UI, and user experience are the same thing, which is really showing a lack of understanding of what they actually are.

I write my comments in the hope that people will actually look into these topics more deeply as they are quite interesting and often require setting aside the preferences, gut instinct, and pet peeves this article is filled with.


If you insist on nitpicking with a specific definition, you can of course dismiss the points in the OP. But if you try to understand what it's saying on its own terms–it's obviously talking about UI/UX and using the term 'usability' in a colloquial way. All the examples given in there are examples of bad UI/UX. Your point that it's possible to do things now that weren't back in the day, is just completely off on a different tangent than the actual thesis of the OP. I don't even know what to say except that it's a strawman.


These things were studied in decades past. Did you demand research and stats when Microsoft made it impossible to tell which window is focused? What benefits were claimed?


My whole point is that usability is not about small details but the ability to complete a task. The big picture.

I don't use windows, I don't like it, but if [it being impossible to tell which window is focussed] doesn't impede people accomplishing the things they need to do, it's probably a cosmetic issue, at most a minor usability problem.


Usability is the ease of completing tasks. Not ability.

You seem very invested in dismissing constructive criticism. And you demanded research but provided none.


>the stuff I use today is magic compared to the old crap

I… what?

Software is losing features while simultaneously becoming less usable.

The only thing software does better today than yesterday is spying on you.


> Articles like these also have lots of opinions, no research and stats,

Oh you’re in the data driven club. I bet you even A/B test. Good luck with that.


No, I don't use A/B tests for usability as they aren't very helpful to find usability problems.

A/B testing is more for conversion optimization, it's to measure how persuasive something is.

User testing is my preferred method (in its simplest form, just observing people doing a task); if that's not an option I like heuristic evaluations.

I suggest you actually look into what usability is about, you might enjoy it. Good luck with your assumptions.


The defiance of purposeful and decades-old UI standards isn't just annoying and counter-productive; it can be life-threatening: https://jalopnik.com/did-jeeps-recalled-gear-shifter-contrib...


Slightly off topic, but possibly my biggest annoyance with web design is trying to find where the designer hid the Logout link. This, on a web site designed with endless negative space. I swear for a while the site of either my electric or gas utility literally didn't have one. I'd eventually just shrug and close the tab.


If they could hide the close button they would.


Man, I'm sick of flat UI's and "Modern" design in general, even outside of computing. I watched a YT video of the "Millennium Aesthetic" and (not that I recommend that one in particular) was reminded about the world of design that wasn't a combination of IKEA and a metro station's schematic map. At this "modern design" point it feels "lazy and cheap" rather than "clean and futuristic". Have we really been living this "Flat" UI for over a decade now...? Lame. Boring. Ugly. Old.


Ok so where are the good examples, anyone doing anything of note to address this?


It's not unlike a tragedy of the commons; clicking through Wikipedia, maybe "collective action problem" is even more apt? Individual app makers do things which they like and want without considering the wider implications of cross-app usability in the operating system.

In isolation many apps may be acceptable, but that falls down in the wider context.

The problem would be less of a problem if everyone did the same thing (bad or good UI) in a similar manner, because then there is predictability which is a powerful concept in user interaction.


Trinity Desktop Environment is a somewhat maintained fork of KDE3 with the same functional and understandable GUI.


The good examples are the software they replaced.


In LibreOffice applications there are huge menus and toolbars that enumerate all of the options available to me. A throwback to a simpler time.

And yet, I can never find the option that does $SIMPLE_TASK that I want to do right now.

IMO, the single best thing a UI designer could do for all of these applications is add a search box to quickly take me to the commands or dialogues I need to use. But yes, keep the menus, please.


The strangest thing is that despite all the years of UX research, many websites today are covered in pop-ups all over the content, often containing flashing video, which has not been requested by the user.

Then there are the massive "cookie acceptance" buttons on every page. All on major, mainstream sites.

The web is really becoming unusable due to UX problems.


It has taken me at least a year to learn the right gesture combination to close apps on an iPhone 13. Many operations are very subtle.

However, I seem to be far less bothered by modern usability than some others. My brain is hard wired for novelty it seems. I would not want to use Windows 2000 for example.


That sounds like a good thing. Apps should almost never be killed. The OS automatically kills suspended apps when it needs more ram


Apple support recently told me to kill all other apps to make the Files app work. It was taking ages to download and save data.


That would be true if it were not for state.


Discussed at the time (of the article):

The Decline of Usability - https://news.ycombinator.com/item?id=22901541 - April 2020 (695 comments)


See also this author's Short Thoughts on Computers and Programming:

https://news.ycombinator.com/item?id=32935466


Usability has increased; the problem is that we are not the intended users. We are cattle to be sold as data or attention to the users who publish their ads.


macOS has a good user interface. I have no complaints. The last good UI on Windows was 98SE. GNU/Linux is a special case; a good UI would add insult to injury.

Edit: reading all the comments I should add that you have to set up macOS the way you want before it’s nice. And to do that you need some research if it’s the first time.


Craigslist was the pinnacle of UX success


> Despite being endlessly fawned over by an army of professionals, Usability, or as it used to be called, "User Friendliness", is steadily declining.

Au contraire. The more usability professionals you hire, the worse your usability gets. It seems every dev team nowadays has at least a part-time designer. They come up with new, fresh ideas all the time and, of course, want to see them implemented.

If it were only the Devs, you'd just use Google Material or "read the docs" for whatever it is you were building. As it is though, people think that you need "brand recognition across platforms", which basically means that you do whatever the hell you want and the more guidelines you break, the better. It has the added advantage that you really only need to design once and ship to all platforms. Who cares if iOS/Windows/Mac/Android are different? Just implement your own pop-up dialog!


> If it were only the Devs, you'd just use Google Material [...]

Which is all well and good until Google replaces the Material Design team with designers with no UX training... Sorry, I just can't complain enough about M3.


Even worse, the trend away from good UX is leaking from software to hardware.

Who at Kia thought this was a good idea? Three steps to open the door. Requires two hands. Good luck if you’re an amputee, have arthritis, can’t see the door unlock touch zone, or just can’t figure it out.

https://m.youtube.com/watch?time_continue=50&v=Rv5so7szIMM

Then there’s Elon’s obsession with that stupid F1-style steering wheel.

And VW’s odd choice to control 4 windows with 2 switches in their EVs. So weird.

It just goes on and on. Don’t these companies hire engineers with UX experience? Sigh.


FWIW, those door handles are a copy of the Tesla Model 3 door handles (along with their usability issues).

It’s very similar to everyone copying Apple’s auto-hiding scroll bars—usability be damned.


I thought the Tesla version popped out automatically. The Kia version requires the user to push the forward edge to manually pop the back edge out.

1. Unlock with barely visible touch zone.

2. Push forward edge, which flips out back edge/main handle

3. Grab and pull main handle.


Cars are particularly bad these days, where once they were a model of streamlined usability.

My father-in-law's van has three large knobs next to each other on the dash. Which one adjusts the volume, which one the fan speed, and which one switches gears? Hope you don't pick the wrong one while driving at highway speeds!


Steering wheel buttons are another one. They do different things at different times, depending on what mode your dash display is in.

Ever wonder why old airplane dashboards look so complicated? It's because each switch and dial does one thing and that thing never changes.


I used to work at a place where the company car we'd have to drive when visiting the home office was often one of the higher-end Mercedes sedans. (I don't remember which one.) The controls were such a confusing, overdesigned mess. Over the course of 2 or 3 years of regular visits to the office, I don't think anyone on my team was ever able to figure out how to turn off the air conditioning.

We never figured out how to turn off the radio, either, but at least we did figure out how to turn the volume all the way down.


This is just a rant about some minor usability issues in a very small selection of applications used by the author, and mostly Gnome. There is nothing behind it.

The title is click-bait and the article is a waste of time.

There are some very dubious claims, such as:

> There was a time (roughly between 1994 and 2012) when a reasonably computer-literate user could sit down in front of almost any operating system and quickly get to grips with the GUI, no matter what their home base was. Windows, MacOS, CDE, OpenStep, OS/2 and even outliers like Amiga, Atari and BeOS all had more in common than what set them apart.

Personally, having used many of these systems during those years, I completely disagree.

There is no further context or evidence to support this claim. It’s just “the premise”.

This is then contrasted with vague generalizations about the current situation:

> Today, it seems we're on another track completely. Despite being endlessly fawned over by an army of professionals, Usability, or as it used to be called, "User Friendliness", is steadily declining. During the last ten years or so, adhering to basic standard concepts seems to have fallen out of fashion

While there is no evidence for the claim that usability was better in the 90s, there are some examples to show that it’s worse now.

But these are more like pet-peeves, and there’s no evidence or theory to show why they’re good or bad, just more vague generalizations.

> Since Windows 2 (not 2000 - I'm really talking about Windows 2), users have been able to resize windows by dragging their top border and corners. Not so with Slack, anymore.

This is not the decline of usability. It’s the decline in respect for the reader.

There really was a time when a title faithfully represented the text that followed it, right? It must have been sometime around 1994.



