We have the exact same problem with visual interfaces, and the combination of manual testing for major changes + deterministic UI testing works pretty well.
Actually, it could be even easier to write tests for the screen reader workflow, since the interactions are all text I/O and key presses.
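To make that concrete, here's a rough sketch of what such a test could look like. I'm assuming Playwright purely as an example of a tool that can send key presses and inspect focus; the URL and the expected labels are placeholders, not anything from a real app:

```python
# Sketch: the "screen reader workflow" as a deterministic test.
# Assumes Playwright (pip install playwright && playwright install chromium);
# the URL and the expected labels are placeholders.
from playwright.sync_api import sync_playwright

# What we expect to "hear" as we tab through the page, in order.
EXPECTED = ["Skip to content", "Home", "Search", "Settings"]

def announced_name(page) -> str:
    """Rough approximation of the accessible name of the focused element."""
    return page.evaluate(
        """() => {
            const el = document.activeElement;
            return (el.getAttribute('aria-label')
                    || el.labels?.[0]?.textContent
                    || el.textContent
                    || '').trim();
        }"""
    )

def test_keyboard_walkthrough():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/")  # placeholder URL
        heard = []
        for _ in EXPECTED:
            page.keyboard.press("Tab")  # the only "input device" we use
            heard.append(announced_name(page))
        browser.close()
        assert heard == EXPECTED, f"tab order announced {heard}"
```

The point is just that "press Tab, check what would be announced" is as deterministic as any other UI assertion.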
I always wonder why this isn't a bigger part of the discussion. None of us would develop a visual UI flow without trying it manually at least once, but for some reason this idea isn't extended to discussions about accessibility. The advice always fits into these three groups:
1. Follow a checklist
2. Buy my software
3. Hire blind people to test your app
I'm not saying that these are bad (although some overlay software is actually worse than nothing), but aren't people even a little bit curious to try the user experience they're shipping?
There are popular, free screen readers out there and using one can teach you a lot.
Perhaps a blindfolded person and a person who has always been blind have very different expectations of how to use software, such that they would give divergent opinions on what makes a good screen reader UI.
In theory this is certainly true. In practice the most common experience is software where UI elements are completely unreachable from the keyboard, and/or have no label at all. If you talk to tech-savvy Blind people for a while you invariably hear things like "the app doesn't have labels but I know the third link is the settings page, so I just count until I hear 'link' 3 times". Most people aren't going to hire an outside person to test their project, and frankly I think that's often reasonable for personal projects and small companies. But if you exercise the UI flow yourself, at least you know it's possible to use it.
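The "count until you hear 'link' three times" situation is also easy to catch mechanically. Another hedged sketch, again assuming Playwright and a placeholder URL: collect every link and button and fail if any has neither text nor a label.

```python
# Sketch: fail if any link/button would be announced as just "link" or
# "button". Assumes Playwright; the URL is a placeholder.
from playwright.sync_api import sync_playwright

def test_controls_have_names():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/")  # placeholder URL
        unlabeled = page.evaluate(
            """() => [...document.querySelectorAll('a, button')]
                .filter(el => !(el.getAttribute('aria-label')
                                || el.getAttribute('aria-labelledby')
                                || el.textContent.trim()
                                || el.title))
                .map(el => el.outerHTML.slice(0, 80))"""
        )
        browser.close()
        assert not unlabeled, f"controls with no accessible name: {unlabeled}"
```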
Can't speak for others... and though I'm visually impaired, I don't think I could handle navigating with a screen reader myself. I've sat in on testing by blind users before, and it's definitely impressive; I learned a lot. I will say that I do make an effort to do a lot of keyboard-only navigation as part of testing. Just that can help a lot in limiting janky UX.
Especially with flexbox and other more modern layout options.
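That kind of drift is checkable too. A rough sketch (same assumptions as above: Playwright, placeholder URL) that flags spots where CSS `order` or `flex-direction: *-reverse` has pushed the visual order away from the DOM order, which is what keyboard users actually tab through:

```python
# Sketch: flag places where the visual order diverges from the DOM (and
# therefore tab) order. Assumes Playwright; the URL is a placeholder.
from playwright.sync_api import sync_playwright

def test_tab_order_matches_visual_order():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/")  # placeholder URL
        tops = page.evaluate(
            """() => [...document.querySelectorAll(
                    'a[href], button, input, select, textarea')]
                .filter(el => el.tabIndex >= 0 && el.offsetParent !== null)
                .map(el => el.getBoundingClientRect().top)"""
        )
        browser.close()
        # DOM order is tab order (for tabindex 0), so the vertical positions
        # should be roughly non-decreasing; a big jump back up the page
        # usually means the layout reordered something visually.
        jumps = [i for i in range(1, len(tops)) if tops[i] < tops[i - 1] - 40]
        assert not jumps, f"focus order jumps back up the page at indexes {jumps}"
```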
This reminds me of the retail chain Service Merchandise which apparently used to operate this way. You'd walk around the store looking at display products, pick things out on a sheet, and then they'd appear on a conveyor belt.
If you live in Seattle and you find items are rung up for the wrong price, I highly recommend reporting it to the city. I did this once and was shocked that they promptly sent someone out to check the store and issue a citation. The inspector who emailed me was very polite and professional as well. It's rare to have something like this taken seriously and to have the enforcement properly funded.
Unfortunately I don't remember which form I filled out, but I believe it was this one.
People like being served by human beings, rich people especially. So that work will still be around and all the brightest and most diligent people will compete to be the one who brings Jeff Bezos's grandson his dinner.
This heavily depends on the particular era of the language and the particular coding practices of the team though.
For example, I'm happy to maintain Python codebases with type annotations and type checking enabled. Those without had better have a pretty solid docstring culture and be willing to adopt progressive type checking. It doesn't matter so much which version of Python it is.
Type annotations are becoming the norm for newer systems written in Python, though, so I don't think the language alone is enough information to go on. This heuristic simply biases against languages that have been around a long time, because more of their code was written before practices evolved.
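To make "progressive type checking" concrete: it can be as small as annotating the functions you touch and keeping the docstrings honest, then letting a checker like mypy or pyright be enabled module by module so the old untyped code doesn't block anything. A made-up example of the style (the function is hypothetical):

```python
# Sketch of the "annotations + docstring" style described above; the
# function itself is invented for illustration.
from __future__ import annotations

def parse_timeout(raw: str | None, default: float = 30.0) -> float:
    """Parse a timeout in seconds from an environment-style string.

    Returns `default` when the value is missing or empty; raises
    ValueError when it is present but not a positive number.
    """
    if raw is None or not raw.strip():
        return default
    seconds = float(raw)  # ValueError propagates with a clear message
    if seconds <= 0:
        raise ValueError(f"timeout must be positive, got {seconds}")
    return seconds
```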