First, let me start off by saying that this is not a post meant to put anybody down or to point fingers at any application, screen reader, access program, or similar product in particular. It's merely what I think should be included, fundamentally, for blind people to be able to use applications.
That being said, I am going to point out flaws in some popular applications and reading programs. If you're offended, I apologize; no offense is meant. By the same token, please don't start flaming in the comments.
Alright, now that’s out of the way…
Labeling:
While lovely visuals and graphic displays are wonderful for the sighted community, and while you can most certainly include these, blind people need something to read and interact with. I cannot stress this enough. Labeling is important!
Most programming languages I've used (and I've used a fair few) include a facility to add text labels. Often, they're called static text. Placing one of these before an edit box will make it immediately accessible to screen readers. With buttons, you don't even have to do this: just add a label attribute to the button to make it read out its function. If your interface only uses graphics, we simply cannot use it.
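The pattern is easiest to show in HTML, where it maps one-to-one onto what I just described. This is only a hedged sketch (the field names and image file are invented), but it captures all three cases:

```html
<!-- The label plays the role of static text placed before an edit
     box: the screen reader announces "Your name, edit" on focus. -->
<label for="name">Your name</label>
<input id="name" type="text">

<!-- A button's visible text is its accessible name; nothing extra
     is needed here. -->
<button type="submit">Send message</button>

<!-- A purely graphical element is silent to a screen reader unless
     it carries a text alternative. -->
<img src="logo.png" alt="Example Co. home page">
```

The same three rules carry over to desktop toolkits: pair every edit field with a text label, give every button a text caption, and give every image or icon-only control a name.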
Inaccessible GUI toolkits:
A point of annoyance for blind users is that some of the frameworks used in modern apps simply aren't accessible. Tk, Qt, and sometimes UWP just aren't accessible to VI users. It's important to note that the Windows Forms API and wxWidgets are *usually* accessible, but you have to make them so. There are guidelines out there for doing this, but they are far beyond the scope of this post.
Built-in accessibility:
Built-in accessibility is becoming more common these days. macOS, iOS, Android, Windows, and even Linux include programs to make them accessible. The problem is that, sometimes, it's just not good enough.
While most companies in recent years have, in fact, focused more on the usability and accessibility of their products, they all have different ways of doing so. Microsoft have made improvements to Narrator, but we're missing features such as an external API to interact with it, so we resort to using NVDA or JAWS for Windows accessibility. VoiceOver on macOS is a brilliant screen reader, but it lacks some accessibility needs, such as being able to read terminal windows correctly, and some of the simplicity of Windows screen readers. These are just two examples. Many blind consumers don't want to buy an Android device because TalkBack, or Android, or sometimes both, just don't have the ease of use and immediate accessibility. Android O will improve on this, but the fact is, it's too little, too late. Similarly, people don't want to pay for a screen reader such as JAWS when it's missing key features. Only recently did Freedom Scientific add support for describing what's under the mouse; NVDA, VoiceOver, and Orca for Linux have had this for years.
What I'd like to see in screen readers:
As mentioned above, I like my external API. I like being able to script and make add-ons that do more than the core functionality of a screen reader. And I'm sorry, but this is why I don't use Narrator. Its lack of external scripting makes it almost impossible to use for something like development, something JAWS and NVDA excel at because of their external scripting. While some screen readers don't strictly need this (VoiceOver, for example), they still include it. While this seems like a power-user-only feature, consider this:
How many NVDA add-ons, JAWS scripts, and the like have you downloaded just to make an application work?
Personally, and this is just my opinion, it would be good to see Narrator gain an external API. Let it interact with a controller client the way NVDA, JAWS, and other Windows screen readers do. I don't want to have to switch from a core screen reader and download something else just to be able to set up my Windows installation. And while Microsoft have definitely improved on this in the Creators Update, it's still not quite there.
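To show what a controller client buys you in practice, here is a hedged sketch of calling NVDA from another program. NVDA ships a small DLL (nvdaControllerClient.dll) as part of its controller client SDK, and the function names below come from that SDK; the wrapper itself is my own illustration and simply reports failure when NVDA, or Windows itself, isn't available.

```python
import ctypes


def speak_with_nvda(text):
    """Ask a running NVDA instance to speak `text` via its controller
    client DLL; return False when NVDA (or Windows) is unavailable."""
    try:
        # nvdaControllerClient.dll must be on the DLL search path.
        client = ctypes.windll.nvdaControllerClient
    except (AttributeError, OSError):
        # Not on Windows, or the DLL could not be found.
        return False
    if client.nvdaController_testIfRunning() != 0:
        # NVDA itself is not running.
        return False
    client.nvdaController_speakText(text)
    return True
```

An installer, for example, could call this to announce its progress to NVDA users. Narrator exposes nothing comparable, which is exactly the gap described above.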
Web accessibility:
This is the final point I'm going to touch on in this post. Web accessibility, though there are many, many guidelines to support it, is far behind what I would have expected for 2017. Developers like to include lovely Flash- or HTML5-based animations, without labeling, in an embedded frame. This results in a lot of garbage text being spoken by the screen reader. I use ad blockers just to stop autoplay videos and frames that pop up all over my screen. I can't stress this point enough: websites are some of the easiest things to make accessible. Tools such as ARIA, landmark regions, heading levels, and so on all exist for a reason, and while yes, they make the page look pretty, they also vastly improve accessibility for screen readers.
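As a hedged sketch (the page content is invented), here is what those tools look like in practice. Landmarks and headings give screen-reader users jump targets, and a titled frame replaces the garbage text I complained about with a single meaningful announcement:

```html
<!-- Landmark regions let a screen reader jump straight to each area. -->
<nav aria-label="Main navigation">
  <a href="/">Home</a>
  <a href="/posts">Posts</a>
</nav>
<main>
  <!-- Real heading levels form the outline users navigate by. -->
  <h1>Accessibility basics</h1>
  <h2>Why labels matter</h2>
  <p>Every control needs a name.</p>

  <!-- An embedded frame needs a title, or the screen reader
       announces only "frame". -->
  <iframe src="player.html" title="Video player"></iframe>
</main>
```

None of this changes the visual design; it only gives the page the structure assistive technology needs.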
While this post turned into more of a rant than I expected, it still highlights some important points. All of these things matter if developers wish to keep their applications accessible. While we can write add-ons, not all of us have that expertise, and some of us don't use screen readers that support them.
So I'm going to ask a question, and this is by no means the first time it's been asked.
Developers: can you take 20 minutes out of your application's development cycle to make sure everything is clearly labeled? Can you try to minimize the amount of unlabeled graphics, frames, and the like on your HTML websites?
This is not my last post on this subject. Next time, I'd like to focus on game accessibility, so keep an eye out for that. As always, you can leave a comment below if you found this useful, or even if you didn't. If you want to interact with me personally, find me over on my Twitter, and I'll be glad to talk about accessibility or anything else that takes your fancy.
Thanks for reading!