ACCESSIBILITY & THE INVISIBLE EXPERIENCE: Assistive technology and navigation strategies

Have you ever noticed how you instinctively use different techniques to navigate and explore digital products such as online training courses, newspapers, ebooks, business applications and more?

Navigation and visual exploration

We develop a familiarity in exploring digital products by learning simple patterns, using real-life associations, trusting our instincts, and intuitively selecting the best navigation strategies to achieve our objectives quickly and efficiently. Our brains become accustomed to using visual cues to skim pages and scan content, enabling us to quickly identify headlines, instructions, interactive elements and other structural components. Consequently, when digital products fall short of high design standards – thoughtful layouts, clear design systems, and an information architecture that supports multiple ways of exploring – they're set up to fail.

The first principle of information architecture suggested by Christina Wodtke [1] is “Design for wayfinding”. Christina explores the idea of an intelligent navigation design that, at all times, confirms to users “you are here”. This requires the designer to “locate things where users would look for them”.

Christina is referring to visual support in the form of logos, headers, breadcrumbs, navigation bars, color-coded sections, links that look clickable, clear labels and the grouping of related options. Although she isn't referring specifically to accessibility, these strategies are equally applicable when designing for users with visual impairments. An intelligent information architecture also supports wayfinding – how an information system helps users plan and find their way around an environment – which in turn lets users select specific strategies for navigating with assistive technologies.

Invisible navigation

The same way we use different strategies to navigate a page, users of assistive technologies also have the freedom to select a navigation strategy that best meets their goals.

Let’s compare. Just as we visually skim pages and scan content while navigating all types of screens, blind users do the same with assistive technologies:

(1) One day I want to read absolutely everything on the page.

  • A blind user reads the screen linearly with the up and down arrow keys, sometimes combined with other keys such as ALT.

(2) Another day I want to check headlines to focus on the topic I want to know more about.

  • A blind user navigates headings using the H key. (*)

(3) Maybe I just want to check the interactive controls on a page.

  • A blind user navigates interactive controls using keyboard navigation – also called focus order or tabbing – with the TAB key. JAWS supports listing all kinds of controls in its virtual mode: buttons, checkboxes, lists, etc.

(4) Or maybe I want to quickly check over groups of information before deciding on which details to study.

  • A blind user navigates groups marked by landmarks (isolated areas within a page, e.g. header, navigation or main content) using the R or D keys in JAWS and NVDA [2]. Expert sighted users also skip between groups with the F6 key.

(5) On certain occasions, I just want to check the links on a page before I decide where to click.

  • A blind user navigates links using the L key while using assistive technologies.
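These strategies only work when the underlying markup exposes the corresponding structure to the screen reader. As a minimal, hypothetical sketch (the page content and labels are purely illustrative):

```html
<header>                        <!-- "banner" landmark: reachable via R/D keys -->
  <h1>Annual Report</h1>        <!-- heading: reachable via the H key -->
</header>
<nav aria-label="Primary">      <!-- "navigation" landmark -->
  <a href="/sales">Sales</a>    <!-- links: reachable via the L key -->
  <a href="/costs">Costs</a>
</nav>
<main>                          <!-- "main" landmark -->
  <h2>Quarterly results</h2>    <!-- subheading, announced at level 2 -->
  <button type="button">Download PDF</button> <!-- control: reachable via TAB -->
</main>
```

If the same page were built from generic, unlabeled containers, every one of the five strategies above would come up empty, even though the visual layout looked identical.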

Assistive technologies also support multiple navigation strategies from a modal dialog. Users choose a navigation mode and move through the resulting list using the arrow keys. Available options include links, headings, form fields, buttons, and landmarks. The image below shows two such dialogs, one displaying the heading structure and the other the landmark structure.

(*) The “H” and other character keys are provided by the screen reader and NOT by the app, the framework or the user agent (browser). Character key navigation works only within HTML pages and NOT in native apps or on mobile devices with a keyboard attached.

Why should a designer annotate the invisible experience?

To support every strategy a user might choose when navigating with assistive technologies, UI developers need correct code and markup. The designer defines all experiences – including the invisible ones – by annotating accessibility at the design phase and ensuring that UI developers are equipped to implement it correctly. While the UI developer works on the code, the UX designer provides the blueprint for a seamless experience.

But how do we design these different assistive navigation strategies?

Assuming the visual layout and information architecture are well organized and consistent, designers can optimize accessibility by considering all potential navigation strategies and annotating them accordingly. The invisible experience builds on the existing visual navigation strategy. Annotations for an accessible experience define the sequences for focus order and screen reading, list the headings, and document other strategies in a logical flow. The goal is to give UI developers an accessibility plan that enables them to select the most appropriate WAI-ARIA roles and markup.
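As an example of how such an annotation translates into markup: because screen readers follow source order, a reading-order annotation of "title → description → primary action" can often be implemented simply by keeping the DOM in that order. The content below is a hypothetical illustration, not an SAP annotation standard:

```html
<!-- Reading order follows DOM order, so the annotated sequence 1-2-3
     is implemented by source order alone; no reordering tricks needed. -->
<main>
  <h1>Order overview</h1>                            <!-- annotation: 1 -->
  <p>Review your items before checkout.</p>          <!-- annotation: 2 -->
  <button type="button">Proceed to checkout</button> <!-- annotation: 3 -->
</main>
```

When the visual layout and the intended reading order diverge, that mismatch is exactly what the annotation surfaces for the developer to resolve.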

Alt description: The image presents a screen of a fictitious web application with pink annotations informing reading order and headings.

People with disabilities typically interact with digital applications using a keyboard and listening to a screen reader describe the page and its textual content. Assistive technologies such as JAWS and NVDA provide varied navigation support to the user. While the built-in screen readers – Narrator for Microsoft Windows and VoiceOver for Apple devices – provide fewer options, they're free to use, and so offer a convenient way for designers to run quick checks on navigation and screen reading announcements. Designers are strongly encouraged to experience for themselves what it feels like to navigate digital products with assistive technologies. That way, they can better appreciate the nature of these interactions and their inevitable constraints before designing optimized solutions.

Alt description: The image presents a screen of a fictitious web application with orange annotations informing focus order and group skipping.

Interesting statistics

According to webaim.org's “Screen Reader User Survey” [3] (September 2019), over 67% of users navigate lengthy pages via headings, and 85% find headings useful. “Skip to main content” or “Skip navigation” links are used by 68% of respondents. Landmarks are used by 51% of respondents to navigate pages when this role is available.
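A “Skip to main content” link, used by 68% of respondents, is typically the first focusable element on the page and targets the main landmark. A common sketch (the class name and ids are illustrative conventions, not a standard):

```html
<body>
  <!-- First focusable element: lets keyboard and screen reader users
       bypass the repeated navigation on every page. -->
  <a class="skip-link" href="#content">Skip to main content</a>
  <nav aria-label="Primary">
    <a href="/home">Home</a>
    <a href="/reports">Reports</a>
  </nav>
  <!-- tabindex="-1" ensures the target can programmatically receive focus. -->
  <main id="content" tabindex="-1">
    <h1>Article title</h1>
  </main>
</body>
```

The skip link is usually hidden visually until it receives keyboard focus, so it serves keyboard users without altering the visual design.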

Accessibility Annotation By Design

When a designer or developer starts to explore aspects of accessibility, many questions arise. I've collected a few examples of questions asked by designers, which I'm sure you'll find intriguing.

(1) When should you use reading order annotation? Do we need to mark reading order for every element of every page?

Reading order annotation should always be used – on all pages and for all UI elements. The only elements you may exclude from the ordering are decorative ones. This is the most basic screen reading strategy: it enables blind users to scan absolutely everything of relevance on a page, including interactive and non-interactive UI elements.
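Decorative elements are excluded from the reading order by hiding them from assistive technologies. Two common HTML patterns, sketched with illustrative names:

```html
<!-- Decorative image: an empty alt attribute removes it from the reading order. -->
<img src="divider.png" alt="">

<!-- Decorative icon inside a labeled control: aria-hidden hides the glyph,
     while the visible text still provides the accessible name. -->
<button type="button">
  <span class="icon-save" aria-hidden="true"></span>
  Save
</button>
```

Everything else – including static text and non-interactive content – stays in the reading order.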

(2) Since screen reader mode covers all usages on a page for blind users, why do we still need to plan and inform the focus order sequence for interactive controls? What’s the difference between these two modes?

In fact, focus order and screen reading strategies overlap. But focus order – also known as keyboard navigation – is a strategy for sighted users who rely on the keyboard. Screen reading for blind users and focus order for sighted users complement each other and together fulfill many accessibility requirements. Combined, they let every type of user choose the best navigation mode to explore a page: beginners can thoroughly investigate a page, while expert users skip directly to the interactions.

(3) For screen readers, do landmarks and headings overlap during navigation?

Mostly yes. A page structured by headings may overlap its landmarks. A page has several hierarchy strategies defined by different types of visual elements: layout, groupings, headings of different font sizes, colors, etc. Most screen reader users navigate long web pages by skipping to headings and using landmarks. This is similar to how sighted users scan web content – they search for design cues, including headings.

Landmarks enable users to navigate groups and thereby scan through large sets of information using the R or D keys. Landmarks with pre-defined naming conventions include Banner (shell bar), Navigation, Main, Region, Search, Form, and Complementary. Headings and subheadings enable screen readers to scan a page for H1, H2 and H3, etc. using the H key. This is especially important in large documents, where a user can quickly skim a list of headings – H1 for the page title, H2 for major headings and H3 for major subheadings. When the visual layout doesn’t require a lower-level ranking heading, an invisible or hidden heading clarifies the page hierarchy for screen readers.
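An invisible heading can be implemented with a visually-hidden style, so it is announced by screen readers without being rendered on screen. The class name below is a widely used convention, not a standard; the heading text is illustrative:

```html
<style>
  /* Visually hidden, but still exposed to screen readers. */
  .sr-only {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
    white-space: nowrap;
  }
</style>

<!-- Announced as a level-2 heading during H-key navigation,
     but never painted in the visual layout. -->
<h2 class="sr-only">Filter options</h2>
```

Note that `display: none` or the `hidden` attribute would not work here: those hide the heading from screen readers as well.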

(4) How do landmarks affect announcements by screen reading?

During landmark navigation the screen reader announces the name of the landmark group – there are just a few of them: navigation, banner, region, form, complementary, search, and contentinfo. It's best practice to use a landmark only once, though some landmarks such as region and form can be repeated. It's also best practice to add an invisible label to differentiate multiple landmarks of the same type. The system announces the name of the landmark (e.g. region) and the invisible label (e.g. article or list of examples). As a result, navigating a page with two landmark regions would sound like this: 1. Region, article / 2. Region, list of examples.
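Those two region announcements could result from markup along these lines (the labels and content are illustrative; a `section` element with an accessible name is exposed as a region landmark):

```html
<!-- Announced during landmark navigation as "Region, article". -->
<section aria-label="article">
  <p>Main article text goes here.</p>
</section>

<!-- Announced as "Region, list of examples". -->
<section aria-label="list of examples">
  <ul>
    <li>Example one</li>
    <li>Example two</li>
  </ul>
</section>
```

Without the `aria-label`, both would be announced identically, leaving the user no way to tell them apart from the landmark list.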

(5) How does the user trigger the headings navigation mode?

Assistive technologies such as JAWS and NVDA allow heading navigation using the “H” key.

(6) When a user navigates Headings, is it possible to drill down to different heading levels?

Yes. When navigating with the “H” key, the system announces the headings in hierarchical order, beginning with higher ranks and drilling down to lower rankings before announcing a new higher ranking again. JAWS and NVDA also provide a list of navigation strategies presented in a dialog – including headings. In this case the user navigates the list of headings using the up/down arrows and hears identical announcements.

(7) When navigating Headings (or landmarks), can a user stay on the group and navigate another UI element?

Yes. The user switches from the “H” (or “R”) key to the Up and Down arrow keys. The announcements are then delivered based on the UI elements the user is exploring.

(8) If there’s an element in the center of the page, can I make this the first element to be read out when the user opens the page?

Yes! As a designer you can define that this element should receive focus when a user lands on the page. This is called Initial Focus and is used to direct users to the most significant UI element. It also applies when navigating groups (F6), where initial focus should, by default, be set on the first focusable element of the group.
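Initial focus can be implemented by making the target element programmatically focusable and focusing it when the page loads. A minimal sketch, with hypothetical element names and content:

```html
<main>
  <!-- tabindex="-1" makes a non-interactive element focusable by script,
       without adding it to the TAB order. -->
  <div id="notice" tabindex="-1" role="region" aria-label="Important notice">
    Your session expires in 5 minutes.
  </div>
</main>
<script>
  // Move initial focus to the most significant element on page load,
  // so the screen reader starts announcing from there.
  window.addEventListener("DOMContentLoaded", () => {
    document.getElementById("notice").focus();
  });
</script>
```

Use this sparingly: moving focus away from the top of the page is only helpful when the target genuinely is the user's most likely starting point.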

(9) Are there guidelines to annotate reading order?

Yes, there are many strategies out there. SAP has its own set of annotations for Figma, along with some useful plugins to speed up the annotation work. There are two annotation options: one number in the sequence represents either a single UI element or a group scope (e.g. a card with multiple pieces of information).

(10) Is there any technical limitation during the implementation?

Yes, UI developers face various challenges and limitations that may require collaboration between developer and designer. For instance, F6 group skipping is not used in mobile scenarios, even with an attached keyboard. Cross-disciplinary alignment, with regular stakeholder feedback, is the best approach to delivering inclusive digital products.

Takeaways

Now you know how assistive technologies enable blind users to explore pages by choosing among various navigation strategies. A logical structure of UI elements means interactive and structural elements are quickly identified, making the user experience comfortable and reliable. Good design caters to all users. Remember that the visual design can be the starting point and the support you need to structure invisible navigation for accessible products. Don't overlook the essential, invisible navigation on which assistive technologies and their users depend.

Learn more about SAP product accessibility on sap.com/accessibility.

_______________

[1] Christina Wodtke, “Information Architecture: Blueprints for the Web”, New Riders, 2003.

[2] JAWS and NVDA are the most common primary screen readers, accounting for over 80% of users. Source: webaim.org, Screen Reader User Survey, last updated September 2019. https://webaim.org/projects/screenreadersurvey8/#primary

[3] Source: webaim.org, Screen Reader User Survey, last updated September 2019. https://webaim.org/projects/screenreadersurvey9/