“Flatten the curve” was one of the most common visual representations used to explain the evolution of COVID-19 worldwide, and it also illustrated the response needed to contain the virus. Yet while charts like this one helped prevent contagion, they excluded a significant audience: people with visual impairments.
According to the World Health Organization (WHO), blindness and low vision affect at least 2.2 billion people. Brazil alone has almost 7 million people with some visual impairment, according to the latest National Health Survey (IBGE, 2019). It is a health problem that worsens as the population ages and screen exposure grows.
This audience stands to benefit from data visualization as a tool for information, education, and public engagement: a single graphic can communicate a complex dataset quickly. At the same time, people with visual impairments are more vulnerable to misinformation.
Over the last 30 years, researchers’ interest in non-visual forms of visualization has grown thanks to advances in assistive technologies: devices that improve the functional abilities of people with disabilities. In the case of people with blindness and low vision, these are called typhlotechnology (from the Greek typhlos, meaning blind).
Screen readers are the most popular assistive technology for users with blindness and low vision. They are programs that use a synthesized voice to read aloud what appears on the screen of a computer or mobile device. While they work well for text, they are far less effective for graphics and images, because they depend heavily on third parties providing descriptive text.
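To make the dependence on descriptive text concrete, here is a minimal sketch (not any particular screen reader's behavior) of a hypothetical helper that turns the data behind a chart into the kind of summary a screen reader could announce as alt text:

```python
def describe_series(label, values):
    """Build a short textual summary of a data series, suitable as
    alt text for a chart (hypothetical helper for illustration)."""
    lo, hi = min(values), max(values)
    trend = "rising" if values[-1] > values[0] else "falling or flat"
    return (f"{label}: {len(values)} values ranging from {lo} to {hi}, "
            f"overall {trend}.")

# Example: weekly counts behind a "flatten the curve" style chart
alt_text = describe_series("New cases per week", [10, 40, 90, 70, 30])
print(alt_text)
```

Even a one-sentence summary like this gives a screen-reader user the gist of a chart that would otherwise be announced as an unlabeled image.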
To amplify the sensory response, many engineering and accessibility researchers have been working with haptic technology. With it, blind users can explore visualizations and distinguish lines, points, and textures through a device’s vibration, making graphics, symbols, maps, and illustrations more accessible. 3D printing has also made it possible to build tactile visualizations that users can touch and analyze.
At the same time, other devices engage additional senses to reinforce accessibility. Sonification, for example, converts visual information into sound. More recently, smell has been used: researchers at the University of Maryland developed a device called viScent, capable of emitting six scents as the user interacts with a visualization.
Despite these advances, the primary concern is still access to the technology itself. These devices are expensive and require specialized skills to produce certain graphics. The barriers to inclusive data visualization are clear, and overcoming them depends on interdisciplinary dialogue between researchers from different fields.
Inclusive data visualization is a necessary advance that must accompany a society increasingly impacted by data. The popularization of smartphones also helps create more accessible solutions, using existing standards (such as WCAG and ARIA) and blending other modalities, such as audio and touch, to communicate graphics to people with low vision.
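In practice, applying these standards to web graphics can be as simple as giving an SVG chart an ARIA role and label so screen readers announce it rather than skip it. The sketch below, assuming a hypothetical helper function, generates such a chart; `role="img"` and `aria-label` are real attributes from the ARIA specification:

```python
def accessible_svg_bar(label, values, width=300, height=100):
    """Emit a minimal SVG bar chart that carries an ARIA role and
    label, so assistive technology announces the description."""
    vmax = max(values) or 1
    bar_w = width // len(values)
    bars = "".join(
        f'<rect x="{i * bar_w}" y="{height - v * height // vmax}" '
        f'width="{bar_w - 2}" height="{v * height // vmax}"/>'
        for i, v in enumerate(values)
    )
    return (f'<svg role="img" aria-label="{label}" '
            f'width="{width}" height="{height}">{bars}</svg>')

svg = accessible_svg_bar("New cases per week, peaking mid-period",
                         [10, 40, 90, 70, 30])
```

The label should carry the chart's message, not just its title, echoing the point above that descriptive text is what screen readers ultimately rely on.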
For journalism, inclusive data visualization also requires functional tools so that journalists, designers, and programmers can apply accessibility to their graphics. Decades of visual-language development have enabled increasingly complex and interactive images and, consequently, increasingly inaccessible ones.
Above all, it is essential to understand how this audience reads and interprets data. It makes no sense to develop new products and technologies that bar people with blindness and low vision from accessing visual information.
Creating non-visual communication is a fundamental challenge in the context of misinformation. We can already hear, touch, and even smell data; the remaining obstacles to its expansion could be overcome by building bridges between communicators, designers, engineers, and accessibility researchers. That future is close, and the next steps promise to be extraordinary.