Using a computer and modern software can be a chore to begin with for the visually impaired, but fundamentally visual tasks like 3D design are even harder. This Stanford team is working on a way to display 3D information, like in a CAD or modeling program, using a “2.5D” display made up of pins that can be raised or lowered as sort of tactile pixels. Taxels!
The research project, a collaboration among graduate student Alexa Siu, Joshua Miele, and lab head Sean Follmer, explores ways for blind and visually impaired people to accomplish visual tasks without the aid of a sighted helper. It was presented this week at SIGACCESS.
The device is essentially a 12×24 array of thin columns with rounded tops, each of which can be individually raised anywhere from a fraction of an inch to several inches above the plane, taking the shape of 3D objects quickly enough to feel like real time.
“It opens up the possibility of blind people being, not just consumers of the benefits of fabrication technology, but agents in it, creating our own tools from 3D modeling environments that we would want or need – and having some hope of doing it in a timely manner,” explained Miele, who is himself blind, in a Stanford news release.
Siu calls the device “2.5D,” since of course it can’t show the entire object floating in midair. But it’s an easy way for someone who can’t see the screen to understand the shape it’s displaying. The resolution is limited, sure, but that’s a shortcoming shared by all tactile displays — which it should be noted are extremely rare to begin with and often very expensive.
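To make the "2.5D" idea concrete, here's a minimal sketch of how a 3D model might be flattened into pin heights for a 12×24 grid like the one described. The grid size comes from the article; everything else (the voxel representation, the function name, the assumed pin travel range) is illustrative, not the team's actual implementation.

```python
import numpy as np

PINS_X, PINS_Y = 12, 24   # pin grid size from the article
MAX_TRAVEL_IN = 3.0       # assumed maximum pin extension, in inches

def model_to_pin_heights(voxels: np.ndarray) -> np.ndarray:
    """Project a voxelized model down to one height per pin.

    voxels: boolean array of shape (PINS_X, PINS_Y, Z); True = solid.
    Returns an array of pin heights in inches, shape (PINS_X, PINS_Y).
    """
    z = voxels.shape[2]
    # For each column, find the top of the highest solid voxel
    # (0 if the column is empty).
    tops = (voxels * np.arange(1, z + 1)).max(axis=2)
    # Scale voxel units to the pins' physical travel range.
    return tops / z * MAX_TRAVEL_IN

# Example: a simple two-level "step" shape, 8 voxels tall.
vox = np.zeros((PINS_X, PINS_Y, 8), dtype=bool)
vox[:, :12, :2] = True    # low slab on one half (2/8 of full height)
vox[:, 12:, :6] = True    # taller slab on the other (6/8 of full height)
pins = model_to_pin_heights(vox)  # 0.75 in on one side, 2.25 in on the other
```

The "2.5D" limitation falls out of this directly: each pin gets exactly one height, so overhangs and interior cavities are lost in the projection, which is why the display can show the surface of an object but not the whole thing "floating in midair."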
The field is moving forward, but too slowly for some, like this crew and the parents behind the BecDot, an inexpensive Braille display for kids. And other tactile displays are being pursued as possibilities for interactions in virtual environments.