PhD Proposal: HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments

Talk
Lee Stearns
Time: 11.17.2016, 13:00 to 14:30
Location: AVW 3450

Many activities of daily living, such as getting dressed, preparing food, wayfinding, and shopping, rely heavily on visual information; this reliance can negatively impact the quality of life of people with vision impairments. While numerous researchers have explored solutions for assisting with visual tasks performed at a distance, such as identifying landmarks for navigation or recognizing people and objects, only a few have attempted to provide access to visual information through touch, and those efforts have focused primarily on reading printed text. Touch is a highly attuned means of acquiring textural and spatial information, especially for people with vision impairments; by making visual information accessible through touch, users may gain a richer understanding of how a surface appears than other methods allow.

The central question of my dissertation is: how can I augment a visually impaired user’s sense of touch with interactive, real-time computer vision to help them access information about the physical world? To answer this question, I propose a system called HandSight that uses wearable cameras and other sensors to detect touch events and identify surface content beneath the user’s finger (e.g., text, colors and textures, images). There are three key aspects of HandSight: (i) designing and implementing the physical hardware, (ii) developing signal processing and computer vision algorithms, and (iii) designing real-time auditory, haptic, or visual feedback that enables users with vision impairments to interpret surface content.

To explore this idea, I have implemented and tested four proof-of-concept prototypes consisting of a finger-mounted camera and other sensors. Thus far, I have primarily focused on two specific application areas: reading and exploring printed documents, and controlling mobile devices through touches on the surface of the body. User studies with these prototypes have demonstrated the feasibility of HandSight and identified several tradeoffs that will be important to consider in future iterations. For example, reading through touch is slower and more mentally and physically demanding than traditional screen-reader approaches; however, it also enables immediate access, greater control over reading pace and order, and knowledge of spatial layout that could be useful for some types of documents.
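
To make the pipeline concrete, the sketch below shows, purely for illustration, how a reading-oriented loop of this kind might be wired together in Python: a webcam stands in for the finger-mounted camera, a focus-based heuristic stands in for touch detection, and off-the-shelf OCR and text-to-speech stand in for the recognition and auditory feedback components. The camera index, the contact heuristic, and the pytesseract/pyttsx3 library choices are assumptions made for illustration only, not the actual HandSight hardware or algorithms.

# Illustrative sketch only -- not the HandSight implementation.
# Assumptions: a camera at index 0 stands in for the finger-mounted camera,
# image sharpness stands in for touch/contact detection, and pytesseract +
# pyttsx3 stand in for the text recognition and auditory feedback components.
import cv2          # camera capture and image processing
import pytesseract  # off-the-shelf OCR
import pyttsx3      # simple offline text-to-speech

def in_contact(gray, sharpness_threshold=120.0):
    """Crude contact heuristic: a close-focus finger camera pressed against a
    surface tends to yield a sharp image, so use the variance of the Laplacian
    as a focus measure and treat high values as 'finger touching the surface'."""
    return cv2.Laplacian(gray, cv2.CV_64F).var() > sharpness_threshold

def main():
    cap = cv2.VideoCapture(0)   # assumed camera index
    speech = pyttsx3.init()
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if in_contact(gray):
                # Recognize whatever text lies under the (simulated) fingertip...
                text = pytesseract.image_to_string(gray).strip()
                if text:
                    # ...and read it aloud as real-time auditory feedback.
                    speech.say(text)
                    speech.runAndWait()
    except KeyboardInterrupt:
        pass
    finally:
        cap.release()

if __name__ == "__main__":
    main()

In the actual system, the contact heuristic would be replaced by the finger-mounted sensors, and the OCR and speech stages by the recognition and feedback components described above.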

Building on this preliminary work, I will iterate on each of the three key aspects of HandSight, focusing in particular on enabling access to color, visual texture, and spatial layout information. I will (i) investigate alternate camera types and mounting locations, and improve the design of my wearable prototype; (ii) apply state-of-the-art computer vision and machine learning algorithms to support robust recognition of surface content and touch gestures; and (iii) involve blind and visually impaired participants throughout the design and development process to ensure that the interface is efficient, easy to use, and enables access to the types of information that are most important to my target users. My research will culminate in the technical evaluation of my system’s accuracy and robustness as well as user studies to assess usability and utility.

Examining Committee:

Chair: Dr. Jon Froehlich

Department Representative: Dr. Ramani Duraiswami

Member: Dr. Rama Chellappa