PhD Proposal: Using Computer Vision and Wearable Computing to Assist Visually Impaired Users

Talk
Brandyn White
Time: 04.24.2014 17:00 to 18:30
Location: AVW 3165

We propose to explore the applicability of state-of-the-art computer vision algorithms and wearable devices as assistive technology for visually impaired users. This area of research has been underexplored, and recent advances in mobile devices, including Google Glass, a head-mounted device with an egocentric camera, and Google's Project Tango, a phone with a structured-light depth camera, have enabled accessibility applications that were previously largely impossible.
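
As a rough illustration of what a depth camera like Tango's makes possible, the sketch below checks a single depth frame for nearby obstacles. It is a minimal example under assumed conventions (depths in meters, zeros marking missing readings), not part of the proposed system.

    import numpy as np

    # Hypothetical check of a single depth frame for nearby obstacles.
    # Assumed conventions: depths in meters, zero marks a missing reading.
    def nearest_obstacle(depth_frame, threshold_m=1.5):
        """Return the distance to the closest valid point, or None if
        nothing lies within threshold_m of the camera."""
        valid = depth_frame[depth_frame > 0]
        if valid.size == 0:
            return None
        nearest = float(valid.min())
        return nearest if nearest <= threshold_m else None

    # Example: a 480x640 frame with an object roughly 0.8 m away.
    frame = np.full((480, 640), 3.0)
    frame[200:280, 300:340] = 0.8
    print(nearest_obstacle(frame))  # 0.8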

This work consists of three research topics that build on one another: 1.) a novel mobile activity recognition, object detection, and scene parsing approach to infer the user's context in real time, 2.) hands-free, real-time feedback about a user's surroundings, enabling obstacle avoidance, object localization, and improved awareness in social settings, and 3.) contextual triggers, configured through a voice interface, that allow a user to customize the device's behavior and automate tasks depending on their current situation. This work spans multiple research areas, including Human Computer Interaction, Computer Vision, Accessibility, Distributed Systems, and Wearable Computing.
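
To make the third topic concrete, a contextual trigger can be viewed as a predicate over the inferred context paired with a feedback action. The following is a minimal, hypothetical sketch of that idea; the names and structure are illustrative assumptions, not the proposal's actual interface.

    from dataclasses import dataclass
    from typing import Callable

    # Hypothetical form of a voice-configured contextual trigger: a
    # predicate over the inferred context paired with a feedback action.
    @dataclass
    class Trigger:
        name: str
        condition: Callable[[dict], bool]  # tested against the current context
        action: Callable[[], None]         # feedback delivered when it fires

    def evaluate(triggers, context):
        """Fire every trigger whose condition holds in the given context."""
        for trigger in triggers:
            if trigger.condition(context):
                trigger.action()

    # Example: announce when scene parsing infers the user is at a crosswalk.
    triggers = [Trigger(
        name="crosswalk alert",
        condition=lambda ctx: ctx.get("scene") == "crosswalk",
        action=lambda: print("Crosswalk ahead."),
    )]
    evaluate(triggers, {"scene": "crosswalk", "activity": "walking"})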

We intend to evaluate our approach in a pilot study with visually impaired users over a period of 2-6 weeks. During this time we will tailor the device's behavior to meet each participant's individual needs as they go about their day. Using the feedback from the pilot study, we will conduct a controlled user study to evaluate user satisfaction and quantitative performance on a series of benchmark tasks, contrasted with existing approaches.

Throughout this work there is a broad range of novel technical challenges related to distributed sensing and real-time contextual analysis that will be explored in depth; however, at a high level our focus is to develop novel methods of assisting visually impaired users, and we intend to explore the following research questions: 1.) What applications can head-mounted devices and mobile depth sensors enable that benefit visually impaired users? 2.) What computer vision algorithms are best suited for use in assistive devices for visually impaired users? 3.) To what extent can automatic inference of a user's context reduce manual input, and how does this impact user satisfaction? 4.) What is the impact of providing visually impaired users with an end-user programming environment for customized real-time feedback and contextual triggers?
Examining Committee:
Committee Chair: Dr. Larry S. Davis
Dept. Rep: Dr. Jon Froehlich
Committee Members: Dr. Leah Findlater