Correcting Errors in Speech Input During Non-visual Use
While speech input has improved dramatically in the past few years, reviewing and editing dictated text non-visually remains a known challenge. We are studying how well users can identify dictation errors based on text-to-speech output, and we are designing and evaluating mechanisms to improve speech-based error identification and correction during non-visual use.
Mobile Object Recognizers for Blind Users
Advances in machine learning are making mobile devices more useful for people with disabilities. For example, computer vision techniques can help visually impaired users identify objects with their mobile devices. In this project, we aim to improve the interface through which visually impaired people train their mobile devices for object recognition, using deep learning and TensorFlow.