PhD Proposal: Towards Human-Centered NLP Systems through Fair and Productive Human-Machine Collaboration
Inspired by the Turing test, a long line of research in natural language processing (NLP) has focused on enhancing machines' ability to understand and imitate human behavior. In recent years, with advances in deep learning techniques, developers have begun deploying these systems in real-life applications. However, because they were designed with a focus on machine improvements rather than user experience, these systems face crucial challenges during deployment. As an alternative, researchers have emphasized the need for human-centered AI when designing and building machine systems. Human-centered AI aims to build AI systems that amplify and augment human abilities through more productive, enjoyable, and fair AI partnerships. Because human-centered AI grounds system design and development in human usage, it has two primary themes: user experience, i.e., how to make human-machine interaction enjoyable so that users are willing to incorporate the machine system into completing their tasks; and practical value, i.e., how to create machine systems that are practically useful and benefit humans.

In my first line of research, I aim to improve user experience by assessing the fairness of current machine systems with respect to previously unconsidered social groups. In my past work, I analyzed how coreference resolution systems are biased toward binary-gendered groups over non-binary-gendered groups and proposed suggestions for gender-inclusive coreference resolution systems. I then went beyond the gender domain to analyze stereotypes of various social groups in natural language inference systems and demonstrated that the definition of fairness can differ between model developers and users.

In my second line of research, I study how to build machine systems that provide more practical value for users.
In my past work, I examined how machine systems can help by completing tasks humans cannot do, focusing on visual question answering (VQA) systems for visually impaired people. I investigated the gap between state-of-the-art VQA models, which focus on improving model understanding, and models designed for accessibility, and proposed challenges and opportunities for future improvements to VQA systems for visually impaired people. In my proposed work, I plan to explore how to build machine systems that 1) complement users' shortcomings and 2) are rooted in users' needs, through the task of maintaining healthy conversations on social media. Toward the first aspect, my first proposed work investigates how machine systems can assist posters by encouraging them to express their opinions with more civil framing, which may help reduce conflict and the chance of being banned. My second proposed work then probes content moderators' difficulties in completing their work and how to construct machine systems that empower content moderators by relieving their exhausting workloads and enabling them to make judgments more effectively.
Dr. Hal Daumé III, Dr. Michelle Mazurek, Dr. Katie Shilton, Dr. Rachel Rudinger, Dr. Hernisa Kacorri, Dr. Kai-Wei Chang (UCLA)