Leveraging Social Theories to Enhance Human-AI Interaction

Talk
Harmanpreet Kaur
Time: 03.30.2023, 11:00 to 12:00

Human-AI partnerships are increasingly commonplace. Yet systems that rely on these partnerships struggle to capture the dynamic needs of people or to explain complex AI reasoning and outputs. The resulting socio-technical gap has led to harmful outcomes, such as the propagation of biases against marginalized populations and missed edge cases in sensitive domains. My work is guided by the belief that for human-AI interaction to be effective and safe, technical development in AI must come in concert with an understanding of human-centric cognitive, social, and organizational phenomena. In this talk, using ML-based decision-support systems as a case study, I will discuss my work explaining why interpretability tools do not work in practice: these tools exacerbate the bounded nature of human rationality, encouraging people to fall back on cognitive and social heuristics. Such heuristics are mental shortcuts that speed up decision-making by sparing people the effort of carefully reasoning about the information presented. Looking ahead, I will share my research agenda, which incorporates social theories to design human-AI systems that not only take advantage of the complementarity between people and AI, but also account for the incompatibilities in how (much) they understand each other.