Addressing Biases for Robust, Generalizable AI

Talk
Swabha Swayamdipta
Time: 03.08.2021, 13:00 to 14:00

Artificial Intelligence has made unprecedented progress in the past decade. However, a large gap remains between the decision-making capabilities of humans and machines. In this talk, I will investigate two factors that help explain this gap. First, I will discuss the presence of undesirable biases in datasets, which ultimately hurt generalization. I will then present bias mitigation algorithms that boost the ability of AI models to generalize to unseen data. Second, I will explore task-specific prior knowledge, which aids robust generalization but is often ignored when training modern AI architectures. Throughout this discussion, I will focus on language applications and show how certain underlying structures can provide useful inductive biases for inferring meaning in natural language. I will conclude with a discussion of how the broader framework of dataset and model biases will play a critical role in the societal impact of AI going forward.