Understanding Human-Centric Properties of Deep AI Models

Talk
Bolei Zhou
Time: 03.24.2021 13:00 to 14:00

Over the past few years, data-driven AI models such as deep networks have made significant progress in a wide range of real-world applications, from self-driving cars to protein structure prediction. To deploy these models in high-stakes settings such as self-driving and medical diagnosis, it is essential that the model output be interpretable and trustworthy to humans. Meanwhile, humans should be able to quickly examine the models and identify potential biases and blind spots. Such interpretable human-AI interaction is crucial for building reliable collaboration between humans and intelligent machines. In this talk, I will present our efforts to examine and improve the human-centric properties of deep AI models beyond raw performance, such as explainability, steerability, generalization, and fairness.

First, I will introduce Class Activation Mapping, a simple yet effective approach that leverages the internal activations of a deep network to explain its classification output. Then, I will talk about improving the steerability of deep generative models to facilitate human-in-the-loop visual content creation. Lastly, I will briefly discuss improving the generalization of self-driving agents through the procedural generation of reinforcement learning environments. I will conclude with ongoing and future work toward effective human-AI interaction and its broad applications to machine perception and autonomy.
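As background for the first topic: Class Activation Mapping (Zhou et al., CVPR 2016) localizes the image regions that drive a classifier's prediction by weighting the last convolutional feature maps with the final fully connected layer's weights for the target class. The sketch below is a minimal illustration of that idea, not code from the talk; it assumes a torchvision ResNet-18 (whose global-average-pooling head makes CAM directly applicable), and the hook and function names are illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained ResNet-18: global average pooling followed by a single FC layer,
# which is the architecture CAM assumes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Capture the activations of the last convolutional stage via a forward hook.
features = {}
def hook(module, inputs, output):
    features["conv"] = output  # shape (1, C, H, W)

model.layer4.register_forward_hook(hook)

@torch.no_grad()
def class_activation_map(image, class_idx=None):
    """Return an (H, W) heatmap of evidence for `class_idx` (default: top-1)."""
    logits = model(image)                      # (1, num_classes)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    fmap = features["conv"].squeeze(0)         # (C, H, W)
    w = model.fc.weight[class_idx]             # (C,) FC weights for the class
    cam = torch.einsum("c,chw->hw", w, fmap)   # weighted sum over channels
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam, class_idx

# Usage: `x` stands in for a normalized (1, 3, 224, 224) input image.
x = torch.randn(1, 3, 224, 224)
cam, pred = class_activation_map(x)
print(pred, cam.shape)  # coarse 7x7 map, typically upsampled onto the image
```

The resulting low-resolution map is usually bilinearly upsampled and overlaid on the input image to visualize which regions support the predicted class.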