Deconstructing models and methods in deep learning

Talk
Pavel Izmailov
Time: 02.16.2023, 11:00 to 12:00

Machine learning models are ultimately used to make decisions in the real world, where mistakes can be incredibly costly. We still understand surprisingly little about neural networks and the procedures we use to train them; as a result, our models are brittle, often rely on spurious features, and generalize poorly under minor distribution shifts. Moreover, these models are often unable to faithfully represent the uncertainty in their predictions, further limiting their applicability. In this talk, I will present work on neural network loss surfaces, probabilistic deep learning, uncertainty estimation, and robustness to distribution shifts. In each of these areas, we aim to build a foundational understanding of models, training procedures, and their limitations, and then use this understanding to develop practically impactful, interpretable, robust, and broadly applicable methods and models.