PhD Proposal: Robust Learning under Distributional Shifts

Talk
Yogesh Balaji
Time: 11.12.2019 15:00 to 17:00
Location: IRB 5105

Robustness to shifts in input distributions is crucial for the reliable deployment of deep neural networks. Unfortunately, neural nets are extremely sensitive to distributional shifts, making them unsuitable for safety-critical applications. For instance, the perception system of a self-driving car trained in sunny weather conditions fails to perform well in snow. In this talk, I will present several algorithms for robust learning of deep neural networks against input distributional shifts.

First, I will present some results on likelihood computation using generative models and show how these likelihood estimates can be used to quantify distributional shifts. Then, I will discuss robust learning algorithms for two broad classes of distributional shifts: naturally occurring covariate shifts and artificially constructed adversarial shifts. For adapting to covariate shifts, I will present techniques using Generative Adversarial Networks (GANs) and regularization strategies. For adversarial shifts, I will discuss why current robust training algorithms generalize poorly and propose a technique for improving generalization.
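To illustrate the idea of using likelihood estimates to quantify distributional shift, here is a minimal sketch (not the speaker's method): a density model is fit on in-distribution training data, a threshold is calibrated from its training log-likelihoods, and incoming samples scoring below that threshold are flagged as shifted. A multivariate Gaussian and synthetic data stand in for the deep generative models and real inputs discussed in the talk.

```python
# Hypothetical sketch: likelihood-based detection of distributional shift.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Synthetic stand-ins: "sunny" training data and a shifted "snow" test batch.
train = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))
shifted = rng.normal(loc=1.5, scale=1.3, size=(1000, 8))

# Fit a simple density model on the training distribution.
mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
model = multivariate_normal(mean=mu, cov=cov)

# Calibrate a threshold from training log-likelihoods (here, the 1st percentile),
# then flag test samples whose likelihood falls below it as out-of-distribution.
train_ll = model.logpdf(train)
threshold = np.percentile(train_ll, 1.0)
flagged = np.mean(model.logpdf(shifted) < threshold)
print(f"Fraction of shifted batch flagged as out-of-distribution: {flagged:.2f}")
```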

Examining Committee:
Chair: Dr. Rama Chellappa
Dept rep: Dr. Soheil Feizi
Members: Dr. Abhinav Shrivastava