Understanding optimization in neural networks

Talk
Tom Goldstein
University of Maryland
Time: 09.21.2018 11:00 to 12:00
Location: AVW 2117

This talk explores a number of issues related to optimization for neural networks. I begin by examining the loss functions of neural networks using visualization methods. I demonstrate that network design choices have a surprisingly strong effect on the structure of neural loss functions, and that well-designed networks have loss functions with very simple, nearly convex geometry. I then look at situations where the local convexity (or lack thereof) of neural loss functions can be exploited to build effective optimizers for difficult training problems, such as GANs and binary neural networks. Next, I investigate ways that optimization can be used to exploit neural networks and create security risks. I discuss the concept of "adversarial examples," in which small perturbations to test images can completely alter the behavior of the neural networks that act on those images. I then introduce a new type of "poisoning attack," in which neural networks are attacked at training time rather than at test time. Finally, I ask a fundamental question about neural network security: are adversarial examples inevitable? Approaching this question from a theoretical perspective, I provide a rigorous analysis of the susceptibility of neural networks to attack.
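The abstract does not spell out the visualization procedure, but a common way to probe a neural loss function is to evaluate it along a random direction in parameter space around the trained weights. The sketch below is a minimal illustration of that idea; the model, loss criterion, data loader, and the choice of an unnormalized random direction are assumptions for illustration, not necessarily the method used in the talk.

```python
# A minimal sketch of 1-D loss-landscape probing: evaluate the loss at
# theta + alpha * d for a random direction d around the trained weights.
# Model, criterion, and data_loader are assumed to be provided by the caller.
import copy
import torch

def loss_along_direction(model, criterion, data_loader, alphas):
    """Return the average loss at theta + alpha * d for each alpha."""
    base_state = copy.deepcopy(model.state_dict())
    # Draw a random direction with the same shapes as the float parameters.
    direction = {k: torch.randn_like(v) for k, v in base_state.items()
                 if v.dtype.is_floating_point}
    losses = []
    for alpha in alphas:
        perturbed = {k: (v + alpha * direction[k]) if k in direction else v
                     for k, v in base_state.items()}
        model.load_state_dict(perturbed)
        total, count = 0.0, 0
        with torch.no_grad():
            for x, y in data_loader:
                total += criterion(model(x), y).item() * x.size(0)
                count += x.size(0)
        losses.append(total / count)
    model.load_state_dict(base_state)  # restore the original weights
    return losses
```

Plotting the returned losses against the alphas gives a one-dimensional slice of the loss surface; sweeping two directions on a grid gives the two-dimensional surface plots often used to compare architectures.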
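The abstract only states that small test-time perturbations can alter a network's behavior. One standard way to construct such perturbations is the fast gradient sign method (FGSM); the sketch below uses it purely for illustration and is not presented as the attack discussed in the talk.

```python
# A minimal FGSM sketch: perturb an input image in the direction that
# increases the classification loss, staying within a small L-infinity ball.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (pixels in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the gradient, then clamp to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```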