PhD Proposal: Neural Network Verification in the Wild

Talk
Ping-yeh Chiang
Time: 02.01.2022, 14:00 to 16:00
Location: IRB 4109

While neural networks have achieved impressive performance in a wide range of applications, their use in safety-critical applications has been limited by the lack of formal guarantees regarding safety and performance. Neural network verification techniques have been proposed to derive formal guarantees that are otherwise absent. However, studies of formal guarantees tend to focus on theoretical and toy problems. In this work, we aim to bridge the gap between theory and practice by proposing changes to verification techniques so that they can be applied in more practical settings.

In the first part, we adapt a formal verification method, interval bound propagation, to the patch-based threat model, which we find to be a more realistic threat than the previously studied ℓp-norm-based threat model. We first show that prior empirical patch defenses are easily broken by stronger adaptive adversaries, motivating the need for verifiable defenses. We then propose the first certified patch defense and evaluate it against previously proposed approaches.

In the second part, we propose the first certified defense for object detectors. While the prior certified defense literature often focuses on simple classifiers, object detectors are much more common in real-world computer vision systems. We start by presenting a reduction from object detection to a regression problem. Then, to enable certified regression, where standard mean smoothing fails, we propose median smoothing, which is of independent interest.

In the third part, we extend neural network verification techniques to generate the first certificate for the strategyproofness of auction networks. This differs from the prior neural network verification literature in that we seek to verify a quantity that is neither classification accuracy nor average precision. Due to the small size of these auction networks, we are able to employ MIP approaches to certify strategyproofness for the first time.

For future work, we will focus on making neural network verification more practical by improving both the speed of verification and the size of the certificate. At the same time, we will also aim to identify potential applications of verification techniques where the certificate offers more real-world utility.
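As a rough illustration of the interval bound propagation mentioned in the first part, the sketch below propagates box bounds through a single linear layer followed by a ReLU, with an adversarial "patch" modeled as a few inputs free to range over [0, 1]. The layer sizes, weights, and variable names here are placeholders for illustration, not details from the proposal.

```python
# Minimal sketch of interval bound propagation (IBP) through one
# linear layer + ReLU. All shapes and weights are illustrative.
import numpy as np

def ibp_linear(lower, upper, W, b):
    """Propagate elementwise input bounds [lower, upper] through x -> Wx + b."""
    center = (upper + lower) / 2.0      # midpoint of the input box
    radius = (upper - lower) / 2.0      # half-width of the input box
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius     # |W| maps input radius to output radius
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Usage: the adversary controls the first two inputs (a toy "patch"),
# so their bounds span the full valid range while the rest stay fixed.
x = np.random.rand(4)
lower, upper = x.copy(), x.copy()
lower[:2], upper[:2] = 0.0, 1.0
W, b = np.random.randn(3, 4), np.random.randn(3)
l1, u1 = ibp_relu(*ibp_linear(lower, upper, W, b))
print(l1, u1)
```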
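For the second part, the following is a minimal sketch of the median smoothing idea for a scalar regression model: sample Gaussian perturbations of the input and return the median of the resulting predictions, which, unlike the mean, is insensitive to a few arbitrarily bad outputs. The model f, noise level sigma, and sample count below are assumed for illustration and are not values from the proposal.

```python
# Minimal sketch of median smoothing for a scalar regression model f.
import numpy as np

def median_smooth(f, x, sigma=0.25, n_samples=1000, rng=None):
    """Return the empirical median of f over Gaussian perturbations of x."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    preds = np.array([f(x + eps) for eps in noise])  # one prediction per sample
    return np.median(preds)                          # robust to outlier outputs

# Usage with a toy stand-in for a detector's regression output: the median
# barely moves even if a handful of perturbed inputs yield wild predictions.
f = lambda z: float(np.sum(z))
x = np.ones(5)
print(median_smooth(f, x))
```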
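For the third part, MIP-based verification works by encoding the network exactly as mixed-integer constraints. The sketch below shows the standard big-M encoding of a single ReLU unit using PuLP; it illustrates the general technique only, not the proposal's specific strategyproofness formulation, and the weights, bounds, and big-M constant are placeholders.

```python
# Minimal sketch of the standard big-M MIP encoding of y = max(0, w*x + b),
# the building block of MIP-based network verification. Toy values only.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary

w, b, M = 2.0, -1.0, 100.0                       # toy weight, bias, big-M bound
prob = LpProblem("relu_encoding", LpMaximize)

x = LpVariable("x", lowBound=0.0, upBound=1.0)   # bounded network input
pre = LpVariable("pre")                          # pre-activation w*x + b
y = LpVariable("y", lowBound=0.0)                # post-activation output
z = LpVariable("z", cat=LpBinary)                # indicator: ReLU active or not

prob += pre == w * x + b
prob += y >= pre                                 # y is at least the pre-activation
prob += y <= pre + M * (1 - z)                   # y equals pre when z = 1
prob += y <= M * z                               # y is forced to 0 when z = 0

prob += y                                        # objective: worst-case output
prob.solve()
print(x.value(), y.value())
```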

Examining Committee:

Chair: Dr. Tom Goldstein
Department Representative: Dr. Rachel Rudinger
Dr. John Dickerson