PhD Defense: Towards Reliable and Efficient Representation Learning

Talk
Chen Zhu
Time: 05.13.2022 15:30 to 17:30
Location: IRB 3137

Large-scale representation learning has achieved enormous success during the past decade, surpassing human-level accuracy on a range of benchmarks including image recognition and language understanding. This success is supported by advances in both algorithms and computing capabilities, which enable training large models on enormous amounts of data. While performance on existing benchmarks continues to improve as models and training datasets grow, the reliability and efficiency of large models are often questioned for practical deployment: uncurated datasets may have been poisoned to manipulate model behavior, and deployed models must be trained or updated quickly on the latest data while keeping inference latency low.

In this talk, I will introduce our work on improving the reliability and efficiency of representation learning. On reliability, we study the threats of data-poisoning and evasion attacks and how to defend against them. We propose a more vicious targeted clean-label poisoning attack that is highly effective even when the target architecture is unknown. To defend against such threats, we develop a k-NN-based method in the feature space that filters poison examples out of the training set, effectively reducing the success rate of poisoning attacks at an insignificant cost in accuracy.

On efficiency, our study focuses on three dimensions: data efficiency, convergence speed, and computational complexity. For data efficiency, we propose enhanced adversarial-training algorithms as a general data-augmentation technique to improve the generalization of models given the same amount of labeled data, and we show their efficacy for Transformer models on language-understanding and vision-and-language tasks, as well as for Graph Neural Networks.
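The k-NN defense idea can be sketched roughly as follows: embed every training example in feature space (e.g. penultimate-layer activations) and drop any point whose label disagrees with the majority label of its k nearest neighbors. This is a minimal illustrative sketch, not the exact algorithm from the thesis; the function name `knn_filter` and the use of raw Euclidean distance are assumptions for illustration.

```python
import numpy as np

def knn_filter(features, labels, k=5):
    """Flag training points whose label disagrees with the majority
    label of their k nearest neighbors in feature space.

    Illustrative sketch (hypothetical helper, not the thesis code):
    `features` is an (n, d) array of feature embeddings,
    `labels` an (n,) integer array. Returns a boolean keep-mask.
    """
    n = len(features)
    keep = np.ones(n, dtype=bool)
    # Pairwise Euclidean distances between all embeddings.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # a point is not its own neighbor
    for i in range(n):
        nbrs = np.argsort(dists[i])[:k]          # indices of k nearest neighbors
        votes = np.bincount(labels[nbrs])        # count neighbor labels
        if votes.argmax() != labels[i]:
            keep[i] = False  # label disagrees with neighborhood: likely poison
    return keep
```

A clean-label poison typically sits near the target class in feature space while carrying the base-class label, so its neighborhood vote exposes it even though the image itself looks unmodified.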
For convergence speed, we propose an automated initialization scheme that accelerates the convergence of convolutional networks for image recognition and of Transformers for machine translation. For computational complexity, to scale Transformers to long sequences, we propose a linear-complexity attention mechanism that improves efficiency while preserving the performance of full attention on a range of language and vision tasks.
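To illustrate the general idea of linear-complexity attention (a generic kernelized variant, not the specific mechanism proposed in the thesis): replacing the softmax with a positive feature map φ lets one reorder the matrix products so the cost is linear rather than quadratic in sequence length. The feature map φ(x) = elu(x) + 1 below is one common choice, assumed here purely for illustration.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized linear attention sketch: approximate
    softmax(Q K^T) V by phi(Q) (phi(K)^T V), reordering the products
    so the cost is O(n d^2) instead of O(n^2 d) for sequence length n.

    Generic illustration, not the thesis's specific mechanism.
    Q, K: (n, d) query/key matrices; V: (n, dv) value matrix.
    """
    def phi(x):
        # elu(x) + 1: a positive feature map, so attention weights
        # are nonnegative and normalize to 1 per query.
        return np.where(x > 0, x + 1.0, np.exp(x))

    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                   # (d, dv): summarize keys/values once
    z = Qp @ Kp.sum(axis=0)         # (n,): per-query normalizer
    return (Qp @ kv) / z[:, None]   # (n, dv)
```

Because the (n, n) attention matrix is never materialized, memory and compute grow linearly with sequence length, which is what makes such mechanisms attractive for long documents and high-resolution inputs.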
Examining Committee:

Chair: Dr. Tom Goldstein
Dean's Representative: Dr. Behtash Babadi
Members: Dr. David Jacobs, Dr. Furong Huang, Dr. Rachel Rudinger, Dr. John P. Dickerson