PhD Defense: Building Secure and Reliable Deep Learning Systems from a Systems Security Perspective

Talk
Sanghyun Hong
Time: 07.23.2021, 12:00 to 14:00
Location: Remote

As deep learning is becoming a key component in many business and safety-critical systems, e.g., self-driving cars or AI-assisted robotic surgery, adversaries have started placing these systems on their radar. To understand the potential threats, recent work has studied the worst-case behaviors of deep neural networks (DNNs), such as mispredictions caused by adversarial examples or models altered by data poisoning. However, most prior work narrowly considers DNNs as an isolated mathematical concept and overlooks the holistic picture, leaving out the security threats posed by practical hardware- or system-level attacks.

In this talk, drawing on three projects, I will present my research on how deep learning systems, owing to the computational properties of DNNs, are particularly vulnerable to existing, well-studied attacks.

First, I will show how over-parameterization hurts a system's resilience to fault-injection attacks [USENIX'19]. With even a single, carefully chosen bit-flip, an attacker can inflict an accuracy drop of up to 100%, and half of a DNN's parameters contain at least one bit whose flip degrades accuracy by more than 10% (a minimal sketch of such a flip follows the abstract). An adversary who wields Rowhammer, a fault attack that flips random or targeted bits in physical memory (DRAM), can exploit this graceless degradation in practice.

Second, I will show how computational regularities can compromise the confidentiality of a system [ICLR'20]. Leveraging the information leaked while a DNN processes a single sample, an adversary can steal the DNN's often proprietary architecture. An attacker armed with Flush+Reload, a remote side-channel attack, can accurately perform this reconstruction against a DNN deployed in the cloud.

Third, I will show how input-adaptive DNNs, e.g., multi-exit networks, fail to deliver their promised computational efficiency in adversarial settings [ICLR'21]. By adding imperceptible input perturbations, an attacker can significantly increase the computation a multi-exit network requires to produce a prediction on an input. This vulnerability can also be exploited in resource-constrained settings, such as IoT scenarios, where input-adaptive networks are gaining traction.

Finally, building on the lessons learned from these projects, I will conclude by outlining future research directions for designing secure and reliable deep learning systems.
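The single-bit-flip claim above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch example, not code from the USENIX'19 paper: it flips one bit in the IEEE-754 float32 encoding of a single weight, the kind of corruption a Rowhammer-induced fault can cause in DRAM. The toy model, the targeted weight, and the choice of bit are illustrative assumptions.

```python
import struct

import torch
import torch.nn as nn

def flip_bit(value: float, bit_index: int) -> float:
    """Flip one bit of the IEEE-754 float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit_index
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

# Illustrative toy model; in an actual attack this would be a trained, deployed DNN.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

with torch.no_grad():
    weight = model[0].weight
    original = weight[0, 0].item()
    # Flipping the most significant exponent bit (bit 30) of a small float32
    # weight inflates its magnitude by many orders of magnitude, the kind of
    # single-bit corruption behind the graceless accuracy drop described above.
    weight[0, 0] = flip_bit(original, 30)
    print(f"weight before: {original:.6f}  after flip: {weight[0, 0].item():.3e}")
```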

Examining Committee:
Chair: Dr. Tudor Dumitras
Dean's Representative: Dr. Mike Hicks
Members: Dr. Dana Dachman-Soled, Dr. Leonidas Lampropoulos, Dr. Abhinav Shrivastava, Dr. Nicolas Papernot, Dr. Nicholas Carlini