Biography

My research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. I work at the boundary between theory and practice, leveraging mathematical foundations, complex models, and efficient hardware to build practical, high-performance systems. I design optimization methods for platforms ranging from powerful cluster/cloud computing environments to resource-limited integrated circuits and FPGAs. Before joining the faculty at Maryland, I completed my PhD in Mathematics at UCLA and was a research scientist at Rice University and Stanford University. I have been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, and a Sloan Fellowship.

Research

Here are some of my most recent projects. I believe in reproducible research, and I try to develop open-source tools to accompany my research when possible. For a full list of software and projects, see my complete research page.

Understanding generalization

on June 1, 2019

Neural networks can generalize to test data that were never seen during training. The origins of this generalization remain poorly understood. We try to gain an intuitive grasp of generalization through carefully crafted experiments.

Continue reading

Attacks on copyright systems

on May 28, 2019

We show that content control systems are vulnerable to adversarial attacks. Using small perturbations, we can fool important industrial systems like YouTube’s Content ID.

Continue reading

Adversarial training for FREE!

on March 8, 2019

Adversarial training hardens neural nets against attacks, but it costs 10-100X more than regular training. We show how to do adversarial training with no added cost, and train a robust ImageNet model on a desktop computer in just a day.

Continue reading
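The core trick can be illustrated on a toy model: each backward pass yields gradients with respect to both the weights and the inputs, so the adversarial perturbation and the weights can be updated together for the price of one pass, replaying each minibatch a few times. Below is a minimal sketch of that idea on logistic regression with analytic gradients; the names (`free_adv_train`, the replay count `m`) and the tiny setup are illustrative, not the paper's actual training code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def free_adv_train(X, y, m=4, epochs=5, lr=0.5, eps=0.1):
    """Sketch of 'free' adversarial training: one gradient computation
    updates both the weights (descent) and the perturbation (ascent),
    and the minibatch is replayed m times."""
    w = np.zeros(X.shape[1])
    delta = np.zeros_like(X)  # persistent perturbation, carried across replays
    for _ in range(epochs):
        for _ in range(m):  # replay the same batch m times
            p = sigmoid((X + delta) @ w)
            err = p - y                              # dLoss/dlogit for logistic loss
            grad_w = (X + delta).T @ err / len(y)    # gradient w.r.t. weights
            grad_x = np.outer(err, w)                # gradient w.r.t. inputs
            # adversary ascends on the input, trainer descends on the weights
            delta = np.clip(delta + eps * np.sign(grad_x), -eps, eps)
            w -= lr * grad_w
    return w

# toy linearly separable data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = free_adv_train(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
```

The perturbation `delta` is stale by one step when the weights change, which is exactly the approximation that makes the method free: the replays let it catch up without extra gradient computations.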

Are adversarial examples inevitable?

on September 5, 2018

A pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at building robust defenses, we are led to ask a fundamental question: Are adversarial examples inevitable?

Continue reading

Stacked U-Nets: A simple network for image processing

on June 1, 2018

Stacked U-Nets are a simple, easy-to-train neural architecture for image segmentation and other image-to-image regression tasks. SUNets attain state-of-the-art performance and fast inference with very few parameters.

Continue reading

Attacking Neural Nets with Poison Frogs

on April 11, 2018

Data poisoning is an adversarial attack in which examples are added to the training set of a classifier to manipulate the behavior of the model at test time. We propose a new poisoning attack that is effective on neural nets, and can be executed by an outsider with no control over the training process.

Continue reading
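The attack rests on a "feature collision": craft a poison that looks like a harmless base image (small pixel distance to the base) but lands near the target in the classifier's feature space, by minimizing a weighted sum of the two distances. The sketch below uses a frozen random linear map as a stand-in feature extractor; in the actual attack the features come from the network's penultimate layer, and the optimization runs over images rather than abstract vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 32, 8
W = rng.normal(size=(k, d))   # frozen stand-in "feature extractor" (illustrative)
f = lambda x: W @ x

b = rng.normal(size=d)        # base instance: what the poison should resemble
t = rng.normal(size=d)        # target instance: whose features we collide with
beta, lr = 0.1, 0.005         # beta trades off visual similarity vs. feature match
p = b.copy()
for _ in range(1000):
    # gradient of ||f(p) - f(t)||^2 + beta * ||p - b||^2
    g = 2 * W.T @ (f(p) - f(t)) + 2 * beta * (p - b)
    p -= lr * g

feat_gap = np.linalg.norm(f(p) - f(t))    # small: collides with target features
pixel_gap = np.linalg.norm(p - b)         # regularized: poison stays near base
```

Because the poison carries the base's (correct) label, a victim who retrains on it sees nothing suspicious, yet the decision boundary near the target shifts.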

Visualizing Neural Net Loss Landscapes

on January 5, 2018

It is well known that certain neural network architectures produce loss functions that are easier to train and generalize better, but the reasons for this are not well understood. To find out why, we explore the structure of neural loss functions using a range of visualization methods.

Continue reading
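The basic recipe is simple enough to sketch: pick two random directions in weight space, rescale them to match the scale of the trained weights (the paper uses per-filter normalization; plain whole-vector normalization stands in here), and evaluate the loss on a 2D grid around the minimizer. The toy quadratic loss below is an assumption standing in for a network's training loss.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=50)                       # "trained" weights
loss = lambda w: 0.5 * np.sum((w - theta) ** 2)   # toy loss, minimized at theta

def normalized_direction(rng, ref):
    """Random direction rescaled to the norm of the reference weights,
    a simplified stand-in for the paper's filter-wise normalization."""
    d = rng.normal(size=ref.shape)
    return d * (np.linalg.norm(ref) / np.linalg.norm(d))

d1 = normalized_direction(rng, theta)
d2 = normalized_direction(rng, theta)

alphas = np.linspace(-1, 1, 21)                   # grid coordinates along d1, d2
surface = np.array([[loss(theta + a * d1 + b * d2) for b in alphas]
                    for a in alphas])             # 21 x 21 loss surface
```

Plotting `surface` as a contour or 3D surface gives the loss-landscape pictures; the normalization step is what makes slices of different networks comparable.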

Stabilizing GANs with Prediction

on December 11, 2017

Adversarial networks are notoriously hard to train, and simple training methods often collapse. We present a simple modification of the standard training method that increases stability. The method is provably stable for a class of saddle-point problems and improves the performance of numerous GANs.

Continue reading
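The flavor of the prediction step can be seen on the bilinear saddle f(x, y) = x*y, a standard toy stand-in for the generator/discriminator game. Plain simultaneous gradient descent/ascent spirals away from the saddle at (0, 0); updating y against the *predicted* iterate 2*x_new - x_old (a linear extrapolation of x's last move) converges. This is a minimal illustration, not the GAN training loop itself.

```python
import numpy as np

def simultaneous(x, y, lr, steps):
    """Plain simultaneous gradient descent/ascent on f(x, y) = x * y."""
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x   # grad_x f = y, grad_y f = x
    return x, y

def with_prediction(x, y, lr, steps):
    """Same dynamics, but y responds to a predicted (extrapolated) x."""
    for _ in range(steps):
        x_new = x - lr * y              # descent step for x
        x_pred = 2 * x_new - x          # predict x's next position
        y = y + lr * x_pred             # ascent step for y against the prediction
        x = x_new
    return x, y

x0, y0, lr, steps = 1.0, 1.0, 0.1, 500
plain = np.hypot(*simultaneous(x0, y0, lr, steps))      # distance from saddle
pred = np.hypot(*with_prediction(x0, y0, lr, steps))    # distance from saddle
```

For this bilinear game one can check the iteration matrices directly: the plain scheme multiplies the distance to the saddle by sqrt(1 + lr^2) > 1 every step, while the predicted scheme's eigenvalues have modulus sqrt(1 - lr^2) < 1, so it contracts.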

For students

I teach courses in discrete mathematics and optimization.

View course webpages

Sponsors

My research is made possible by the generous support of the following organizations.