Neural networks can generalize to test data that were never seen during training. The origins of this generalization are mysterious and have eluded a rigorous explanation. We try to gain an intuitive grasp on generalization through carefully crafted experiments.
My research lies at the intersection of optimization and distributed computing, and targets applications in machine learning and image processing. I design optimization methods for a wide range of platforms, from powerful cluster/cloud computing environments for machine learning and computer vision to resource-limited integrated circuits and FPGAs for real-time signal processing. My research takes an integrative approach that jointly considers theory, algorithms, and hardware to build practical, high-performance systems. Before joining the faculty at Maryland, I completed my PhD in Mathematics at UCLA, and was a research scientist at Rice University and Stanford University. I have been the recipient of several awards, including SIAM's DiPrima Prize, a DARPA Young Faculty Award, and a Sloan Fellowship.
Here are some of my most recent projects. I believe in reproducible research, and I try to develop open-source tools to accompany my research when possible. For a full list of software and projects, see my complete research page.
Stacked U-Nets are a simple, easy-to-train neural architecture for image segmentation and other image-to-image regression tasks. SUNets attain state-of-the-art performance and fast inference with very few parameters.
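To make the stacking idea concrete, here is a minimal sketch in PyTorch: a tiny U-Net block with a single down/up stage and a skip connection, repeated several times, with a 1x1 convolution producing per-pixel class scores. The block structure, widths, and depths here are illustrative placeholders, not the exact SUNet design.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A minimal U-Net block: one downsampling and one upsampling stage
    with a skip connection. A stand-in for the paper's U-Net module."""
    def __init__(self, channels):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        skip = x
        x = self.up(self.down(x))
        return self.fuse(torch.cat([x, skip], dim=1))

class StackedUNets(nn.Module):
    """Stack several U-Net blocks end to end; a 1x1 conv maps features
    to per-pixel class scores."""
    def __init__(self, in_ch=3, width=32, num_blocks=4, num_classes=21):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, width, 3, padding=1)
        self.blocks = nn.Sequential(*[TinyUNet(width) for _ in range(num_blocks)])
        self.head = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# usage: per-pixel class scores at the same spatial size as the input
scores = StackedUNets()(torch.rand(1, 3, 64, 64))  # -> (1, 21, 64, 64)
```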
Data poisoning is an adversarial attack in which maliciously crafted examples are inserted into a classifier's training set to manipulate the model's behavior at test time. We propose a new poisoning attack that is effective on neural nets and can be executed by an outsider with no control over the training process.
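As a sketch of how such an attack can be mounted, the snippet below crafts a "feature collision" poison: it perturbs a correctly labeled base image so that its internal feature representation matches that of a target test instance, while staying close to the base in pixel space. The feature_net here is a hypothetical stand-in for the victim's feature extractor, and the loss weight and step count are illustrative assumptions, not the exact recipe from the paper.

```python
import torch
import torch.nn as nn

# Stand-in feature extractor: in a real attack this would be the victim's
# pretrained network up to its penultimate layer (an assumption here).
feature_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in feature_net.parameters():
    p.requires_grad_(False)

def craft_poison(base, target, beta=0.1, lr=0.01, steps=200):
    """Perturb `base` so its features collide with `target`'s while staying
    close to `base` in pixel space, so the poison still looks innocuous
    and keeps its original (correct) label to a human annotator."""
    poison = base.clone().requires_grad_(True)
    target_feats = feature_net(target)
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((feature_net(poison) - target_feats) ** 2).sum() \
            + beta * ((poison - base) ** 2).sum()
        loss.backward()
        opt.step()
    return poison.detach()

base = torch.rand(1, 3, 32, 32)    # innocuous image, correctly labeled
target = torch.rand(1, 3, 32, 32)  # test instance the attacker wants misclassified
poison = craft_poison(base, target)  # injected into training data with base's label
```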
It is well known that certain neural network architectures produce loss functions that are easier to train and generalize better, but the reasons for this are not well understood. To make these differences visible, we explore the structure of neural loss functions using a range of visualization methods.
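One of the simplest such visualizations is a 1-D slice of the loss surface: perturb the trained weights along a random direction and plot the loss as a function of the step size. The sketch below (hypothetical helper names, PyTorch) also rescales the random direction filter by filter so its scale matches the model's, which keeps slices comparable across architectures.

```python
import torch

def random_direction(model):
    """Draw a random direction in parameter space, rescaled so each filter
    of the direction has the same norm as the corresponding filter of the
    model (filter-wise normalization)."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:  # conv/linear weights: rescale per output filter
            for dd, pp in zip(d, p):
                dd.mul_(pp.norm() / (dd.norm() + 1e-10))
        else:            # biases, norm params: hold fixed in this sketch
            d.zero_()
        direction.append(d)
    return direction

def loss_along_direction(model, loss_fn, data, alphas):
    """Evaluate the loss at model + alpha * direction for each alpha."""
    base = [p.detach().clone() for p in model.parameters()]
    direction = random_direction(model)
    x, y = data
    losses = []
    with torch.no_grad():
        for alpha in alphas:
            for p, b, d in zip(model.parameters(), base, direction):
                p.copy_(b + alpha * d)
            losses.append(loss_fn(model(x), y).item())
        for p, b in zip(model.parameters(), base):  # restore the weights
            p.copy_(b)
    return losses

# usage sketch: alphas = torch.linspace(-1, 1, 21); plot alphas vs. losses
```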
Adversarial networks are notoriously hard to train, and simple training methods often collapse. We present a simple modification to the standard training method that increases stability. The method is provably stable for a class of saddle-point problems and improves the performance of numerous GANs.
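The modification is a prediction (lookahead) step: one player is updated against an extrapolated copy of the other's parameters rather than the current iterate. Below is a hypothetical sketch of such a loop in PyTorch; g_loss_fn and d_loss_fn are placeholder closures that compute the generator and discriminator losses for a batch.

```python
import copy

import torch

def train_with_prediction(G, D, g_opt, d_opt, g_loss_fn, d_loss_fn, batches):
    """Alternating GAN updates with a prediction step: the discriminator is
    trained against an extrapolated copy of the generator,
    G_bar = G_new + (G_new - G_old), instead of the current iterate."""
    g_prev = [p.detach().clone() for p in G.parameters()]
    for batch in batches:
        # 1) generator step against the current discriminator
        g_opt.zero_grad()
        g_loss_fn(G, D, batch).backward()
        g_opt.step()

        # 2) form the predicted generator: p_bar = 2 * p_new - p_old
        G_bar = copy.deepcopy(G)
        with torch.no_grad():
            for pb, p, pp in zip(G_bar.parameters(), G.parameters(), g_prev):
                pb.copy_(2 * p - pp)
        g_prev = [p.detach().clone() for p in G.parameters()]

        # 3) discriminator step against the predicted generator
        d_opt.zero_grad()
        d_loss_fn(G_bar, D, batch).backward()
        d_opt.step()
```

The extrapolation anticipates where the generator is heading, which is what yields the provable stability on saddle-point problems mentioned above.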