PhD Defense: Human-Centric Deep Generative Models: The Blessing and The Curse

Talk
Ning Yu
Time: 07.20.2021 10:00 to 12:00
Location: Remote

Over the past years, deep neural networks have achieved significant progress in a wide range of real-world applications. In particular, my research puts a focused lens on deep generative models, a neural network solution that has proven effective in visual (re)creation. But is generative modeling a niche topic that should be researched on its own? My answer is a critical no. In this talk, I will present the two sides of deep generative models: their blessing and their curse to human beings. Regarding what deep generative models can do for us, I will demonstrate improvements in the performance and steerability of visual (re)creation. Regarding what we can do for deep generative models, my answer is to mitigate the security concerns of DeepFakes and to improve the minority inclusion of deep generative models.

First, I will talk about applying attention modules and a dual contrastive loss to generative adversarial networks (GANs), which pushes photorealistic image generation to a new state of the art. Next, I will introduce Texture Mixer, a simple yet effective approach to steerable texture synthesis and blending. Then, I will briefly discuss one of my series of GAN fingerprinting solutions, which proactively enables the detection of GAN-generated image misuse. Lastly, I will investigate the biased behavior of generative models and present my solution for enhancing the minority inclusion of GAN models over underrepresented image attributes. I will conclude my talk with ongoing projects and possible future research directions toward human-centric visual generation.

Examining Committee:

Chair: Dr. Larry Davis
Dean's representative: Dr. Joseph JaJa
Members: Dr. David Jacobs, Dr. Matthias Zwicker, Dr. Abhinav Shrivastava