Learning to Synthesize Images

Talk
Jun-Yan Zhu
MIT
Time: 03.07.2019 11:00 to 12:00

People are avid consumers of visual content. Every day, we watch videos, play games, and share photos on social media. However, there is an asymmetry: while everybody is able to consume visual content, only a chosen few (e.g., painters, sculptors, film directors) are talented enough to express themselves visually. For example, in modern computer graphics workflows, professional artists have to explicitly specify everything “just right”, including geometry, materials, and lighting, before a human will perceive the resulting image as realistic. To automate this tedious process, I present several general-purpose machine learning algorithms for image synthesis. Our methods discover the structure of the visual world from the data itself and learn to synthesize realistic, high-dimensional outputs directly. I then demonstrate applications in fields such as vision, graphics, and robotics, as well as uses by educators, developers, and visual artists. Finally, I discuss our ongoing efforts on learning to synthesize 3D objects and high-resolution videos, with the ultimate goal of building machines that can recreate the visual world and help everyone tell visual stories.