Neural Representation and Rendering of 3D Real-world Scenes

Talk
Lingjie Liu
Time: 04.04.2022, 11:00 to 12:00

High-quality reconstruction and photo-realistic rendering of real-world scenes are two important tasks with a wide range of applications in AR/VR, movie production, games, and robotics. These tasks are challenging because real-world scenes contain complex phenomena, such as occlusions, motions, and interactions. Approaching them with classical computer graphics techniques is highly difficult and time-consuming, requiring complicated capture procedures, manual intervention, and a sophisticated global illumination rendering process. In this talk, I will introduce our recent work that integrates deep learning techniques into the classical graphics pipeline to model humans and static scenes automatically. Specifically, I will talk about creating photo-realistic animatable human characters from only RGB videos, high-quality reconstruction and fast novel view synthesis of general static scenes from RGB image inputs, and scene generation with a 3D generative model. Finally, I will discuss challenges and opportunities for future work in this area.