Inverse Graphics for Next-Gen Video Communication

Talk
Soumyadip Sengupta
Time: 04.14.2022, 14:00 to 15:00

Video communication connects our world. The ongoing global pandemic has highlighted the necessity of video as a medium for virtual education, tele-health, business, political discourse, and creative content. However, the visual quality of these videos falls significantly short of professionally produced videos captured with expensive equipment and edited with human expertise. In this talk, I will discuss my ongoing research on creating a next-gen video communication and content-creation framework by democratizing high-quality video production and editing. To improve various components of a video computationally (e.g., changing the lighting or replacing the background), we first need to infer the intrinsic components of the video related to the underlying 3D world, and then edit them. This problem, often known as Inverse Graphics, is a holy-grail problem in Computer Vision: it is computationally challenging and severely under-constrained. To solve Inverse Graphics for democratizing video production, my research adopts a user-centric, personalized-AI approach. I will first discuss how simple user interaction strategies, such as capturing an extra image, can help us perform high-quality background replacement in real time. Then, I will discuss how we can personalize AI models by training them on a specific user's webcam data to improve lighting.
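To give a flavor of why an extra captured image helps, the sketch below composites a live frame onto a new background using a crude alpha matte derived from the per-pixel difference against a clean background plate. This is only a toy illustration of the compositing equation C = αF + (1 − α)B; the speaker's actual real-time matting work uses learned models, and the function name and threshold values here are hypothetical.

```python
import numpy as np

def naive_background_replacement(frame, clean_background, new_background,
                                 threshold=0.1, softness=0.05):
    """Composite `frame` onto `new_background` using a crude alpha matte
    estimated from the color difference against `clean_background`.

    All inputs are float arrays in [0, 1] with shape (H, W, 3).
    This is a simplified illustration, not the talk's learned method.
    """
    # Per-pixel color distance between the live frame and the captured
    # clean background plate (the "extra image" the user provides).
    diff = np.linalg.norm(frame - clean_background, axis=-1)

    # Soft threshold: pixels that differ strongly from the plate are
    # treated as foreground (alpha -> 1), similar pixels as background.
    alpha = np.clip((diff - threshold) / softness, 0.0, 1.0)[..., None]

    # Standard compositing equation: C = alpha * F + (1 - alpha) * B.
    return alpha * frame + (1.0 - alpha) * new_background
```

In practice, such a difference-based matte breaks down under camera motion, shadows, and foreground colors that resemble the background, which is exactly where learned, user-assisted matting becomes necessary.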