Towards Immersive Visual Content with Machine Learning

Talk
Brandon Feng
Time: 04.27.2023 10:00 to 12:00
Location: 
Extended reality technology stands poised to revolutionize the way we perceive, learn, and interact with the world around us. Despite this promising future, converting visual data captured through physical cameras into digital content suitable for immersive experiences continues to pose challenges. Recent advancements in machine learning have provided new capabilities for processing and representing visual data, ultimately enhancing our ability to generate immersive visual content.

In this talk, I will present my research on utilizing neural fields to represent immersive visual data. First, I will discuss how the memorization capacity of neural fields significantly reduces storage and transmission costs for high-quality light fields in immersive viewing applications. Next, I will introduce an innovative approach to 3D scene geometry modeling that melds the representational power of neural fields with the efficiency of ray-based light field principles. I will then present our work uncovering the surprising potential of image-based neural fields to render convincing, photorealistic novel views, even without any camera pose or 3D structure inherent in the formulation. I will conclude by discussing the future of neural fields in visual computing and exploring how we can harness their potential to push the fundamental boundaries of imaging and visualization, ultimately extending the realm of visible reality for humans.
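To make the central idea concrete: a neural field is a small network that maps continuous coordinates (e.g., pixel position or ray parameters) directly to signal values (e.g., color), so the network weights themselves become the compressed representation of the image or light field. The following is a minimal illustrative sketch, not the speaker's actual method; the class name, layer sizes, and positional-encoding setup are all assumptions chosen for brevity, and the weights here are random rather than fit to data.

```python
import numpy as np

def positional_encoding(coords, num_freqs=4):
    """Map low-dimensional coordinates to sin/cos features at several
    frequencies, which helps coordinate MLPs capture fine detail."""
    feats = [coords]
    for i in range(num_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * coords))
        feats.append(np.cos((2.0 ** i) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

class TinyNeuralField:
    """A minimal coordinate-based neural field: (x, y) -> RGB.
    Weights are random here; in practice they would be optimized by
    gradient descent to memorize a target image or light field."""

    def __init__(self, num_freqs=4, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 2 + 2 * 2 * num_freqs  # raw coords + sin/cos per frequency
        self.num_freqs = num_freqs
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 3))
        self.b2 = np.zeros(3)

    def __call__(self, coords):
        # Two-layer MLP: ReLU hidden layer, sigmoid output in (0, 1).
        h = np.maximum(positional_encoding(coords, self.num_freqs) @ self.w1 + self.b1, 0.0)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2 + self.b2)))

# Query the field at arbitrary continuous coordinates in [0, 1]^2 --
# unlike a pixel grid, resolution is chosen at query time.
field = TinyNeuralField()
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)
rgb = field(coords)  # shape (64, 3), one RGB value per queried coordinate
```

Because the signal is stored in the network weights rather than a dense sample grid, transmitting the weights can be far cheaper than transmitting the raw light field, which is the storage/transmission angle the first part of the talk addresses.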

Examining Committee

Chair:

Dr. Amitabh Varshney

Dean's Representative:

Dr. Joseph JaJa

Members:

Dr. Furong Huang

Dr. Christopher Metzler

Dr. Jia-Bin Huang