PhD Proposal: Towards Inverse Rendering with Global Illumination

Talk
Saeed Hadadan
Time: 04.20.2022, 12:30 to 14:30
Location: IRB 4107

Neural representations have become increasingly popular in the graphics and vision communities. The representational power of neural networks in high-dimensional spaces has led them to be used to represent geometry, radiance, reflectance, and visibility fields, to name but a few. That said, only a few papers in photo-realistic rendering leverage neural networks in a way that accounts for full global illumination effects. In other words, inverse rendering that handles full global illumination effectively and efficiently is still out of reach. Toward making neural inverse rendering with global illumination possible, we introduce two pieces of the puzzle: Neural Radiosity and Differentiable Neural Radiosity, two methods in which neural networks learn the radiance function and the differential radiance function, respectively, while accounting for global illumination.

We introduce Neural Radiosity, an algorithm to solve the rendering equation by minimizing the norm of its residual, as in classical radiosity techniques. Traditional basis functions used in radiosity, such as piecewise polynomials or meshless basis functions, are typically limited to representing isotropic scattering from diffuse surfaces. Instead, we propose to leverage neural networks to represent the full four-dimensional radiance distribution, directly optimizing network parameters to minimize the norm of the residual. Our approach decouples solving the rendering equation from rendering (perspective) images, as in traditional radiosity techniques, and allows us to efficiently synthesize arbitrary views of a scene.

We introduce Differentiable Neural Radiosity, a novel method of representing the solution of the differential rendering equation using a neural network. Inspired by neural radiosity techniques, we minimize the norm of the residual of the differential rendering equation to directly optimize our network.
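The residual-minimization idea behind both methods can be illustrated in a toy setting. The sketch below is hypothetical and not the authors' implementation: it discretizes a one-dimensional "scene" into elements, writes the rendering equation in matrix form L = e + A L (with a made-up transport operator A), and minimizes the norm of the residual by gradient descent. For simplicity the radiance is a plain per-element vector, i.e. the classical radiosity basis that the talk's method replaces with a neural network.

```python
import numpy as np

# Toy "rendering equation" in matrix form: L = e + A @ L,
# where e is emitted radiance and A is a fabricated transport
# operator scaled so energy is lost at each bounce.
rng = np.random.default_rng(0)
n = 16
e = rng.random(n)                              # emission per element
A = rng.random((n, n))
A *= 0.5 / np.abs(np.linalg.eigvals(A)).max()  # spectral radius 0.5

# Minimize 0.5 * ||r||^2 with residual r = L - e - A @ L.
# (Neural Radiosity does the same, but with L parameterized by a
# neural network over position and direction.)
L = np.zeros(n)
for _ in range(2000):
    r = L - e - A @ L                  # residual of the equation
    grad = (np.eye(n) - A).T @ r       # gradient of 0.5 * ||r||^2
    L -= 0.5 * grad                    # gradient-descent step

# At the minimum the residual vanishes, so L solves (I - A) L = e:
# the full multi-bounce (global illumination) solution.
print(np.linalg.norm(L - e - A @ L))
```

Driving the residual to zero recovers the fixed point of the transport operator, which is why minimizing the residual norm, rather than unrolling light paths, still captures all bounces of global illumination.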
The network is capable of outputting continuous, view-independent gradients of the radiance field with respect to scene parameters, taking into account differential global illumination effects while keeping memory and time complexity constant in path length. To solve inverse rendering problems, we use a pre-trained instance of our network that represents the differential radiance field with respect to a limited number of scene parameters.

Examining Committee:

Chair: Dr. Matthias Zwicker
Department Representative: Dr. Soheil Feizi
Dr. Ramani Duraiswami