Learning 3D Modeling and Simulation From and For the Real World

Talk
Wei-Chiu Ma
Time: 03.13.2023, 11:00 to 12:00

Humans have an extraordinary capability to comprehend and reason about our 3D visual world. With just a few casual glances, we can grasp the 3D structure and appearance of our surroundings and imagine all sorts of “what-if” scenarios in our minds. Existing 3D systems, in contrast, cannot. They lack structural understanding of the world and often break down when deployed in unconstrained, partially observed, and noisy environments. In this talk, I will present my efforts to develop robust computational models that can perceive, reconstruct, and simulate dynamic 3D surroundings from sparse and noisy real-world observations. I will first show that by infusing structural priors and domain knowledge into existing algorithms, we can make them more robust and significantly expand their applicable domains, opening up new avenues for 3D modeling. Then, I will present how to construct a composable, editable, and actionable digital twin from sparse, real-world data that allows robotic systems (e.g., self-driving vehicles) to simulate counterfactual scenarios for better decision-making. Finally, I will discuss how to extrapolate beyond these two efforts and build intelligent 3D systems that are accessible to everyone and applicable to other real-world settings.