PhD Defense: Modeling Human Factors and Traffic Dynamics for Generalization in Autonomous Driving
Self-driving cars have long been regarded as a futuristic technology. Thanks to advances in deep learning and large-scale computing power, this technology is coming closer to reality by the day. While state-of-the-art (SOTA) autonomous driving systems can operate smoothly in clear, sunny, low-density traffic, reliable autonomy under edge-case conditions, such as adverse weather or rare traffic events, remains a challenge because the space of such conditions is effectively boundless. While scaling data collection and model expressivity may suffice for the general case, scaling alone does not address edge cases, especially risky or difficult driving scenarios.
These factors reveal a gap in driving research: the distinction between (1) model-based dynamics and (2) human factors in deep learning for autonomous driving. In this dissertation, I propose that explicitly revisiting traffic flow, human factors, and model sensitivity in autonomous driving tasks offers an alternative to the rare-event data problem.
By explicitly modeling rigid concepts of traffic flow, human factors, and model sensitivity alongside deep learning approaches, we can add rigidity and robustness to models that would otherwise remain uninterpretable black boxes. The goal of this thesis is to improve performance in both the general case and in rare or non-optimal conditions, such as rare traffic events, adverse weather, and data noise. I present several works that address edge-case driving scenarios through different modeling approaches. First, we examine rigid rule-based factors that need not rely on deep model expressivity. Second, we model the abstract notion of driving style through a series of virtual reality user studies and latent variable modeling. Finally, we model sensitivity explicitly and efficiently to expose transformations to which driving policies may be vulnerable.