How does gradient descent work?
Remote
Join Zoom Meeting: https://umd.zoom.us/j/6615193287?pwd=VC9jZ0EyVmtPK0xuVU9pUEpGVG5EZz09
Meeting ID: 661 519 3287
Passcode: yeyX37
Optimization is the engine of deep learning, yet the theory of optimization has had little impact on the practice of deep learning. Why? In this talk, we will first show that traditional theories of optimization cannot explain the convergence of the simplest optimization algorithm — deterministic gradient descent — in deep learning. Whereas traditional theories assert that gradient descent converges because the curvature of the loss landscape is “a priori” small, we will explain how in reality, gradient descent converges because it *dynamically avoids* high-curvature regions of the loss landscape. Understanding this behavior requires Taylor expanding to third order, which is one order higher than normally used in optimization theory. While the “fine-grained” dynamics of gradient descent involve chaotic oscillations that are difficult to analyze, we will demonstrate that the “time-averaged” dynamics are, fortunately, much more tractable. We will present an analysis of these time-averaged dynamics that yields highly accurate quantitative predictions in a variety of deep learning settings. Since gradient descent is the simplest optimization algorithm, we hope this analysis can help point the way towards a mathematical theory of optimization in deep learning.
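The sketch below is a minimal, illustrative companion to the abstract, not the speaker's analysis: it runs deterministic gradient descent on a tiny tanh-network regression problem and tracks the "sharpness" (the largest eigenvalue of the loss Hessian) along the trajectory, printing it next to the classical quadratic stability threshold 2/η for comparison. The architecture, data, step size, and helper names are all arbitrary choices made for the demo.

```python
# Minimal sketch (illustrative only): plain gradient descent on a toy loss,
# tracking the largest Hessian eigenvalue ("sharpness") at each logged step.
# The value 2/eta printed alongside is the classical stability threshold for
# gradient descent on a quadratic; it is shown only for comparison.
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression problem: 1 input, 1 output, one hidden layer of width 4 with tanh.
X = rng.normal(size=(16, 1))
y = np.sin(2.0 * X)

def unpack(theta):
    # theta holds W1 (1x4), b1 (4,), W2 (4x1), b2 (1,) -- 13 parameters total.
    W1 = theta[0:4].reshape(1, 4)
    b1 = theta[4:8]
    W2 = theta[8:12].reshape(4, 1)
    b2 = theta[12:13]
    return W1, b1, W2, b2

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    return 0.5 * np.mean((pred - y) ** 2)

def grad(theta, eps=1e-5):
    # Central-difference gradient; fine for a 13-parameter toy problem.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (loss(theta + e) - loss(theta - e)) / (2 * eps)
    return g

def sharpness(theta, eps=1e-4):
    # Largest Hessian eigenvalue, with the Hessian built from finite
    # differences of the gradient and symmetrized.
    n = theta.size
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        H[i] = (grad(theta + e) - grad(theta - e)) / (2 * eps)
    H = 0.5 * (H + H.T)
    return np.linalg.eigvalsh(H)[-1]

eta = 0.2                                   # fixed step size for deterministic GD
theta = rng.normal(scale=0.5, size=13)      # small random initialization

for step in range(2001):
    theta = theta - eta * grad(theta)
    if step % 200 == 0:
        print(f"step {step:5d}  loss {loss(theta):.5f}  "
              f"sharpness {sharpness(theta):.3f}  2/eta {2/eta:.3f}")
```

Finite differences keep the sketch dependency-free; with an autodiff framework one would instead estimate the top Hessian eigenvalue with Hessian-vector products and power iteration, which scales to realistically sized networks.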