A Variational Perspective on Optimization, Sampling, and Game Dynamics for Machine Learning

Talk
Andre Wibisono
Time: 04.09.2020, 14:00 to 15:00

Machine learning has been remarkably successful and is now prevalent in everyday life, shaping many aspects of modern society. Nevertheless, many fundamental questions remain open, and a proper theoretical understanding of machine learning is needed to guide its future development. In this talk I will discuss fundamental properties of optimization, sampling, and game dynamics for machine learning. In optimization, I will present a variational perspective on accelerated methods via the principle of least action in continuous time, and derive new families of accelerated methods that achieve faster convergence under refined smoothness conditions. In sampling, I will present a study of sampling as optimization in the space of measures, and show fast convergence of Langevin algorithms under isoperimetry conditions which extend classical log-concavity results. In game dynamics, I will present an analysis of minimax games as skew-optimization in the space of joint configurations, and show fast convergence of the classical fictitious play algorithm and of its optimistic variant under smoothness.
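To make the sampling part concrete, here is a minimal sketch of the unadjusted Langevin algorithm, the basic discretization that the "Langevin algorithms" in the abstract refer to. The target distribution (a standard Gaussian), the step size, and the iteration count below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def ula(grad_log_pi, x0, step=0.1, n_iters=5000, seed=0):
    """Unadjusted Langevin algorithm:
    x_{k+1} = x_k + step * grad log pi(x_k) + sqrt(2 * step) * Gaussian noise.
    Returns the array of all iterates."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_pi(x) + np.sqrt(2 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Illustrative target: standard Gaussian, pi(x) ∝ exp(-||x||^2 / 2),
# so grad log pi(x) = -x. Start far from the mode to see convergence.
samples = ula(lambda x: -x, x0=[5.0], step=0.1, n_iters=5000)
# After a burn-in, the empirical mean should be near 0 and the variance near 1.
print(samples[1000:].mean(), samples[1000:].var())
```

Viewed as optimization in the space of measures, each iterate of this scheme is a noisy gradient step on the KL divergence to the target, which is the perspective the abstract summarizes.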