Anytime Probabilistic Reasoning
Graphical models, including constraint networks, Bayesian networks, Markov random fields, and influence diagrams, have become a central paradigm for knowledge representation and reasoning in artificial intelligence, and provide powerful tools for solving problems in numerous application areas. Reasoning over probabilistic graphical models typically involves answering inference queries, such as computing the most likely configuration (maximum a posteriori, or MAP) or evaluating the marginals or the normalizing constant of a distribution (the partition function). The marginal MAP task generalizes both of these: it maximizes over a subset of variables while marginalizing over the rest, and is highly instrumental for sequential decision making under uncertainty. Exact computation of such queries is intractable in general, which has led to the development of many approximate schemes, whose major categories are variational methods, search algorithms, and Monte Carlo sampling. The key is to leverage ideas and techniques from these three inference paradigms and to integrate them into hybrid solutions that inherit their respective strengths. Over the past decade my research group at UCI has developed state-of-the-art algorithms based on such integration, winning a few solver competitions. In this talk I will review the main algorithmic principles for probabilistic reasoning. The emerging solvers allow flexible trade-offs of memory for time and time for accuracy, and aim for anytime behavior that generates not only an approximation that improves with time, but also confidence bounds that become tighter as more time is allowed.
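To make the three query types concrete, here is a minimal brute-force sketch on a hypothetical two-variable model (the factor table and variable names are illustrative assumptions, not from any particular solver); real models make such enumeration intractable, which is what motivates the approximate schemes above.

```python
import itertools

# Toy unnormalized distribution over two binary variables X and Y,
# given as a single factor table (values chosen purely for illustration).
factor = {
    (0, 0): 1.0,
    (0, 1): 2.0,
    (1, 0): 4.0,
    (1, 1): 1.0,
}

# Partition function: sum the factor over all joint configurations.
Z = sum(factor[xy] for xy in itertools.product([0, 1], repeat=2))

# MAP: the single most likely joint configuration of (X, Y).
map_config = max(factor, key=factor.get)

# Marginal MAP: maximize over X after summing out (marginalizing) Y.
mmap_x = max([0, 1], key=lambda x: sum(factor[(x, y)] for y in [0, 1]))

print(Z)           # 8.0
print(map_config)  # (1, 0)
print(mmap_x)      # 1
```

Note that the marginal MAP answer (X = 1, with summed mass 5.0) need not agree with the X-component of the joint MAP in general, which is one reason marginal MAP is strictly harder than either pure maximization or pure summation.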