Human-Centric Machine Learning: Enabling Machine Learning for High-Stakes Decision-Making

Talk
Hima Lakkaraju
Stanford University
Time: 03.09.2018 11:00 to 12:00
Location: AVW 4172

Domains such as law, healthcare, and public policy often involve highly consequential decisions that are predominantly made by human decision-makers. The growing availability of data pertaining to such decisions offers an unprecedented opportunity to develop machine learning models that can aid human decision-makers in making better decisions. However, the applicability of machine learning to these domains is limited by certain fundamental challenges: 1) the data is selectively labeled, i.e., we only observe the outcomes of the decisions made by human decision-makers and not the counterfactuals; 2) the data is prone to a variety of selection biases and confounding effects; 3) the successful adoption of the models we develop depends on how well decision-makers can understand and trust their functionality, yet most existing machine learning models are optimized primarily for predictive accuracy and are not readily interpretable.

In this talk, I will describe novel computational frameworks that address these challenges, thus paving the way for large-scale deployment of machine learning models on problems of significant societal impact. First, I will discuss how to build interpretable predictive models and explanations of complex black-box models that can be readily understood, and consequently trusted, by human decision-makers. I will then outline efficient and provably near-optimal approximation algorithms to solve these problems. Next, I will present a novel evaluation framework that allows us to reliably compare the quality of decisions made by human decision-makers and machine learning models amidst challenges such as missing counterfactuals and the presence of unmeasured confounders (unobservables). Lastly, I will provide a brief overview of my research on diagnosing and characterizing biases (systematic errors) in human decisions and in the predictions of machine learning models. I will conclude by sketching future directions that enable effective and efficient collaboration between humans and machine learning models to address problems of societal impact.
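
To make the selective-labels challenge concrete, the following is a minimal simulation sketch in Python. It is purely illustrative and not material from the talk: the bail-style setup, thresholds, and variable names are hypothetical assumptions. It shows that when outcomes are observed only for the cases a human decision-maker chose to act on, naive evaluation on the labeled subset is biased relative to the full population.

# Illustrative sketch of the selective-labels problem (hypothetical setup;
# all names and numbers are made up for illustration).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent risk for each case; higher means the adverse outcome is more likely.
risk = rng.uniform(0, 1, size=n)

# A human decision-maker "releases" low-risk cases, with a noisy threshold.
released = risk + rng.normal(0, 0.1, size=n) < 0.5

# The true outcome exists for everyone, but it is only OBSERVED for released
# cases; the counterfactual outcome for the remaining cases is missing.
outcome = (rng.uniform(0, 1, size=n) < risk).astype(int)
observed_outcome = np.where(released, outcome, np.nan)

# The labeled subset was selected to be low-risk, so its observed failure
# rate understates the failure rate in the full population.
print("failure rate, labeled subset:  ", np.nanmean(observed_outcome))
print("failure rate, full population: ", outcome.mean())

Any model trained or evaluated only on the labeled subset inherits this selection bias, which is why the evaluation framework described above must account for missing counterfactuals and unobserved confounders rather than comparing error rates naively.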