Equitable Machine Learning to Advance Healthcare
Machine learning (ML) has demonstrated the potential to fundamentally improve healthcare because of its ability to find latent patterns in large observational datasets and to scale insights rapidly. However, the use of ML in healthcare also raises numerous ethical concerns, especially as models can amplify existing health inequities. In this talk, I outline two approaches to characterizing inequality in ML and adapting models for patients without reliable access to healthcare. First, I decompose cost-based metrics of discrimination in supervised learning into bias, variance, and noise, and propose actions to estimate and reduce each term. Second, I describe a deep generative model for disease subtyping that corrects for misalignment in patients' disease onset times. I conclude with a pipeline for ethical machine learning in healthcare, spanning problem selection through post-deployment considerations, and with recommendations for future research.
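To make the first approach concrete, here is a hedged sketch (not the talk's actual method or data) of estimating a per-group bias-variance-noise decomposition of squared loss. It uses bootstrap retraining on synthetic data where the true function and noise level are known by construction; all names and the two-group setup are illustrative assumptions.

```python
# Illustrative sketch: per-group bias^2 + variance + noise decomposition of
# squared loss, estimated by retraining a linear model on bootstrap resamples.
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x, g):
    # Group 1 carries a nonlinear signal a linear model cannot capture,
    # so its expected loss should be dominated by bias.
    return 2.0 * x + g * np.sin(3.0 * x)

n, noise_sd = 2000, 0.5
x = rng.uniform(-2, 2, n)
g = rng.integers(0, 2, n)
y = true_fn(x, g) + rng.normal(0, noise_sd, n)

x_test = np.linspace(-2, 2, 200)   # fixed evaluation grid per group
B = 100                            # bootstrap replicates of the training set
preds = {0: [], 1: []}
for _ in range(B):
    idx = rng.integers(0, n, n)
    # Least-squares fit with intercept, feature x, and group indicator g.
    A = np.column_stack([np.ones(n), x[idx], g[idx]])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    for grp in (0, 1):
        preds[grp].append(coef[0] + coef[1] * x_test + coef[2] * grp)

decomp = {}
for grp in (0, 1):
    P = np.asarray(preds[grp])                      # shape (B, len(x_test))
    bias2 = np.mean((P.mean(axis=0) - true_fn(x_test, grp)) ** 2)
    variance = P.var(axis=0).mean()
    noise = noise_sd ** 2   # known here; must be estimated from data in practice
    decomp[grp] = {"bias2": bias2, "variance": variance, "noise": noise}
    print(f"group {grp}: bias^2={bias2:.3f} "
          f"variance={variance:.3f} noise={noise:.3f}")
```

Because the decomposition attributes each group's loss to a distinct cause, the suggested remedies differ: high bias points to a richer model class, high variance to collecting more data for that group, and high noise to gathering more predictive features.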
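The second approach addresses the fact that patients enter a dataset at different, unobserved times after disease onset, so raw trajectories from the same subtype look different. The following toy sketch (illustrative only, not the talk's generative model) jointly grid-searches each patient's onset shift and subtype assignment against synthetic trajectory templates; the templates, shift range, and noise level are all assumptions.

```python
# Illustrative sketch: correcting for unobserved disease-onset misalignment
# before assigning patients to disease subtypes.
import numpy as np

rng = np.random.default_rng(1)

def progression(t, subtype):
    # Two synthetic subtypes with different biomarker trajectories.
    return np.tanh(t) if subtype == 0 else 0.5 * t

T = np.arange(0, 5.0, 0.5)   # observation times relative to first clinic visit
patients = []
for i in range(40):
    s = i % 2
    delta = rng.uniform(0, 3)    # unobserved time elapsed since true onset
    obs = progression(T + delta, s) + rng.normal(0, 0.05, T.size)
    patients.append((s, delta, obs))

# Jointly estimate each patient's shift and subtype: pick the (subtype, shift)
# pair whose template best explains the observed trajectory.
shift_grid = np.arange(0, 3.01, 0.05)
correct = 0
for s_true, _delta, obs in patients:
    _, s_hat, d_hat = min(
        ((np.mean((obs - progression(T + d, s)) ** 2), s, d)
         for s in (0, 1) for d in shift_grid),
        key=lambda t: t[0],
    )
    correct += (s_hat == s_true)
print(f"subtype accuracy after alignment: {correct / len(patients):.2f}")
```

Without the shift search, a patient observed long after onset can look more like a different subtype's early trajectory than like their own subtype's template, which is the misalignment problem the model in the talk is designed to correct.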