PhD Defense: Exploring Diversity and Fairness in Machine Learning

Talk
Candice Schumann
Time: 05.06.2020, 10:30 to 12:30
Location: Virtual: https://umd.zoom.us/j/96352453140 (Password: 695603)

With algorithms, artificial intelligence, and machine learning becoming increasingly ubiquitous in our society, we need to start thinking about the implications and ethical concerns of new models. Two types of bias that impact machine learning models are social injustice bias (bias created by society) and measurement bias (bias created by unbalanced sampling). I believe that biases against groups of individuals found in machine learning models can be mitigated through the use of diversity and fairness constraints. This thesis introduces models that help humans make decisions in diverse and less biased ways.

This work starts with a call to action: bias is rife in hiring, and since many companies now use algorithms to filter applicants, this application deserves special attention. Inspired by the hiring setting, I introduce new multi-armed bandit frameworks that help assign human resources in the hiring process while enforcing diversity through a submodular utility function. Moving beyond hiring, I present a contextual multi-armed bandit algorithm that enforces group fairness by learning a societal bias term and correcting for it. Additionally, I examine fairness in the traditional machine learning setting of domain adaptation. Finally, I explore extensions to my core work, delving into suicidality, comprehension of fairness definitions, and student evaluations.
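To make the submodular-diversity idea concrete: a submodular set function has diminishing returns, so adding a candidate similar to those already selected contributes little extra utility, and greedy selection of a monotone submodular objective carries a (1 - 1/e) approximation guarantee. The sketch below is only a generic illustration of this mechanism under assumed inputs (hypothetical `quality` scores and a `similarity` matrix), not the specific utility function developed in the thesis.

```python
# Illustrative sketch of diversity via a submodular utility.
# `quality` and `similarity` are hypothetical stand-ins for whatever
# signals a hiring bandit would estimate; this is NOT the thesis's
# exact formulation.
import numpy as np

def submodular_utility(selected, quality, similarity, lam=0.5):
    """Quality sum plus a facility-location-style coverage term.

    The coverage term rewards representing dissimilar applicants and
    has diminishing returns, which makes the function submodular.
    """
    if not selected:
        return 0.0
    quality_term = quality[selected].sum()
    # How well the selected set "covers" every applicant in the pool.
    coverage = similarity[:, selected].max(axis=1).sum()
    return quality_term + lam * coverage

def greedy_select(k, quality, similarity, lam=0.5):
    """Greedy achieves a (1 - 1/e) approximation for monotone submodular f."""
    selected, rest = [], set(range(len(quality)))
    for _ in range(k):
        best = max(rest, key=lambda i: submodular_utility(
            selected + [i], quality, similarity, lam))
        selected.append(best)
        rest.remove(best)
    return selected

rng = np.random.default_rng(0)
n = 20
quality = rng.random(n)            # e.g., estimated per-applicant rewards
X = rng.random((n, 5))             # toy applicant feature vectors
similarity = X @ X.T / 5.0         # toy similarity matrix
print(greedy_select(4, quality, similarity))
```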
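Similarly, the "learn a societal bias term and correct for it" idea can be sketched as a linear contextual bandit that tracks, per group, how far observed rewards sit from the shared model's predictions and subtracts that offset when scoring arms. This is a minimal, hypothetical sketch of the general mechanism, assuming an additive observed-reward bias; the thesis's actual algorithm and estimators are not reproduced here.

```python
# Minimal sketch: LinUCB-style learner with a per-group additive bias
# estimate. All names and the residual-based bias estimator are
# illustrative assumptions, not the thesis's algorithm.
import numpy as np

class BiasCorrectedLinUCB:
    def __init__(self, dim, n_groups, alpha=1.0):
        self.A = np.eye(dim)                 # ridge-regression Gram matrix
        self.b = np.zeros(dim)
        self.bias_sum = np.zeros(n_groups)   # sum of residuals per group
        self.count = np.zeros(n_groups)
        self.alpha = alpha

    def _theta(self):
        return np.linalg.solve(self.A, self.b)

    def score(self, x, group):
        bonus = np.sqrt(x @ np.linalg.solve(self.A, x))
        bias = self.bias_sum[group] / max(self.count[group], 1.0)
        # Correct the optimistic estimate by the learned group offset,
        # treating observed reward as (true reward + societal bias).
        return self._theta() @ x + self.alpha * bonus - bias

    def update(self, x, group, reward):
        # Track this group's average residual as a crude bias estimate.
        self.bias_sum[group] += reward - self._theta() @ x
        self.count[group] += 1
        # Fit the shared reward model on the bias-corrected signal.
        corrected = reward - self.bias_sum[group] / self.count[group]
        self.A += np.outer(x, x)
        self.b += corrected * x

# Toy usage: group 1's observed rewards are systematically penalized.
rng = np.random.default_rng(1)
bandit = BiasCorrectedLinUCB(dim=3, n_groups=2, alpha=0.5)
for t in range(200):
    arms = [(rng.random(3), int(rng.integers(2))) for _ in range(5)]
    x, g = max(arms, key=lambda a: bandit.score(a[0], a[1]))
    observed = x.sum() / 3 - 0.3 * g      # societal bias against group 1
    bandit.update(x, g, observed)
```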
Examining Committee:

Chair: Dr. John P. Dickerson
Dean's Representative: Dr. Stuart N. Vogel
Members: Dr. Jeffrey S. Foster, Dr. Hal Daumé III, Dr. Alex Beutel