Acquiring and Aggregating Information in Societal Contexts
The data we give our algorithms often come from people, and the predictions or decisions our algorithms make often affect people. Yet the classical study of, for example, learning algorithms does not account for the behavior or priorities of these participants. So: how does this "societal context" impact the understanding and design of systems that acquire and aggregate information? This talk breaks the question down into three major research directions: "fairness" of algorithms' decisions, "privacy" for the data provided, and "strategic behavior" of the people providing information. We will briefly discuss high-level trends and research questions, then turn to specific projects. In particular, motivated by fairness, we will examine the performance of a shortsighted "greedy" algorithm in an online learning setting; and, motivated by privacy, we will see a method for making "differentially private" machine learning more practical.