PhD Proposal: How Technology Impacts and Compares to Humans in Socially Consequential Arenas

Talk
Samuel Dooley
Time: 01.10.2022, 12:00 to 14:00
Location: Remote

One of the main promises of technology development is that it will be adopted by people, organizations, societies, and governments and incorporated into their lives, workflows, and processes. Often this is socially beneficial: the technology automates mundane tasks, frees time for more important work, or otherwise improves the lives of those who use it. However, these benefits do not apply in every scenario and may not reach everyone in a system equally. Sometimes a technology produces benefits while also inflicting harm, and those harms may fall more heavily on some people than on others. This raises the question: how are benefits and harms weighed when deciding whether and how a socially consequential technology is developed? The most natural way to answer this question, and in fact how people first approach it, is to compare the new technology to what existed before. In this work, I therefore make comparative analyses between humans and machines in three scenarios and seek to understand how sentiment about a technology, the performance of that technology, and the impacts of that technology combine to shape how one answers this question. The three scenarios are (1) decision support tools, (2) facial analysis technology, and (3) Covid-19 technology.

In the first setting, human evaluators are tasked with finding the best individuals from a population (of people or things) and can draw on a variety of data sources to help them. An example is mental health screening, where a clinician with several information sources (in-person sessions, audio recordings, social media posts) wants to find the most at-risk individuals in a population. Here I develop novel algorithms for this problem and evaluate their efficacy and their improvement over human evaluators alone (the underlying selection problem is sketched below).

In the second setting, I compare the errors that humans and machines make in facial analysis technology, examining errors in facial verification, identification, and detection. For verification and identification, I compare the biases exhibited by humans with those of machines and conclude that similar biases exist in both. For detection, I examine the robustness of commercial systems to synthetic corruptions that simulate natural noise, finding biases along age, gender, skin type, and lighting conditions (this kind of robustness probe is also sketched below).

Finally, with Covid-19, I show that people's perceptions of privacy and security have both altered and been altered by the Covid-19 pandemic, drawing on data from field studies and survey collections. Across all three settings, my findings contribute to our understanding of the expansiveness of, and the limits to, technological interventions.
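As a concrete illustration of the first setting, the sketch below simulates the selection problem: rank a population by fusing several noisy information sources and pick the top k. The inverse-variance fusion baseline, the source noise levels, and all numbers are hypothetical illustrations, not the algorithms developed in the proposal.

```python
# Toy version of the selection problem: find the top-k most at-risk
# individuals by fusing several noisy information sources.
# Illustrative only; all names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 20                      # population size, screening budget
true_risk = rng.normal(size=n)       # latent risk we want to rank by

# Three sources with different noise levels (e.g., session notes,
# audio features, social media signals).
noise_sd = np.array([0.5, 1.0, 2.0])
sources = true_risk + rng.normal(size=(3, n)) * noise_sd[:, None]

def topk_recall(scores, k):
    """Fraction of the true top-k captured by a score's top-k."""
    truth = set(np.argsort(true_risk)[-k:])
    picked = set(np.argsort(scores)[-k:])
    return len(truth & picked) / k

# Simple inverse-variance weighted fusion of the three sources.
w = 1.0 / noise_sd**2
fused = (w[:, None] * sources).sum(axis=0) / w.sum()

print("best single source recall:", topk_recall(sources[0], k))
print("fused recall:             ", topk_recall(fused, k))
```

On this toy setup, fusing the sources typically recovers more of the true top-k than any single source; the comparative improvement over evaluators working from one source is the kind of effect the first setting measures.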
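For the second setting, the following sketch shows the general shape of a corruption-robustness probe: add synthetic noise at increasing severity and check whether a face detector still fires. OpenCV's Haar cascade stands in here for the commercial detectors studied in the proposal; the image path and severity levels are hypothetical.

```python
# Probe a face detector's robustness to synthetic Gaussian noise.
# Haar cascade is a stand-in for a commercial system; illustrative only.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("face.jpg")  # hypothetical test image

for sigma in (0, 10, 25, 50):  # noise severity (pixel std dev)
    noisy = np.clip(
        img.astype(np.float32) + np.random.normal(0, sigma, img.shape),
        0, 255).astype(np.uint8)
    gray = cv2.cvtColor(noisy, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"sigma={sigma}: {len(faces)} face(s) detected")
```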

Examining Committee:
Chair: Dr. John Dickerson
Department Representative: Dr. Philip Resnik
Members: Dr. Tom Goldstein, Dr. Elissa Redmiles