Robust, Fair, and Culturally-Aware Commonsense Reasoning for NLP

Talk
Rachel Rudinger
Time: 10.01.2025, 15:00 to 16:00

Successful communication in natural language requires common ground: a mutual understanding of conversation partners’ beliefs, intents, and conversational goals. For large language models (LLMs) to build useful mental models of their human users, they must possess certain human-like cognitive abilities, including basic knowledge of the world (i.e., “common sense”) and the ability to draw inferences about people or situations from partial observations (i.e., reasoning under uncertainty). However, validly measuring these abilities in LLMs is not a simple task. In the first part of this talk, I will discuss some of my lab’s recent work on developing methods to analyze the robustness of these reasoning capabilities in LLMs beyond traditional benchmark accuracy. Drawing on theories from social psychology and cultural anthropology, the second part of the talk will present work demonstrating socio-cultural biases in how LLMs reason about people, raising fairness and safety concerns about their use in certain contexts.