PhD Proposal: Methods for Improving the Communication Efficacy of Language Models: Faithfulness and Pragmatics

Talk
Lingjun Zhao
Time: 11.19.2025, 12:00–14:00

Effective human–AI collaboration requires language models to communicate in ways that reflect human conversational norms. While recent models generate fluent responses, they still fall short on faithfulness and informativeness, qualities corresponding to Grice's conversational maxims of Quality and Quantity. This proposal introduces methods to improve these aspects through enhanced uncertainty communication, explanation faithfulness, and pragmatic communication, ultimately enabling more effective communication with humans.
First, we study how to convey uncertainty in task-oriented instruction generation to help users assess model reliability. We develop a span-level hallucination detection and correction model, and show that communicating rich uncertainty information improves human performance in long-horizon tasks.
Second, we improve explanation faithfulness by introducing a consistency metric as a necessary condition for faithfulness. We find that over 62% of model-generated explanations lack this consistency. By training models to optimize this metric, we generate explanations that are more consistent and faithful.
Third, we enhance pragmatic communication by enabling models to anticipate listener interpretations. In simulated navigation tasks, we find that listener ensembles improve performance. Extending this, we propose culturally adaptive text generation tailored to diverse audiences.