Trust in the Absence of Verifiability

Hal Daumé
Talk Series: 
09.08.2023 11:00 to 12:00

When NLP systems, in particular large language models (LLMs), generate claims that people can easily verify to be untrue, trust is irrelevant. What matters is when they generate claims that are not (easily) verifiable. This raises two questions: (1) How much complementarity is there between what LLMs "know" and what people know? (2) Can LLMs themselves provide the missing complementary information? I'll discuss some good news and some bad news that provide partial answers to these questions.