Trust in the Absence of Verifiability
IRB 0318 or Zoom: https://umd.zoom.us/j/92721031800?pwd=dGhidU13dzl0cmI2eUM4SzJLNTZrZz09
When NLP systems, in particular large language models (LLMs), generate claims that people can easily verify to be untrue, trust is irrelevant. What matters is when they generate claims that cannot be (easily) verified. This raises two questions: (1) How much complementarity is there between what LLMs "know" and what people know? (2) Can LLMs themselves provide the missing complementary information? I'll discuss some good news and some bad news that provide partial answers to these questions.