Teaching Machines to Read with Less Supervision

Talk
Alan Ritter
Time: 04.10.2020 11:00 to 12:00

In recent years, advances in speech recognition and machine translation (MT) have led to wide adoption, for example by helping people issue voice commands to their phones and converse with others who do not speak the same language. These advances are possible due to the use of neural network methods on large, high-quality datasets. However, computers still struggle to understand the meaning of language. In this talk, I will present two efforts to scale up natural language understanding, drawing inspiration from recent successes in speech and MT. First, I will describe an effort to extract structured knowledge from text without relying on human labels. Our approach combines the benefits of structured learning and neural networks, accurately predicting latent relation mentions given only indirect supervision from a knowledge base. In extensive experiments, we demonstrate that the combination of structured inference, missing data modeling, and end-to-end learned representations leads to state-of-the-art results on a minimally supervised relation extraction task.

In the second part of the talk, I will discuss conversational agents that are learned from scratch in a purely data-driven way. To address the challenge of dull responses, which are common in neural dialogue, I will present several strategies that maximize the long-term success of a conversation.
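
The abstract does not spell out how the indirect supervision is obtained, but in the relation extraction literature it typically comes from distant supervision: sentences whose entity pair matches a knowledge-base fact are treated as noisy positive examples for that relation. The sketch below illustrates only this general labeling step, using hypothetical placeholder data (kb_facts, sentences); it is not the structured latent-variable model or missing-data treatment presented in the talk.

```python
# Minimal sketch of distant-supervision labeling: sentences whose entity
# pair matches a knowledge-base fact are treated as (noisy) positive
# examples for that relation. Unmatched pairs get no label rather than an
# explicit negative, since the knowledge base may simply be missing facts.
# All data structures here are hypothetical placeholders.

from collections import defaultdict

# Hypothetical knowledge base: (subject, relation, object) triples.
kb_facts = {
    ("Barack Obama", "born_in", "Honolulu"),
    ("Google", "headquartered_in", "Mountain View"),
}

# Hypothetical corpus: sentences paired with the entity mentions found in them.
sentences = [
    ("Barack Obama was born in Honolulu, Hawaii.", ("Barack Obama", "Honolulu")),
    ("Barack Obama visited Honolulu last week.", ("Barack Obama", "Honolulu")),
    ("Google opened a new office in Austin.", ("Google", "Austin")),
]

# Index KB relations by entity pair for fast lookup.
relations_by_pair = defaultdict(set)
for subj, rel, obj in kb_facts:
    relations_by_pair[(subj, obj)].add(rel)

# Heuristically label each sentence with the KB relations of its entity pair.
# Note the noise: the second Obama/Honolulu sentence is labeled "born_in" even
# though it does not express that relation -- exactly the kind of error that
# latent-variable and missing-data modeling aim to handle.
labeled = []
for text, pair in sentences:
    for rel in relations_by_pair.get(pair, set()):
        labeled.append((text, pair, rel))

for example in labeled:
    print(example)
```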
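
The strategies presented in the talk target the long-term success of a conversation and are not reproduced here. As a point of reference, one widely used strategy for discouraging dull, generic responses is to rerank candidate replies with a maximum-mutual-information style score, log p(response | context) - λ·log p(response), so that replies that are likely regardless of context are penalized. The sketch below shows only that reranking idea; score_given_context and generic_score are hypothetical stand-ins for trained model scores, not an actual system.

```python
# Hypothetical candidate responses produced by a neural dialogue model.
candidates = [
    "I don't know.",
    "I grew up in Seattle, so I love the rain.",
    "That sounds great!",
]

def score_given_context(response: str, context: str) -> float:
    """Stand-in for log p(response | context) from a seq2seq model.
    Toy heuristic that favors short, generic replies, mimicking the
    'dull response' problem."""
    return -0.1 * len(response.split())

def generic_score(response: str) -> float:
    """Stand-in for log p(response) from an unconditional language model.
    Toy heuristic: common stock phrases are treated as highly likely."""
    stock = {"i don't know.", "that sounds great!"}
    return 0.0 if response.lower() in stock else -5.0

def mmi_rerank(candidates, context, lam=0.5):
    """Rerank by log p(r|c) - lam * log p(r): responses that are likely
    regardless of context (the dull ones) are pushed down the list."""
    return sorted(
        candidates,
        key=lambda r: score_given_context(r, context) - lam * generic_score(r),
        reverse=True,
    )

context = "Do you like living in a rainy city?"
for response in mmi_rerank(candidates, context):
    print(response)
```

With these toy scores, the contentful reply about Seattle outranks the two generic responses, which is the intended effect of the diversity-promoting penalty.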