Self-Supervised Natural Language Processing

Talk
William Wang
University of California, Santa Barbara
Time: 03.25.2019 11:00 to 12:00
Location: AVW 4172

Learning to reason about and understand the world’s knowledge is a fundamental problem in Artificial Intelligence (AI). While it is widely hypothesized that learning models should be generalizable and flexible, in practice most progress is still made in classic supervised learning settings that require large amounts of annotated training data and heuristic objectives. With the vast amount of language data available in digital form, now is a good opportunity to move beyond traditional supervised learning methods. The core research question that I will address in this talk is the following: how can we design self-supervised deep learning methods that operate over rich language and knowledge representations? I will describe examples of my work advancing the state of the art in deep reinforcement learning methods for NLP, including: 1) Reinforced Co-Training, a new semi-supervised learning framework driven by a reinforced, performance-driven data-selection policy agent; 2) AREL, a self-adaptive inverse reinforcement learning agent for visual storytelling; and 3) DeepPath, an explainable path-based reasoning agent for inferring unknown facts. I will conclude by describing my other research interests and my future research plans in the interdisciplinary field of AI and data science.
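To make the idea of a performance-driven data-selection policy concrete, here is a minimal toy sketch in the spirit of Reinforced Co-Training (this is an illustrative assumption, not the talk's actual algorithm): an epsilon-greedy agent chooses which pool of unlabeled data to incorporate next and is rewarded by the resulting change in a simulated validation score.

```python
import random

# Hypothetical sketch: a bandit-style data-selection agent.
# Pools of unlabeled data differ in how much they help the model;
# the agent learns which pool to select by observing validation gains.
# The model-retraining step is simulated by `validation_gain`.

random.seed(0)

NUM_POOLS = 4
value = [0.0] * NUM_POOLS    # running estimate of each pool's reward
counts = [0] * NUM_POOLS
EPSILON = 0.2                # exploration rate

# Simulated "true" usefulness of each pool (unknown to the agent).
true_gain = [0.01, 0.05, 0.02, 0.03]

def validation_gain(pool):
    # Stand-in for retraining on pseudo-labeled data from `pool`
    # and measuring the change in validation accuracy.
    return true_gain[pool] + random.gauss(0, 0.005)

for step in range(200):
    if random.random() < EPSILON:
        pool = random.randrange(NUM_POOLS)                     # explore
    else:
        pool = max(range(NUM_POOLS), key=lambda p: value[p])   # exploit
    reward = validation_gain(pool)
    counts[pool] += 1
    value[pool] += (reward - value[pool]) / counts[pool]       # running mean

best = max(range(NUM_POOLS), key=lambda p: value[p])
print("best pool:", best)  # the agent should converge on the most useful pool
```

The real framework replaces the simulated reward with actual co-training performance and uses a learned policy rather than a tabular bandit, but the feedback loop, selection driven by downstream validation gains, is the same.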