PhD Defense: Interpretable Deep Learning for Time Series

Talk
Aya Ismail
Time: 07.07.2022, 11:00 to 13:00
Location: IRB 4105

Time series data emerge in applications across many critical domains, including neuroscience, medicine, finance, economics, and meteorology. However, practitioners in these fields are hesitant to use Deep Neural Networks (DNNs), which can be difficult to interpret. For example, in clinical research, one might ask, "Why did you predict that this person is more likely to develop Alzheimer's disease?" As a result, research efforts to improve the interpretability of deep neural networks have grown significantly in recent years. Nevertheless, these efforts focus mainly on vision and language tasks, and their application to time series data remains relatively unexplored. My work aims to identify and address the limitations of neural network interpretability for time series data.

In the first part of my work, I extensively compare the performance of various interpretability methods across diverse neural architectures commonly used for time series on a new benchmark of synthetic time series data. I propose and report multiple metrics to empirically evaluate how well interpretability methods detect feature importance over time. I find that, across network architectures, saliency methods fail to reliably and accurately identify feature importance over time. For RNNs, saliency vanishes over time, biasing the detection of salient features toward later time steps, so these models cannot reliably detect important features at arbitrary time intervals. Non-recurrent architectures, in turn, fail because they conflate the time and feature domains.

The second part of my work focuses on improving time series interpretability by enhancing saliency methods, training procedures, and neural architectures. First, I improve the quality of time series saliency maps by disentangling time and feature importance through two-step Temporal Saliency Rescaling (TSR). Then, I introduce a saliency guided training procedure for neural networks that reduces the noisy gradients used in predictions. Next, I propose an inherently interpretable framework, Interpretable Mixture of Experts (IME), that provides interpretability for structured data while preserving accuracy. Finally, I design a novel RNN cell structure (input-cell attention) that preserves a direct gradient path from the input to the output at all time steps. As a result, explanations produced by the input-cell attention RNN can detect important features regardless of when they occur in time.
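To illustrate the saliency guided training idea mentioned above, here is a minimal PyTorch sketch: it computes vanilla gradient saliency, masks the lowest-saliency input entries, and adds a divergence term that keeps predictions on the masked and original inputs close. The zero-valued mask, the masked fraction `k_ratio`, and the unweighted KL term are illustrative assumptions, not the exact procedure from the dissertation.

```python
import torch
import torch.nn.functional as F

def saliency_guided_step(model, x, y, optimizer, k_ratio=0.5):
    """One training step of a saliency-guided procedure (illustrative sketch).

    Idea: compute gradient saliency, mask the lowest-saliency input entries,
    and penalize divergence between predictions on the original and masked
    inputs so the model learns to rely on high-saliency features.
    """
    x = x.clone().requires_grad_(True)

    # Vanilla saliency: gradient of the task loss with respect to the input.
    grads = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]

    # Mask the k_ratio fraction of input entries with the smallest |gradient|.
    saliency = grads.abs().flatten(1)
    k = int(k_ratio * saliency.shape[1])
    low_idx = saliency.topk(k, dim=1, largest=False).indices
    x_masked = x.detach().clone().flatten(1)
    x_masked.scatter_(1, low_idx, 0.0)  # masking with zeros is an assumption
    x_masked = x_masked.view_as(x)

    # Task loss plus a KL term keeping masked and original outputs close.
    logits, logits_masked = model(x), model(x_masked)
    loss = F.cross_entropy(logits, y) + F.kl_div(
        F.log_softmax(logits_masked, dim=1),
        F.softmax(logits, dim=1),
        reduction="batchmean",
    )

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, a step like this would replace the standard training step inside the usual epoch loop for a classification model.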
Examining Committee:

Chair: Dr. Soheil Feizi
Dean's Representative: Dr. Joseph Jaja
Members: Dr. Hector Corrada Bravo, Dr. Thomas Aaron Goldstein, Dr. Sercan Arik (Google Cloud AI Research)