Feizi Funded by NSF and Meta to Advance Underexplored Areas of Machine Learning


A University of Maryland expert in machine learning has been awarded significant funding by two separate entities to tackle pressing problems in the field. 

Soheil Feizi, an assistant professor in the Department of Computer Science with an appointment in the University of Maryland Institute for Advanced Computer Studies, is the principal investigator on two $300K awards.

The first comes from the National Science Foundation’s Division of Computing and Communication Foundations, with the goal of understanding robustness via parsimonious structures.

Although deep neural networks have led to significant advances in computer vision, language processing, and robotics, they remain extremely sensitive to small perturbations in their training sets, leaving them vulnerable to attack.

Feizi is developing a mathematical framework to better understand why deep networks can be fooled into making wrong predictions, and how to design and train networks that can withstand coordinated attacks.

“The idea is to defend against data poisoning and learn if a network can tolerate a certain amount of poison in the training set without being sensitive to it,” explains Feizi, a core member of the University of Maryland Center for Machine Learning. 
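The poisoning threat model Feizi describes can be illustrated with a toy experiment. The sketch below is purely hypothetical and is not Feizi's framework: it flips the labels of a fraction of training points (a classic label-flip poisoning attack) and measures how much a simple nearest-centroid classifier degrades, showing that a model can tolerate some amount of poison before its predictions change.

```python
import numpy as np

# Hypothetical illustration of label-flip data poisoning on a toy
# nearest-centroid classifier; an assumed setup, not the project's method.

rng = np.random.default_rng(0)

# Training data: two well-separated Gaussian clusters.
X0 = rng.normal(loc=-2.0, scale=0.5, size=(100, 2))
X1 = rng.normal(loc=+2.0, scale=0.5, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def train_centroids(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def accuracy(centroids, X, y):
    return (predict(centroids, X) == y).mean()

clean_acc = accuracy(train_centroids(X, y), X, y)

def poison(y, frac, rng):
    """Flip the labels of a random fraction of the training points."""
    y_p = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y_p[idx] = 1 - y_p[idx]
    return y_p

# Retrain on poisoned labels, evaluate against the true labels.
for frac in (0.1, 0.3, 0.45):
    acc = accuracy(train_centroids(X, poison(y, frac, rng)), X, y)
    print(f"poison fraction {frac:.2f}: accuracy {acc:.2f}")
```

On this easy toy problem the centroids shift toward each other as the poison fraction grows, yet the classifier's accuracy on the true labels stays high until nearly half the labels are flipped; certifying and extending that kind of tolerance for deep networks is far harder, which is the point of the research.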

The second award, from Meta, will fund his research on multi-modal explainability with limited supervision—one of the most underexplored problems in artificial intelligence.

The goal of this project is to understand and interpret representations of multi-modal data such as text, audio, images, and video.

Feizi will also study the reliability and faithfulness of interpretation signals in terms of their robustness and fairness, developing methods that are robust against small adversarial or natural input perturbations.
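One way to probe the robustness of an interpretation signal is to check whether an explanation stays stable when the input is slightly perturbed. The sketch below is an illustrative stand-in, not Feizi's actual method: it computes a finite-difference gradient saliency map for a toy model and measures the cosine similarity between explanations of the original and perturbed inputs.

```python
import numpy as np

# Hypothetical stability check for an explanation method. The "model"
# (a sum of tanh features) and the metric are assumed for illustration.

rng = np.random.default_rng(1)

W = rng.normal(size=(4, 8))  # toy model parameters

def score(x):
    return np.tanh(W @ x).sum()

def saliency(x, eps=1e-5):
    """Finite-difference gradient of the score w.r.t. each input feature."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (score(x + e) - score(x - e)) / (2 * eps)
    return g

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = rng.normal(size=8)
base = saliency(x)

# A robust explanation should keep cosine similarity near 1.0 for small
# perturbation scales and degrade gracefully as the scale grows.
for sigma in (0.01, 0.1, 1.0):
    sims = [cosine(base, saliency(x + sigma * rng.normal(size=8)))
            for _ in range(20)]
    print(f"sigma={sigma:.2f}: mean cosine similarity {np.mean(sims):.3f}")
```

In practice the perturbations of interest are not only random noise but also worst-case adversarial ones, which makes establishing robustness guarantees for explanations substantially more demanding than this sketch suggests.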

“We would like to gain a holistic understanding of the explanation systems and their inner workings such as faithfulness and plausibility,” he says. “Finding a way to disentangle these two is an important open problem that will help us understand how to scale these methods more reliably.” 

—Story and photo by Maria Herd, UMIACS communications group


The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.