Soheil Feizi Funded by DARPA to Develop Adversarial Counterattack Program
A University of Maryland expert in machine learning has been funded by the Defense Advanced Research Projects Agency (DARPA) to develop a program that can identify the origin and sophistication level of adversarial attacks on artificial intelligence (AI) systems.
Soheil Feizi, assistant professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), is lead principal investigator of the $971K award.
He will collaborate with three researchers from the Johns Hopkins University Department of Electrical and Computer Engineering on the two-year project: professor René Vidal, assistant professor Najim Dehak and assistant research professor Jesus Villalba.
An adversarial attack is a technique for deceiving machine learning systems in which attackers make small changes to the input data to confuse the algorithm. These attacks are an emerging security threat as AI is increasingly applied to industrial settings, medicine, information analysis and more.
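To make the idea concrete, the sketch below shows one well-known attack of this kind, the fast gradient sign method, which nudges each input pixel in the direction that increases a classifier's loss. It is an illustrative example only, not part of the RED project; the model, images, labels and epsilon step size are assumptions for the sketch.

# Minimal sketch of the fast gradient sign method (FGSM), a classic adversarial attack.
# Assumes a trained PyTorch classifier `model` and a labeled image batch; illustrative only.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Return adversarially perturbed copies of `images`."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Shift each pixel a small step in the direction that increases the loss.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()

Even with a tiny epsilon, perturbations like these can flip a model's prediction while the image looks unchanged to a human viewer.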
The majority of existing research in this area focuses on improving defenses against these types of attacks, Feizi explains. Instead, his team will “attack the attacks” by developing generalizable and scalable techniques that reverse engineer an attacker’s toolchain.
These methods will be used to design Reverse Engineering of Deceptions (RED), a program that will identify not only the origin of an attack and its sophistication level, but also the most effective defense to use against future attacks, Feizi says.
“By extracting attack signatures, we will be able to either cluster similar toolchains or identify novel attacks that have not been seen previously,” say the researchers.
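As a rough illustration of that idea (not the RED program's actual method), attack signatures could be represented as feature vectors, grouped by a standard clustering algorithm, and any signature that falls outside every cluster flagged as a potentially novel attack. The function name and parameters below are assumptions for the sketch.

# Illustrative sketch: cluster extracted attack signatures and flag outliers as novel.
import numpy as np
from sklearn.cluster import DBSCAN

def group_signatures(signatures, eps=0.5, min_samples=3):
    """signatures: (n_attacks, n_features) array of extracted attack features."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(signatures)
    known = {c: np.where(labels == c)[0] for c in set(labels) if c != -1}
    novel = np.where(labels == -1)[0]  # DBSCAN marks outliers with the label -1
    return known, novel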
–Story by Maria Herd (UMIACS)