Research Funding Roundup

Research is integral to the mission of the Department of Computer Science. Faculty members and students conduct studies in traditional and interdisciplinary research areas. Below is a selection of recent funding awards and the work they will support.


Generating Computational Methods to Simulate Effect of Low-frequency Electromagnetic Fields on Human Organs

A team of UMD researchers received funding from the Army Research Laboratory (ARL) to develop innovative computational methods that can simulate the effect of low-frequency electromagnetic fields on human organs.

Ramani Duraiswami, a professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), and Nail Gumerov, a senior research scientist in UMIACS, are co-principal investigators of the $1.2 million award.

The UMD team developed new approaches that combine mathematical approximations at the macro-, micro- and nanoscale with scalable algorithms and high-performance computing, enabling complex simulations that yield more detailed data.

During the first part of the cooperative research and development agreement funded by the ARL, they developed a number of computational techniques for the efficient simulation of low-frequency electromagnetic fields, including those generated by power lines. The team’s methods were then scaled up and demonstrated using U.S. Army supercomputers.

Assisted by Jeremy Hu, a first-year computer science doctoral student at UMD, the researchers will develop new approaches, including advanced meshing and boundary integral equation techniques for solving the reduced Maxwell equations in domains of complex geometry and topology. Fast multipole methods and improved sparse matrix solvers are among the algorithmic accelerators the team proposes to use.
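
A sense of what those accelerators buy can be given with a toy, matrix-free boundary-element solve. The sketch below (in Python with NumPy and SciPy; the points, kernel and boundary data are placeholders, and this is an illustration rather than the team's code) shows the pattern such methods rely on: the dense interaction matrix is never stored, and an iterative solver such as GMRES only needs a routine that applies it to a vector, which is exactly the operation a fast multipole method accelerates.

    # Toy matrix-free solve in the style of an FMM-accelerated boundary integral method.
    # Everything here is an illustrative placeholder, not the funded project's code.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    n = 500                               # number of boundary elements (illustrative)
    rng = np.random.default_rng(0)
    points = rng.random((n, 3))           # stand-in collocation points on a surface mesh

    def apply_operator(q):
        """Apply identity plus a 1/r kernel sum; an FMM would avoid the O(n^2) work."""
        q = np.asarray(q).ravel()
        diff = points[:, None, :] - points[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(r, np.inf)       # skip the singular self-interaction
        return q + (q / r).sum(axis=1)    # second-kind form: identity plus kernel term

    A = LinearOperator((n, n), matvec=apply_operator)
    rhs = np.ones(n)                      # stand-in boundary data
    solution, info = gmres(A, rhs)
    print("converged" if info == 0 else f"gmres info = {info}")

In a production solver, the kernel application above is replaced by a fast multipole evaluation and the iteration is preconditioned, which is the role the fast multipole methods and improved sparse matrix solvers mentioned above would play.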

Read more


Developing Standard Evaluations of Machine Learning Robustness

Soheil Feizi, assistant professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), is being funded by the National Institute of Standards and Technology (NIST) to develop metrics that will bridge the knowledge gap between empirical and certifiable defenses against adversarial attacks. Feizi is principal investigator of the $387,000 two-year project.

In an adversarial attack, small changes are made to a machine learning system’s input data to confuse the algorithm, resulting in flawed outputs. Some of these changes are so small they fly under the radar undetected, posing a serious security risk for artificial intelligence systems that are increasingly being applied in industrial settings, medicine, information analysis and more.
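
To make the scale of such changes concrete, the sketch below (a generic, minimal example in Python with PyTorch; `model`, `image` and `label` are placeholders, and this is not a description of Feizi's methods) implements a gradient-based perturbation in the style of the fast gradient sign method, nudging every input pixel by at most a small epsilon in whichever direction increases the model's loss.

    # Generic FGSM-style perturbation; `model`, `image`, and `label` are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Return a copy of `image` shifted by at most epsilon per pixel toward higher loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Each pixel moves by at most epsilon, which can be enough to flip the prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

An empirical defense is judged by how well it withstands attacks like this in practice, while a certifiable defense carries a mathematical guarantee that no perturbation within a given radius can change the prediction; the project's metrics aim to bridge those two notions of robustness.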

Read more


Designing Methods to Fend Off Quantum Attacks

Xiaodi Wu, an assistant professor of computer science and a Fellow in the Joint Center for Quantum Information and Computer Science, has received an award from the Air Force Office of Scientific Research (AFOSR) to develop new methods for protecting cryptographic systems from quantum attacks. He is one of 36 scientists and engineers to receive funding from the AFOSR Young Investigator Research Program.

Inspired by the success of formal methods in the security analysis of large, real-world cryptographic systems, Wu aims to develop and apply formal-methods techniques in quantum cryptography, enabling automated security analysis of cryptographic systems under quantum attacks.

Read more


Creating Adversarial Counterattack Program

Soheil Feizi, assistant professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), has been funded by the Defense Advanced Research Projects Agency (DARPA) to develop a program that can identify the origin and sophistication level of adversarial attacks on artificial intelligence systems.

Feizi is lead principal investigator of the $971,000 award and will collaborate with three researchers from Johns Hopkins University on the two-year project.

In an adversarial attack, attackers make small changes to a machine learning system’s input data to confuse the algorithm. The research team will “attack the attacks” by developing generalizable and scalable techniques that reverse engineer the attacker’s toolchains.

Read more


The Department welcomes comments, suggestions and corrections.  Send email to editor [-at-] cs [dot] umd [dot] edu.