Sparsity: Challenge or Opportunity?

Talk
Bahar Asgari
Time: 02.25.2021, 13:00 to 14:00

Sparse problems, computations whose data lack spatial locality in memory, are central to several application domains such as recommendation systems, computer vision, robotics, graph analytics, and scientific computing. Today, computers and supercomputers containing millions of CPUs and GPUs actively execute sparse problems. Yet although sparse problems dominate, we have long designed our machines primarily for dense problems. Because of this mismatch between the capabilities of the hardware and the nature of the problems, even modern high-performance CPUs and GPUs and state-of-the-art domain-specific architectures are poorly suited to sparse problems, utilizing only a tiny fraction of their peak performance. In this talk, I present my research on resolving four main challenges that prevent sparse problems from achieving high performance on today's computing platforms: computation underutilization, slow decompression, data dependencies, and irregular/inefficient memory accesses. I focus in particular on the last two challenges and illustrate how my research converts mathematical dependencies into gate-level dependencies at the software level and exploits dynamic partial reconfiguration at the hardware level to execute sparse scientific problems more quickly than conventional architectures do. I also explain how my research deals with sparsity by using an intelligent reduction tree near memory to process data while gathering them from random memory locations, that is, neither where the data reside nor where dense computations occur. Finally, I present my plans for developing a novel approach to computing that uses intelligent, dynamically reconfigurable computation platforms to anticipate the future needs of data and algorithms.
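
To make the irregular-memory-access challenge concrete, the following minimal sketch (illustrative only, not taken from the talk) shows a standard sparse matrix-vector multiply over a Compressed Sparse Row (CSR) matrix in C. The indirect read x[col_idx[j]] touches effectively random locations in memory, which is the lack of spatial locality that leaves caches and wide vector units of CPUs and GPUs underutilized.

/* Illustrative sketch: SpMV (y = A*x) with A stored in CSR format. */
#include <stddef.h>

void spmv_csr(size_t n_rows,
              const size_t *row_ptr,   /* n_rows + 1 entries; row boundaries */
              const size_t *col_idx,   /* column index of each nonzero */
              const double *values,    /* nonzero values of A */
              const double *x,         /* dense input vector */
              double *y)               /* dense output vector */
{
    for (size_t i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        /* Only the nonzeros of row i are stored and traversed. */
        for (size_t j = row_ptr[i]; j < row_ptr[i + 1]; ++j) {
            /* Irregular, data-dependent gather from x. */
            sum += values[j] * x[col_idx[j]];
        }
        y[i] = sum;
    }
}

Because the access pattern depends on the nonzero structure of A rather than on a fixed stride, prefetchers and dense accelerators gain little, which is the behavior the techniques in this talk are designed to address.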