COSMIC - Compiling for Advanced Architectures

The COSMIC project at the University of Maryland attempts to bridge the gap between applications, operating systems, and advanced architectures with compiler analysis and optimization. We are evaluating our ideas using the COSMIC optimizing compiler, an extension to the Stanford SUIF compiler infrastructure.

Focus of Research

The goal of the COSMIC project is to support efficient machine-independent programming of advanced architectures. Users desire the ability to write programs that run well on a variety of computers, since such programs are portable and protect software investment. However, modern processor architectures are quite varied and complex. To achieve high performance, it is critical that the compiler generate code that efficiently utilizes the underlying hardware. In particular, programs must exploit both parallelism and locality.

To achieve our goal, we attempt to solve important problems with practical significance, combining sound theoretical foundations with solid empirical validation.

Research Directions

  • Interactions between optimizations and architectures

    Investigate the interactions between compiler optimizations and computer architectures to evaluate the most effective integrated approach to achieving high performance.

  • Compiler support for eXplicit Multi-Threading (XMT)

XMT is an architecture designed to efficiently exploit fine-grain on-chip parallelism. Explicitly parallel programs with a simple no-busy-wait execution model are synchronized with a scalable prefix-sum instruction. Compiling for XMT requires creating efficient parallel threads and applying optimizations to coarsen them.

  • Data layout optimizations for high performance architectures

The pervasive use of caches in modern RISC systems demands greater emphasis on customizing programs for caches with long lines and limited set associativity. Compilers can manipulate data layout to improve performance for both scalar and parallel programs. Integrating data and computation transformations is challenging but worthwhile for large programs.
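One simple data-layout transformation a compiler can apply is inter-array padding: when two arrays land a multiple of the cache size apart, corresponding elements map to the same set of a direct-mapped or low-associativity cache and evict each other on every iteration. A sketch under assumed, illustrative sizes (a real compiler would pick pad sizes from the target's cache parameters, and would control placement rather than rely on declaration order):

```c
/* Sketch: inter-array padding to avoid cache conflict misses.
 * Sizes are illustrative; a compiler would derive them from the
 * target cache's line size and associativity.  Note that a linker
 * is not obliged to keep this declaration order, so real systems
 * pad inside a single allocation. */
#define N   4096        /* 4096 doubles = 32 KB, a plausible cache size */
#define PAD 8           /* one 64-byte line of padding */

double a[N];
double pad[PAD];        /* shifts b[] to a different cache-set mapping */
double b[N];

double dot(void)
{
    double sum = 0.0;
    /* Without the pad, a[i] and b[i] could conflict-miss on every
     * iteration in a direct-mapped cache. */
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];
    return sum;
}
```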

  • Compiler and run-time support for adaptive irregular computations

Scientific applications are growing more sophisticated as they tackle larger and more complex problem domains. In particular, sparse and irregular data structures are becoming common. However, exploiting parallelism and locality in irregular computations is more difficult and requires sophisticated run-time support. We are developing techniques that combine compiler and run-time support to improve locality and exploit parallelism for adaptive irregular computations.
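A common shape for such combined compiler/run-time support is the inspector/executor pattern: an inspector examines the indirection array at run time and builds a locality-improving reordering, which the executor then reuses across many time steps. A minimal sketch for a reduction loop (function names are illustrative; a full system would also build communication schedules and handle adaptivity when the indirection array changes):

```c
/* Sketch of the inspector/executor pattern for irregular loops.
 * Names are illustrative, not a real run-time library's API. */
#include <stdlib.h>

static int cmp_int(const void *x, const void *y)
{
    return *(const int *)x - *(const int *)y;
}

/* Inspector: run once, reorder the accesses so the executor walks
 * data[] roughly in memory order.  Legal here because the executor
 * is a reduction, so iteration order does not change the result. */
void inspector(int *idx, int n)
{
    qsort(idx, n, sizeof(int), cmp_int);
}

/* Executor: the irregular loop itself, run every time step, now
 * touching data[] with improved spatial locality. */
void executor(const int *idx, int n, const double *data, double *out)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += data[idx[i]];
    *out = sum;
}
```

The inspector's one-time cost is amortized over many executor invocations, which is why the pattern pays off precisely for adaptive codes that reuse an access pattern across iterations.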

  • Compiling for software distributed-shared-memory (DSM) systems

    Software DSMs such as CVM provide a convenient shared-memory programming model for message-passing machines. Compile-time information can be used to improve their performance for many scientific computations. Software DSMs can also serve as a convenient testbed for future multiprocessor architectures that support both message passing and a variety of coherence protocols for shared memory.

Faculty

Collaborating Research Groups

  • The Chaos Project
  • Coherent Virtual Machine
  • Fortran D System Group
  • High Performance Systems Software Lab
  • The Omega Project
  • SUIF Compiler Group
  • The Vortex Project
  • The Explicit Multi-Threading (XMT) Project
Current Support

  • NSF ASC9625531
  • NSF CCR9711514
  • NSF CCR0000988