Listing of all papers published by Yuan-Shin Hwang

  • Parallelizing Molecular Dynamics Programs for Distributed Memory Machines: An Application of the CHAOS Runtime Support Library
    University of Maryland, Department of Computer Science and UMIACS Technical Reports CS-TR-3374, UMIACS-TR-94-125

    Yuan-Shin Hwang, Raja Das, Joel Saltz, Bernard Brooks, Milan Hodoscek.

    CHARMM (Chemistry at Harvard Macromolecular Mechanics) is a program that is widely used to model and simulate macromolecular systems. CHARMM has been parallelized using the CHAOS runtime support library on distributed memory architectures. This implementation distributes both data and computations over processors. This data-parallel strategy should make it possible to simulate very large molecules on large numbers of processors.

    In order to minimize communication among processors and to balance computational load, a variety of partitioning approaches are employed to distribute the atoms and computations over processors. In this implementation, atoms are partitioned based on geometrical positions and computational load by using unweighted or weighted recursive coordinate bisection (a sketch of the weighted variant follows this entry). The experimental results reveal that taking computational load into account is essential. The performance of two iteration partitioning algorithms, atom decomposition and force decomposition, is also compared. A new irregular force decomposition algorithm is introduced and implemented.

    The CHAOS library is designed to facilitate parallelization of irregular applications. This library (1) couples partitioners to the application programs, (2) remaps data and partitions work among processors, and (3) optimizes interprocessor communications. This paper presents an application of CHAOS that can be used to support efficient execution of irregular problems on distributed memory machines.
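
    The weighted recursive coordinate bisection partitioner mentioned in this abstract can be pictured with a short sketch. The C program below is an illustration only: the Atom record, the per-atom load estimate, and all names are assumptions made here for clarity, not the CHAOS or CHARMM data structures. It recursively splits the atoms along the longest coordinate axis so that each side receives a weight share proportional to the number of partitions it is given; unweighted bisection is the special case in which every atom has weight 1.

      /* Illustrative sketch of weighted recursive coordinate bisection (RCB).
       * Atoms are split along the longest axis so that each side carries a
       * weight share proportional to the number of partitions it receives. */
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct {
          double x[3];   /* atom coordinates             */
          double w;      /* estimated computational load */
          int    part;   /* assigned partition id        */
      } Atom;

      static int sort_axis;                     /* axis used by the comparator */

      static int cmp_axis(const void *a, const void *b)
      {
          double d = ((const Atom *)a)->x[sort_axis] - ((const Atom *)b)->x[sort_axis];
          return (d > 0) - (d < 0);
      }

      /* Assign atoms[0..n-1] to partitions lo..hi-1. */
      static void rcb(Atom *atoms, int n, int lo, int hi)
      {
          if (hi - lo <= 1 || n == 0) {             /* a single partition remains */
              for (int i = 0; i < n; i++) atoms[i].part = lo;
              return;
          }

          /* choose the coordinate axis with the largest spatial extent */
          double min[3] = {1e300, 1e300, 1e300}, max[3] = {-1e300, -1e300, -1e300};
          for (int i = 0; i < n; i++)
              for (int d = 0; d < 3; d++) {
                  if (atoms[i].x[d] < min[d]) min[d] = atoms[i].x[d];
                  if (atoms[i].x[d] > max[d]) max[d] = atoms[i].x[d];
              }
          sort_axis = 0;
          for (int d = 1; d < 3; d++)
              if (max[d] - min[d] > max[sort_axis] - min[sort_axis]) sort_axis = d;

          qsort(atoms, n, sizeof(Atom), cmp_axis);

          /* cut where the prefix weight reaches the left side's share */
          int mid = lo + (hi - lo) / 2;
          double total = 0.0, prefix = 0.0;
          for (int i = 0; i < n; i++) total += atoms[i].w;
          double share = total * (double)(mid - lo) / (double)(hi - lo);
          int cut = 0;
          while (cut < n - 1 && prefix + atoms[cut].w <= share)
              prefix += atoms[cut++].w;

          rcb(atoms,       cut,     lo,  mid);      /* one side of the cut */
          rcb(atoms + cut, n - cut, mid, hi);       /* the other side      */
      }

      int main(void)
      {
          Atom a[8] = {
              {{0,0,0},1,0}, {{1,0,0},2,0}, {{2,0,0},1,0}, {{3,0,0},4,0},
              {{0,1,0},1,0}, {{1,1,0},1,0}, {{2,1,0},2,0}, {{3,1,0},1,0},
          };
          rcb(a, 8, 0, 4);                          /* distribute over 4 processors */
          for (int i = 0; i < 8; i++)
              printf("atom (%g,%g) load %g -> partition %d\n",
                     a[i].x[0], a[i].x[1], a[i].w, a[i].part);
          return 0;
      }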

  • Runtime Support and Compilation Methods for User-Specified Data Distributions
    University of Maryland, Department of Computer Science and UMIACS Technical Reports CS-TR-3194, UMIACS-TR-93-135

    Ravi Ponnusamy, Joel Saltz, Alok Choudhary, Yuan-Shin Hwang, Geoffrey Fox.

    This paper describes two new ideas by which an HPF compiler can deal with irregular computations effectively. The first mechanism invokes a user-specified mapping procedure via a set of compiler directives. The directives allow the use of program arrays to describe graph connectivity, the spatial location of array elements, and computational load. The second mechanism is a simple conservative method that in many cases enables a compiler to recognize that it is possible to reuse previously computed information from inspectors (e.g. communication schedules, loop iteration partitions, and information that associates off-processor data copies with on-processor buffer locations). We present performance results for these mechanisms from a Fortran 90D compiler implementation.
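
    The second mechanism can be pictured as a guard around the inspector: cached inspector results are conservatively marked stale whenever an indirection array, or a distribution it depends on, may have been modified, and the inspector is re-executed only in that case. The C sketch below is illustrative; the Schedule type, the staleness flag, and the stand-in inspector are assumptions of this sketch, not the interface of the Fortran 90D compiler or CHAOS.

      /* Illustrative sketch of conservative inspector reuse.  The Schedule type,
       * the staleness flag, and the stand-in inspector are assumptions, not the
       * CHAOS interface. */
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      typedef struct { int nsend, nrecv; } Schedule;   /* placeholder schedule */

      static Schedule cached_sched;
      static bool     sched_stale = true;       /* conservative: start out stale */

      /* The compiler inserts this call wherever an indirection array (or a
       * distribution it depends on) may have been written. */
      static void mark_indirection_modified(void) { sched_stale = true; }

      /* Stand-in inspector: analyzes the indirection array and builds a schedule. */
      static Schedule build_schedule(const int *ia, size_t n)
      {
          Schedule s = {0, 0};
          for (size_t i = 0; i < n; i++)
              if (ia[i] >= 100) s.nrecv++;   /* pretend index >= 100 is off-processor */
          s.nsend = s.nrecv;
          return s;
      }

      static const Schedule *get_schedule(const int *ia, size_t n)
      {
          if (sched_stale) {                 /* re-run the inspector only if needed */
              cached_sched = build_schedule(ia, n);
              sched_stale  = false;
          }
          return &cached_sched;              /* otherwise reuse the cached schedule */
      }

      int main(void)
      {
          int ia[4] = {5, 120, 7, 250};

          get_schedule(ia, 4);               /* first use: inspector runs            */
          get_schedule(ia, 4);               /* reused: no preprocessing repeated    */

          ia[0] = 300;                       /* the indirection array is rewritten,  */
          mark_indirection_modified();       /* so the cached schedule is invalidated */
          printf("off-processor references: %d\n", get_schedule(ia, 4)->nrecv);
          return 0;
      }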

  • Applying the CHAOS/PARTI Library to Irregular Problems in Computational Chemistry and Computational Aerodynamics
    Published in Proceedings of the Scalable Parallel Libraries Conference, Mississippi State University, Starkville, MS, pg: 45-46

    R. Das, Y. Hwang, M. Uysal, J. Saltz, A. Sussman.

    This paper describes a number of optimizations that can be used to support the efficient execution of irregular problems on distributed memory parallel machines. We describe software primitives that (1) coordinate interprocessor data movement, (2) manage the storage of, and access to, copies of off-processor data, (3) minimize interprocessor communication requirements and (4) support a shared name space. The performance of the primitives is characterized by examination of kernels from real applications and from a full implementation of a large unstructured adaptive application (the molecular dynamics code CHARMM).
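
    Primitives (2) and (4) can be illustrated by the index-translation step: globally numbered (shared name space) references are mapped to local addresses, and each distinct off-processor reference is assigned a slot in a ghost region appended to the locally owned block, where a later gather deposits the copy. The C sketch below assumes a simple block ownership and uses invented names; it is not the actual PARTI/CHAOS dereference interface.

      /* Illustrative sketch of primitives (2) and (4): references in a shared
       * (global) name space are translated to local addresses, and copies of
       * off-processor elements live in a ghost region appended to the owned
       * block.  Block ownership and all names are assumptions of this sketch. */
      #include <stdio.h>

      #define NLOCAL   4                  /* elements owned by this processor          */
      #define MAXGHOST 8                  /* room for copies of off-processor elements */

      static int my_lo = 8;                    /* this processor owns globals 8..11    */
      static int ghost_global[MAXGHOST];       /* global index held in each ghost slot */
      static int nghost = 0;

      /* Map a global index to a local index; off-processor indices get a slot in
       * the ghost region, reusing the slot if the index was seen before. */
      static int localize(int g)
      {
          if (g >= my_lo && g < my_lo + NLOCAL)
              return g - my_lo;                /* locally owned element          */
          for (int i = 0; i < nghost; i++)
              if (ghost_global[i] == g)
                  return NLOCAL + i;           /* already has a ghost copy       */
          ghost_global[nghost] = g;            /* allocate a new ghost slot;     */
          return NLOCAL + nghost++;            /* a later gather fills its value */
      }

      int main(void)
      {
          /* an indirection array holding both on- and off-processor references */
          int ia[6] = {9, 8, 23, 11, 23, 2};
          double data[NLOCAL + MAXGHOST];      /* owned part followed by ghosts */

          for (int i = 0; i < 6; i++) {
              int g = ia[i];
              ia[i] = localize(g);             /* overwrite with the local index */
              printf("global %2d -> local %d\n", g, ia[i]);
          }

          /* once a schedule-driven gather fills data[NLOCAL..], the loop body can
           * use data[ia[i]] uniformly for owned and off-processor elements */
          (void)data;
          return 0;
      }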

  • Communication Optimizations for Irregular Scientific Computations on Distributed Memory Architectures
    University of Maryland, Department of Computer Science and UMIACS Technical Reports CS-TR-3163, UMIACS-TR-93-109

    Raja Das, Mustafa Uysal, Joel Saltz, Yuan-Shin Hwang.

    This paper describes a number of optimizations that can be used to support the efficient execution of irregular problems on distributed memory parallel machines. We describe software primitives that (1) coordinate interprocessor data movement, (2) manage the storage of, and access to, copies of off-processor data, (3) minimize interprocessor communication requirements and (4) support a shared name space. We present a detailed performance and scalability analysis of the communication primitives. This performance and scalability analysis is carried out using a workload generator, kernels from real applications and a large unstructured adaptive application (the molecular dynamics code CHARMM).
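
    The data-movement primitive (1) can be pictured as a schedule-driven gather: values named by a precomputed send list are packed into a message buffer, exchanged, and unpacked into the ghost region before the local computation runs. The sketch below is a two-process illustration with a hard-wired schedule and MPI as a stand-in message-passing layer; the actual primitives handle arbitrary numbers of processors and apply further communication optimizations.

      /* Illustrative sketch of primitive (1): a schedule-driven gather.  The
       * two-process setup, the Schedule struct, and the hard-wired schedule are
       * assumptions of this sketch; MPI is used as a stand-in message-passing
       * layer.  Assumed build/run: mpicc gather.c -o gather && mpirun -np 2 gather */
      #include <mpi.h>
      #include <stdio.h>

      #define NOWNED 4
      #define NGHOST 2

      typedef struct {
          int send_list[NGHOST];   /* local indices of owned values the peer needs */
          int nsend, nrecv;        /* message sizes agreed on by the inspector     */
          int peer;                /* processor to exchange with                   */
      } Schedule;

      /* Pack according to the schedule, exchange, and unpack into the ghost region. */
      static void gather(const Schedule *s, double *data /* [NOWNED + NGHOST] */)
      {
          double sendbuf[NGHOST], recvbuf[NGHOST];
          for (int i = 0; i < s->nsend; i++)
              sendbuf[i] = data[s->send_list[i]];

          MPI_Sendrecv(sendbuf, s->nsend, MPI_DOUBLE, s->peer, 0,
                       recvbuf, s->nrecv, MPI_DOUBLE, s->peer, 0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);

          for (int i = 0; i < s->nrecv; i++)
              data[NOWNED + i] = recvbuf[i];   /* fill the ghost copies */
      }

      int main(int argc, char **argv)
      {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* owned values: rank 0 holds 0..3, rank 1 holds 100..103 */
          double data[NOWNED + NGHOST];
          for (int i = 0; i < NOWNED; i++) data[i] = 100.0 * rank + i;

          /* the schedule an inspector would have produced: each rank sends its
           * owned elements 1 and 3 to the other rank and receives two ghosts */
          Schedule s = { {1, 3}, NGHOST, NGHOST, 1 - rank };

          gather(&s, data);
          printf("rank %d ghost values: %.0f %.0f\n",
                 rank, data[NOWNED], data[NOWNED + 1]);

          MPI_Finalize();
          return 0;
      }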

  • Supporting Irregular Distributions in FORTRAN 90D/HPF Compilers
    University of Maryland, Department of Computer Science and UMIACS Technical Reports CS-TR-3268, UMIACS-TR-94-57

    Ravi Ponnusamy, Yuan-Shin Hwang, Joel Saltz, Alok Choudhary, Geoffrey Fox.

    We present methods that make it possible to efficiently support an important subclass of irregular problems using data parallel languages. The approach we describe involves the use of a portable, compiler-independent, runtime support library called CHAOS. The CHAOS runtime support library contains procedures that support static and dynamic distributed array partitioning, partition loop iterations and indirection arrays, remap arrays from one distribution to another, and carry out index translation, buffer allocation and communication schedule generation.

    The CHAOS runtime procedures are used by a prototype Fortran 90D compiler as runtime support for irregular problems. We present performance results of compiler-generated and hand-parallelized versions of two stripped-down application codes. The first code is derived from an unstructured mesh computational fluid dynamics flow solver and the second is derived from the molecular dynamics code CHARMM.

    A method is described that makes it possible to emulate irregular distributions in HPF by reordering elements of data arrays and renumbering indirection arrays. We present results that suggest that an HPF compiler could use reordering and renumbering extrinsic functions to obtain performance comparable to that achieved by a compiler for a language (such as Fortran 90D) that directly supports irregular distributions.
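
    The reordering and renumbering described above can be shown in a few lines: once a partitioner has assigned array elements to processors, the data array is permuted so that each processor's elements become contiguous, which a regular BLOCK distribution can then describe, and every indirection array is renumbered through the same permutation. The C sketch below uses invented array names and a two-processor layout purely for illustration; in HPF this step would be packaged as extrinsic functions, as the abstract suggests.

      /* Illustrative sketch of the reorder-and-renumber idea: elements assigned
       * to processors by an irregular partition are permuted so that each
       * processor's elements become contiguous (matching a regular BLOCK
       * distribution), and indirection arrays are renumbered through the same
       * permutation.  The array names and sizes are made up for this sketch. */
      #include <stdio.h>

      #define N      8
      #define NPROCS 2

      int main(void)
      {
          double x[N]    = {10, 11, 12, 13, 14, 15, 16, 17};  /* data array          */
          int    part[N] = { 0,  1,  0,  1,  1,  0,  0,  1};  /* irregular partition */
          int    ia[5]   = { 2,  7,  0,  4,  5};              /* indirection array   */

          /* build the permutation old index -> new index, processor by processor */
          int newidx[N], next = 0;
          for (int p = 0; p < NPROCS; p++)
              for (int i = 0; i < N; i++)
                  if (part[i] == p) newidx[i] = next++;

          /* reorder the data array ... */
          double xr[N];
          for (int i = 0; i < N; i++) xr[newidx[i]] = x[i];

          /* ... and renumber the indirection array through the same permutation */
          for (int i = 0; i < 5; i++) ia[i] = newidx[ia[i]];

          /* xr can now carry a regular BLOCK distribution: new indices 0..3 live
           * on processor 0 and 4..7 on processor 1 */
          for (int i = 0; i < 5; i++)
              printf("ia[%d] = %d -> value %.0f\n", i, ia[i], xr[ia[i]]);
          return 0;
      }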

  • Run-time and Compile-time Support for Adaptive Irregular Problems
    Published in Supercomputing 1994, IEEE Press, pg: 97-106

    Shamik D. Sharma, Ravi Ponnusamy, Bongki Moon, Yuan-Shin Hwang, Raja Das, Joel Saltz.

    In adaptive irregular problems the data arrays are accessed via indirection arrays, and data access patterns change during computation. Implementing such problems on distributed memory machines requires support for dynamic data partitioning, efficient preprocessing and fast data migration. This research presents efficient runtime primitives for such problems. This new set of primitives is part of the CHAOS library. It subsumes the previous PARTI library, which targeted only static irregular problems. To demonstrate the efficacy of the runtime support, two real adaptive irregular applications have been parallelized using CHAOS primitives: a molecular dynamics code (CHARMM) and a particle-in-cell code (DSMC). The paper also proposes extensions to Fortran D which can allow compilers to generate more efficient code for adaptive problems. These language extensions have been implemented in the Syracuse Fortran 90D/HPF prototype compiler. The performance of the compiler-parallelized codes is compared with the hand-parallelized versions.
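
    The runtime structure for such adaptive problems can be sketched as a timestep loop in which the expensive preprocessing (dynamic repartitioning, data migration, and schedule generation) is performed only when the data access pattern changes, while cached schedules drive the gather/compute/scatter phases in the remaining steps. Everything in the C sketch below is a stand-in stub under assumed names; it shows only the control structure, not the CHAOS primitives or the CHARMM/DSMC codes.

      /* Conceptual sketch of the runtime structure for an adaptive irregular
       * problem: repartitioning, data migration and schedule generation are
       * redone only in timesteps where the access pattern changes; otherwise
       * cached schedules drive the gather/compute/scatter phases.  Every routine
       * here is a stand-in stub, not a CHAOS primitive or application code. */
      #include <stdbool.h>
      #include <stdio.h>

      static void repartition_and_remap(void) { puts("  repartition + migrate data"); }
      static void build_schedules(void)       { puts("  rebuild communication schedules"); }
      static void gather(void)                { puts("  gather off-processor copies"); }
      static void compute(void)               { puts("  local computation"); }
      static void scatter(void)               { puts("  scatter/accumulate results"); }

      /* Stand-in for the application's own test, e.g. a periodically rebuilt
       * neighbour list or particle-to-cell map; here, every 4th step. */
      static bool access_pattern_changed(int step) { return step % 4 == 0; }

      int main(void)
      {
          for (int step = 0; step < 8; step++) {
              printf("timestep %d\n", step);
              if (access_pattern_changed(step)) {   /* adaptive preprocessing is     */
                  repartition_and_remap();          /* amortized over the steps in   */
                  build_schedules();                /* which the pattern stays fixed */
              }
              gather();                             /* executor phases, every step   */
              compute();
              scatter();
          }
          return 0;
      }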