Principal Investigators

Alan Sussman, Ph.D.
Henrique Andrade, Ph.D.
Christian Hansen, M.S.
Jae-Yong Lee, M.S.
Il-Chul Yoon, M.S.
Norman Lo, M.S.

Software Distribution

  • Current version (1.6)
  • Version 1.5
  • Version 1.1

    Related Information

  • Publication List

  • User's Manual (1.6)
  • User's Manual (1.5)
  • User's Manual (1.1)

  • Programmer's Ref (1.6)
  • Programmer's Ref (1.5)
  • Programmer's Ref (1.1)

    Related Project

  • CCA Components

    Simulation of physical systems has become the third leg of investigation in many scientific disciplines, along with theory and experimentation. Many software projects and tools have addressed issues related to efficiently supporting simulation of individual physical models on large-scale parallel and distributed systems. However, little attention has been paid to software support for simulation of complex systems requiring multiple physical models, at multiple scales and resolutions. One reason is that there have not been many efforts in scientific disciplines to model complex phenomena using multiple models. However, that is changing, especially in areas such as earth science, space science, and other physical sciences, where the use of multiple models can provide significant advantages over single models. Employing multiple models presents difficult challenges, both to model the physics correctly and to support efficient use of multiple simulation codes. The individual models must be coupled, so that they can exchange information either at boundaries where the models align in physical space, or in areas where the models overlap in space.

    Our work concentrates on development of algorithms and techniques for effectively solving key problems in software support for coupled simulations. We attack these problems by concentrating on three main issues: (1) comprehensive support for determining at runtime what data is to be moved between simulations, (2) flexibly and efficiently determining when the data should be moved, and (3) effectively deploying coupled simulation codes in a Grid computing environment. For an effective solution, a major goal is to minimize the changes that must be made to each individual simulation code. This goal is accomplished by having an individual simulation model specify only what data will be made available for a potential data transfer, and not when an actual data transfer will take place. Decisions about when data transfers will take place are made through a separate coordination specification, which generally will be provided by the person building the complete coupled simulation.

    InterComm is our runtime library that achieves direct data transfers between data structures managed by multiple data parallel languages and libraries in different programs. Such programs include those that directly use a low-level message-passing library, such as MPI. Neither program needs to know in advance (i.e., before a data transfer is desired) any information about the program on the other side of the data transfer. All required information for the transfer is computed by InterComm at runtime. Such a data transfer requires that all processes of the sender and receiver programs locate the data elements involved in the transfer, and that a mapping be specified between the data elements in the two data structures. Using the data distribution and mapping information, InterComm generates all the information required to execute direct data transfers between the processes in the sender program and the receiver program (a customized all-to-all communication pattern), and stores the information in a communication schedule.
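    The idea behind a communication schedule can be illustrated with a minimal sketch. This is not the InterComm API (the function names here are invented for illustration); it shows, for a one-dimensional array block-distributed differently across the two programs, how the overlapping index ranges determine which sender process must ship which elements to which receiver process.

```python
# Illustrative sketch only, not the InterComm API: computing a
# communication schedule for a 1-D array that is block-distributed
# across the sender's processes and, with a different distribution,
# across the receiver's processes.

def block_ranges(n, nprocs):
    """Split indices 0..n-1 into contiguous blocks, one per process."""
    base, extra = divmod(n, nprocs)
    ranges, start = [], 0
    for p in range(nprocs):
        size = base + (1 if p < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

def build_schedule(n, sender_procs, receiver_procs):
    """For each (sender, receiver) process pair, record the index range
    the sender must ship to the receiver.  The result plays the role of
    a communication schedule: a customized all-to-all pattern."""
    send = block_ranges(n, sender_procs)
    recv = block_ranges(n, receiver_procs)
    schedule = []
    for s, (s0, s1) in enumerate(send):
        for r, (r0, r1) in enumerate(recv):
            lo, hi = max(s0, r0), min(s1, r1)
            if lo < hi:                      # distributions overlap here
                schedule.append((s, r, lo, hi))
    return schedule
```

    For a 10-element array moved from 2 sender processes to 3 receiver processes, `build_schedule(10, 2, 3)` yields four (sender, receiver, lo, hi) entries, one per overlapping pair; once computed, such a schedule can be reused for every subsequent transfer of the same pair of distributions.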

    The most recently released version of InterComm, version 1.6, provides all the functionality of version 1.1. InterComm 1.6 enables a more convenient programming model that factors information about which programs to run for a coupled application, and how to couple those programs, out into a file external to the individual programs, called an XML Job Description (XJD). This XJD-based programming model makes using InterComm to couple multiple programs into a single application even simpler than in earlier versions.
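    To make the idea concrete, the fragment below sketches what such an external description might look like. The element and attribute names here are hypothetical, chosen only to illustrate the concept; the actual XJD schema is documented in the User's Manual (1.6).

```xml
<!-- Hypothetical sketch of an XJD file; element and attribute names
     are illustrative only, not the actual XJD schema. -->
<coupling>
  <!-- Which programs make up the coupled application -->
  <program name="ocean" executable="./ocean_model" nprocs="8"/>
  <program name="atmos" executable="./atmos_model" nprocs="4"/>
  <!-- How the programs are coupled: who exports what to whom -->
  <connection exporter="ocean" importer="atmos"
              region="sea_surface_temp" period="10"/>
</coupling>
```

    The benefit is that neither simulation code needs to name its partner: the coupling topology lives entirely in this external file, so the same programs can be recombined into different coupled applications without recompilation.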

    In addition to the new programming model, InterComm v1.6 provides functionality for broadcasting a local array block in a process in one program to all the processes in another program, as opposed to the all-to-all communication supported by the standard InterComm calls. This functionality is useful for sending a small number of data items from one program to another (e.g., the size of a large array that will be communicated via another InterComm call). The new functionality is supported in all InterComm programming models, and does not incur the communication schedule generation required by the standard InterComm calls.
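    The contrast with schedule-based transfers can be sketched as follows. Again, this is not the InterComm API (the function name is invented); it shows why no schedule is needed: every receiver process gets the same complete copy of the small local block, so there are no per-pair overlapping index ranges to compute.

```python
# Illustrative sketch only, not the InterComm API: broadcasting a small
# local block (e.g., array-size metadata) from one process of the
# sending program to every process of the receiving program.

def broadcast_block(block, receiver_ranks):
    """Replicate `block` to all receiver ranks.  Unlike a distributed
    array transfer, no communication schedule is required, because each
    receiver gets an identical, complete copy."""
    return {rank: list(block) for rank in receiver_ranks}
```

    A typical use, per the text above, is shipping the dimensions of a large array first, so the receiving program can allocate storage before the schedule-based transfer of the array itself.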

    InterComm v1.6 also supports new functionality for sending an array to multiple programs. This eliminates the need for multiple export statements for the same region, and is supported in the XJD-based programming interface.
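    A minimal sketch of that idea, with invented names rather than the real InterComm interface: a single export call serves several importing programs at once, each receiving the slice of the region that its import requests.

```python
# Illustrative sketch only, not the InterComm API: one export serving
# several importing programs, instead of one export call per importer.

def export_once(region, importers):
    """Ship the requested (lo, hi) index range of `region` to each
    named importing program with a single export on the sender side."""
    return {name: region[lo:hi] for name, (lo, hi) in importers.items()}
```

    For example, exporting a six-element region once can satisfy one importer that wants indices 0..2 and another that wants indices 2..5, where previously two separate export statements for the same region would have been needed.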

    InterComm is being used as the framework for coupling the various model codes being developed in the Center for Integrated Space Weather Modeling, CISM, an NSF Science and Technology Center. InterComm provides the glue that allows models developed by the various CISM space science teams to be coupled into a complete model of the effects of the solar wind on the Earth's magnetic field, or space weather.

    InterComm has also been integrated into a component that conforms to the Common Component Architecture, CCA, specification. CCA is used to develop and componentize several DOE science applications, so this integration enables InterComm to be used to couple (parallel) components from those applications.

    Last updated by Norman Lo