CMSC 411 Project
A General Overview of Parallel Processing
Fall 1998
Communication
Mark Pacifico (mjpac@wam.umd.edu), Mike Merrill (mwm@glue.umd.edu)
WHAT DO WE MEAN BY COMMUNICATION?
Communication, with respect to parallel processing, refers to how the processors communicate with memory.  The most basic issue to discuss is how a parallel architecture names the different memory locations to which instructions refer.  Just as humans cannot communicate without a shared vocabulary, machines cannot communicate without a common set of names upon which they agree.  As stated earlier, the two ways of naming memory locations are called shared memory and distributed memory.
COMMUNICATING WITH SHARED/DISTRIBUTED MEMORY
If you recall, the memory of a conventional computer consists of a sequence of words, each named by a unique address.  From the earlier section we know that with shared memory, every word has a single address that all processors agree on.  Consider an example instruction such as "load r1, 12", which says to load the word at address 12 into register r1.  With shared memory, if two processors both execute this instruction, the same data from the same location in memory is fetched into register r1 of each processor.  With distributed memory, however, each processor fetches location 12 of its own memory, so when a processor needs data that actually resides in another processor's memory, an explicit message must be sent over the communication network.  Because of this, communication in shared memory machines is generally faster, but for large numbers of processors such machines are harder to build than those of the distributed memory model.  The sketch below makes the distributed case concrete.
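The following is a minimal sketch, assuming the MPI message-passing library (an assumption on our part; the discussion above names no particular library).  On a distributed-memory machine, the word standing in for "location 12" lives in processor 0's local memory, so processor 1 cannot simply load it and must receive it as an explicit message instead:

    /* Sketch: explicit message passing on a distributed-memory machine.
       Assumes MPI; run with two processes, e.g. "mpirun -np 2 ./a.out". */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;
        int value = 0;        /* stands in for the word at address 12 */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;       /* only processor 0's memory holds the data */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* a plain "load r1, 12" here would read processor 1's own
               memory; the data must instead arrive over the network */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("processor 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

On a shared memory machine, by contrast, both processors could read the same word with an ordinary load, and no such message would be needed.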
COMMUNICATION COSTS
The cost of communication plays a large role in the performance of a parallel program.  In our discussion of communication cost, we must define two vital terms: latency and bandwidth.  Latency is the amount of time required to send a message, measured in seconds.  Bandwidth, measured in words per second, is the rate at which words flow through the network.  The cost of communication depends greatly on these two properties of the network.  The network acts like a pipeline, pumping data through at the bandwidth rate, while the delay in sending the data is the latency of delivering the first word.  The time to send a message of n words is therefore roughly the latency plus n divided by the bandwidth.  Because the latency is paid once per message, it is often more efficient to send one large message than many small ones.  Often, the amount of communication that a parallel algorithm performs determines whether the algorithm is "good" or "bad".
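As a back-of-the-envelope illustration of this cost model (the latency and bandwidth figures below are invented for the example, not measurements of any real network), the short program that follows applies the formula time = latency + words / bandwidth and compares one 10,000-word message against one hundred 100-word messages:

    /* Sketch of the pipeline cost model: time = latency + n / bandwidth.
       The constants are assumed values chosen only to illustrate. */
    #include <stdio.h>

    #define LATENCY    0.0001   /* seconds until the first word arrives */
    #define BANDWIDTH  1.0e6    /* words per second through the network */

    /* time to send a single message of n_words words */
    double send_time(double n_words)
    {
        return LATENCY + n_words / BANDWIDTH;
    }

    int main(void)
    {
        double big   = send_time(10000.0);        /* one 10,000-word message */
        double small = 100.0 * send_time(100.0);  /* 100 messages of 100 words */

        printf("one large message:   %f s\n", big);
        printf("many small messages: %f s\n", small);
        return 0;
    }

With these numbers the single large message pays the latency once and takes about 0.0101 seconds, while the hundred small messages pay it a hundred times and take about 0.02 seconds, nearly twice as long for the same total data.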
Copyright 1998, Mark Pacifico & Mike Merrill