CMSC 411 Project
A General Overview of Parallel Processing
Fall 1998
An Introduction to Parallel Processing
Mark Pacifico mjpac@wam.umd.edu Mike Merrill mwm@glue.umd.edu
WHAT IS PARALLEL PROCESSING?
Parallel processing, or multiprocessing, is the use of a collection of processing elements that can communicate and cooperate to solve large problems quickly.  Although the concept of parallel processing has been around since before the dawn of electronic computing, its use hasn't become commonplace in personal computers because the performance of uniprocessors has been improving quickly enough to satisfy most users' time demands.  An alternative to using one processor to solve a problem quickly is using several processors at the same time to solve the problem even more quickly.  In the future, parallel processing will help increase productivity when the increase in the speed of uniprocessors begins to plateau.
WHAT ARE THE BENEFITS OF PARALLEL PROCESSING?
Parallel processing allows us to solve large, complex problems in a reasonable time frame.  These same problems would take an impractical amount of time to solve using just one processor, but combining several processors allows the problem to be divided and conquered.  Today, most everyday computer applications do not require such computing force to run efficiently.  In the future, however, parallel processing may help counter a possible slowdown in the improvement of processor performance that could affect computer users of all levels.  Using multiple processors in parallel can increase both the performance and the availability of the systems that run our computers.
IN GENERAL, HOW DOES PARALLEL PROCESSING WORK?
Parallel processing works by distributing portions of a program's instructions to several different processors to be executed at the same time.  In contrast, a uniprocessor executes the entire set of instructions itself and therefore takes longer to complete execution.  Assuming the instructions can be divided evenly without causing any data hazards due to dependences or otherwise, n processors working in parallel (ideally) have the potential to speed up the program's execution time by a factor of n.  This ideal is difficult to approach, but a means of speeding up execution time by any amount is helpful nonetheless.  Two different parallel processing models have been implemented to date, SIMD and MIMD; they will be explained further on.
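The idea of dividing a program's work evenly among n processors can be sketched in modern Python.  This is only an illustration of the concept, not part of the original webpage: it splits a summation into chunks, hands one chunk to each worker process, and combines the partial results.  The function names (partial_sum, parallel_sum) and the choice of four workers are our own assumptions for the example.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker independently executes its share of the instructions.
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    # Divide the work as evenly as possible among n_workers processors.
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers - 1)]
    chunks.append(data[(n_workers - 1) * size:])  # last worker takes the remainder
    # Run the chunks at the same time, then combine the partial results.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1, 101))))  # sum of 1..100 = 5050
```

With perfectly even division and no communication cost, the four workers would each do a quarter of the additions, giving the ideal speedup of n described above; in practice, the overhead of splitting the work and combining results keeps real speedup below that ideal.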
WHAT IS THE PURPOSE OF THIS WEBPAGE?
The purpose of this webpage is to help the reader get a firm grasp on the concept of parallel processing.  We wish to give a general overview of what it is, how it works, and why it is important to the world of computing.  Along the way, we'll provide a little bit of the history of parallel processing, and explain some concepts that will help the reader understand and visualize the way parallel processing attacks a problem much differently than uniprocessing does.  We will also provide a few questions (and solutions) so the reader can test his or her understanding of the subject.
Copyright 1998, Mark Pacifico & Mike Merrill