CMSC 411 Project
A General Overview of Parallel Processing
Fall 1998
The History of Parallel Processing
Mark Pacifico Mike Merrill
THE 1950s
In 1955, IBM introduces the 704.  Gene Amdahl is the principal architect on the project, and the 704 becomes the first commercial machine with floating-point hardware.  Over the next few years, several other computers are developed and several companies are formed (notably Digital Equipment Corporation and Control Data Corporation).  In 1958, two IBM employees, John Cocke and Daniel Slotnick, first discuss the use of parallelism in numerical calculations in a research memo.  Within two years, work begins around the globe on the development of parallel computing architectures.
THE 1960s
In 1962, Burroughs introduces the D825, a symmetrical MIMD multiprocessor in which 1 to 4 CPUs access 1 to 16 memory modules through a crossbar switch.  The CPUs are similar to the later B5000; the operating system is symmetrical, with a shared ready queue.  In 1964, Control Data Corporation (CDC) produces the CDC 6600, which is both a technical and commercial success.  The machine contains one 60-bit CPU and 10 peripheral processing units (PPUs), and uses a scoreboard to manage instruction dependences.  Over the next few years several large companies, with academic and military guidance, develop massively parallel machines.  In 1967, Gene Amdahl and Daniel Slotnick engage in a published debate at the AFIPS Conference about the feasibility of parallel processing.  Amdahl's argument about the limits to parallelism becomes known as "Amdahl's Law".  By 1970, a handful of companies deliver multiprocessing computers, including Honeywell, which introduces its first Multics system in 1969.  Multics is a symmetric multiprocessor system capable of running up to 8 processors in parallel.
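Amdahl's Law can be stated in a few lines of code.  The sketch below is a modern illustration, not part of the 1967 debate; the function name and example fractions are our own.  It computes the overall speedup when a fraction p of a program's work can be parallelized across n processors, while the remaining (1 - p) must run serially:

```python
def amdahl_speedup(p, n):
    """Overall speedup predicted by Amdahl's Law.

    p -- fraction of the work that can be parallelized (0.0 to 1.0)
    n -- number of processors
    The serial fraction (1 - p) runs on one processor regardless of n.
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1000 processors, a 5% serial fraction caps speedup near 20x:
print(amdahl_speedup(0.95, 1000))   # just under 20
print(amdahl_speedup(0.95, 4))      # a 4-processor machine gains far less
```

The key observation, and the heart of Amdahl's side of the debate, is that as n grows without bound the speedup approaches 1 / (1 - p), so the serial fraction alone bounds what any number of processors can achieve.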
THE 1970s
In the early seventies, steady improvement of parallel systems continues.  CDC delivers its hardwired Cyberplus parallel radar imaging system to Rome Air Development Center, where it achieves an astounding 250 times the performance of the CDC 6600, released just seven years earlier.  In 1971, Intel produces the world's first single-chip CPU, the 4004 microprocessor.  A year later, Seymour Cray leaves CDC to found Cray Research Inc.  Cray will become known for its powerful multi-processor computers in the decades ahead.  In 1974, CDC delivers the STAR-100, the first commercial pipelined vector supercomputer, to the Lawrence Livermore National Laboratory.  Progress continues in 1975 and 1976, and in 1977 the C.mmp multiprocessor is completed at Carnegie-Mellon University.  The machine contains 16 PDP-11 minicomputers connected by a crossbar to shared memories, and supports much early work on languages and operating systems for parallel machines.  A year later, in his Turing Award address, John Backus (inventor of FORTRAN) argues against the use of conventional imperative languages and for functional programming; the difficulty of programming parallel computers in imperative languages is cited as one argument against them.  Advances in microprocessor development over the final years of the decade begin to convince many people that parallel processing will not be necessary except where extremely powerful computers are required.
THE 1980s
In 1980, the PFC (Parallel FORTRAN Compiler) is developed at Rice University under the direction of Ken Kennedy.  Later that year, DEC develops the KL10 symmetric multiprocessor.  In addition, several researchers publish new descriptions and models relevant to parallel computing.  Among them are the concept of random routing, a method of reducing contention in message-routing networks, and the ultracomputer model, in which processors are connected by a shuffle/exchange network.  Over the next few years, several new companies are founded with the goal of developing powerful computers, and many existing companies unveil new systems that take advantage of multiprocessing.  In 1984, the CRAY X-MP family is expanded to include 1 and 4 processor machines.  A CRAY X-MP running CX-OS, the first Unix-like operating system for supercomputers, is delivered to NASA Ames.  Also in 1984, Multiflow is founded by Josh Fisher and others from Yale to produce very long instruction word (VLIW) supercomputers.  Several companies continue to develop supercomputers that utilize parallel processing to achieve spectacular performance.  In 1987, the first Gordon Bell Prizes for parallel performance are awarded.  The recipients of the award achieve phenomenal speedups (up to 600) on a variety of applications running on different types of parallel machines.  In the meantime, personal computers are finding their way into more and more homes across the United States and the world.  The market boom in personal computers draws attention away from the world of parallel computing, and several companies fail or change their products to meet the new market demand.  Because the technology exists to make inexpensive yet reasonably powerful PCs using uniprocessor systems, the development and advancement of parallel systems becomes less important to the computing world.
Nonetheless, several companies continue dedicated research in parallel processing due to the notion that uniprocessor speedup will eventually plateau and force computer builders to consider parallel architectures as a means of improving computer performance.  
THE 1990s
In the early 1990s, the trends established towards the end of the previous decade continue.  Uniprocessors continue to increase in speed at a steady rate, and parallel computing is left to the developers of supercomputers.  Companies like Cray and Sun continue to make powerful computers that begin to approach the size and usability of some desktop personal computers.  Many more companies involved in multiprocessor systems research and development close their doors.  The companies that do remain productive continue to improve performance on their parallel machines and find customers in government, business, and the military.  Here in 1998, parallel computing is indeed alive and well, but it is hidden from the view of the average computer user because uniprocessor systems provide (in most cases) more than enough power for the everyday home or business user.  In the future, we may need to rely on parallelism to a large extent if we begin to reach the limit on speed using just one processor.  Already computers are taking advantage of the multiprocessor technology that has been developed since the 50s, and they will surely make use of it in the next millennium.
Copyright 1998, Mark Pacifico & Mike Merrill