Introduction to Parallel Computing (CMSC416)

 

Assignment 1: MPI

Due: Tuesday March 7, 2023 @ 11:59 PM Eastern Time

 

The purpose of this programming assignment is to gain experience with parallel programming on a cluster using MPI. For this assignment, you will write a parallel implementation of a program that simulates the Game of Life.

Serial Algorithm

The Game of Life simulates a simple cellular automaton. The game is played on a rectangular board containing cells. At the start, some of the cells are occupied and the rest are empty. The game consists of constructing successive generations of the board. The rules for constructing the next generation from the previous one are:

  1. death: cells with 0, 1, 4, 5, 6, 7, or 8 live neighbors die (0 or 1 from loneliness, 4-8 from overpopulation)
  2. survival: cells with 2 or 3 neighbors survive to the next generation.
  3. birth: an unoccupied cell with 3 neighbors becomes occupied in the next generation.
For this assignment, the game board has finite size. The x-axis starts at 0 and ends at X_limit-1 (supplied on the command line). Likewise, the y-axis starts at 0 and ends at Y_limit-1 (supplied on the command line).
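
As a reference for the rules above, here is a minimal sketch of one serial generation step, assuming the board is stored as a 2D array of 0/1 values (the names and layout are illustrative and need not match the provided serial.C):

    #include <vector>

    // board[x][y] == 1 means the cell at (x, y) is live.
    std::vector<std::vector<int>> next_generation(
        const std::vector<std::vector<int>>& board, int X_limit, int Y_limit) {
      std::vector<std::vector<int>> next(X_limit, std::vector<int>(Y_limit, 0));
      for (int x = 0; x < X_limit; ++x) {
        for (int y = 0; y < Y_limit; ++y) {
          // Count live neighbors; cells beyond the board edges count as empty.
          int neighbors = 0;
          for (int dx = -1; dx <= 1; ++dx) {
            for (int dy = -1; dy <= 1; ++dy) {
              if (dx == 0 && dy == 0) continue;
              int nx = x + dx, ny = y + dy;
              if (nx >= 0 && nx < X_limit && ny >= 0 && ny < Y_limit)
                neighbors += board[nx][ny];
            }
          }
          if (board[x][y] == 1)
            next[x][y] = (neighbors == 2 || neighbors == 3) ? 1 : 0;  // survival/death
          else
            next[x][y] = (neighbors == 3) ? 1 : 0;                    // birth
        }
      }
      return next;
    }

Since the board is finite, neighbors that would fall outside the boundaries are simply treated as empty.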

You can use the provided serial C++ code (also available on zaratan at /home/asussman/public/416/Project1-MPI/serial.C) as a baseline for developing your parallel implementation. It provides some basic functionality such as parsing the input file and exporting the final board state to a CSV file. Your task is to implement the parallel version using C, C++, or Fortran with MPI.

Input/Initialization

Your program should read in a file containing the coordinates of the initially live cells. Sample files are provided: life.1.256x256.data and life.2.256x256.data (for a 256x256 board). Each line in the file gives the coordinates of a live cell on the board. For instance, the following entry:

 
          1,3
          
means that the cell at position [1, 3] is live in the initial state. You can also find many other sample patterns on the web (use your favorite search engine on "game of life" and/or "Conway").
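
For reference, reading this file might look like the following minimal sketch (the provided serial.C already includes input parsing, so the names here are only illustrative):

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <utility>
    #include <vector>

    // Read one "x,y" pair per line and return the list of live cells.
    std::vector<std::pair<int, int>> read_live_cells(const std::string& filename) {
      std::vector<std::pair<int, int>> cells;
      std::ifstream in(filename);
      std::string line;
      while (std::getline(in, line)) {
        std::stringstream ss(line);
        int x, y;
        char comma;
        if (ss >> x >> comma >> y)
          cells.emplace_back(x, y);
      }
      return cells;
    }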

Your program should take four command line arguments: the name of the data file, the number of generations to iterate, X_limit, and Y_limit. To be more specific, the command line of your program should be:


          ./life <input file name> <# of generations> <X_limit> <Y_limit>
          

The number of processes the program will run on is specified as part of the mpirun command with the -np argument.


          mpirun -np <number of processes> ./life <input file name> <# of generations> <X_limit> <Y_limit>
          

Output

Your program should write a single file called <input-file-name>.<no-of-generations>.csv (from one designated rank) that contains comma-separated values representing the board. There should be one line (containing the x coordinate, a comma, and then the y coordinate) for each occupied cell at the end of the last iteration. Sample output files are provided.

As in the sample output files, the lines in your output file should be sorted by the X and Y coordinates.
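
One possible (not required) way to produce this single sorted file is to gather every rank's live cells on the designated rank, sort them there, and write the CSV. A sketch, assuming each rank keeps its live cells as a flattened list of x,y values (all names are illustrative):

    #include <mpi.h>
    #include <algorithm>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // local_xy holds this rank's live cells flattened as x0,y0,x1,y1,...
    void write_csv(const std::vector<int>& local_xy, const char* out_name,
                   MPI_Comm comm) {
      int rank, nprocs;
      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &nprocs);

      // Gather the per-rank cell counts, then the cells themselves, on rank 0.
      int local_count = static_cast<int>(local_xy.size());
      std::vector<int> counts(nprocs), displs(nprocs);
      MPI_Gather(&local_count, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, comm);

      std::vector<int> all_xy;
      if (rank == 0) {
        int total = 0;
        for (int i = 0; i < nprocs; ++i) { displs[i] = total; total += counts[i]; }
        all_xy.resize(total);
      }
      MPI_Gatherv(local_xy.data(), local_count, MPI_INT, all_xy.data(),
                  counts.data(), displs.data(), MPI_INT, 0, comm);

      if (rank == 0) {
        // Sort by x, then y, and write one "x,y" line per live cell.
        std::vector<std::pair<int, int>> cells;
        for (std::size_t i = 0; i + 1 < all_xy.size(); i += 2)
          cells.emplace_back(all_xy[i], all_xy[i + 1]);
        std::sort(cells.begin(), cells.end());
        std::FILE* f = std::fopen(out_name, "w");
        for (const auto& c : cells) std::fprintf(f, "%d,%d\n", c.first, c.second);
        std::fclose(f);
      }
    }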

Note that all sample input and output data files are also available on zaratan in the /home/asussman/public/416/Project1-MPI/data directory.

The only output your program sends to standard output should come from process 0 and look like this:


        TIME: Min: 25.389 s Avg: 27.452 s Max: 41.672 s
        
where the Min, Avg, and Max times (in seconds) are calculated using MPI reduction operations over the individual time measurements of the "main" loop (over generations) on the different processes; the times shown are for the sample final.512x512.data input file.
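
A typical way to produce this line is to time the main loop on every process with MPI_Wtime and combine the per-process times with MPI_Reduce. A minimal sketch (names are illustrative):

    #include <mpi.h>
    #include <cstdio>

    // Reduce each process's elapsed time and print the summary from rank 0.
    void report_time(double local_elapsed, MPI_Comm comm) {
      int rank, nprocs;
      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &nprocs);
      double tmin, tmax, tsum;
      MPI_Reduce(&local_elapsed, &tmin, 1, MPI_DOUBLE, MPI_MIN, 0, comm);
      MPI_Reduce(&local_elapsed, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, comm);
      MPI_Reduce(&local_elapsed, &tsum, 1, MPI_DOUBLE, MPI_SUM, 0, comm);
      if (rank == 0)
        std::printf("TIME: Min: %.3f s Avg: %.3f s Max: %.3f s\n",
                    tmin, tsum / nprocs, tmax);
    }

    // Usage around the generation loop:
    //   double t0 = MPI_Wtime();
    //   ... loop over generations ...
    //   report_time(MPI_Wtime() - t0, MPI_COMM_WORLD);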

Parallel Version

Figure out how you will decompose the problem for parallel execution. Remember that MPI (at least the Open MPI implementation) does not always have great communication performance, so you will want to keep message passing infrequent. You will also need to think about static load balancing when you distribute the data (domain decomposition). You should use a 1D decomposition (over rows) and implement two different versions: one using blocking Send/Recv and the other using non-blocking Isend/Irecv.
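
With a 1D row decomposition, the per-generation communication reduces to exchanging one boundary ("ghost") row with each neighboring rank. Here is a minimal sketch of that exchange for the non-blocking version, assuming each rank stores its rows in a flat array with one ghost row above and below its owned rows (the layout and names are illustrative):

    #include <mpi.h>
    #include <vector>

    // local has (local_rows + 2) * Y_limit entries: row 0 is the top ghost row,
    // rows 1..local_rows are owned, and row local_rows + 1 is the bottom ghost row.
    void exchange_ghost_rows(std::vector<int>& local, int local_rows, int Y_limit,
                             MPI_Comm comm) {
      int rank, nprocs;
      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &nprocs);
      int up = rank - 1, down = rank + 1;  // ranks at the board edges have no neighbor

      MPI_Request reqs[4];
      int nreq = 0;
      if (up >= 0) {
        MPI_Irecv(&local[0], Y_limit, MPI_INT, up, 0, comm, &reqs[nreq++]);
        MPI_Isend(&local[1 * Y_limit], Y_limit, MPI_INT, up, 1, comm, &reqs[nreq++]);
      }
      if (down < nprocs) {
        MPI_Irecv(&local[(local_rows + 1) * Y_limit], Y_limit, MPI_INT, down, 1,
                  comm, &reqs[nreq++]);
        MPI_Isend(&local[local_rows * Y_limit], Y_limit, MPI_INT, down, 0,
                  comm, &reqs[nreq++]);
      }
      MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);
    }

The blocking version exchanges the same rows with MPI_Send/MPI_Recv; there you need to order the sends and receives carefully so that pairs of neighboring ranks do not deadlock.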

You can assume that X_limit and Y_limit will be powers of 2 and the number of processes you will be running on will also be a power of 2. You can also assume that you will be running the program on a minimum of 4 processes and X_limit and Y_limit are much larger than the number of processes.

When you need to distribute the initial state of the board to different processes, you can read the entire file on one process (say rank 0) and then send messages or do collective communication from rank 0 to all other processes.
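
One simple way to do this (a sketch, not a requirement) is to broadcast the full list of live cells from rank 0 and have every rank keep only the cells that fall in its own block of rows; all names below are illustrative:

    #include <mpi.h>
    #include <utility>
    #include <vector>

    // cells is filled from the input file on rank 0 only.
    std::vector<std::pair<int, int>> distribute_cells(
        const std::vector<std::pair<int, int>>& cells, int X_limit, MPI_Comm comm) {
      int rank, nprocs;
      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &nprocs);

      // Broadcast the number of live cells, then the flattened coordinates.
      int count = (rank == 0) ? static_cast<int>(cells.size()) : 0;
      MPI_Bcast(&count, 1, MPI_INT, 0, comm);

      std::vector<int> flat(2 * count);
      if (rank == 0)
        for (int i = 0; i < count; ++i) {
          flat[2 * i] = cells[i].first;
          flat[2 * i + 1] = cells[i].second;
        }
      MPI_Bcast(flat.data(), 2 * count, MPI_INT, 0, comm);

      // Keep only the cells whose x coordinate falls in this rank's rows.
      int rows_per_rank = X_limit / nprocs;
      int x_start = rank * rows_per_rank, x_end = x_start + rows_per_rank;
      std::vector<std::pair<int, int>> mine;
      for (int i = 0; i < count; ++i)
        if (flat[2 * i] >= x_start && flat[2 * i] < x_end)
          mine.emplace_back(flat[2 * i], flat[2 * i + 1]);
      return mine;
    }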

What to Submit

You must submit the following files and no other files: your source code, your Makefile, and your report (writeup).

You should put the code, Makefile, and report in a single directory (named LastName-FirstName-assign1), compress it into a .tar.gz archive (LastName-FirstName-assign1.tar.gz), and upload that archive to Gradescope.

If you want to try a bigger board to see whether you can get better speedups with more processes, try running on the input file life.1024x1024.data.

Tips

Grading

The project will be graded as follows:

Component                           Percentage
Runs correctly with 4 processes     30 (15% each version)
Runs correctly with 16 processes    40 (20% each version)
Speedup with 4 processes            10 (non-blocking version)
Speedup with 16 processes           10 (non-blocking version)
Writeup                             10
NOTE: If your program does not compile when submitted on Gradescope, you get 0 points. If your program does not run correctly, you do NOT get any points for performance/speedup.

The speedup will be computed using the provided serial code as the baseline. You will get full points as long as your speedup is within a factor of two (2x) of the expected speedup. If your performance is more than 2x worse than the expected speedup, you will receive credit proportional to your speedup.
