Introduction to Parallel Computing (CMSC416)

Assignment 2: Performance Tools

Due: October 13, 2023 @ 11:59 PM Eastern Time

The purpose of this programming assignment is to gain experience in using performance analysis tools for parallel programs. For this assignment, you will run an existing parallel code, LULESH, and analyze its performance using HPCToolkit and Hatchet.

Downloading and building LULESH

You can get LULESH by cloning its git repository as follows:

        git clone https://github.com/LLNL/LULESH.git
You can use CMake to build LULESH on zaratan by following these steps:

        mkdir build
        cd build
        cmake ..
        make
This should produce an executable lulesh2.0 in the build directory.

Running LULESH

Let's say you want to run LULESH on 8 processes for 10 iterations/timesteps (-i sets the number of iterations and -p prints progress). This would be the mpirun line:

        mpirun -np 8 ./lulesh2.0 -i 10 -p
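In the default (weak scaling) mode, each rank owns its own s x s x s sub-mesh, so the total problem size grows with the number of ranks. As a quick sanity check of that arithmetic (pure Python; the per-rank default size of -s 30 is the only assumption):

```python
# Weak scaling: the per-rank mesh side s stays fixed, so the total
# number of elements grows linearly with the process count p.
s = 30  # LULESH's default per-process problem size (-s 30)
for p in (1, 8, 27):
    print(f"{p:2d} ranks -> {p * s**3:,} total elements")
```

This is why the runtime per iteration should stay roughly flat in Tasks 1-2 if the code scaled perfectly; any growth you see is parallel overhead.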

Using HPCToolkit and Hatchet

HPCToolkit is available on zaratan via the hpctoolkit/gcc module. You can use HPCToolkit to collect profiling data for a parallel program in three steps.

  1. Step I: Running the code (LULESH) with hpcrun:
    mpirun -np <num_ranks> hpcrun ./exe <args>
    This will generate a measurements directory.
  2. Step II: Creating an hpcstruct file (used in Step III) from the measurements directory:
    hpcstruct <measurements_directory>
  3. Step III: Post-processing the measurements directory generated by hpcrun:
    hpcprof <measurements-directory>
    This will generate a database directory.
Hatchet can be used to analyze the database directory generated by hpcprof using its from_hpctoolkit reader.

You can use the installed Hatchet on zaratan using:

        module load python
        source /scratch/zt1/project/cmsc416/shared/hatchetenv/bin/activate
        export PYTHONPATH=/scratch/zt1/project/cmsc416/shared/hatchet:$PYTHONPATH
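Once a database is read with `gf = ht.GraphFrame.from_hpctoolkit("<database-directory>")`, the per-node metrics live in a pandas DataFrame, gf.dataframe. Below is a minimal sketch of the kind of analysis your scripts will do, using a hand-made stand-in DataFrame in place of a real database (the function names and times are made up; real column names depend on the metrics hpcrun collected, so inspect gf.dataframe.columns on your own data):

```python
import pandas as pd

# Stand-in for gf.dataframe; in your script this would come from
#   gf = ht.GraphFrame.from_hpctoolkit("<database-directory>")
# All names and numbers below are hypothetical.
df = pd.DataFrame({
    "name": ["CalcHourglassControlForElems", "CalcFBHourglassForceForElems",
             "EvalEOSForElems", "MPI_Allreduce"],
    "time": [12.4, 9.8, 6.1, 0.7],  # exclusive time in seconds (made up)
})

# Hottest functions: sort by exclusive time, descending.
hottest = df.sort_values(by="time", ascending=False)
print(hottest.head(3)["name"].tolist())
```

On a real GraphFrame you can also print the calling-context tree with print(gf.tree()) to see where the time sits in context.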

Assignment Tasks

  1. Task 1: You will run LULESH on 1, 8, and 27 MPI processes in the default (weak scaling) mode (with the parameters suggested above) and compare the performance of the various executions. Identify the functions/statements where the code spends most of its time. Identify the functions/code regions that scale poorly as you run on more processes.
  2. Task 2: You will run LULESH on 1, 8 and 27 MPI processes with the additional argument -s 45 and compare the performance of these executions with those in the default mode. Identify the functions/code regions where the code spends disproportionately more time compared to the default mode in task 1.
  3. Task 3: You will run LULESH on 1, 8, and 27 MPI processes in the strong scaling mode (use the additional arguments -s 45, -s 22, and -s 15, respectively) and compare the performance of the various executions. Identify the functions/code regions that scale poorly as you run on more processes in this strong scaling mode. Compare the results with the functions you identified in task 1.
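The strong-scaling -s values in Task 3 come from holding the total mesh fixed: LULESH lays its p ranks out in a cube, so each rank's side shrinks by the cube root of p. A quick check of that arithmetic (pure Python, no LULESH needed):

```python
# Strong scaling: total mesh side stays ~45, per-rank side is 45 / p^(1/3).
total_side = 45
for p in (1, 8, 27):
    per_rank = round(total_side / p ** (1 / 3))
    print(f"{p:2d} ranks -> -s {per_rank}")
```

With the total work fixed, the time per iteration should ideally drop in proportion to p; functions whose time does not drop are your strong-scaling bottlenecks.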

What to Submit

You must submit the following files and no other files:

  • Python scripts that use Hatchet for the analyses (one script per task).
  • Database directory generated on 1 process for Task 1 renamed to lulesh-default-1process.
  • A report that describes what you did and identifies the main bottlenecks in the source code in the various scenarios above (it does not need to exceed one page per task). Which function consumes most of the time?
You should put the code and report in a single directory (named LastName-FirstName-assign2), compress it to a .tar.gz (LastName-FirstName-assign2.tar.gz), and upload that to Gradescope. Do not include irrelevant files in the tarball. Replace LastName and FirstName with your last and first name, respectively.


Notes

  • Don't follow the build and run instructions in this assignment blindly: the goal is for you to learn how to compile and run parallel code, and how to use HPCToolkit and Hatchet.
  • Helpful resources: HPCToolkit user manual and Hatchet User Guide
  • If you have questions about using these tools or Python and pandas, try using Google first.


The project will be graded as follows:

Component     Percentage
Analysis 1    30
Analysis 2    30
Analysis 3    30
Writeup       10