SURF Research Boot Camp 2018-11-02
Extras - Calculating π

This is an exercise from the Extras part of the Tutorial SURF Research Boot Camp 2018-11-02.

In this advanced part of our HPC Cloud tutorial we ask you to play around with a parallel processing technique on a shared-memory system. For this purpose, we will be running a Monte Carlo simulation to calculate an approximation of the value of π.
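
If the Monte Carlo approach is new to you, the idea is to throw random points into the unit square and count how many land inside the quarter circle of radius 1; four times that fraction approaches π. Purely as an illustration (this is not the code you will download below), a minimal serial sketch in C could look like this:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long samples = 10000000;   /* number of random points to throw */
    long inside = 0;

    srand(42);                       /* fixed seed so runs are repeatable */
    for (long i = 0; i < samples; i++) {
        double x = (double)rand() / RAND_MAX;   /* random point in the unit square */
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)               /* does it fall inside the quarter circle? */
            inside++;
    }

    printf("pi is approximately %f\n", 4.0 * (double)inside / (double)samples);
    return 0;
}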

NOTE:

You are now in the advanced section of the workshop. You have your laptop and an Internet connection, so we expect you to be able to find out more on your own about things that we explain only briefly or not at all but which you think you need. For example, if we were you, at this point we would already have googled several things:

  1. Monte Carlo simulation
  2. Monte Carlo pi
  3. Parallel processing
  4. Shared-memory
  5. OpenMP cheatsheet

If you are unable to find out enough on your own, please call any of the instructors for help :-).

One of the advantages of using an HPC system like the HPC Cloud is that you can run your program over multiple cores and multiple machines. That way, your compute work is spread among multiple processing units (cores) at the same time, which (hopefully) means you get your results much sooner than when you run your program on a small computer.

We now propose two exercises so that you can see this effect. In the first exercise, you will run a program in a single thread and see how long it takes. Then you will run it on more cores, and you will probably see that it takes less time. We recommend that you run each of the versions several times (e.g. 10 times) and average the measured times.

a) Setting up a VM for the exercise

# Install the compiler toolchain (gcc, make, ...)
sudo apt-get install build-essential
# Optionally verify gcc and GNU make installation and version
gcc -v
make -v
# Go to your home directory, download the exercise code and unpack it
cd
wget https://doc.hpccloud.surfsara.nl/bootcamp-20181102/code/gridpi-mp.tar
tar -xvf gridpi-mp.tar
cd gridpi-mp/
ls -l

b) Serial runs

# Compile the serial version and run it
gcc -std=c99 -Wall -Werror -pedantic gridpi-serial.c -o gridpi-serial
./gridpi-serial
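
We do not show the contents of gridpi-serial.c here and the real file may well differ, but a grid-based estimate of π typically lays an N x N grid of points over the unit square and counts how many fall inside the quarter circle. A rough sketch of that idea, using a POINTS_ON_AXIS constant like the one mentioned in exercise d) (its value here is only an assumption):

#include <stdio.h>

#define POINTS_ON_AXIS 10000   /* grid resolution; the real file may use a different value */

int main(void)
{
    long inside = 0;

    /* visit every point of the POINTS_ON_AXIS x POINTS_ON_AXIS grid */
    for (long i = 0; i < POINTS_ON_AXIS; i++) {
        for (long j = 0; j < POINTS_ON_AXIS; j++) {
            double x = (i + 0.5) / POINTS_ON_AXIS;
            double y = (j + 0.5) / POINTS_ON_AXIS;
            if (x * x + y * y <= 1.0)   /* inside the quarter circle? */
                inside++;
        }
    }

    double total = (double)POINTS_ON_AXIS * (double)POINTS_ON_AXIS;
    printf("pi is approximately %f\n", 4.0 * inside / total);
    return 0;
}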

Food for brain b1:

  • Can you make a batch of several runs (e.g. 10) and calculate the average runtime and standard deviation? (One possible approach is sketched below.)
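
You can script the batch of runs from the shell and time each one, or do the measurement inside the program itself. As one possible in-program approach, the sketch below times a stand-in workload RUNS times and prints the mean and standard deviation; compute_pi() is only a placeholder for the actual calculation, and you need to add -lm when compiling because of sqrt():

#define _POSIX_C_SOURCE 199309L   /* expose clock_gettime() when compiling with -std=c99 */
#include <stdio.h>
#include <math.h>
#include <time.h>

#define RUNS 10

/* stand-in workload; replace with the calculation from the exercise */
static double compute_pi(void)
{
    const long n = 5000;
    long inside = 0;
    for (long i = 0; i < n; i++)
        for (long j = 0; j < n; j++) {
            double x = (i + 0.5) / n;
            double y = (j + 0.5) / n;
            if (x * x + y * y <= 1.0)
                inside++;
        }
    return 4.0 * inside / ((double)n * n);
}

int main(void)
{
    double sum = 0.0, sumsq = 0.0;

    for (int r = 0; r < RUNS; r++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        compute_pi();                             /* the work being measured */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double t = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        sum += t;
        sumsq += t * t;
    }

    double mean = sum / RUNS;
    double stddev = sqrt(sumsq / RUNS - mean * mean);
    printf("mean %.3f s, stddev %.3f s over %d runs\n", mean, stddev, RUNS);
    return 0;
}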

c) Running the OpenMP optimised alternative version

# Compile the OpenMP version (note the extra -fopenmp flag) and run it
gcc -std=c99 -Wall -Werror -pedantic -fopenmp gridpi-mp-reduction.c -lm -o gridpi-mp-reduction
./gridpi-mp-reduction
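
Again, the downloaded gridpi-mp-reduction.c may differ in its details, but the core OpenMP idea is that the iterations of the outer loop are divided over threads and each thread's private count is combined at the end through a reduction clause. A sketch of that pattern (the grid itself is the same as in the serial sketch above):

#include <stdio.h>

#define POINTS_ON_AXIS 10000   /* grid resolution; the real file may use a different value */

int main(void)
{
    long inside = 0;

    /* each thread handles part of the i range with its own private count;
       reduction(+:inside) adds the private counts together afterwards */
    #pragma omp parallel for reduction(+:inside)
    for (long i = 0; i < POINTS_ON_AXIS; i++) {
        for (long j = 0; j < POINTS_ON_AXIS; j++) {
            double x = (i + 0.5) / POINTS_ON_AXIS;
            double y = (j + 0.5) / POINTS_ON_AXIS;
            if (x * x + y * y <= 1.0)
                inside++;
        }
    }

    double total = (double)POINTS_ON_AXIS * (double)POINTS_ON_AXIS;
    printf("pi is approximately %f\n", 4.0 * inside / total);
    return 0;
}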

Food for brain c1:

  • Can you make a batch of several runs (e.g.: 10) and calculate the average runtime and standard deviation?
  • How many threads are running? (The sketch after this list shows one way to check.)
  • Can you explain the differences in the code between this file and the one from exercise b)? In particular:
    • What runs in parallel, and what does not?
    • Which variables are used where?
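
About the number of threads: OpenMP normally starts one thread per core it can see, and you can override this by setting the OMP_NUM_THREADS environment variable before you start the program. If you want to check from inside a program, a tiny sketch could look like this:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* omp_get_num_threads() only reports the team size inside a parallel region */
    #pragma omp parallel
    {
        #pragma omp single
        printf("running with %d threads\n", omp_get_num_threads());
    }
    return 0;
}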

d) More cores

Food for brain d1:

  • How do the times with more cores compare to those you measured before?
    (hint: make a table with one row per exercise, one column for the average time and standard deviation you measured before, and a second column for what you measure now)
  • Play around with the parameters in the source files (e.g. POINTS_ON_AXIS).
    (hint: add an extra column to the table for each parameter you change)
  • Does the performance scale for all of the implementations? Is there a point beyond which further scaling ceases to make sense? Can you explain why? (The sketch after this list shows one way to sweep the thread count.)
  • Can you draw some curves (graphs) from the measurements you have gathered? How do they compare?
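
If you would rather collect the whole scaling table from a single run, one possible approach is to sweep the thread count with omp_set_num_threads() and time each configuration with omp_get_wtime(); this is only a sketch, and compute_pi() is again a stand-in workload to be replaced by the actual calculation:

#include <stdio.h>
#include <omp.h>

/* stand-in workload with the same structure as the reduction sketch above;
   replace it with the calculation from the exercise */
static double compute_pi(void)
{
    const long n = 10000;
    long inside = 0;

    #pragma omp parallel for reduction(+:inside)
    for (long i = 0; i < n; i++) {
        for (long j = 0; j < n; j++) {
            double x = (i + 0.5) / n;
            double y = (j + 0.5) / n;
            if (x * x + y * y <= 1.0)
                inside++;
        }
    }

    return 4.0 * inside / ((double)n * n);
}

int main(void)
{
    /* try 1, 2, 4, ... threads up to the number of processors OpenMP sees */
    for (int threads = 1; threads <= omp_get_num_procs(); threads *= 2) {
        omp_set_num_threads(threads);

        double start = omp_get_wtime();
        double pi = compute_pi();
        double elapsed = omp_get_wtime() - start;

        printf("%2d threads: pi = %f, %.3f s\n", threads, pi, elapsed);
    }
    return 0;
}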

NOTE: Do not forget to shut down your VM when you are done with your performance tests.

Next: Detach from work

You have now completed the part of the Tutorial SURF Research Boot Camp 2018-11-02 in which you experience scaling your compute capacity. Please continue with the part Detach from work.

NOTE:

Before you move to the next sections, remember to shut your VMs down.