COMP4300/8300 2017 - Tutorial/Laboratory 1

Introduction to the NCI Raijin System and MPI

The aim of this session is to get up and running on the NCI Raijin system and to give you an introduction to MPI.

To do this session, you need to have obtained an NCI login ID and password, by registering for an account (well) beforehand.

See the post on Piazza.

The Raijin system is supported by the National Computational Infrastructure program. Staff at Australian Universities are allocated time on this system through a competitive process for use in their research projects. We are extremely fortunate to have been given access to this system for this course. Please use the machine with respect. Note that it is NOT administered by the CS Technical Support Group.

TIP

There is comprehensive documentation for the NCI Raijin system available here. You should familiarize yourself with the content. It will be referenced in what follows.

Log on to the Raijin system using your user ID: ssh raijin.nci.org.au -l <username>

Each user has a file space quota. CPU time is also limited, but collectively over the entire group, which means a single user can exhaust all of the group's time. Please monitor your usage of this machine.

Read the section of the userguide labelled Accounting. Execute the following commands on Raijin:

nci_account

lquota

nf_limits

Example Programs

A tar file containing all the programs for this session is available here. Save this tar file on your local desktop and then transfer it to the Raijin system. Thus, from a terminal window on your desktop, in the directory where you have saved the prac1.tar file, execute the scp command: scp prac1.tar <username>@raijin.nci.org.au:~

then, in a terminal window that is logged on to Raijin, untar the file: tar -xvf prac1.tar

Modules

Raijin uses modules to provide different user environments. This allows, for instance, users to access old versions of libraries or compilers. Take a quick look at the Environment Modules section of the user manual.

You can both load and unload a module. For this prac we will run with the default environment.
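
For example, you can inspect and change your environment with commands along the following lines (the module name shown is indicative; check module avail for the versions actually installed):

module list                  # show the modules currently loaded
module avail openmpi         # list the Open MPI modules installed
module load openmpi/1.6.3    # load a specific version
module unload openmpi/1.6.3  # unload it again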

Editing files

Standard UNIX editors are installed, including nano, vim and emacs. For graphical editing, you may choose to forward X Windows to your desktop, i.e. ssh raijin.nci.org.au -l <username> -X, which will allow you to run emacs as a graphical editor. Alternatively, kate will allow you to edit files over Secure FTP. For example, from your desktop, open the file sftp://<username>@raijin.nci.org.au/home/444/<username>/prac1/mpiexample1.c.

mpiexample1.c

Open the file mpiexample1.c.

This program is just to get you started. Note that there are three basic requirements for all MPI codes:

#include "mpi.h"
MPI_Init(&argc, &argv); 
MPI_Finalize(); 

You can find the header file in /apps/openmpi/1.6.3/include/mpi.h. (Do you know what version of OpenMPI you are using now?) Take a look at it. It provides the definition of MPI_COMM_WORLD in a complicated fashion involving a global structure that is initialized in another function in the library (it used to be easier!).
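
Putting the three requirements together, a minimal MPI program looks something like the following sketch (mpiexample1.c itself will differ in its details):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* first executable statement */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id (rank) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                         /* last executable statement */
    return 0;
}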

MPI_Init() and MPI_Finalize() should be the first and last executable statements in your code, basically because it is not clear what happens before or after calls to these functions! man MPI_Init says:

The MPI Standard does not say what a program can do before an MPI_Init or after an MPI_Finalize. In the Open MPI implementation, it should do as little as possible. In particular, avoid anything that changes the external state of the program, such as opening files, reading standard input, or writing to standard output.


If you want to know what an MPI function does, you can read its man page (e.g. man MPI_Send) or consult the MPI standard documents.

Note that at the moment we are only interested in MPI-1.


Compile the code: make mpiexample1

This will result in:

mpicc -c mpiexample1.c
mpicc -o mpiexample1 mpiexample1.o 

mpicc is a wrapper that ends up calling a standard C compiler (in this case gcc). Do mpicc -v mpiexample1.c to see all the details. mpicc also ensures that the program links with the MPI library.

Run the code interactively by typing ./mpiexample1.

You should find the executable runs using just one process. With some MPI implementations the code will fail because you have not defined the number of processes to be used. With OpenMPI this is done using the command mpirun.

Try running the code interactively again but this time by typing mpirun -np 2 ./mpiexample1.

Now try: mpirun -np 6 ./mpiexample1.

Try using -np 20; it will fail - why? What is the maximum number of MPI processes you can create interactively?

If you run this program enough times you may see that the order in which the output appears changes. Output to stdout is line buffered, but beyond that it can appear in any order.

mpirun has a host of different options; do man mpirun for more information. The -np option sets the number of processes that you wish to spawn.

So far we have only been running our code on one of the Raijin nodes. In total Raijin has 3592 nodes (and 57,472 cores). Six of these are reserved for interactive logins; the remaining nodes are only available via a batch queuing system. (Which of the six interactive nodes are you logged on to? Run the command hostname if unsure.) Go back to the userguide and read the section entitled Job Submission and Scheduling, including the subsections Queue Structure, PBSPro Scheduling and PBSPro Basics.

Now we will run the same job, but using the PBS batch queuing system. To submit a job to the queuing system we have to write a batch script. An example of this is given in the file batch_job; take a look at it. Lines starting with #PBS are directives to the queuing system, informing it of what resources you require and how your job should be executed. We use one of these lines to set the number of processors you want to use. Very important is the line that limits the walltime, which looks something like:

#PBS -l walltime=00:10:00

Please ensure you limit the walltime similarly for any batch job that you submit. After all this setup information, the script runs the job by issuing the mpirun command, taking the number of processes from the number of processors allocated by the queuing system.
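
As a rough guide, a PBS batch script for this kind of job has the following shape (the directives and values here are illustrative; batch_job itself is the authoritative example):

#!/bin/bash
#PBS -P <project>             # project to charge the time to
#PBS -q express               # queue to submit to
#PBS -l walltime=00:10:00     # wall-clock limit: keep this small
#PBS -l ncpus=16              # number of processors requested
#PBS -l wd                    # run in the directory the job was submitted from

mpirun ./mpiexample1          # no -np: mpirun uses the processors PBS allocated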

To submit your job to the queuing system, run qsub batch_job.

It will respond with something like

$ qsub batch_job
9485588.r-man2

where 9485588.r-man2 is the id of the job in the queuing system. To see what is happening on the batch queue, run qstat:

aaa444@raijin:~/prac1> qstat

Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
9266041.r-man2    arg.62A          ahf564            285:28:5 R normal-node

                    ---lots of jobs---

9485672.r-man2    batch_job        aaa444                   0 Q express-node

This gives a long list of jobs. In the above the top job is running as indicated by the R in the S column, while my job is queued as indicated by the Q.

Now compare the result of running nqstat.

To track the progress of only your job, try qstat 9485672. To track all of your current jobs, use qstat -u $USER.

To delete a job from the queue, run qdel 9485672.r-man2.

When your job completes, the combined standard output and error will be put in a file, in this case named batch_job.o9485672. Inspect this file.

Make sure you are happy with the above since you will need to use the batch system later.

Exercise 1

Modify the code in mpiexample1.c to also print out the name of the node each process is executing on. Do this by using the system call:

gethostname(name, sizeof(name));
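
A sketch of the change, assuming a buffer called name (the buffer name and size are your choice; note that gethostname() needs the unistd.h header):

#include <unistd.h>                      /* for gethostname() */

char name[256];
gethostname(name, sizeof(name));
printf("process %d is running on %s\n", rank, name);
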
  1. Run your modified version of mpiexample1 interactively. What nodes of the cluster are being used?
  2. Repeat the above, but now use the batch file. What nodes are now being used?
  3. Modify the batch script so that your MPI code has enough processes to run on at least two different nodes of the Raijin system. After you know how to do this, return to using one node.

Exercise 2

Throughout the course we will be measuring the elapsed time taken to run our parallel jobs. So we start by assessing how good our various timing functions are.

  1. What is the difference between timer overhead and timer resolution?
  2. We can assess the overhead and resolution of a timer by calling it twice in quick succession, printing the difference, and repeating this whole process many times (a sketch of this pattern is given after this list). Why is this? (See the lecture on performance models.)
  3. Code that does the above for the gettimeofday() system call is provided in walltime.c. Compile and run this, and from the output estimate the overhead and resolution of gettimeofday() (if you are not familiar with this system call, do man gettimeofday).
  4. MPI provides its own timing routine, MPI_Wtime() (do man MPI_Wtime). Insert extra code to test the resolution of this routine. What do you estimate the resolution to be?
  5. What does the function MPI_Wtick() do? What value does it report?
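
The back-to-back pattern from item 2 looks something like the following self-contained sketch (walltime.c's actual structure and names will differ):

#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval t0, t1;
    int i;
    for (i = 0; i < 10; i++) {
        gettimeofday(&t0, NULL);
        gettimeofday(&t1, NULL);          /* second call straight after the first */
        long dus = (long)(t1.tv_sec - t0.tv_sec) * 1000000L
                 + (t1.tv_usec - t0.tv_usec);
        printf("delta = %ld us\n", dus);  /* smallest non-zero delta ~ resolution */
    }
    return 0;
}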

Exercise 3

In mpiexample2.c, each process allocates an integer buffer of size BUFFLEN (= 128 integers). Each buffer is initialized to the rank of the process. Process 0 sends its buffer to process 1 and vice versa, i.e. process 0 sends a message of zeros and receives a message of 1s, while process 1 does the opposite.
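
The exchange described above presumably follows the classic pattern sketched here (this is not the literal contents of mpiexample2.c): each process does a blocking send of its own buffer and then receives its partner's.

int other = 1 - rank;                    /* the partner process (rank 0 or 1) */
int sbuf[BUFFLEN], rbuf[BUFFLEN], i;
for (i = 0; i < BUFFLEN; i++)
    sbuf[i] = rank;                      /* buffer initialized to own rank */
MPI_Send(sbuf, BUFFLEN, MPI_INT, other, 0, MPI_COMM_WORLD);
MPI_Recv(rbuf, BUFFLEN, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);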

  1. Compile and run the code interactively using two processes. Verify that it works as you expect.
  2. Now change the code so that BUFFLEN is 1024. Attempt to run the code. You should find that it fails to complete. Why? Fix the code so that it completes for any value of BUFFLEN.

Exercise 4

mpiexample3.c is a basic pingpong code. Run the code and make sure it works. After doing so several times, do you observe a potential problem for timing this operation?
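
The core of a pingpong is roughly the following fragment (a sketch, not the literal mpiexample3.c; rank is obtained earlier via MPI_Comm_rank()):

int buf[64] = {0};                       /* 64-int message */
double t0 = MPI_Wtime();
if (rank == 0) {                         /* ping: send, then wait for the echo */
    MPI_Send(buf, 64, MPI_INT, 1, 0, MPI_COMM_WORLD);
    MPI_Recv(buf, 64, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else if (rank == 1) {                  /* pong: receive, then echo it back */
    MPI_Recv(buf, 64, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Send(buf, 64, MPI_INT, 0, 0, MPI_COMM_WORLD);
}
double t1 = MPI_Wtime();                 /* t1 - t0 is one round-trip time */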

  1. Currently the code only does pingpong between processes 0 and 1 for a message containing 64 integers and measures the time using MPI_Wtime(). Modify the code so that it runs with a message length len from 1 to 4*1024*1024 integers in powers of 4 (i.e. 1, 4, 16, 64, 256, 1024, ...). Also rectify the timing problem. Have the code print out the average time and the corresponding bandwidth (on process 0). As always, test the code interactively first. Are the results what you expected?
  2. What latency did you measure and what peak bandwidth? How does the bandwidth change with message length?
  3. Further modify the code so that it measures the pingpong time between process 0 and all other processes in MPI_COMM_WORLD for messages of 1, 1024 and 1048576 integers.
  4. Run your code on the batch system using 32 CPUs and complete the following table:
                          Time for pingpong between two processes
    Message size (ints)   within a node          between two nodes
    -------------------   --------------------   --------------------
    1
    1024
    1048576
  5. What results did you expect to see? Are the results in line with these expectations? If not why not?