In the previous lesson, we went over the essentials of collective communication and covered the most basic collective routine, MPI_Bcast. In this lesson, we are going to expand on collective communication with two very important routines: MPI_Scatter and MPI_Gather. We will also cover a variant of MPI_Gather, known as MPI_Allgather. The code for this tutorial is available here.
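As a preview of the pattern this lesson builds toward, here is a minimal sketch (not the lesson's full example) in which the root process scatters one integer to every process, each process increments its value, and the root gathers the results back:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  int world_rank, world_size;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  // Only the root needs a send buffer; the other processes pass NULL
  int* send_buf = NULL;
  if (world_rank == 0) {
    send_buf = (int*)malloc(sizeof(int) * world_size);
    for (int i = 0; i < world_size; i++) send_buf[i] = i;
  }

  // Each process receives exactly one int from the root's buffer
  int my_value;
  MPI_Scatter(send_buf, 1, MPI_INT, &my_value, 1, MPI_INT, 0, MPI_COMM_WORLD);
  my_value += 1;  // every process modifies its piece of the data

  // The root collects one int back from every process
  int* recv_buf = NULL;
  if (world_rank == 0) recv_buf = (int*)malloc(sizeof(int) * world_size);
  MPI_Gather(&my_value, 1, MPI_INT, recv_buf, 1, MPI_INT, 0, MPI_COMM_WORLD);

  if (world_rank == 0) {
    for (int i = 0; i < world_size; i++) printf("%d ", recv_buf[i]);
    printf("\n");
    free(send_buf);
    free(recv_buf);
  }

  MPI_Finalize();
  return 0;
}
```

Note that the send count and receive count arguments describe the amount of data sent to or received from each process, not the total size of the buffer.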
So far in the beginner MPI tutorial, we have examined point-to-point communication, which is communication between two processes. This lesson is the start of the collective communication section. Collective communication is a method of communication that involves the participation of all processes in a communicator. In this lesson, we will discuss the implications of collective communication and go over a standard collective routine: broadcasting. The code for the lesson can be downloaded here.
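To give a taste of what a collective call looks like, here is a minimal broadcast sketch (the value 42 is arbitrary) in which the root process, rank 0, sends a single integer to every process in MPI_COMM_WORLD:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  int data = 0;
  if (world_rank == 0) {
    data = 42;  // only the root holds the value initially
  }

  // Every process in the communicator calls MPI_Bcast;
  // afterwards, all of them hold the root's value.
  MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);
  printf("Process %d has data %d\n", world_rank, data);

  MPI_Finalize();
  return 0;
}
```

Notice that every process makes the same MPI_Bcast call; the routine determines from the root argument whether a given process is the sender or a receiver.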
It’s time to go through an application example using some of the concepts introduced in the sending and receiving tutorial and the MPI_Probe and MPI_Status lesson. The code for the application can be downloaded here. The application simulates a process which I refer to as “random walking.” The basic problem definition of a random walk is as follows: given a Min, Max, and random walker W, make walker W take S random walks of arbitrary length to the right. If the walker goes out of bounds, it wraps back around. W can only move one unit to the right or left at a time.
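Before looking at the parallel version, it may help to see the serial mechanics. The sketch below is purely illustrative (the function and variable names are mine, not the application's, and it assumes the domain is [min, max)): a walker takes unit steps to the right and wraps back to the lower bound when it passes the upper bound.

```c
// Illustrative serial sketch only; names are hypothetical.
// The walker takes num_steps unit steps to the right within [min, max)
// and wraps back around when it passes the upper bound.
int take_walk(int min, int max, int start, int num_steps) {
  int position = start;
  for (int step = 0; step < num_steps; step++) {
    position++;            // one unit to the right
    if (position >= max) {
      position = min;      // out of bounds: wrap back around
    }
  }
  return position;
}
```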
Although the application itself is very basic, the parallelization of random walking can simulate the behavior of a wide variety of parallel applications. More on that later. For now, let’s go over how to parallelize the random walk problem.
In the previous lesson, I discussed how to use MPI_Send and MPI_Recv to perform standard point-to-point communication, but I only covered how to send messages whose length was known beforehand. Although it is possible to send the length of the message as a separate send/recv operation, MPI natively supports dynamic messages with just a few additional function calls. I will be going over how to use these functions in this lesson. The code for this tutorial is located here.
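As a preview of the pattern, the sketch below has rank 0 send a random number of ints to rank 1, which uses MPI_Probe and MPI_Get_count to size its receive buffer before calling MPI_Recv (the maximum of 100 ints is an arbitrary choice for the example):

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  if (world_rank == 0) {
    srand(time(NULL));
    int numbers[100];
    int count = (rand() % 100) + 1;  // send a random amount of data
    MPI_Send(numbers, count, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (world_rank == 1) {
    MPI_Status status;
    // Block until a message from rank 0 with tag 0 is pending
    MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
    // Query the status for the number of ints in the pending message
    int count;
    MPI_Get_count(&status, MPI_INT, &count);
    // Allocate exactly enough space, then actually receive the message
    int* buf = (int*)malloc(sizeof(int) * count);
    MPI_Recv(buf, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Rank 1 dynamically received %d ints from rank 0\n", count);
    free(buf);
  }

  MPI_Finalize();
  return 0;
}
```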
Sending and receiving are the two foundational concepts of MPI. Almost every single function in MPI can be implemented with basic send and receive calls. In this lesson, I will discuss how to use MPI’s blocking sending and receiving functions, and I will also go over other basic concepts associated with transmitting data using MPI. The code for this tutorial is available here.
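To set expectations for the lesson, a blocking exchange looks roughly like this minimal sketch, in which process 0 sends a single integer to process 1 (run it with at least two processes):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  int number;
  if (world_rank == 0) {
    number = -1;
    // Blocking send: returns once the buffer is safe to reuse
    MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (world_rank == 1) {
    // Blocking receive: returns once the message has arrived
    MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Process 1 received number %d from process 0\n", number);
  }

  MPI_Finalize();
  return 0;
}
```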
In this lesson, I will show you a basic MPI Hello World application and also discuss how to run an MPI program. The lesson will cover the basics of initializing MPI and running an MPI job across several processes. This lesson is intended to work with installations of MPICH2 (specifically 1.4). If you have not installed MPICH2, please refer back to the installing MPICH2 lesson.
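For reference, a minimal MPI Hello World program looks roughly like the following:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);  // set up the MPI environment

  int world_size, world_rank;
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);  // total number of processes
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);  // this process's rank

  printf("Hello world from rank %d out of %d processes\n",
         world_rank, world_size);

  MPI_Finalize();  // clean up before exiting
  return 0;
}
```

Assuming the file is saved as hello_world.c, it could be compiled and launched with the MPICH2 wrappers, e.g. `mpicc hello_world.c -o hello_world` followed by `mpirun -n 4 ./hello_world`.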
MPI is simply a standard that others follow in their implementations. Because of this, there is a wide variety of MPI implementations out there. One of the most popular implementations, MPICH2, will be used for all of the examples provided through this site. Users are free to use any implementation they wish, but only instructions for installing MPICH2 will be provided. Furthermore, the scripts and code provided for the lessons are only guaranteed to execute and run with the latest version of MPICH2.
MPICH2 is a widely used implementation of MPI that is developed primarily by Argonne National Laboratory in the United States. The main reason for choosing MPICH2 over other implementations is simply my familiarity with the interface and my close relationship with Argonne National Laboratory. I also encourage others to check out OpenMPI, which is another widely used implementation.
The Message Passing Interface (MPI) first appeared as a standard in 1994 for performing distributed-memory parallel computing. Since then, it has become the dominant model for high-performance computing, and it is used widely in research, academia, and industry.
The functionality of MPI is extremely rich, offering the programmer the ability to perform point-to-point communication, collective communication, one-sided communication, parallel I/O, and even dynamic process management. These terms probably sound quite strange to a beginner, but by the end of all of the tutorials, the terminology will be commonplace.