MPI Introduction

MPI stands for Message Passing Interface. It is a specification for the developers and users of message passing libraries; it is not a library itself. In the '80s and early '90s, distributed memory systems were becoming common, and MPI was developed as a standard for message passing: how data should be shared between cooperating processes. As shared memory processors were put on networks to make hybrid systems, new versions of MPI libraries were created to handle the different architectures. Even though it runs on different systems nowadays (distributed memory, shared memory, hybrid), MPI *is* a distributed model, in which all parallelism is explicitly coded.

  1. MPI_Init

    USAGE: int MPI_Init(int *argc, char ***argv)

    EXAMPLE: ierr = MPI_Init (&argc, &argv);

    This is required to initialize the MPI execution environment; every process must call it exactly once, before any other MPI routine.

    Related: int MPI_Initialized( int *flag ), which sets *flag to true if MPI_Init has been called

  2. MPI_Finalize

    USAGE: int MPI_Finalize()

    EXAMPLE: ierr = MPI_Finalize();

    MPI_Finalize terminates the MPI execution environment. Each process must call it exactly once, and it should be the last MPI call the process makes.

    Related: int MPI_Finalized( int *flag ), which sets *flag to true if MPI_Finalize has been called

  3. Time for an example
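
    A minimal sketch using only the two calls above; the error check and the
    print statement are illustrative additions, not part of these notes:

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[])
      {
          int ierr;

          ierr = MPI_Init(&argc, &argv);     /* start the MPI environment */
          if (ierr != MPI_SUCCESS) {
              fprintf(stderr, "MPI_Init failed\n");
              return 1;
          }

          printf("MPI is up and running\n"); /* executed by every process */

          ierr = MPI_Finalize();             /* last MPI call in the program */
          return 0;
      }

    Compile and run with your installation's wrapper and launcher, typically
    something like: mpicc hello.c -o hello && mpirun -np 4 ./hello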

  4. MPI_Comm_rank

    USAGE: int MPI_Comm_rank( MPI_Comm comm, int *rank )

    EXAMPLE: ierr = MPI_Comm_rank ( MPI_COMM_WORLD, &processId );

    Returns the rank of the calling process in the communicator; ranks run from 0 to size-1 (see the combined sketch after item 5).

  5. MPI_Comm_size

    USAGE: int MPI_Comm_size( MPI_Comm comm, int *size )

    EXAMPLE: ierr = MPI_Comm_size ( MPI_COMM_WORLD, &numProcesses );

    This determines the number of processes (size of the group) associated with a communicator.
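
    A short combined sketch (an illustrative example, reusing the variable
    names from the calls above):

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[])
      {
          int processId, numProcesses;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &processId);    /* my rank in the group */
          MPI_Comm_size(MPI_COMM_WORLD, &numProcesses); /* size of the group */

          printf("Hello from process %d of %d\n", processId, numProcesses);

          MPI_Finalize();
          return 0;
      }

    Launched with four processes (e.g. mpirun -np 4 ./hello), each process
    prints its own rank; the order of the output lines is not guaranteed.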

  6. MPI_Send

    USAGE: int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

    EXAMPLE: ierr = MPI_Send(&work, 0, MPI_INT, rank, DIETAG, MPI_COMM_WORLD);

    This performs a blocking, thread-safe send (MPI can deal with threads, e.g. in hybrid OpenMP-MPI codes). The routine may block until the message is buffered internally or received by the destination process. Note the tag parameter: the tag of the send call must match that of the receive call (unless the receiver uses MPI_ANY_TAG). In the example above, a zero-length message is sent, so only the tag (DIETAG) carries information, a common way of telling worker processes to shut down. A combined send/receive sketch follows item 7 below.

  7. MPI_Recv

    USAGE: int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

    EXAMPLE: ierr = MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

    MPI_Recv blocks until a matching message has arrived and been copied into buf. The source and tag arguments may be wildcards, as in the example above: MPI_ANY_SOURCE and MPI_ANY_TAG accept a message from any sender with any tag, and the status object then reports the actual source and tag.
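
    A minimal point-to-point sketch tying the two calls together; it assumes
    at least two processes, and the tag value 0 and the value sent are
    illustrative:

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[])
      {
          int rank;
          double result;
          MPI_Status status;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {
              result = 3.14;
              /* tag 0 here; the receiver below accepts any tag */
              MPI_Send(&result, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                       MPI_COMM_WORLD, &status);
              printf("Rank 1 received %f from rank %d (tag %d)\n",
                     result, status.MPI_SOURCE, status.MPI_TAG);
          }

          MPI_Finalize();
          return 0;
      }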

The exercise