MPI: IT'S EASY TO GET STARTED
For basic applications, MPI is as easy to use as any other
message-passing system. The sample code below contains the complete
communications skeleton for a dynamically load-balanced master/slave
application. Following the code is a description of the few functions
necessary to write typical parallel applications.
#include <mpi.h>
#define WORKTAG 1
#define DIETAG 2
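/* Message tags: WORKTAG accompanies a work item; DIETAG tells a slave to stop. */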
void master(void);
void slave(void);

int main(int argc, char *argv[])
{
    int myrank;

    MPI_Init(&argc, &argv);      /* initialize MPI */
    MPI_Comm_rank(
        MPI_COMM_WORLD,          /* always use this */
        &myrank);                /* process rank, 0 thru N-1 */
    if (myrank == 0) {
        master();
    } else {
        slave();
    }
    MPI_Finalize();              /* cleanup MPI */
}
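
/*
 * The master seeds each slave with an initial work item and, because the
 * application is dynamically load balanced, hands out further work as
 * results come back.
 */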
void master(void)
{
    int ntasks, rank, work;
    double result;
    MPI_Status status;

    MPI_Comm_size(
        MPI_COMM_WORLD,          /* always use this */
        &ntasks);                /* #processes in application */

    /*
     * Seed the slaves.
     */
    for (rank = 1; rank < ntasks; ++rank) {