TOP 10 REASONS TO PREFER MPI OVER PVM
* MPI has more than one freely available, quality implementation.
* MPI defines a 3rd party profiling mechanism.
* MPI has full asynchronous communication.
* MPI groups are solid and efficient.
* MPI efficiently manages message buffers.
* MPI synchronization protects the user from 3rd party software.
* MPI can be used to program MPPs and clusters efficiently.
* MPI is totally portable.
* MPI is formally specified.
* MPI is a standard.
Top 10 Reasons to Prefer MPI Over PVM (annotated)
* MPI has more than one freely available, quality implementation.
There are at least LAM, MPICH and CHIMP. The choice of development
tools is not coupled to the programming interface.
* MPI defines a 3rd party profiling mechanism.
A tool builder can extract profile information from MPI
applications by supplying the MPI standard profiling interface in a
separate library, without ever having access to the source code of
the main implementation (sketched below).
* MPI has full asynchronous communication.
Immediate send and receive operations can fully overlap
computation (sketched below).
* MPI groups are solid and efficient.
Group membership is static. There are no race conditions caused by
processes independently entering and leaving a group. New group
formation is collective and group membership information is
distributed, not centralized (sketched below).
* MPI efficiently manages message buffers.
Messages are sent and received directly from user data structures,
not from staging buffers within the communication library.
Buffering may, in some cases, be avoided entirely (sketched below).
* MPI synchronization protects the user from 3rd party software.
All communication within a particular group of processes is marked
with an extra synchronization variable, allocated by the system.
Independent software products within the same process do not have
to worry about allocating message tags (sketched below).
* MPI can be used to program MPPs and clusters efficiently.
A virtual topology reflecting the communication pattern of the
application can be associated with a group of processes. An MPP
implementation of MPI could use that information to match
processes to processors in a way that optimizes communication
paths (sketched below).
* MPI is totally portable.
Recompile and run on any implementation. With virtual topologies
and efficient buffer management, for example, an application
moving from a cluster to an MPP could even expect good
performance.
* MPI is formally specified.
Implementations have to live up to a published document of precise
semantics.
* MPI is a standard.
Its features and behaviour were arrived at by consensus in an open
forum. It can change only by the same process.
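
Code Sketches for Selected Items (illustrative)

Profiling interface. A minimal sketch of the 3rd party profiling
point, assuming the standard PMPI_ name-shifted entry points and the
current C binding; the timing and counting shown are invented for
illustration only.

    /* profile_send.c -- linked ahead of the MPI library, this
     * intercepts MPI_Send without access to the library's source. */
    #include <mpi.h>
    #include <stdio.h>

    static double total_send_time = 0.0;  /* accumulated wall-clock time */
    static long   send_count      = 0;    /* number of intercepted sends */

    /* The tool's MPI_Send shadows the implementation's; the real work
     * is forwarded through the name-shifted PMPI_Send. */
    int MPI_Send(const void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm)
    {
        double t0 = MPI_Wtime();
        int rc = PMPI_Send(buf, count, type, dest, tag, comm);
        total_send_time += MPI_Wtime() - t0;
        send_count++;
        return rc;
    }

    int MPI_Finalize(void)
    {
        printf("MPI_Send: %ld calls, %.3f s total\n",
               send_count, total_send_time);
        return PMPI_Finalize();
    }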
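
Asynchronous communication. A sketch of overlapping immediate
operations with computation; do_local_work is a placeholder for the
application's interior computation.

    #include <mpi.h>

    static void do_local_work(void) { /* work that needs no remote data */ }

    void exchange_and_compute(double *outbuf, double *inbuf, int n,
                              int peer, MPI_Comm comm)
    {
        MPI_Request reqs[2];

        /* Post the transfers first; both calls return immediately. */
        MPI_Irecv(inbuf,  n, MPI_DOUBLE, peer, 0, comm, &reqs[0]);
        MPI_Isend(outbuf, n, MPI_DOUBLE, peer, 0, comm, &reqs[1]);

        do_local_work();   /* computation proceeds while messages move */

        /* Block only when the exchanged data is actually needed. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }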
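
Groups. A sketch of collective group formation: every process in the
parent communicator calls MPI_Comm_create, and membership of the new
group is fixed before any process can use it.

    #include <mpi.h>
    #include <stdlib.h>

    /* Build a communicator over the even-ranked processes of comm. */
    MPI_Comm make_even_comm(MPI_Comm comm)
    {
        MPI_Group parent_grp, even_grp;
        MPI_Comm  even_comm;
        int size, i, n = 0;

        MPI_Comm_size(comm, &size);
        MPI_Comm_group(comm, &parent_grp);

        int *ranks = malloc(sizeof(int) * ((size + 1) / 2));
        for (i = 0; i < size; i += 2)
            ranks[n++] = i;                /* even ranks of the parent */

        MPI_Group_incl(parent_grp, n, ranks, &even_grp);

        /* Collective over comm; processes outside even_grp receive
         * MPI_COMM_NULL. */
        MPI_Comm_create(comm, even_grp, &even_comm);

        MPI_Group_free(&even_grp);
        MPI_Group_free(&parent_grp);
        free(ranks);
        return even_comm;
    }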
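
Buffer management. A sketch of sending directly from a user data
structure: a derived datatype (the MPI-2 spelling,
MPI_Type_create_struct, is used here) describes the struct in place,
and synchronous mode lets the implementation skip buffering
altogether. The particle struct is invented for illustration.

    #include <mpi.h>
    #include <stddef.h>

    struct particle { double pos[3]; double vel[3]; int id; };

    static MPI_Datatype make_particle_type(void)
    {
        MPI_Datatype tmp, ptype;
        int          lens[3]  = { 3, 3, 1 };
        MPI_Aint     disps[3] = { offsetof(struct particle, pos),
                                  offsetof(struct particle, vel),
                                  offsetof(struct particle, id) };
        MPI_Datatype types[3] = { MPI_DOUBLE, MPI_DOUBLE, MPI_INT };

        MPI_Type_create_struct(3, lens, disps, types, &tmp);
        /* Resize so the extent matches sizeof(struct particle),
         * including any trailing padding, for arrays of structs. */
        MPI_Type_create_resized(tmp, 0, sizeof(struct particle), &ptype);
        MPI_Type_free(&tmp);
        MPI_Type_commit(&ptype);
        return ptype;
    }

    void send_particles(struct particle *p, int n, int dest, MPI_Comm comm)
    {
        MPI_Datatype ptype = make_particle_type();
        /* The user's array p is the message; no staging copy. */
        MPI_Ssend(p, n, ptype, dest, 0, comm);
        MPI_Type_free(&ptype);
    }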
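
Synchronization and tag safety. A sketch of how a library isolates
its traffic: it duplicates the caller's communicator once, and the
duplicate's system-allocated context keeps library messages and user
messages apart even when both use the same tags. lib_init,
lib_internal_exchange and lib_finalize are hypothetical library
routines.

    #include <mpi.h>

    static MPI_Comm lib_comm = MPI_COMM_NULL;   /* private to the library */

    void lib_init(MPI_Comm user_comm)
    {
        /* Collective; gives the library its own communication context. */
        MPI_Comm_dup(user_comm, &lib_comm);
    }

    void lib_internal_exchange(int *val, int peer)
    {
        /* Tag 0 here cannot match a user receive posted with tag 0
         * on user_comm. */
        MPI_Sendrecv_replace(val, 1, MPI_INT, peer, 0, peer, 0,
                             lib_comm, MPI_STATUS_IGNORE);
    }

    void lib_finalize(void)
    {
        MPI_Comm_free(&lib_comm);
    }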
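
Virtual topologies. A sketch of attaching a 2-D periodic grid to a
group of processes; with reordering enabled, an MPP implementation
may renumber ranks so that grid neighbours sit on nearby processors.

    #include <mpi.h>

    void build_grid(MPI_Comm comm, MPI_Comm *grid_comm,
                    int *left, int *right, int *down, int *up)
    {
        int size, dims[2] = { 0, 0 }, periods[2] = { 1, 1 };

        MPI_Comm_size(comm, &size);
        MPI_Dims_create(size, 2, dims);   /* choose a balanced 2-D shape */
        MPI_Cart_create(comm, 2, dims, periods,
                        1 /* allow rank reordering */, grid_comm);

        /* Ranks of nearest neighbours along each dimension. */
        MPI_Cart_shift(*grid_comm, 0, 1, left, right);
        MPI_Cart_shift(*grid_comm, 1, 1, down, up);
    }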
LAM Project / Ohio Supercomputer Center / lam@tbag.osc.edu