MPI (Message Passing Interface) is a standard for writing parallel programs in message-passing environments. For more information please see the MPI web site at http://www.epm.ornl.gov/~walker/mpi/.
The current Legion implementation supports a core MPI interface, which includes message passing, data marshaling, and heterogeneous data conversion. Legion supports legacy (native) MPI codes and provides an enhanced MPI environment that uses Legion features such as security and placement services. A link-in replacement MPI library uses the primitive services provided by Legion to support the MPI interface. MPI processes map directly to Legion objects.
There are two ways to run MPI in a Legion system: Legion MPI and native MPI. Legion MPI programs have been adapted to run in Legion, are linked against the Legion libraries, and can only be run on machines that have the Legion binaries installed.
This guide discusses Legion MPI; native MPI support is covered in a separate document. For more information about MPI in Legion, please see the Basic User Manual.
These tutorials assume that your Legion environment has already been set up:

$ . ~legion/setup.sh

or

$ source ~legion/setup.csh
Let's say that you have the following MPI program:
      program foo
      implicit none
      include 'mpif.h'
      integer ierr, nodenum, numprocs
      integer i, j, k

      call mpi_init( ierr )
      call mpi_comm_rank( mpi_comm_world, nodenum, ierr )
      call mpi_comm_size( mpi_comm_world, numprocs, ierr )

      if (nodenum .eq. 0) then
         open (10, file = 'input', status = 'old')
         read (10,*) i, j, k
         close (10)
      endif

      call do_work (i, j, k)

      if (nodenum .eq. 0) then
         open (11, file = 'output', status = 'new')
         write (11, *) i, j, k
         close (11)
      endif

      call mpi_finalize(ierr)
      stop
      end
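Note that, as written, only node 0 has values for i, j, and k when do_work is called. If do_work needs those values on every node, the program would also have to broadcast them after the read; a minimal sketch using standard MPI (not part of the original example):

      call mpi_bcast (i, 1, mpi_integer, 0, mpi_comm_world, ierr)
      call mpi_bcast (j, 1, mpi_integer, 0, mpi_comm_world, ierr)
      call mpi_bcast (k, 1, mpi_integer, 0, mpi_comm_world, ierr)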
To run this program over Legion and get the benefit of remote I/O, extra I/O calls must be inserted; the MPI calls themselves do not change. Here is the result, with the changes shown in upper case:
      program foo
      implicit none
      include 'mpif.h'
      integer ierr, nodenum, numprocs
      integer i, j, k
      CHARACTER*256 INPUT, OUTPUT

      call mpi_init( ierr )
      call mpi_comm_rank( mpi_comm_world, nodenum, ierr )
      call mpi_comm_size( mpi_comm_world, numprocs, ierr )

      if (nodenum .eq. 0) then
c        Copy the Legion file 'input' to a local temporary file,
c        whose name is returned in INPUT.
         call LIOF_LEGION_TO_TEMPFILE ('input', INPUT, ierr)
         open (10, file = INPUT, status = 'old')
         read (10,*) i, j, k
         close (10)
      endif

      call do_work (i, j, k)

      if (nodenum .eq. 0) then
c        Create a local temporary file, write the results to it,
c        then copy it back to the Legion file 'output'.
         call LIOF_CREATE_TEMPFILE (OUTPUT, IERR)
         open (11, file = OUTPUT, status = 'new')
         write (11, *) i, j, k
         close (11)
         call LIOF_TEMPFILE_TO_LEGION (OUTPUT, 'output', IERR)
      endif

      call mpi_finalize(ierr)
      stop
      end
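Each LIOF call returns a status code in its final argument, which the example above ignores for brevity. A more defensive version would check it after each call; a sketch, assuming the usual convention that a nonzero status indicates failure (see the Legion Developer Manual for the actual codes):

      call LIOF_LEGION_TO_TEMPFILE ('input', INPUT, ierr)
      if (ierr .ne. 0) then
c        Could not copy the Legion file to a local temporary file.
         print *, 'error fetching input from Legion space'
         call mpi_abort (mpi_comm_world, 1, ierr)
      endif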
Running this Legion MPI Program
To run this program under Legion MPI, we need to:

1. Compile the program and link it against the Legion MPI library:

% f77 -c example.f -I$LEGION/include/MPI
% legion_link -Fortran -mpi -o example example.o

2. Set up the Legion environment (Bourne shell or C shell, respectively):

% . /home/appnet/setup.sh

or

% source /home/appnet/setup.csh

3. Create a tty object so that output from the remote processes appears in your terminal:

% legion_tty my_tty

4. Register the binary as a Legion MPI program; legion_mpi_run below refers to it by its context-space name, /mpi/programs/example:

% legion_mpi_register example ./example $LEGION_ARCH

5. Copy the local input file into Legion context space:

% legion_cp -localsource ./input input

6. Run the program, here with four processes:

% legion_mpi_run -n 4 /mpi/programs/example

7. When the run completes, copy the output file back to the local file system:

% legion_cp -localdest output ./output
While this approach lets you run MPI programs and transparently read and write remote files, it has one limitation: it does not support heterogeneous conversion of data. If you run this program on several machines that use different formats for an integer, such as Intel PCs (little-endian) and IBM RS/6000s (big-endian), unformatted I/O will produce surprising results. To use such a heterogeneous system, you must either use formatted I/O (all files are text) or use the "typed binary" I/O calls in place of Fortran READ and WRITE statements. The "typed binary" I/O calls are discussed in "Buffered I/O Library, low impact interface" in the Legion Developer Manual.
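A plain Fortran illustration of why unformatted I/O is not portable, independent of Legion: in memory, the integer 258 is 0x00000102, stored as the bytes 02 01 00 00 on a little-endian machine; interpreted on a big-endian machine, those same bytes mean 33619968.

      program endian
      implicit none
      integer m
      m = 258
c     Formatted (text) output is portable across architectures.
      open (20, file = 'm.txt', status = 'new')
      write (20, *) m
      close (20)
c     Unformatted output writes the raw bytes of m; a file written on
c     a little-endian PC is misread on a big-endian RS/6000.
      open (21, file = 'm.bin', status = 'new', form = 'unformatted')
      write (21) m
      close (21)
      end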
Modifying the Program for the Typed Binary Interface
Here's how the program would change if we used the typed binary interface:
      program foo
      implicit none
      include 'mpif.h'
      integer ierr, nodenum, numprocs
      integer i, j, k
      integer fd

      call mpi_init( ierr )
      call mpi_comm_rank( mpi_comm_world, nodenum, ierr )
      call mpi_comm_size( mpi_comm_world, numprocs, ierr )

      if (nodenum .eq. 0) then
c        Read three integers directly from the Legion file 'input';
c        the library converts data formats between architectures.
         call LIOF_OPEN ('input', 0, FD)
         call LIOF_READ_INTS (FD, I, 1, IERR)
         call LIOF_READ_INTS (FD, J, 1, IERR)
         call LIOF_READ_INTS (FD, K, 1, IERR)
         call LIOF_CLOSE (FD, IERR)
      endif

      call do_work (i, j, k)

      if (nodenum .eq. 0) then
c        Write the results directly to the Legion file 'output'.
         call LIOF_OPEN ('output', 0, FD)
         call LIOF_WRITE_INTS (FD, I, 1, IERR)
         call LIOF_WRITE_INTS (FD, J, 1, IERR)
         call LIOF_WRITE_INTS (FD, K, 1, IERR)
         call LIOF_CLOSE (FD, IERR)
      endif

      call mpi_finalize(ierr)
      stop
      end
The procedure for compiling, registering, and running this version is the same as before.
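One more point about the typed binary interface: the third argument of LIOF_READ_INTS and LIOF_WRITE_INTS is a count, so consecutive values can be transferred in a single call by storing them in an array. A hypothetical variant of the read (the array IJK is not part of the original program, and this assumes that one three-integer read is equivalent to three single-integer reads):

      INTEGER IJK(3)
      call LIOF_OPEN ('input', 0, FD)
      call LIOF_READ_INTS (FD, IJK, 3, IERR)
      call LIOF_CLOSE (FD, IERR)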