MPI (Message Passing Interface) is a standard for writing parallel programs in message-passing environments. For more information please see the MPI web site at http://www.epm.ornl.gov/~walker/mpi/.
The current Legion implementation supports a core MPI interface, which includes message passing, data marshaling, and heterogeneous data conversion. Legion supports legacy (native) MPI codes and provides an enhanced MPI environment using Legion features such as security and placement services. A link-in replacement MPI library uses the primitive services provided by Legion to support the MPI interface. MPI processes map directly to Legion objects.
There are two ways to run MPI in a Legion system: Legion MPI and native MPI. Legion MPI programs have been adapted to run in Legion, are linked to the Legion libraries, and can only be run on machines that have the Legion binaries installed.
This tutorial discusses native MPI. Native MPI code is code written for a standard MPI implementation. Legion supports running native MPI programs without any changes: you only need the binary and a host to run it on. You can, if you wish, adapt your program to make Legion calls. You will need a Legion host object with native MPI properties set to run these programs (see section 1). See the Legion MPI tutorial for information about Legion MPI.

Table of Contents
1. Setting up a native MPI host
2. Running native MPI
2.3.1. Running with Legion calls
2.3.2. Checking running objects
2.3.3. Killing the program
Other on-line tutorials & documentation
If you or your users are running native MPI code through Legion (via legion_native_mpi_run) you will need to install one class and set certain properties on the host.
The optional parameter allows you to specify an architecture for which an implementation of this class can be registered. You can run the command multiple times to register multiple architectures.
The optional parameter allows you to specify a wrapper script that locates mpirun on the host. The default is the legion_native_mpich_wrapper script, which is for an MPICH implementation. This script is provided with the current release in $LEGION/src/Tools/.
Native MPI code may be compiled independently of Legion, unless your code makes Legion calls (see section 2.3.1). In that case, you must link your program to the Legion libraries. A sample makefile for this situation is below.
-------------------------------------------------------------------------
CC = mpiCC
MPI_INC = /usr/local/mpich/include

mpimixed: mpimixed.c
	$(CC) -g -I$(MPI_INC) -I$(LEGION)/include -D$(LEGION_ARCH) -DGNU \
	$(LEGION)/lib/$(LEGION_ARCH)/$(CC)/libLegion1.$(LEGION_LIBRARY_VERSION).so \
	$(LEGION)/lib/$(LEGION_ARCH)/$(CC)/libLegion2.$(LEGION_LIBRARY_VERSION).so \
	$(LEGION)/lib/$(LEGION_ARCH)/$(CC)/libLegion1.$(LEGION_LIBRARY_VERSION).so \
	$(LEGION)/lib/$(LEGION_ARCH)/$(CC)/libBasicFiles.so $< -o $@
-------------------------------------------------------------------------
Run legion_native_mpi_register. For example, to register /myMPIprograms/charmm (the binary path) as using a Linux architecture, enter:
$ legion_native_mpi_register charmm /myMPIprograms/charmm linux
You can register a program multiple times, perhaps with different architectures or different platforms. If you have not registered the program before, this creates a context in the current context path (the context's name will be the program's class name -- in this case, charmm) and registers the name in Legion context space.
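For instance, if the same program also had a Solaris build, it could be registered a second time for that architecture. The commands below are a sketch; the Solaris binary path is hypothetical:

```shell
$ legion_native_mpi_register charmm /myMPIprograms/charmm linux
$ legion_native_mpi_register charmm /myMPIprograms/charmm.solaris solaris
```

Both registrations attach an implementation to the same charmm class, so Legion can select the implementation that matches the architecture of the host on which the program is placed.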
MPI programs are started using the program legion_native_mpi_run.
Be sure to use the -legion flag if your MPI code makes Legion calls (e.g., BasicFile calls). If you do not use this flag and your code attempts to make Legion calls, your program may not run correctly.
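For example, to start the charmm class registered earlier, the invocation might look like the sketch below; the second form adds the -legion flag for code that makes Legion calls (other command-line options are not shown here):

```shell
$ legion_native_mpi_run charmm
$ legion_native_mpi_run -legion charmm
```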
If your program makes Legion calls, you must link your program to the Legion libraries, add the necessary Legion include files and function call to your code, and run the program with the -legion flag.
You can examine the running objects of your application using:
$ legion_ls program_name
This context will have an entry for each object in this MPI application.
If you need to kill the program and its implementations, run:
$ legion_rm program_name
We have provided two sample native MPI programs, available in $LEGION/src/ServiceObjects/MPI/examples/. The first, nativempi.c, produces exactly the same output as the mpitest_c.c program discussed in section 10.1.6 in the Basic User manual. The second, mixedmpi.c, is the same code but with Legion calls.
Please note two important adaptations (highlighted in red in the online copies) that were made to mixedmpi.c in order to access Legion files. There are two new include files:
#include "legion/Legion_libBasicFiles.h"
#include "legion/LegionNativeMPIUtils.h"
and a new function call:
These lines must be added to any native MPI code that makes Legion calls.
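As a rough sketch, a Legion-calling native MPI program might be structured as follows. This is illustrative only: the exact Legion initialization call is not reproduced here (see mixedmpi.c in $LEGION/src/ServiceObjects/MPI/examples/ for it), and the MPI code around it is generic:

```c
#include <stdio.h>
#include <mpi.h>

/* Legion headers required by any native MPI code that makes Legion calls */
#include "legion/Legion_libBasicFiles.h"
#include "legion/LegionNativeMPIUtils.h"

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The Legion native-MPI initialization call goes here;
       see mixedmpi.c for the exact call. */

    /* ... MPI and Legion BasicFile work ... */

    MPI_Finalize();
    return 0;
}
```

Remember that a program structured this way must be linked against the Legion libraries (as in the sample makefile above) and started with the -legion flag.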
Other relevant on-line documents:
Logging in to a running Legion system
Introduction to Legion context space
Legion tty objects
Running a PVM code in Legion
Running a Legion MPI code
Running native MPI code
Quick list of all 1.7 Legion commands
Usage of all 1.7 Legion commands
FAQs for running programs in Legion
Starting a new Legion system
Legion host and vault objects
Adding host and vault objects
Brief descriptions of all on-line tutorials
Last modified: Thu Jun 15 16:22:10 2000