
This is a draft document

Introduction

Message Passing Interface (MPI) is an API for parallelizing programs across multiple nodes, and it has been around since 1994 [1]. MPI can also be used for parallelization on SMP machines and is considered very efficient at that as well (close to 100% scaling on parallelizable code, compared to the ~80% commonly obtained with threads due to suboptimal memory allocation on NUMA machines). Before MPI, nearly every manufacturer of supercomputers had their own programming language for writing parallel programs; MPI made porting software easy.

There are many MPI implementations available, such as LAM-MPI (in Fedora, obsoleted by Open MPI), Open MPI (the default MPI compiler in Fedora and the MPI compiler used in RHEL), MPICH (not yet in Fedora), MPICH2 (in Fedora), and MVAPICH1 and MVAPICH2 (not yet in Fedora).

As some MPI libraries work better on some hardware than others, and some software works best with a specific MPI library, the selection of the library to use must be made at the user level, on a session-specific basis. Also, people doing high performance computing may want to use more efficient compilers than the default one in Fedora (gcc), so it must be possible to have several versions of an MPI compiler, each built with a different compiler, installed at the same time. This must be taken into account when writing spec files.



Packaging of MPI compilers

It MUST be possible to build the MPI compiler RPMs with other compilers as well, and versions compiled with different compilers MUST be installable simultaneously (e.g. in addition to a version compiled with {gcc,g++,gfortran}, a version compiled with {gcc34,g++34,g77} must be installable and usable at the same time, as gfortran does not fully support Fortran 77). To achieve this, the files of MPI compilers MUST be installed in the following directories:

File type     Placement
Binaries      %{_bindir}/%{name}-%{_arch}%{?_opt_cc_suffix}/
Libraries     %{_libdir}/%{name}%{?_opt_cc_suffix}/
Config files  %{_sysconfdir}/%{name}-%{_arch}%{?_opt_cc_suffix}/

Here %{?_opt_cc_suffix} is null when compiled with the normal {gcc,g++,gfortran} combination, but would be e.g. -gcc34 for {gcc34,g++34,g77}.
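
For example, with %{name} openmpi on x86_64 (where %{_bindir} is /usr/bin, %{_libdir} is /usr/lib64 and %{_sysconfdir} is /etc), the default gcc build would install into /usr/bin/openmpi-x86_64/, /usr/lib64/openmpi/ and /etc/openmpi-x86_64/, while a build made with {gcc34,g++34,g77} and a -gcc34 suffix would use /usr/bin/openmpi-x86_64-gcc34/, /usr/lib64/openmpi-gcc34/ and /etc/openmpi-x86_64-gcc34/ instead.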


As include files and manual pages are bound to overlap between different MPI implementations, they MUST also be placed outside the normal directories:

Architecture independent file placement for MPI compilers
File type      Placement
Man pages      %{_mandir}/%{name}%{?_opt_cc_suffix}/
Include files  %{_includedir}/%{name}%{?_opt_cc_suffix}/

If the man pages or include files are architecture specific (i.e. they contain architecture-specific content), the -%{_arch} suffix MUST be added to %{name} in the paths above.

Architecture and compiler (%{?_opt_cc_suffix}) independent parts not placed in -devel MUST be placed in a -common subpackage that is BuildArch: noarch on Fedora 11 and later.
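
A minimal sketch of such a declaration in the spec file (the Summary text is only illustrative):

%package common
Summary: Architecture and compiler independent files for %{name}
BuildArch: noarch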


The MPI compiler's spec file MUST support the use of the following variables:

# We only compile with gcc, but other people may want other compilers.
# Set the compiler here.
%global opt_cc gcc
# Optional CFLAGS to use with the specific compiler...gcc doesn't need any,
# so uncomment and define to use
#global opt_cflags
%global opt_cxx g++
#global opt_cxxflags
%global opt_f77 gfortran
#global opt_fflags
%global opt_fc gfortran
#global opt_fcflags

# Optional name suffix to use...we leave it off when compiling with gcc, but
# for other compiled versions to install side by side, it will need a
# suffix in order to keep the names from conflicting.
#global cc_name_suffix -gcc

The runtime of MPI compilers (mpirun, the libraries, the manuals, etc.) MUST be packaged into %{name}, and the development headers and libraries into %{name}-devel.

As the compiler is installed outside PATH, one needs to load the relevant variables before being able to use the compiler or run MPI programs. This is done using environment modules.

The module file MUST prepend the MPI bindir %{_libdir}/%{name}/%{version}-<compiler>/bin to the user's PATH and set LD_LIBRARY_PATH to %{_libdir}/%{name}/%{version}-<compiler>/lib. The module file MUST also set some helper variables (primarily for use in spec files):

Variable      Value
MPI_BIN       %{_bindir}/%{name}-%{_arch}%{?_opt_cc_suffix}/
MPI_CONFIG    %{_sysconfdir}/%{name}-%{_arch}%{?_opt_cc_suffix}/
MPI_INCLUDE   %{_includedir}/%{name}%{?_opt_cc_suffix}/
MPI_LIB       %{_libdir}/%{name}%{?_opt_cc_suffix}/
MPI_MAN       %{_mandir}/%{name}%{?_opt_cc_suffix}/
MPI_COMPILER  %{name}
MPI_SUFFIX    The suffix used for programs compiled against %{name}: _mpi for Open MPI and _%{name} for other compilers.
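
For illustration only, a modulefile following these requirements for a hypothetical openmpi 1.3 build made with gcc on x86_64 (shipped under a module name such as openmpi-x86_64) might look roughly like this; the concrete version and paths are assumptions, not part of the guideline:

#%Module 1.0
# Open MPI environment module (illustrative paths and version)
prepend-path    PATH            /usr/lib64/openmpi/1.3-gcc/bin
prepend-path    LD_LIBRARY_PATH /usr/lib64/openmpi/1.3-gcc/lib
setenv          MPI_BIN         /usr/bin/openmpi-x86_64
setenv          MPI_CONFIG      /etc/openmpi-x86_64
setenv          MPI_INCLUDE     /usr/include/openmpi
setenv          MPI_LIB         /usr/lib64/openmpi
setenv          MPI_MAN         /usr/share/man/openmpi
setenv          MPI_COMPILER    openmpi
setenv          MPI_SUFFIX      _mpi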

MUST: By default, no files are placed in /etc/ld.so.conf.d. If the packager wishes to provide alternatives support, it MUST be placed in a subpackage along with the ld.so.conf.d file, so that alternatives support does not need to be installed if it is not wanted.

The MPI compiler package MUST provide an RPM macro that makes loading and unloading the support easy in spec files, e.g. by placing the following in /etc/rpm/macros.openmpi:

%_openmpi_load \
 . /etc/profile.d/modules.sh; \
 module load openmpi-%{_arch}; \
 export CFLAGS="$CFLAGS %{optflags}";
%_openmpi_unload \
 . /etc/profile.d/modules.sh; \
 module unload openmpi-%{_arch};

With these macros in place, loading and unloading the compiler in spec files is as easy as %{_openmpi_load} and %{_openmpi_unload}.

If the environment module sets compiler flags such as CFLAGS (thus overriding the ones exported in %configure), the RPM macro MUST make them include the Fedora optimization flags %{optflags} once again (as in the example above, in which openmpi sets CFLAGS).

Packaging of MPI software

Software that supports MPI MUST also be packaged in serial mode (i.e. without MPI), if this is supported by upstream (for instance: foo).

The packager MUST package at least a version compiled against Open MPI. Packages built against the other MPI compilers in Fedora SHOULD be provided as well, but that is left up to the maintainer. The MPI enabled bits MUST be placed in a subpackage with a suffix denoting the MPI compiler used (for instance: foo-mpi for Open MPI [the traditional MPI compiler in Fedora] or foo-mpich2 for MPICH2).

The packages MUST have an explicit Requires on the MPI runtime used, as rpm might not pick up the correct version. (Draft note: this needs to be checked; at least libmpi is provided by all of them?)

Each MPI build of shared libraries SHOULD have a separate -libs subpackage for the libraries (e.g. foo-mpich2-libs). Each MPI build MUST have a separate -devel subpackage (e.g. foo-mpich2-devel) that includes the development libraries and that Requires: %{name}-devel, which includes the headers.
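
A rough sketch of how the Open MPI subpackages of a package foo could be declared under this scheme (the Summary texts are only illustrative):

%package openmpi
Summary: foo compiled against Open MPI
BuildRequires: openmpi-devel
# Explicit dependency on the MPI runtime used
Requires: openmpi

%package openmpi-devel
Summary: Development files for the Open MPI build of foo
Requires: %{name}-openmpi = %{version}-%{release}
Requires: %{name}-devel = %{version}-%{release}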

To prevent name clashes, there are two possibilities in the installation location:

  1. Placing in system directories
    • The binaries of the software placed in %{_bindir} MUST be suffixed with %{_mpi_suffix} (e.g. bar_mpi [for Open MPI] or bar_mpich2 [for MPICH2]).
    • The libraries of the software placed in %{_libdir} MUST be suffixed with %{_mpi_suffix} (e.g. libbar_mpi.so [for Open MPI] or libbar_mpich2.so [for MPICH2]).
    • Files installed in %{_datadir} SHOULD be placed in a -common subpackage that is required by all of the packages containing binaries, unless including them in the package of the serial version and requiring it is deemed more appropriate.
  2. Placing in a separate directory
    • The software MUST be installed in %{_libdir}/%{name}/%{version}-%{_mpi_compiler}/ (e.g. %{_libdir}/foo/1.0-openmpi-gcc/), including libraries and man files.
    • Architecture and compiler independent headers MUST be placed as normal into %{_includedir}. If the headers contain e.g. some declaration about the MPI compiler used, the headers MUST be placed with the rest of the files in %{_libdir}/%{name}/%{version}-%{_mpi_compiler}/.
    • Files normally installed in %{_datadir} SHOULD be placed in a -common subpackage that is required by all of the packages containing binaries, unless including them in the package of the serial version and requiring it is deemed more appropriate.
    • An environment module enabling the use of the software MUST be written and made available as /etc/modulefiles/%{name}-%{compiler}-%{_arch}. The module MUST require the module of the used compiler (see the sketch after this list). More info on environment modules.
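
For illustration only, such a module for the Open MPI build of a package foo (version 1.0, built with gcc on x86_64, with assumed bin/ and lib/ subdirectories) might look roughly like this:

#%Module 1.0
# Module for the Open MPI build of foo (illustrative paths)
# Pull in the MPI compiler's module first
module load openmpi-x86_64
prepend-path    PATH            /usr/lib64/foo/1.0-openmpi-gcc/bin
prepend-path    LD_LIBRARY_PATH /usr/lib64/foo/1.0-openmpi-gcc/lib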


Note on file placement
By placing the files in %{_bindir} and %{_libdir}, the user has to load the MPI module with e.g. $ module load openmpi-i386 and call the correct executable with $ mpirun -np 4 foo_mpi, whereas the serial version can be run with just $ foo.


If the files are placed in a separate directory (with an environment module provided), running is more transparent: the user loads the software module with $ module load foo-openmpi-i386, which pulls in openmpi-i386 and sets all of the relevant environment variables, and then runs the MPI program with $ mpirun -np 4 foo (the environment module has prepended to PATH the directory containing the Open MPI version of foo).


A sample spec file

Name: foo

%package lam
BuildRequires: lam-devel

%package openmpi
BuildRequires: openmpi-devel

%package mpich2
BuildRequires: mpich2-devel

%build
# Have to do off-root builds to be able to build many versions at once

# Build serial version
mkdir serial
cd serial
ln -s ../configure .
%configure
make %{?_smp_mflags}
cd ..

# Build parallel versions: set compiler variables to MPI wrappers
export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export F77=mpif77

# Build LAM version
%{_lam_load}
mkdir $MPI_COMPILER
cd $MPI_COMPILER
ln -s ../configure .
%configure --program-suffix=$MPI_SUFFIX
make %{?_smp_mflags}
cd ..
%{_lam_unload}

# Build OpenMPI version
%{_openmpi_load}
mkdir $MPI_COMPILER
cd $MPI_COMPILER
ln -s ../configure .
%configure --program-suffix=$MPI_SUFFIX
make %{?_smp_mflags}
cd ..
%{_openmpi_unload}

# Build mpich2 version
%{_mpich2_load}
mkdir $MPI_COMPILER
cd $MPI_COMPILER
ln -s ../configure .
%configure --program-suffix=$MPI_SUFFIX
make %{?_smp_mflags}
cd ..
%{_mpich2_unload}

%install
# Install serial version
make -C serial install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"

# Install LAM version
%{_lam_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
%{_lam_unload}

# Install OpenMPI version
%{_openmpi_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
%{_openmpi_unload}

# Install MPICH2 version
%{_mpich2_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
%{_mpich2_unload}


%files
# All the serial (normal) binaries

%files lam
# All LAM linked files

%files openmpi
# All Open MPI linked files

%files mpich2
# All MPICH2 linked files
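
For illustration only (not part of the skeleton above), assuming the system-directory placement of option 1 and a hypothetical binary named foo, the Open MPI file list might contain an entry along these lines (a shared library such as libfoo_mpi.so.* would go into a separate -libs subpackage as recommended above):

%files openmpi
%{_bindir}/foo_mpi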