From FedoraProject

Revision as of 15:40, 20 June 2011 by Vedranm (Talk | contribs)



Proposal for Improving MPI support in Fedora

Written by EdHill.


People are increasingly using MPI to solve scientific and engineering problems. While it's still a niche market, it's steadily growing. Networks of workstations and small clusters have become quite common, and ever-cheaper computers are only making them more so.


It would be nice to provide a few different MPI implementations that can be installed with a single command:

yum install mpich2 openmpi ...

and then operate side-by-side without worries about conflicts. In my opinion, there are no good reasons why Fedora users should be "stuck" with LAM and forced to fight with from-source builds for other MPI implementations. We can easily do better, and therefore we should!


At a large number of sites (ranging from supercomputing centres right through medium and small cluster installs), admins have adopted the "modules" or environment-modules software to easily and gracefully handle multiple simultaneous installs of various compilers and/or libraries. The environment-modules system has proven itself to be a solid, general, workable, and extensible framework. While I don't suggest that Fedora (or even Fedora Extras) adopt environment-modules wholesale for all sorts of problems, the simultaneous installation of multiple MPI implementations is a situation that just begs for an environment-modules solution.
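To make the idea concrete, here is a minimal sketch of what a modulefile for an MPICH2 package might look like. The installation paths and the conflicting module names are illustrative assumptions, not an actual Fedora layout:

```tcl
#%Module1.0
## Hypothetical modulefile for MPICH2 -- paths below are assumptions

proc ModulesHelp { } {
    puts stderr "Sets up the environment for the MPICH2 implementation of MPI."
}

# Only one MPI implementation should be active in a given shell
conflict openmpi lam

# Assumed install prefix for an MPICH2 package
set root /usr/lib/mpich2

prepend-path PATH            $root/bin
prepend-path LD_LIBRARY_PATH $root/lib
prepend-path MANPATH         $root/man
```

With modulefiles like this shipped by each MPI package, a user could run `module load mpich2` to select one implementation and `module switch mpich2 openmpi` to change to another. Each implementation's binaries, libraries, and man pages are picked up through its own PATH, LD_LIBRARY_PATH, and MANPATH entries, so no system-wide "preferred" implementation is needed.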

The alternatives approach suggested by others is, in my opinion, clearly inferior to environment-modules because:

  • it is unclear where the man pages for each implementation should go
  • the alternatives setup is NOT easily extended to multiple different compilers
  • alternatives assumes that one implementation is preferred over all others, which is unnecessary and pointless in the context of multiple MPI implementations

With the recent addition of environment-modules to Fedora Extras, I'd like to see it used to solve the current multiple-MPI-implementations deadlock.