4.1. Quick start: Installing Open MPI

Although this section skips many details, it offers examples that will probably work in many environments.

Caution

Note that this section is a “Quick start” — it does not attempt to be comprehensive or describe how to build Open MPI in all supported environments. The examples below may therefore not work exactly as shown in your environment.

Please consult the other sections in this chapter for more details, if necessary.

Important

If you have checked out a developer’s copy of Open MPI (i.e., you cloned from Git), you really need to read the Developer’s Guide before attempting to build Open MPI. Really.

4.1.1. Binary packages

Although the Open MPI community itself does not distribute binary packages for Open MPI, many downstream packagers do.

For example, many Linux distributions include Open MPI packages — even if they are not installed by default. You should consult the documentation and/or package list for your Linux distribution to see if you can use its built-in package system to install Open MPI.
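As a hedged illustration, the commands on common Linux distributions tend to look like the following. The exact package names vary by distribution and release (e.g., some distributions split the runtime and development files into separate packages), so check your distribution's package list first:

# Debian / Ubuntu (package names may differ by release)
shell$ sudo apt install openmpi-bin libopenmpi-dev

# Fedora / RHEL-family (Open MPI may then be exposed as an
# environment module rather than being placed directly on PATH)
shell$ sudo dnf install openmpi openmpi-devel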

The macOS package managers Homebrew and MacPorts both offer binary Open MPI packages:

# For Homebrew
shell$ brew install openmpi

# For MacPorts
shell$ port install openmpi

Important

Binary packages may or may not include support for features that are required on your platform (e.g., a specific networking stack). Or the binary packages available to you may be older / out of date. As such, it may be better to build and install Open MPI from a source tarball available from the main Open MPI web site.
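One way to check what a binary package actually provides is the ompi_info command, which is installed alongside Open MPI and reports the version and the configuration used to build it. The exact output format varies between Open MPI releases:

# Show the installed Open MPI version
shell$ ompi_info --version

# Show how this build was configured (look for the
# "Configure command line" entry in the full output)
shell$ ompi_info | grep -i configure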

4.1.2. Building from source

Download the Open MPI source code from the main Open MPI web site.

Caution

Do not download an Open MPI source code tarball from GitHub.com. The tarballs automatically generated by GitHub.com are incomplete and will not build properly. They are not official Open MPI releases.

Open MPI uses a traditional configure script paired with make to build. A typical install follows this pattern:

shell$ tar xf openmpi-<version>.tar.bz2
shell$ cd openmpi-<version>
shell$ ./configure --prefix=<path> [...options...] 2>&1 | tee config.out
<... lots of output ...>

# Use an integer value of N for parallel builds
shell$ make [-j N] all 2>&1 | tee make.out

# ...lots of output...

# Depending on the <path> chosen above, you may need root access
# for the following:
shell$ make install 2>&1 | tee install.out

# ...lots of output...
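After installation, a quick sanity check is to compile and run a trivial MPI program with the newly installed wrapper compiler. This is only a sketch: substitute the --prefix path you chose above, and note that the file name hello.c is just an example:

# Put the new installation first in your PATH (some platforms also
# need <path>/lib added to LD_LIBRARY_PATH)
shell$ export PATH=<path>/bin:$PATH

shell$ cat > hello.c <<'EOF'
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

shell$ mpicc hello.c -o hello
shell$ mpirun -n 2 ./hello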

Note that VPATH builds are fully supported. For example:

shell$ tar xf openmpi-<version>.tar.bz2
shell$ cd openmpi-<version>
shell$ mkdir build
shell$ cd build
shell$ ../configure --prefix=<path> 2>&1 | tee config.out
# ...etc.

The above patterns can be used in many environments.

Note that there are many, many configuration options available in the ./configure step. Some of them may be needed for your particular HPC network interconnect type and/or computing environment; see the rest of this chapter for descriptions of the available options.
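As a hedged illustration, a few commonly used invocations look like the following. Flag names and availability vary across Open MPI releases, so consult ./configure --help for what your version actually supports:

# List all configure options available in this release
shell$ ./configure --help

# Example: enable UCX support from a specific install tree and skip
# the Fortran bindings (<ucx-dir> is a placeholder for your UCX prefix)
shell$ ./configure --prefix=<path> --with-ucx=<ucx-dir> --disable-mpi-fortran \
       2>&1 | tee config.out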