10. Launching MPI applications
Open MPI can launch MPI processes in a wide variety of environments, but they can generally be broken down into two categories:
Scheduled environments: these are systems where a resource manager and/or scheduler is used to control access to the compute nodes. Popular resource managers include Slurm, PBS Pro / Torque, and LSF.
Non-scheduled environments: these are systems where resource managers are not used. Launches are typically local (e.g., on a single laptop or workstation) or via ssh (e.g., across a small number of nodes).
Similar to many MPI implementations, Open MPI provides the commands mpirun(1) and mpiexec(1) to launch MPI jobs. This section deals with using these commands.
Note, however, that in Open MPI, mpirun(1) and mpiexec(1) are exactly identical. Specifically, they are symbolic links to a common back-end launcher command.
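Since mpirun(1) and mpiexec(1) are symbolic links to the same back-end launcher, you can confirm this on your own installation by resolving each symlink chain. A minimal sketch (assuming Open MPI is installed and both commands are on your `PATH`):

```shell
# Resolve mpirun and mpiexec to the real back-end launcher binary.
# On a typical Open MPI installation, both resolve to the same file.
for cmd in mpirun mpiexec; do
    if command -v "$cmd" >/dev/null 2>&1; then
        # readlink -f follows the symlink chain to the final target
        echo "$cmd -> $(readlink -f "$(command -v "$cmd")")"
    else
        echo "$cmd not found on PATH" >&2
    fi
done
```

If both lines print the same target path, the two commands are literally the same program, which is why any option accepted by one is accepted by the other.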
Note
The name of the back-end launcher command has changed over time (it used to be orterun; it is now prte). This back-end name is largely irrelevant to the user.
The rest of this section usually refers only to mpirun(1), even though the same discussions also apply to mpiexec(1) (because they are both, in fact, the same command).
- 10.1. Quick start: Launching MPI applications
- 10.2. Prerequisites
- 10.3. The role of PMIx and PRRTE
- 10.4. Scheduling processes across hosts
- 10.5. Launching only on the local node
- 10.6. Launching with SSH
- 10.7. Launching with Slurm
- 10.8. Launching with LSF
- 10.9. Launching with PBS / Torque
- 10.10. Launching with Grid Engine
- 10.11. Unusual jobs
- 10.12. Troubleshooting