9.1. Quick start: Running MPI applications
Although this section skips many details, it offers examples that will probably work in many environments.
Note that this section is a “Quick start” — it does not attempt to be comprehensive or describe how to build Open MPI in all supported environments. The examples below may therefore not work exactly as shown in your environment.
Please consult the other sections in this chapter for more details, if necessary.
Open MPI supports both mpirun and mpiexec (they are exactly
equivalent) to launch MPI applications. For example:
shell$ mpirun -n 2 mpi-hello-world
# or
shell$ mpiexec -n 2 mpi-hello-world
# or
shell$ mpiexec -n 1 mpi-hello-world : -n 1 mpi-hello-world
are all equivalent. For simplicity, the rest of this documentation
will simply refer to mpirun.
Note that the
mpirun command supports a large number of options.
Be sure to see the
mpirun man page for much more information.
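If you want a quick summary of those options from the command line, you can ask mpirun itself (the exact output varies by Open MPI version):

# Prints a summary of mpirun's command line options
shell$ mpirun --help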
9.1.1. Launching on a single host
It is common to develop MPI applications on a single laptop or
workstation. In such cases, use mpirun and specify how many MPI
processes you want to launch via the -n option:
shell$ mpirun -n 6 mpi-hello-world
Hello world, I am 0 of 6 (running on my-laptop)
Hello world, I am 1 of 6 (running on my-laptop)
...
Hello world, I am 5 of 6 (running on my-laptop)
If you do not specify the -n option, mpirun will default to
launching as many MPI processes as there are processor cores (not
hyperthreads) on the machine.
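For example, on a laptop with 4 processor cores (the host name and core count here are purely illustrative), omitting -n would launch 4 processes:

shell$ mpirun mpi-hello-world
Hello world, I am 0 of 4 (running on my-laptop)
Hello world, I am 1 of 4 (running on my-laptop)
Hello world, I am 2 of 4 (running on my-laptop)
Hello world, I am 3 of 4 (running on my-laptop)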
9.1.2. Launching in non-scheduled environments (via ssh)
In general, Open MPI requires the following to launch and run MPI applications:
You must be able to login to remote nodes non-interactively (e.g., without entering a password or passphrase).
Open MPI’s executables must be findable (e.g., in your PATH).
Open MPI’s libraries must be findable (e.g., in your LD_LIBRARY_PATH).
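As a quick sanity check of these requirements (assuming ssh is used to reach the remote nodes, node1.example.com is one of them, and /opt/openmpi is the install prefix; all three are only examples), you can inspect the remote environment and extend it if needed:

# Verify that the remote environment can find Open MPI (example hostname)
shell$ ssh node1.example.com env | grep -E '^(PATH|LD_LIBRARY_PATH)='
# If Open MPI's bin/ and lib/ directories are missing, add lines like these
# to your shell startup file on the remote nodes (example install prefix):
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH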
mpirun accepts a
--hostfile parameter to specify a hostfile
containing one hostname per line:
shell$ cat my-hostfile.txt
node1.example.com
node2.example.com
node3.example.com slots=2
node4.example.com slots=10
The slots attribute tells Open MPI the maximum number
of processes that can be allocated to that node. If slots is not
provided, Open MPI defaults to using the number of
processor cores (not hyperthreads) on that node.
Assuming that each of the 4 nodes in my-hostfile.txt has 16 cores:
shell$ mpirun --hostfile my-hostfile.txt mpi-hello-world
Hello world, I am 0 of 44 (running on node1.example.com)
Hello world, I am 1 of 44 (running on node1.example.com)
...
Hello world, I am 15 of 44 (running on node1.example.com)
Hello world, I am 16 of 44 (running on node2.example.com)
Hello world, I am 17 of 44 (running on node2.example.com)
...
Hello world, I am 31 of 44 (running on node2.example.com)
Hello world, I am 32 of 44 (running on node3.example.com)
Hello world, I am 33 of 44 (running on node3.example.com)
Hello world, I am 34 of 44 (running on node4.example.com)
...
Hello world, I am 43 of 44 (running on node4.example.com)
You can see the breakdown of how many processes Open MPI launched on each node:
node1: 16, because no slots were specified, so Open MPI used the number of processor cores (16)
node2: 16, because no slots were specified, so Open MPI used the number of processor cores (16)
node3: 2, because slots=2 was specified
node4: 10, because slots=10 was specified
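If you want fewer processes than the total number of available slots, you can combine -n with --hostfile. The sketch below launches 4 processes across the listed nodes; exactly which nodes receive them depends on your Open MPI version's default mapping policy, so the node shown in the output is only illustrative:

shell$ mpirun --hostfile my-hostfile.txt -n 4 mpi-hello-world
Hello world, I am 0 of 4 (running on node1.example.com)
...
Hello world, I am 3 of 4 (running on node1.example.com)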
9.1.3. Launching in scheduled environments
In scheduled environments (e.g., in a Slurm, PBS Pro, or LSF job, or under any other scheduler), the user tells the scheduler how many MPI processes to launch, and the scheduler decides which hosts to use. The scheduler then passes both pieces of information (the number of processes and the hosts to use) to Open MPI.
There are two ways to launch in a scheduled environment. Nominally,
they both achieve the same thing: they launch MPI processes. The
main user-observable difference between the two methods is that
mpirun has many more features than scheduler direct launchers.
9.1.3.1. Using Open MPI’s mpirun
Technically, Open MPI’s
mpirun is a thin layer around
prun. Hence, most of the functionality
described here is really about
prun. For simplicity,
however, this documentation will describe everything in terms of mpirun.
When using the full-featured
mpirun in a scheduled environment,
there is no need to specify a hostfile or the number of MPI processes to launch:
mpirun will receive this information directly from the
scheduler. Hence, if you want to launch an MPI job that completely
“fills” your scheduled allocation (i.e., one MPI process for each slot
in the scheduled allocation), you can simply:
# Write a script that runs your MPI application
shell$ cat my-slurm-script.sh
#!/bin/sh
# There is no need to specify -n or --hostfile because that
# information will automatically be provided by Slurm.
mpirun mpi-hello-world
You then submit the
my-slurm-script.sh script to Slurm for execution:
# Use -n to indicate how many MPI processes you want to run.
# Slurm will pick the specific hosts which will be used.
shell$ sbatch -n 40 my-slurm-script.sh
Submitted batch job 1234
shell$
After Slurm job 1234 completes, you can look at the output file to see what happened:
shell$ cat slurm-1234.out
Hello world, I am 0 of 40 (running on node37.example.com)
Hello world, I am 1 of 40 (running on node37.example.com)
Hello world, I am 2 of 40 (running on node37.example.com)
...
Hello world, I am 39 of 40 (running on node19.example.com)
Note that the Slurm scheduler picked the hosts on which the processes ran.
The above example shows that simply invoking mpirun
mpi-hello-world with no other CLI options obtains
the number of processes to run and hosts to use from the scheduler.
mpirun has many more features not described in this Quick Start
section. For example, while uncommon in scheduled environments, you
can use --hostfile to launch in subsets of the
overall scheduler allocation. See the mpirun man page for more details.
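For example, one simple way to use only part of a Slurm allocation is to pass -n explicitly inside the batch script (the script name and process count below are illustrative):

shell$ cat my-smaller-run.sh
#!/bin/sh
# Run only 8 MPI processes, even though the allocation may have more slots.
mpirun -n 8 mpi-hello-world
shell$ sbatch -n 40 my-smaller-run.sh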
9.1.3.2. Using the scheduler to “direct launch” (without mpirun)
Some schedulers (such as Slurm) have the ability to “direct launch”
MPI processes without using Open MPI’s
mpirun. For example:
shell$ srun -n 40 mpi-hello-world
Hello world, I am 0 of 40 (running on node14.example.com)
Hello world, I am 1 of 40 (running on node14.example.com)
Hello world, I am 2 of 40 (running on node14.example.com)
...
Hello world, I am 39 of 40 (running on node203.example.com)
shell$
Similar to the prior example, this example launches 40 copies of
mpi-hello-world, but it does so via the Slurm srun command, without using Open MPI's mpirun.
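Depending on how Slurm and Open MPI were built at your site, you may need to tell srun which process-management interface to use; the flag below is a common choice, but treat it as an assumption and check your site's Slurm configuration:

shell$ srun --mpi=pmix -n 40 mpi-hello-world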