18.2.4. intro_shmem

intro_shmem - Introduction to the OpenSHMEM programming model

18.2.4.1. DESCRIPTION

The SHMEM programming model consists of library routines that provide low-latency, high-bandwidth communication for use in highly parallelized scalable programs. The routines in the OpenSHMEM application programming interface (API) provide a programming model for exchanging data between cooperating parallel processes. The resulting programs are similar in style to Message Passing Interface (MPI) programs. The SHMEM API can be used either alone or in combination with MPI routines in the same parallel program.

An OpenSHMEM program is SPMD (single program, multiple data) in style. The SHMEM processes, called processing elements or PEs, all start at the same time and they all run the same program. Usually the PEs perform computation on their own subdomains of the larger problem and periodically communicate with other PEs to exchange information on which the next computation phase depends.

The OpenSHMEM routines minimize the overhead associated with data transfer requests, maximize bandwidth and minimize data latency. Data latency is the period of time that starts when a PE initiates a transfer of data and ends when a PE can use the data. OpenSHMEM routines support remote data transfer through put operations, which transfer data to a different PE, get operations, which transfer data from a different PE, and remote pointers, which allow direct references to data objects owned by another PE. Other operations supported are collective broadcast and reduction, barrier synchronization, and atomic memory operations. An atomic memory operation is an atomic read-and-update operation, such as a fetch-and-increment, on a remote or local data object.
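
As a brief, hedged illustration of these operation classes (not part of the original text), the following C sketch issues a put, an atomic fetch-and-increment, and a get; the variable names are arbitrary and the routine choices (shmem_long_put, shmem_long_finc, shmem_long_get) are one possible selection from the API described in the man pages listed at the end of this page.

#include <stdio.h>
#include <shmem.h>   /* some installations provide <mpp/shmem.h> instead */

static long counter = 0;   /* symmetric: same address on every PE */
static long box = 0;       /* symmetric destination for the put */

int main(void) {
  long mine, copy;

  shmem_init();
  mine = (long) shmem_my_pe();

  /* put: write my PE number into box on my right-hand neighbor */
  shmem_long_put(&box, &mine, 1, (shmem_my_pe() + 1) % shmem_n_pes());

  /* atomic fetch-and-increment of counter on PE 0 */
  (void) shmem_long_finc(&counter, 0);

  shmem_barrier_all();   /* complete the puts and atomics on all PEs */

  /* get: read the final counter value back from PE 0 */
  shmem_long_get(&copy, &counter, 1, 0);
  printf("PE %d: box=%ld, counter on PE 0=%ld\n", shmem_my_pe(), box, copy);

  shmem_finalize();
  return 0;
}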

18.2.4.2. OPENSHMEM ROUTINES

This section lists the significant OpenSHMEM routines, grouped by category; a brief usage sketch follows the listing.

PE queries

  • C/C++ only:

    • _num_pes(3)

    • _my_pe(3)

  • Fortran only:

    • NUM_PES(3)

    • MY_PE(3)

Elemental data put routines

Block data put routines

Elemental data get routines

Block data get routines

Strided put routines

Strided get routines

Point-to-point synchronization routines

Barrier synchronization routines

Atomic memory fetch-and-operate (fetch-op) routines

Reduction routines

Broadcast routines

Cache management routines

Byte-granularity block put routines

Collect routines

Atomic memory operation routines

  • Fortran only:

    • shmem_int4_add(3)

    • shmem_int4_inc(3)

Remote memory pointer function

Accessibility query routines
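
As a hedged illustration of a few of the categories above, the following C sketch combines a strided put (shmem_long_iput), ordering with shmem_fence(3), and point-to-point synchronization with shmem_long_wait_until; the SHMEM_CMP_EQ constant assumes an OpenSHMEM 1.3-style header, and the array names are arbitrary.

#include <stdio.h>
#include <shmem.h>

static long dest[5];      /* symmetric destination array */
static long flag = 0;     /* symmetric completion flag */

int main(void) {
  long src[10];
  int i;

  shmem_init();
  for (i = 0; i < 10; i++)
    src[i] = 10 * shmem_my_pe() + i;

  if (shmem_my_pe() == 0 && shmem_n_pes() > 1) {
    /* strided put: every other element of src goes to dest[0..4] on PE 1 */
    shmem_long_iput(dest, src, 1, 2, 5, 1);
    shmem_fence();                 /* order the data put before the flag put */
    shmem_long_p(&flag, 1, 1);     /* point-to-point notification */
  }
  if (shmem_my_pe() == 1) {
    shmem_long_wait_until(&flag, SHMEM_CMP_EQ, 1);   /* wait for the flag */
    printf("PE 1 received dest[4] = %ld\n", dest[4]);
  }

  shmem_finalize();
  return 0;
}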

Symmetric Data Objects

Consistent with the SPMD nature of the OpenSHMEM programming model is the concept of symmetric data objects. These are arrays or variables that exist with the same size, type, and relative address on all PEs. Another term for symmetric data objects is “remotely accessible data objects”. In the interface definitions for OpenSHMEM data transfer routines, one or more of the parameters are typically required to be symmetric or remotely accessible.

The following kinds of data objects are symmetric:

  • Fortran data objects in common blocks or with the SAVE attribute. These data objects must not be defined in a dynamic shared object (DSO).

  • Non-stack C and C++ variables. These data objects must not be defined in a DSO.

  • Fortran arrays allocated with shpalloc(3)

  • C and C++ data allocated by shmalloc(3)
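
A hedged C sketch of the two C cases above might look as follows; shmem_malloc(3) is used here as the newer name for shmalloc(3), and the sizes are arbitrary.

#include <shmem.h>

long counters[8];          /* global (non-stack) C variable: symmetric */
static double buffer[64];  /* static C variable: symmetric */

int main(void) {
  shmem_init();

  /* symmetric heap allocation: collective, same size argument on every PE */
  long *heap_array = (long *) shmem_malloc(100 * sizeof(long));

  /* counters, buffer, and heap_array may be used as the remote (symmetric)
     argument of put/get calls; an ordinary stack variable declared here
     may not */

  shmem_free(heap_array);   /* freeing symmetric memory is also collective */
  shmem_finalize();
  return 0;
}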

Collective Routines

Some SHMEM routines, for example, shmem_broadcast(3) and shmem_float_sum_to_all(3), are classified as collective routines because they distribute work across a set of PEs. They must be called concurrently by all PEs in the active set defined by the PE_start, logPE_stride, PE_size argument triplet. The following man pages describe the OpenSHMEM collective routines:

  • shmem_and(3)

  • shmem_barrier(3)

  • shmem_broadcast(3)

  • shmem_collect(3)

  • shmem_max(3)

  • shmem_min(3)

  • shmem_or(3)

  • shmem_prod(3)

  • shmem_sum(3)

  • shmem_xor(3)
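
As a hedged sketch of the active-set convention, the broadcast below uses PE_start = 0, logPE_stride = 0, and PE_size = shmem_n_pes() (that is, all PEs), with PE 0 as the root; the SHMEM_BCAST_SYNC_SIZE and SHMEM_SYNC_VALUE constants assume an OpenSHMEM 1.3-style header.

#include <stdio.h>
#include <shmem.h>

static long pSync[SHMEM_BCAST_SYNC_SIZE];
static long source = 0, dest = 0;

int main(void) {
  int i;

  shmem_init();
  for (i = 0; i < SHMEM_BCAST_SYNC_SIZE; i++)
    pSync[i] = SHMEM_SYNC_VALUE;
  shmem_barrier_all();   /* every PE must see an initialized pSync */

  if (shmem_my_pe() == 0)
    source = 42;         /* value to be broadcast by the root */

  /* active set: PE_start = 0, logPE_stride = 0, PE_size = all PEs; root is PE 0 */
  shmem_broadcast64(&dest, &source, 1, 0, 0, 0, shmem_n_pes(), pSync);

  /* note: the root's dest is not updated by shmem_broadcast */
  printf("PE %d has dest = %ld\n", shmem_my_pe(), dest);

  shmem_finalize();
  return 0;
}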

18.2.4.3. USING THE SYMMETRIC WORK ARRAY, PSYNC

Multiple pSync arrays are often needed if a particular PE calls an OpenSHMEM collective routine twice without intervening barrier synchronization. Problems would occur if some PEs in the active set for call 2 arrive at call 2 before processing of call 1 is complete by all PEs in the call 1 active set. You can use shmem_barrier(3) or shmem_barrier_all(3) to perform a barrier synchronization between consecutive calls to OpenSHMEM collective routines.

There are two special cases:

  • The shmem_barrier(3) routine allows the same pSync array to be used on consecutive calls as long as the active PE set does not change.

  • If the same collective routine is called multiple times with the same active set, the calls may alternate between two pSync arrays. The SHMEM routines guarantee that a first call is completely finished by all PEs by the time processing of a third call begins on any PE.

Because the SHMEM routines restore pSync to its original contents, multiple calls that use the same pSync array do not require that pSync be reinitialized after the first call.
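
A hedged sketch of the alternating-pSync pattern described above: two consecutive broadcasts on the same active set, with no intervening barrier, each using its own pSync array. The names and sizes are illustrative, and the constants again assume an OpenSHMEM 1.3-style header.

#include <shmem.h>

static long pSyncA[SHMEM_BCAST_SYNC_SIZE];
static long pSyncB[SHMEM_BCAST_SYNC_SIZE];
static long src[2], dst[2];

int main(void) {
  int i, npes;

  shmem_init();
  npes = shmem_n_pes();
  for (i = 0; i < SHMEM_BCAST_SYNC_SIZE; i++) {
    pSyncA[i] = SHMEM_SYNC_VALUE;
    pSyncB[i] = SHMEM_SYNC_VALUE;
  }
  shmem_barrier_all();   /* pSync arrays must be initialized before first use */

  /* two back-to-back collective calls on the same active set:
     alternate between the two pSync arrays instead of synchronizing */
  shmem_broadcast64(&dst[0], &src[0], 1, 0, 0, 0, npes, pSyncA);
  shmem_broadcast64(&dst[1], &src[1], 1, 0, 0, 0, npes, pSyncB);

  shmem_finalize();
  return 0;
}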

18.2.4.4. SHMEM ENVIRONMENT VARIABLES

This section lists the significant SHMEM environment variables.

  • SMA_VERSION: print the library version at start-up.

  • SMA_INFO: print helpful text about all these environment variables.

  • SMA_SYMMETRIC_SIZE: number of bytes to allocate for the symmetric heap.

  • SMA_DEBUG: enable debugging messages.

The first call to SHMEM must be start_pes(3). This routine initializes the SHMEM runtime. Calling any other SHMEM routine before start_pes(3) results in undefined behavior, and calling start_pes(3) more than once is not allowed.
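
A hedged C sketch of this initialization requirement, using the legacy entry points named earlier in this page (start_pes(3), _my_pe(3), _num_pes(3)); whether these are declared in <mpp/shmem.h> or <shmem.h> depends on the installation.

#include <stdio.h>
#include <mpp/shmem.h>

int main(void) {
  /* start_pes() must be the first SHMEM call, and it may be called only once */
  start_pes(0);

  printf("PE %d of %d is up\n", _my_pe(), _num_pes());
  return 0;
}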

18.2.4.5. COMPILING AND RUNNING OPENSHMEM PROGRAMS

The OpenSHMEM specification is silent regarding how OpenSHMEM programs are compiled, linked and run. This section shows some examples of how wrapper programs could be utilized to compile and launch applications. The commands are styled after wrapper programs found in many MPI implementations.

The following sample command lines demonstrate compiling and linking an OpenSHMEM program using wrapper compilers (oshcc and oshfort in this case):

  • C/C++:

oshcc c_program.c

  • FORTRAN:

oshfort fortran_program.f

The following sample command line demonstrates running an OpenSHMEM program, assuming that the library provides a launcher wrapper script for this purpose (named oshrun in this example):

oshrun -n 32 ./a.out

18.2.4.6. EXAMPLES

Example 1: The following Fortran OpenSHMEM program directs all PEs to simultaneously sum the values of the VALUES variable across all PEs:

PROGRAM REDUCTION
  REAL VALUES, SUM
  COMMON /C/ VALUES
  REAL WORK

  CALL START_PES(0)
  VALUES = MY_PE()
  CALL SHMEM_BARRIER_ALL ! Synchronize all PEs
  SUM = 0.0
  DO I = 0, NUM_PES()-1
    CALL SHMEM_REAL_GET(WORK, VALUES, 1, I) ! Get next value
    SUM = SUM + WORK                ! Sum it
  ENDDO
  PRINT *, 'PE ', MY_PE(), ' COMPUTED SUM=', SUM
  CALL SHMEM_BARRIER_ALL
END

Example 2: The following C OpenSHMEM program transfers an array of 10 longs from PE 0 to PE 1:

#include <stdio.h>
#include <mpp/shmem.h>

int main(void) {
  long source[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
  static long target[10];   /* static, so remotely accessible */

  shmem_init();
  if (shmem_my_pe() == 0) {
    /* put 10 elements into target on PE 1 */
    shmem_long_put(target, source, 10, 1);
  }
  shmem_barrier_all(); /* sync sender and receiver */
  if (shmem_my_pe() == 1)
    printf("target[0] on PE %d is %ld\n", shmem_my_pe(), target[0]);
  shmem_finalize();
  return 0;
}

See also

The following man pages also contain information on OpenSHMEM routines. See the specific man pages for implementation information.

shmem_add(3) shmem_and(3) shmem_barrier(3) shmem_barrier_all(3) shmem_broadcast(3) shmem_cache(3) shmem_collect(3) shmem_cswap(3) shmem_fadd(3) shmem_fence(3) shmem_finc(3) shmem_get(3) shmem_iget(3) shmem_inc(3) shmem_iput(3) shmem_lock(3) shmem_max(3) shmem_min(3) shmem_my_pe(3) shmem_or(3) shmem_prod(3) shmem_put(3) shmem_quiet(3) shmem_short_g(3) shmem_short_p(3) shmem_sum(3) shmem_swap(3) shmem_wait(3) shmem_xor(3) shmem_pe_accessible(3) shmem_addr_accessible(3) shmem_init(3) shmem_malloc(3) shmem_n_pes(3)