17.2.155. MPI_File_write_ordered

MPI_File_write_ordered — Writes a file at a location specified by a shared file pointer (blocking, collective).

17.2.155.1. SYNTAX

17.2.155.1.1. C Syntax

#include <mpi.h>

int MPI_File_write_ordered(MPI_File fh, const void *buf,
    int count, MPI_Datatype datatype,
    MPI_Status *status)

17.2.155.1.2. Fortran Syntax

USE MPI
! or the older form: INCLUDE 'mpif.h'

MPI_FILE_WRITE_ORDERED(FH, BUF, COUNT, DATATYPE,
    STATUS, IERROR)
    <type>  BUF(*)
    INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR

17.2.155.1.3. Fortran 2008 Syntax

USE mpi_f08

MPI_File_write_ordered(fh, buf, count, datatype, status, ierror)
    TYPE(MPI_File), INTENT(IN) :: fh
    TYPE(*), DIMENSION(..), INTENT(IN) :: buf
    INTEGER, INTENT(IN) :: count
    TYPE(MPI_Datatype), INTENT(IN) :: datatype
    TYPE(MPI_Status) :: status
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror

17.2.155.2. INPUT PARAMETERS

  • fh : File handle (handle).

  • buf : Initial address of buffer (choice).

  • count : Number of elements in buffer (integer).

  • datatype : Data type of each buffer element (handle).

17.2.155.3. OUTPUT PARAMETERS

  • status : Status object (Status).

  • ierror : Fortran only: Error status (integer).

17.2.155.4. DESCRIPTION

MPI_File_write_ordered is a collective routine. This routine must be called by all processes in the communicator group associated with the file handle fh. Each process may pass different values for the datatype and count arguments. Each process attempts to write, into the file associated with fh, a total of count data items of type datatype contained in the user's buffer buf. For each process, the location in the file at which data is written is the position at which the shared file pointer would be after all processes whose ranks within the group are less than that of this process had written their data. MPI_File_write_ordered returns the number of datatype elements written in status. The shared file pointer is updated by the amounts of data requested by all processes of the group.
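
The following minimal C sketch (illustrative only, not taken from the MPI standard) shows the rank-ordered behavior: every rank writes one line of text, and the lines appear in the file in rank order. The file name ordered.txt and the buffer layout are assumptions made for the example.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Open (or create) the shared file collectively. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "ordered.txt",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    char line[64];
    int len = snprintf(line, sizeof(line), "hello from rank %d\n", rank);

    /* Collective call: rank 0's line lands first in the file, then rank 1's,
       and so on, regardless of the order in which processes arrive here. */
    MPI_Status status;
    MPI_File_write_ordered(fh, line, len, MPI_CHAR, &status);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}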

17.2.155.5. ERRORS

Almost all MPI routines return an error value: C routines return it as the value of the function, and Fortran routines return it in the last argument.

Before the error value is returned, the current MPI error handler associated with the communication object (e.g., communicator, window, file) is called. If no communication object is associated with the MPI call, then the call is considered attached to MPI_COMM_SELF and will call the associated MPI error handler. When MPI_COMM_SELF is not initialized (i.e., before MPI_Init/MPI_Init_thread, after MPI_Finalize, or when using the Sessions Model exclusively) the error raises the initial error handler. The initial error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when using the World model, or the mpi_initial_errhandler CLI argument to mpiexec or info key to MPI_Comm_spawn/MPI_Comm_spawn_multiple. If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all other MPI functions.

Open MPI includes three predefined error handlers that can be used:

  • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.

  • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When called on a communicator, it acts as if MPI_Abort was called on that communicator. If called on a window or file, acts as if MPI_Abort was called on a communicator containing the group of processes in the corresponding window or file. If called on a session, aborts only the local process.

  • MPI_ERRORS_RETURN Returns an error code to the application.

MPI applications can also install their own error handlers: for file handles, create one with MPI_File_create_errhandler and attach it with MPI_File_set_errhandler (communicators, windows, and sessions have corresponding create/set routines).
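
As an illustration only (continuing the sketch above, not part of this man page), an application can attach the predefined MPI_ERRORS_RETURN handler to the file handle and inspect the error code returned by the write:

/* Make I/O errors on this file handle non-fatal, then check the
   return code and translate it into a readable message. */
MPI_File_set_errhandler(fh, MPI_ERRORS_RETURN);

int rc = MPI_File_write_ordered(fh, line, len, MPI_CHAR, MPI_STATUS_IGNORE);
if (rc != MPI_SUCCESS) {
    char msg[MPI_MAX_ERROR_STRING];
    int msglen, errclass;
    MPI_Error_class(rc, &errclass);
    MPI_Error_string(rc, msg, &msglen);
    fprintf(stderr, "MPI_File_write_ordered failed (class %d): %s\n",
            errclass, msg);
}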

Note that MPI does not guarantee that an MPI program can continue past an error.

See the MPI man page for a full list of MPI error codes.

See the Error Handling section of the MPI-3.1 standard for more information.