int MPI_Comm_get_errhandler(MPI_Comm comm, MPI_Errhandler *errhandler)
The MPI Standard was unclear on whether this routine required the user to call MPI_Errhandler_free once for each call made to this routine in order to free the error handler. After some debate, the MPI Forum added an explicit statement that users are required to call MPI_Errhandler_free when the return value from this routine is no longer needed. This behavior is similar to the other MPI routines for getting objects; for example, MPI_Comm_group requires that the user call MPI_Group_free when the group returned by MPI_Comm_group is no longer needed.
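For example, a minimal sketch of the get/free pairing described above, using only standard MPI calls (error checking omitted for brevity):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Errhandler errh;

        MPI_Init(&argc, &argv);

        /* Retrieve the error handler currently attached to the
           communicator; the caller owns the returned reference. */
        MPI_Comm_get_errhandler(MPI_COMM_WORLD, &errh);

        /* ... inspect or use errh ... */

        /* Release the reference, just as the group returned by
           MPI_Comm_group must be released with MPI_Group_free. */
        MPI_Errhandler_free(&errh);

        MPI_Finalize();
        return 0;
    }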
This routine is thread and interrupt safe only if no MPI routine that updates or frees the same MPI object is called concurrently with this routine.
The MPI standard defines a thread-safe interface, but this does not mean that all routines may be called without thread locks. For example, two threads must not attempt to change the contents of the same MPI_Info object concurrently. In such cases the user is responsible for using some mechanism, such as thread locks, to ensure that only one thread at a time makes use of this routine; see the sketch below.
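As an illustration, here is a minimal sketch of one such user-level mechanism, serializing updates to a shared MPI_Info object with a POSIX mutex. The mutex and the helper name set_info_key are illustrative choices made for this example; MPI itself mandates no particular locking scheme.

    #include <mpi.h>
    #include <pthread.h>

    /* Application-owned lock serializing access to a shared MPI_Info;
       the mutex is the user's responsibility, not part of MPI. */
    static pthread_mutex_t info_lock = PTHREAD_MUTEX_INITIALIZER;
    static MPI_Info shared_info;

    void set_info_key(const char *key, const char *value)
    {
        pthread_mutex_lock(&info_lock);
        MPI_Info_set(shared_info, key, value);  /* update under the lock */
        pthread_mutex_unlock(&info_lock);
    }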
All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.
All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines return it as the value of the function and Fortran routines in the last argument. Before the value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job. The error handler may be changed with MPI_Comm_set_errhandler (for communicators), MPI_File_set_errhandler (for files), and MPI_Win_set_errhandler (for RMA windows). The MPI-1 routine MPI_Errhandler_set may be used, but its use is deprecated. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error; however, MPI implementations will attempt to continue whenever possible.
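For example, a minimal sketch of selecting MPI_ERRORS_RETURN on a communicator and then checking the returned error value; the deliberately out-of-range destination rank is only there to provoke an error:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int err, size;

        MPI_Init(&argc, &argv);

        /* Replace the default aborting handler so that errors are
           reported through return values instead. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        /* Valid ranks are 0..size-1, so rank 'size' must fail. */
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        err = MPI_Send(NULL, 0, MPI_INT, size, 0, MPI_COMM_WORLD);
        if (err != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len;
            MPI_Error_string(err, msg, &len);
            fprintf(stderr, "MPI_Send failed: %s\n", msg);
        }

        MPI_Finalize();
        return 0;
    }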