News & Events

  • A new release candidate of MPICH, 4.3.0rc1, is now available on our downloads page. See CHANGES for more detailed info. The full set of git commits since 4.2.3 can be found here: https://www.mpich.org/static/downloads/4.3.0rc1/shortlog

    At this stage, only critical bugs will be fixed before the 4.3.0 GA release. Please give it a try and report any issues!

  • A new release of MPICH, 4.3.0b1, is now available for download. This is the first beta release for the 4.3 series. It can be found on our downloads page. For the full set of commits, see https://github.com/pmodels/mpich/compare/v4.2.3...v4.3.0b1/
    ===============================================================================
                                   Changes in 4.3
    ===============================================================================
    # Support for the MPI Memory Allocation Kinds side document.
    
    # Support MPI ABI Proposal. Configure with --enable-mpi-abi and build with
      mpicc_abi. By default, mpicc still builds and links with MPICH ABI.
    
    # Experimental API MPIX_Op_create_x. It supports user callback functions
      with an extra_state context and an op destructor callback, and it lets
      language bindings use a proxy function for language-specific user
      callbacks.
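
    For context, here is a minimal sketch using only the standard MPI_Op_create,
    whose callback takes no extra_state argument; any per-op context must live in
    a global, which is the gap the _x variant is described as closing:

      #include <mpi.h>

      /* Standard MPI_Op_create callback: there is no extra_state parameter,
       * so per-op context must come from globals. MPIX_Op_create_x (described
       * above) adds that context plus a destructor callback. */
      static void int_sum(void *invec, void *inoutvec, int *len, MPI_Datatype *dt)
      {
          int *in = (int *) invec, *inout = (int *) inoutvec;
          for (int i = 0; i < *len; i++)
              inout[i] += in[i];
      }

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          MPI_Op op;
          MPI_Op_create(int_sum, 1 /* commutative */, &op);

          int val = 1, sum = 0;
          MPI_Allreduce(&val, &sum, 1, MPI_INT, op, MPI_COMM_WORLD);

          MPI_Op_free(&op);
          MPI_Finalize();
          return 0;
      }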
    
    # Experimental API MPIX_{Comm,File,Session,Win}_create_errhandler_x. These
      allow user error handlers to carry an extra_state context and a
      corresponding destructor, which lets language bindings implement user
      error handlers via a proxy.
    
    # Experimental API MPIX_Request_is_complete. This is a pure request state
      query that neither invokes progress nor frees the request. It should help
      applications that want to separate task dependency checking from the
      progress engine and thus avoid progress contention, especially in
      multi-threaded contexts. It is also useful for tools that profile
      non-deterministic calls such as MPI_Test.
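
    A rough usage sketch; the prototype below (int MPIX_Request_is_complete(MPI_Request, int *))
    is an assumption based on this description, not quoted from the release notes:

      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int val = 1;
          MPI_Request req;
          MPI_Iallreduce(MPI_IN_PLACE, &val, 1, MPI_INT, MPI_SUM,
                         MPI_COMM_WORLD, &req);

          /* Assumed prototype. Unlike MPI_Test, this only inspects the
           * request's state: it neither drives progress nor frees the
           * request on completion. */
          int done = 0;
          MPIX_Request_is_complete(req, &done);

          /* Completion (and freeing of the request) still goes through a
           * normal progress/completion call. */
          MPI_Wait(&req, MPI_STATUS_IGNORE);

          MPI_Finalize();
          return 0;
      }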
    
    # Experimental API MPIX_Async_start. This function lets applications
      inject progress hooks into MPI progress, so they can implement custom
      asynchronous operations that are progressed by MPI. This avoids separate
      progress mechanisms that may consume additional resources or contend
      with MPI progress and hurt performance. It also allows applications to
      build custom MPI operations, such as MPI nonblocking collectives, and
      achieve near-native performance.
    
    # Added benchmark tests test/mpi/bench/p2p_{latency,bw}.
    
    # Added CMA support in CH4 IPC.
    
    # Added IPC read algorithm for intranode Allgather and Allgatherv.
    
    # Added CVAR MPIR_CVAR_CH4_SHM_POSIX_TOPO_ENABLE to enable non-temporal
      memcpy for inter-NUMA shm communication.
    
    # Added CVAR MPIR_CVAR_DEBUG_PROGRESS_TIMEOUT for debugging MPI deadlock issues.
    
    # ch4:ucx now supports dynamic processes. MPI_Comm_spawn{_multiple} will
      work. MPI_Open_port will fail because the UCX port name exceeds the
      current MPI_MAX_PORT_NAME of 256; one can work around this by passing
      the info hint "port_name_size" and using a larger port name buffer.
    
    # PMI-1 defines PMI_MAX_PORT_NAME, which may differ from MPI_MAX_PORT_NAME
      and is used by "PMI_Lookup_name". Consequently, MPI_Lookup_name accepts
      the info hint "port_name_size", which may be larger than
      MPI_MAX_PORT_NAME. If the port name does not fit in "port_name_size",
      a truncation error is returned.
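
    A sketch of the workaround described in the two items above, assuming the
    "port_name_size" hint is honored by MPI_Open_port and MPI_Lookup_name; the
    buffer size of 1024 is an arbitrary choice for illustration:

      #include <mpi.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          /* Ask for port names longer than MPI_MAX_PORT_NAME (256). */
          MPI_Info info;
          MPI_Info_create(&info);
          MPI_Info_set(info, "port_name_size", "1024");

          char *port = malloc(1024);
          MPI_Open_port(info, port);

          /* A client would pass the same hint to MPI_Lookup_name; if the
           * name still does not fit, a truncation error is returned:
           *     MPI_Lookup_name("my_service", info, port);          */

          MPI_Close_port(port);
          MPI_Info_free(&info);
          free(port);
          MPI_Finalize();
          return 0;
      }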
    
    # Autogen now defaults to -yaksa-depth=2.
    
    # Default MPIR_CVAR_CH4_ROOTS_ONLY_PMI to on.
    
    # Added ch4 netmod API am_tag_send and am_tag_recv.
    
    # Added MPIR_CVAR_CH4_OFI_EAGER_THRESHOLD to force RNDV send mode.
    
    # The "make check" target now runs the ROMIO tests.
    
  • A new stable release of MPICH, 4.2.3, is now available for download. This release addresses a number of user-reported crashes and build issues. It can be found on our downloads page. For the full set of commits see https://github.com/pmodels/mpich/compare/v4.2.2...v4.2.3/
    ====================================================================
                             Changes in 4.2.3
    ====================================================================
    
    # Update embedded libfabric to fix a build issue on FreeBSD
    
    # Fix HIP support for use with AMD GPUs
    
    # Fix potential invalid context issue in CUDA memory hooks
    
    # Fix GPU fallback path in ch4/ofi for Intel GPU buffers
    
    # Fix IPC handle destruction with Level Zero API (Intel GPU)
    
    # Fix potential crash in MPI_ISENDRECV with derived datatypes
    
    # Fix bug in persistent MPI_GATHER that incorrectly cached buffer
    contents at init time
    
    # Fix memory allocation bug in ROMIO GPFS driver
    
    # Fix missing error names in ch4/ofi netmod
    
    # Fix potential hang in multi-VCI active message RMA
    
    # Fix bug in ch3 large count support with derived datatypes
    
    # Fix manpage generation to provide aliases for large-count versions
    
    # Fix potential crash in Hydra with long PMI command messages
    
    # Fix bug in exit status capture in Hydra when there are multiple
    processes with non-zero exit
    
    # Fix implementation of C/Fortran status conversion functions
    
    # Fix implementation of MPI_Type_create_f90_xxx functions
  • A new release candidate of MPICH, 4.2.3rc1, is now available for download. This release addresses a number of user-reported crashes and build issues. It can be found on our downloads page. For the full set of commits see https://github.com/pmodels/mpich/compare/v4.2.2...v4.2.3rc1/
  • A new stable release of MPICH, 4.2.2, is now available for download. This is the second bug-fix release in the 4.2.x series. This release addresses a number of user-reported crashes and build issues. It can be found on our downloads page. For the full set of commits see https://github.com/pmodels/mpich/compare/v4.2.1...v4.2.2/
    ====================================================================
                             Changes in 4.2.2
    ====================================================================
    
    # Update embedded libfabric to v1.20.1 and fix compilation with GCC 14.
    
    # Fix dataloop support for MPIX_Type_iov routines
    
    # Fix crash in Hydra when system has many local IP addresses
    
    # Fix RMA fallback check in ch4:ofi netmod
    
    # Fix MPI_UNDEFINED handling in mpi_f08 module
    
    # Fix Slurm environment variable inheritance in Hydra
    
    # Fix multiple issues with large count support in ROMIO
    
    # Fix potential hang in init using PMIx client and nonstandard keys
    
    # Fix crash if PMIx client cannot get appnum from server during init
    
    # Fix other build errors and warnings
  • A new stable release of MPICH, 4.2.1, is now available for download. This is the first bug-fix release in the 4.2.x series. This release addresses a number of user-reported crashes and build issues. It can be found on our downloads page. For the full set of commits see https://github.com/pmodels/mpich/compare/v4.2.0...v4.2.1/
    ====================================================================
                           Changes in 4.2.1
    ====================================================================
    
    # Disable flat_namespace when building libmpifort on macOS by default
    
    # Prefix warning messages with "MPICH"
    
    # Add --disable-doc configure option
    
    # Fix support for building MPICH Fortran support with Xcode 15 on macOS
    
    # Fix bug in MPI_WIN_SHARED_QUERY when window is not shared
    
    # Fix bug in ch4/ofi gpu pipelining
    
    # Fixes for Intel GPU support
    
    # Fix memory leak in ch4/shm collectives
    
    # Fix bug in MPI_COMM_SPLIT with intercommunicators and non-zero root
    
    # Fix bug in DAOS ROMIO driver
    
    # Fix bug in cycling error code array
    
    # Return an error if datatype creation fails in the mpi_f08 module for
    noncontiguous user data
    
    # Return an error when shared memory segment creation fails
  • A new stable release of MPICH, 4.1.3, is now available for download. This is the third and final bug-fix release in the 4.1.x series. This release addresses a number of user-reported crashes and build issues. See links to the changelog and full set of commits since 4.1.2: https://github.com/pmodels/mpich/blob/v4.1.3/CHANGES https://github.com/pmodels/mpich/compare/v4.1.2...v4.1.3
  • A new stable release of MPICH, 4.2.0, is now available for download. This is the first stable release in the 4.2.x series. It can be found on our downloads page. For the full set of commits see https://github.com/pmodels/mpich/compare/v4.1.2...v4.2.0/
    ==============================================================
                           Changes in 4.2.0
    ==============================================================
    # Complete support for the MPI 4.1 specification
    
    # Experimental thread communicator feature (e.g. MPIX_Threadcomm_init).
    See paper "Frustrated With MPI+Threads? Try MPIxThreads!",
    https://doi.org/10.1145/3615318.3615320.
    
    # Experimental datatype functions MPIX_Type_iov_len and MPIX_Type_iov
    
    # Experimental op MPIX_EQUAL for MPI_Reduce and MPI_Allreduce
    (intracommunicators only)
    
    # Use --with-{pmi,pmi2,pmix}=[path] to configure external PMI library.
    The convenience options for Slurm and Cray are deprecated. Use
    --with-pmi=oldcray for older Cray environments.
    
    # Error checking default changed to runtime (used to be all).
    
    # Use the error handler bound to MPI_COMM_SELF as the default error
    handler.
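
    For example, to make errors returned rather than fatal by default (a minimal
    sketch; MPI_ERRORS_RETURN is just one choice of handler):

      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          /* With the change above, the handler attached to MPI_COMM_SELF acts
           * as the default error handler, so errors not tied to another
           * object report through it instead of aborting the job. */
          MPI_Comm_set_errhandler(MPI_COMM_SELF, MPI_ERRORS_RETURN);

          MPI_Finalize();
          return 0;
      }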
    
    # Use ierror instead of ierr in the "use mpi" Fortran interface. This
    affects user code that passes the argument with an explicit keyword,
    e.g. call MPI_Init(ierr=arg). "ierror" is the correct name specified in
    the MPI specification. Subroutine interfaces were only added to
    "mpi.mod" in 4.1.
    
    # Handle conversion functions, such as MPI_Comm_c2f, MPI_Comm_f2c, etc.,
    are no longer macros. MPI-4.1 requires these to be actual functions.
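
    One practical consequence, shown in a minimal sketch: because the
    conversions are now true functions, their addresses can be taken, which
    would not compile against a macro-only implementation:

      #include <mpi.h>

      /* Taking the address of MPI_Comm_c2f is only possible now that it is a
       * real function rather than a macro. */
      static MPI_Fint (*to_fint)(MPI_Comm) = MPI_Comm_c2f;

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          MPI_Fint fcomm = to_fint(MPI_COMM_WORLD);
          (void) fcomm;
          MPI_Finalize();
          return 0;
      }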
    
    # Yaksa updated to auto-detect the GPU architecture and build only for
    the detected arch. This applies to CUDA and HIP support.
    
    # MPI_Win_shared_query can be used on windows created by MPI_Win_create
    and MPI_Win_allocate, in addition to windows created by
    MPI_Win_allocate_shared. MPI_Win_allocate will create shared memory
    whenever feasible, including between spawned processes on the same
    node.
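
    A minimal sketch of the relaxed usage, assuming the ranks share a node so
    that shared memory is feasible (error handling omitted):

      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          /* An ordinary allocated window, not MPI_Win_allocate_shared... */
          int *base;
          MPI_Win win;
          MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                           MPI_COMM_WORLD, &base, &win);

          /* ...can now be queried for a peer's segment directly when the
           * implementation backed the window with shared memory. */
          MPI_Aint size;
          int disp_unit;
          int *peer;
          MPI_Win_shared_query(win, 0, &size, &disp_unit, &peer);

          MPI_Win_free(&win);
          MPI_Finalize();
          return 0;
      }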
    
    # Fortran mpi.mod supports Type(c_ptr) buffer output for MPI_Alloc_mem,
    MPI_Win_allocate, and MPI_Win_allocate_shared.
    
    # New functions added in MPI-4.1: MPI_Remove_error_string,
    MPI_Remove_error_code, and MPI_Remove_error_class
    
    # New functions added in MPI-4.1: MPI_Request_get_status_all,
    MPI_Request_get_status_any, and MPI_Request_get_status_some.
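
    A sketch by analogy with the existing MPI_Request_get_status; the exact
    _all prototype is assumed here, not quoted from the standard:

      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int val = 1;
          MPI_Request reqs[2];
          MPI_Ibarrier(MPI_COMM_WORLD, &reqs[0]);
          MPI_Iallreduce(MPI_IN_PLACE, &val, 1, MPI_INT, MPI_SUM,
                         MPI_COMM_WORLD, &reqs[1]);

          /* Like MPI_Testall, but completed requests are not freed
           * (assumed prototype). */
          int all_done = 0;
          MPI_Request_get_status_all(2, reqs, &all_done, MPI_STATUSES_IGNORE);

          MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
          MPI_Finalize();
          return 0;
      }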
    
    # New function added in MPI-4.1: MPI_Type_get_value_index.
    
    # New functions added in MPI-4.1: MPI_Comm_attach_buffer,
    MPI_Session_attach_buffer, MPI_Comm_detach_buffer,
    MPI_Session_detach_buffer, MPI_Buffer_flush, MPI_Comm_flush_buffer,
    MPI_Session_flush_buffer, MPI_Buffer_iflush, MPI_Comm_iflush_buffer,
    and MPI_Session_iflush_buffer. Also added constant
    MPI_BUFFER_AUTOMATIC to allow automatic buffers.
    
    # Support for "mpi_memory_alloc_kinds" info key. Memory allocation kind
    requests can be made via argument to mpiexec, or as info during
    session creation. Kinds supported are "mpi" (with standard defined
    restrictors) and "system". Queries for supported kinds can be made on
    MPI objects such as sessions, comms, windows, or files. MPI 4.1 states
    that supported kinds can also be found in MPI_INFO_ENV, but it was
    decided at the October 2023 meeting that this was a mistake and will
    be removed in an erratum.
    
    # Fix potential crash in GPU memory hooks
  • A new release candidate of MPICH, 4.2.0rc3, is now available on our downloads page. This release includes a fix for a critical bug in GPU support initialization when no devices are available. See CHANGES for all detailed release info.

    The full set of git commits since 4.1.2 can be found here: https://www.mpich.org/static/downloads/4.2.0rc3/shortlog

    At this stage, only critical bugs will be fixed before the 4.2.0 GA release. Please give it a try and report any issues!

  • A new release candidate of MPICH, 4.2.0rc2, is now available on our downloads page. This release includes a fix for a critical bug in the CUDA memory hooks discovered in testing by the PETSc team. See CHANGES for all detailed release info.

    The full set of git commits since 4.1.2 can be found here: https://www.mpich.org/static/downloads/4.2.0rc2/shortlog

    At this stage, only critical bugs will be fixed before the 4.2.0 GA release. Please give it a try and report any issues!

  • A new release candidate of MPICH, 4.2.0rc1, is now available on our downloads page. This release includes support for the newly ratified MPI-4.1 standard. See CHANGES for more detailed info.

    The full set of git commits since 4.1.2 can be found here: https://www.mpich.org/static/downloads/4.2.0rc1/shortlog

    At this stage, only critical bugs will be fixed before the 4.2.0 GA release. Please give it a try and report any issues!

  • A new release of MPICH, 4.2.0b1, is now available for download. This is the first beta release for the 4.2 series. This release provides complete feature support for the newly ratified MPI 4.1 specification (https://www.mpi-forum.org/docs/). See links to the changelog and full set of commits since 4.1.2: https://github.com/pmodels/mpich/blob/v4.2.0b1/CHANGES https://github.com/pmodels/mpich/compare/v4.1.2...v4.2.0b1
  • A new stable release of MPICH, 4.1.2, is now available for download. This is the second bug-fix release in the 4.1.x series. This release addresses a number of user-reported crashes and build issues. See links to the changelog and full set of commits since 4.1.1: https://github.com/pmodels/mpich/blob/v4.1.2/CHANGES https://github.com/pmodels/mpich/compare/v4.1.1...v4.1.2

  • A new stable release of MPICH, 4.1.1, is now available for download. This is the first bug-fix release in the 4.1.x series. This release contains fixes for MPI Sessions, mpif.h Fortran support, MPI_IALLTOALLW, persistent MPI_ALLGATHER, testsuite usage with non-MPICH libraries, and updates to the embedded UCX and yaksa submodules. The full set of changes since 4.1 can be found here: https://github.com/pmodels/mpich/compare/v4.1...v4.1.1

  • A new stable release of MPICH, 4.1, is now available for download. This is the first stable release in the 4.1.x series. Major features in this release include:
    • Standalone libpmi
    • Independent use of MPICH testsuite
    • Experimental MPIX stream prototypes to support explicit threading context and asynchronous enqueue to GPU streams. Both CUDA and HIP are supported.
    • Significant refactor in PMI and Hydra
    The full set of changes since 4.0.3 can be found here: https://github.com/pmodels/mpich/compare/v4.0.3...v4.1
    You can find the release on our downloads page.
  • A new release candidate of MPICH, 4.1rc4, is now available for download. This release contains fixes for shared memory initialization and MPI_Allgatherv calls on intracommunicators of size=1.

    As a reminder, only critical bugs will be fixed before the 4.1 GA release. Please give it a try and report any issues!

  • A new release candidate of MPICH, 4.1rc3, is now available for download. This release contains a fix for the MPIR_proctable implementation in Hydra. Thanks to the DDT folks for the report. As a reminder, only critical bugs will be fixed before the 4.1 GA release. Please give it a try and report any issues!
  • A new release candidate of MPICH, 4.1rc2, is now available for download. This release contains fixes for a few user reported issues, as well as improvements to the release packaging process. At this stage, only critical bugs will be fixed before the 4.1 GA release. Please give it a try and report any issues!
  • A new release candidate of MPICH, 4.1rc1, is now available for download. Major features in this release include:

    • Standalone libpmi
    • Independent use of MPICH testsuite
    • Experimental MPIX stream prototypes to support explicit threading context and asynchronous enqueue to GPU streams. Both CUDA and HIP are supported.
    • Significant refactor in PMI and Hydra

    The full set of changes since 4.0.3 can be found here: https://github.com/pmodels/mpich/compare/v4.0.3...v4.1rc1
    You can find the release on our downloads page.

    At this stage, only critical bugs will be fixed before the 4.1 GA release. Please give it a try and report any issues!

  • A new beta release of MPICH, 4.1b1, is now available for download. This is the first feature-complete release in the 4.1 series. Major features in this release include:
    • Standalone libpmi
    • Independent use of MPICH testsuite
    • Experimental MPIX stream prototypes to support explicit threading context and asynchronous enqueue to GPU streams. Both CUDA and HIP are supported.
    • Significant refactor in PMI and Hydra
    The full set of changes since 4.0.3 can be found here: https://github.com/pmodels/mpich/compare/v4.0.3...v4.1b1
  • A new stable release of MPICH, 4.0.3, is now available for download. This is the third bug-fix release in the 4.0.x series. This release contains fixes for the Message Queue Dumping Interface, HIP GPU support, ROMIO, Intel OneAPI compilers, and other user-reported bugs and build issues. The full set of changes since 4.0.2 can be found here: https://github.com/pmodels/mpich/compare/v4.0.2...v4.0.3

  • A new alpha release of MPICH, 4.1a1, is now available for download. This is the first alpha release in the 4.1 release series. This release contains PMI refactoring, MPIX stream experimental APIs, build enhancements, and many fixes and improvements. The full set of changes since 4.0.2 can be found here: https://github.com/pmodels/mpich/compare/v4.0.2...v4.1a1.

  • A new stable release of MPICH, 4.0.2, is now available for download. This is the second bug-fix release in the 4.0.x series. This release contains fixes for CUDA, Level Zero, FreeBSD, Fortran, Intel compilers on macOS, PMI2, musl libc, and other user-reported bugs and build issues. The full set of changes since 4.0.1 can be found here: https://github.com/pmodels/mpich/compare/v4.0.1...v4.0.2

  • A new stable release of MPICH, 4.0.1, is now available for download. This is the first bug-fix release in the 4.0.x series. This release contains fixes for Hydra, NVIDIA compiler support, macOS support, Fortran, MPI Sessions, documentation, and other user-reported bugs and build issues. The full set of changes since 4.0 can be found here: https://github.com/pmodels/mpich/compare/v4.0...v4.0.1

  • A new stable release of MPICH, 4.0, is now available for download. This is the first stable release in the 4.0.x series. In addition to support for all new MPI-4.0 APIs, this release contains significant advances in GPU and multithreaded application support. Also included are numerous bug fixes and code quality improvements.

    See https://github.com/pmodels/mpich/blob/v4.0/CHANGES for a summary of all changes since v3.4.3. The full list of commits can be found here.

    Production environments are encouraged to upgrade to this release.

  • A new release candidate of MPICH, 4.0rc3, is now available for download. This is the third release candidate in the 4.0 series, and contains critical fixes found during the RC period. The full set of changes since 4.0rc1 can be found here: https://github.com/pmodels/mpich/compare/v4.0rc1...v4.0rc3

    You can find the release on our downloads page.

    At this stage, only critical bugs will be fixed before the 4.0 GA release. Please give it a try and report any issues!

  • A new release candidate of MPICH, 4.0rc1, is now available for download. This is the first release candidate in the 4.0 series. This release is feature complete and includes support for all new MPI-4.0 APIs. The full set of changes since 4.0b1 can be found here: https://github.com/pmodels/mpich/compare/v4.0b1...v4.0rc1.

    Also in this release are several bug fixes (thanks to user reports) and improvements in ch3, hydra, and common MPICH code. You can find the release on our downloads page.

    At this stage, only critical bugs will be fixed before the 4.0 GA release. Please give it a try and report any issues!

  • A new stable release of MPICH, 3.4.3, is now available for download. This is the third bug-fix release in the 3.4.x series. This release contains fixes for Hydra, GPU support, ROMIO, datatypes, and other user-reported bugs and build issues. The full set of changes since 3.4.2 can be found here: https://github.com/pmodels/mpich/compare/v3.4.2...v3.4.3

    Production environments are encouraged to upgrade to this release.

  • A new beta release of MPICH, 4.0b1, is now available for download. This is the first beta release in the 4.0 series. This release is feature complete and includes support for all new MPI-4.0 APIs. The full set of changes since 4.0a2 can be found here: https://github.com/pmodels/mpich/compare/v4.0a2...v4.0b1.

    Also in this release are several bug fixes (thanks to user reports) and improvements in ch3, hydra, and common MPICH code. You can find the release on our downloads page.

  • A new alpha release of MPICH, 4.0a2, is now available for download. This is the second alpha release in the 4.0 series. This release contains full implementations for the just-published MPI 4.0 specification. It also includes a significant number of bug fixes and enhancements in GPU and threading support. The release is fully backward compatible. The full set of changes since 4.0a1 can be found here: https://github.com/pmodels/mpich/compare/v4.0a1...v4.0a2.

  • A new stable release of MPICH, 3.4.2, is now available for download. This is the second bug-fix release in the 3.4.x series. This release contains fixes for GPU support and other user-reported bugs and build issues. The full set of changes since 3.4.1 can be found here: https://github.com/pmodels/mpich/compare/v3.4.1...v3.4.2

    Production environments are encouraged to upgrade to this release.

  • A new alpha release of MPICH, 4.0a1, is now available for download. This is the first alpha release in the 4.0 series. This release contains implementations for the upcoming MPI 4.0 specification. Since the MPI 4.0 specification is still in draft status, all the new API functions use the MPIX prefix in this release.

  • A new stable release of MPICH, 3.4.1, is now available for download. This is the first bug-fix release in the 3.4.x series. This release contains critical fixes for GPU support and addresses a compatibility issue for MPI applications that use jansson (https://digip.org/jansson/) for manipulating JSON data.

    Production environments are encouraged to upgrade to this release.

  • A new stable release of MPICH, 3.4, is now available for download. This is the first stable release in the 3.4.x series. In this release, ch4 replaces ch3 as the default device configuration. New features include GPU support, Yaksa datatype engine integration, multi-VCI communication (experimental), XPMEM support, DAOS support in ROMIO (thanks to Intel contribution), and business card exchange enhancements.

    Also in this release are several fixes and improvements in ch3, hydra, and common MPICH code. You can find the release on our downloads page.

    Production environments are encouraged to upgrade to this release.

  • A new release candidate of MPICH, 3.4rc1, is now available for download. In the 3.4 series, ch4 replaces ch3 as the default device configuration. New features include GPU support, Yaksa datatype engine integration, multi-VCI communication (experimental), XPMEM support, DAOS support in ROMIO (thanks to Intel contribution), and business card exchange enhancements.

    Also in this release are several fixes and improvements in ch3, hydra, and common MPICH code. You can find the release on our downloads page.

    Unless critical bugs are found, this release will be identical to the final 3.4 GA release (apart from the version number change).

  • A new preview release of MPICH, 3.4b1, is now available for download. This previews the 3.4 major release series. In the 3.4 series, ch4 replaces ch3 as the default device configuration. New features include GPU support, Yaksa datatype engine integration, multi-VCI communication (experimental), XPMEM support, DAOS support in ROMIO (thanks to Intel contribution), business card exchange enhancements, and support for NVIDIA/Arm HPC compilers.

    Also in this release are several fixes and improvements in ch3, hydra, and common MPICH code. You can find the release on our downloads page.

  • A new preview release of MPICH, 3.4a3, is now available for download. This previews the 3.4 major release series. In the 3.4 series, ch4 replaces ch3 as the default device configuration. New features include GPU support, multi-VCI communication (experimental), XPMEM support, DAOS support in ROMIO (thanks to Intel contribution), business card exchange enhancements, and support for Arm HPC compilers.

    Also in this release are several fixes and improvements in ch3, hydra, and common MPICH code. You can find the release on our downloads page.

  • A new preview release of MPICH, 3.4a2, is now available for download. This previews the 3.4 major release series. In the 3.4 series, ch4 replaces ch3 as the default device configuration. Other new features include DAOS support in ROMIO (thanks to Intel contribution), business card exchange enhancements, and support for Arm HPC compilers.

    Also in this release are several fixes and improvements in ch3, hydra, and common MPICH code. You can find the release on our downloads page.

  • A new stable release of MPICH, 3.3.2, is now available for download. This is the second bug-fix release in the 3.3.x series. A detailed list of changes can be found here: https://github.com/pmodels/mpich/blob/v3.3.2/CHANGES

    All production environments are encouraged to upgrade to this release.

  • A new stable release of MPICH, 3.3.1, is now available for download. This is the first bug-fix release in the 3.3.x series. A detailed list of changes can be found here: https://github.com/pmodels/mpich/blob/3.3.x/CHANGES

    All production environments are encouraged to upgrade to this release.

  • Thanks to all who attended and participated in this year’s MPICH BoF at Supercomputing in Dallas, Texas. We appreciate the chance to discuss MPICH in person and see what all our partners are up to with regard to their implementations. Slides from the MPICH team update are available here.

    We hope to see everyone again next year in Denver!

  • A new stable release of MPICH, 3.3, is now available for download. This is the first stable release in the 3.3 series. It contains several new features, enhancements and bug fixes. A detailed list of changes can be found here: https://github.com/pmodels/mpich/blob/master/CHANGES

    All production environments are encouraged to upgrade to this release.

  • A new release candidate of MPICH, 3.3rc1, is now available for download. MPICH 3.3 contains a new (non-default) device layer implementation – CH4. CH4 is designed for low software overheads to better exploit next-generation hardware. An OFI (http://libfabric.org) or UCX (http://openucx.org) library is required to build CH4. Example configure lines:

    ./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install>
    ./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>

    Unless critical bugs are found, this release will be identical to the final 3.3 GA release (apart from the version number change).

  • A new preview release of MPICH, 3.3b3, is now available for download. MPICH 3.3 contains a new (non-default) device layer implementation – CH4. CH4 is designed for low software overheads to better exploit next-generation hardware. An OFI (http://libfabric.org) or UCX (http://openucx.org) library is required to build CH4. Example configure lines:

    ./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install>
    ./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>

    CH4 is still in beta stages, meaning there are known build issues and bugs, but most tests and common benchmarks will complete on 64-bit Linux systems. Since 3.3b1, there have been more stability improvements, bug fixes, and code cleanup. PMIx support is added in CH4 for this release to support launching with any PMIx compatible server.

    Also in this release is a reorganization of MPI collectives to make it easier to integrate new algorithms. A new framework is added to enable collective algorithms to be issued either on generic or device-specific functionality using a C++-template style system. Support is added for creating communicators based on hardware and network topology hints, and SLURM integration in Hydra is updated to work with the latest node list format. You can find the release on our downloads page.

  • A new preview release of MPICH, 3.3b2, is now available for download. MPICH 3.3 contains a new (non-default) device layer implementation – CH4. CH4 is designed for low software overheads to better exploit next-generation hardware. An OFI (http://libfabric.org) or UCX (http://openucx.org) library is required to build CH4. Example configure lines:

    ./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install>
    ./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>

    CH4 is still in beta stages, meaning there are known build issues and bugs, but most tests and common benchmarks will complete on 64-bit Linux systems. Since 3.3b1, there have been more stability improvements, bug fixes, and code cleanup. PMIx support is added in CH4 for this release to support launching with any PMIx compatible server.

    Also in this release is a reorganization of MPI collectives to make it easier to integrate new algorithms. A new framework is added to enable collective algorithms to be issued either on generic or device-specific functionality using a C++-template style system. Support is added for creating communicators based on hardware topology hints, and SLURM integration in Hydra is updated to work with the latest node list format. You can find the release on our downloads page.

  • A new preview release of MPICH, 3.3b1, is now available for download. MPICH 3.3 contains a new (non-default) device layer implementation – CH4. CH4 is designed for low software overheads to better exploit next-generation hardware. An OFI (http://libfabric.org) or UCX (http://openucx.org) library is required to build CH4. Example configure lines:

    ./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install>
    ./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>

    CH4 is still in beta stages, meaning there are known build issues and bugs, but most tests and common benchmarks will complete on 64-bit Linux systems. Since 3.3a3, there have been more stability improvements, bug fixes, and code cleanup.

    Also in this release is a reorganization of MPI collectives to make it easier to integrate new algorithms. A new framework is added to enable collective algorithms to be issued either on generic or device-specific functionality using a C++-template style system. Support is added for creating communicators based on hardware topology hints, and SLURM integration in Hydra is updated to work with the latest node list format. You can find the release on our downloads page.

  • A new preview release of MPICH, 3.3a3, is now available for download. MPICH 3.3 contains a new (non-default) device layer implementation – CH4. CH4 is designed for low software overheads to better exploit next-generation hardware. An OFI (http://libfabric.org) or UCX (http://openucx.org) library is required to build CH4. Example configure lines:

    ./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install>
    ./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>

    CH4 is still in alpha stages, meaning there are known build issues and bugs, but most tests and common benchmarks will complete on 64-bit Linux systems. Since 3.3a2, there have been more stability improvements, bug fixes, and code cleanup.

    Also in this release is a reorganization of MPI collectives to make it easier to integrate new algorithms. Support is added for creating communicators based on hardware topology hints, and SLURM support in Hydra is updated to work with the latest node list format. You can find the release on our downloads page.

  • We are glad to announce the release of MPICH 3.2.1. This is a stable maintenance release in the 3.2 series including a number of bug fixes and enhancements. All production environments are encouraged to upgrade to this release.
  • A new preview release of MPICH, 3.3a2, is now available for download. MPICH 3.3 contains a new (non-default) device layer implementation – CH4. CH4 is designed for low software overheads to better exploit next-generation hardware. An OFI (http://libfabric.org) or UCX (http://openucx.org) library is required to build CH4. Example configure lines:

    ./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install>
    ./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>

    CH4 is still in alpha stages, meaning there are known build issues and bugs, but most tests and common benchmarks will complete on 64-bit Linux systems. Since 3.3a1, there have been major stability improvements, bug fixes, and code cleanup. CH4 support for Blue Gene/Q is added in this release (via the OFI/BGQ provider), as well as Mac OS builds.

    Also in this release are several bug fixes and improvements to CH3, Hydra, and common MPICH code. You can find the release on our downloads page.

  • The MPICH team will participate in several events at SC ’16 (http://sc16.supercomputing.org), including tutorials, workshops, BoFs and show floor exhibits. Come and say hi!

    Tutorials / Workshops
    • Nov. 12 (Sat) / 01:00pm – 05:00pm / Sheraton SLC Hotel / Intel HPC Developer Conference
      • Ken Raffenetti – Presenter (3:40pm) “Next Generation MPICH: What to Expect – Lightweight communication and much more!”
    • Nov. 13 (Sun) / 09:00am – 05:30pm / 155-A / ExaMPI16
      • Pavan Balaji – Organizer
      • Ken Raffenetti – Panel Moderator (3:30pm)
    • Nov. 13 (Sun) / 09:00am-05:30pm / 355-D / Women in HPC: Diversifying the HPC Community
      • Lena Oden – Poster and Lightning talk “Towards efficient usage of heterogeneous memory” (12:30pm-3:30pm)
    • Nov. 14 (Mon) / 09:00am-05:30pm / 155-C / PDSW-DISCS
      • Rob Latham – Paper (3:30pm) “Get out of the way! Applying compression to internal data structures”
    • Nov. 14 (Mon) / 02:00pm-05:30pm / 251-B / PGAS Applications Workshop
      • Min Si – Panelist (5:00pm)
    • Nov. 18 (Fri) / 08:30am-12:00pm / 155-A / RESPA
      • Pavan Balaji – Keynote talk (8:40am)
    BoFs / Show Floor
    • Nov. 14 (Mon) / 07:00pm-09:00pm / PGAS Booth (#537)
      • Min Si – (poster) “Accelerating the Global Arrays PGAS Model with Efficient and Portable Asynchronous Communication in MPI RMA”
    • Nov. 15 (Tue) / 11:15am-11:35am / OpenMP Booth (#611)
      • Sangmin Seo – (talk) BOLT: OpenMP over Lightweight Threads
    • Nov. 15 (Tue) / 02:00pm-02:30pm / Mellanox Booth (#2631)
      • Lena Oden – (talk) MPICH/UCX
  • The MPICH project has migrated primary development and bug tracking activity to GitHub (http://github.com/pmodels/mpich). Users are encouraged to fork the repository, open bug reports, and submit PRs to improve the code! Enjoy!
  • A new preview release of MPICH, 3.3a1, is now available for download. This preview is the first in the 3.3 major release series. The major focus of this alpha is a new (non-default) device layer implementation – CH4. CH4 is designed for low software overheads to better exploit next-generation hardware. An OFI (http://libfabric.org) or UCX (http://openucx.org) library is required to build CH4. Example configure lines:

    ./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install>
    ./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>

    CH4 is still in alpha stages, meaning there are known build issues and bugs, but most tests and common benchmarks will pass on 64-bit Linux.

    Also in this release are several bug fixes and improvements to CH3, Hydra, and common MPICH code. You can find the release on our downloads page.

  • Thanks to all who attended and gave presentations at this year’s MPICH Birds-of-a-Feather session at Supercomputing in Austin, TX. It was great to see everyone and discuss the future plans for MPICH in person. Aggregated slides from all presentations can be found here: SC15-MPICH-BoF
  • A new stable release of MPICH, 3.2, is now available for download. This is the first stable release in the 3.2 series. It contains several new features, enhancements and bug fixes. A detailed list of changes can be found here: http://git.mpich.org/mpich.git/blob_plain/v3.2:/CHANGES
    All production environments are encouraged to upgrade to this release.
  • A new preview release of MPICH, 3.2rc2, is now available for download. This preview release addresses a number of bugs that were present in the previous release candidate. Unless other critical bugs are found, this release will be identical to the final 3.2 GA release (apart from the version number change).
  • The MPICH team will have a series of events at SC ’15 (http://sc15.supercomputing.org), including tutorials, talks, posters, BoFs, and demos. Come and meet us at the following events:

    Tutorials / Keynote speech
    • Nov. 16 (Mon) / Pavan Balaji @ ExaMPI
    Papers / Posters / BoFs / Demos/Presentations
    • Nov. 17 (Tue) / 01:45pm – 04:00pm / DOE Booth (#502) / ARGO: An Exascale Operating System and Runtime
  • A new preview release of MPICH, 3.2rc1, is now available for download. This preview release addresses a number of bugs that were present in previous releases. Unless other critical bugs are found, this release will be identical to the final 3.2 GA release (apart from the version number change).
  • A new preview release of MPICH, 3.2b4, is now available for download. This preview release adds several capabilities including support for the MPI-3.1 standard (contains nonblocking collective I/O), full Fortran 2008 support (enabled by default), support for the Mellanox MXM interface for InfiniBand, support for the Mellanox HCOLL interface for collective communication, support for OFED InfiniBand for Xeon and Xeon Phi architectures, significant improvements to the MPICH Portals 4 implementation, and a completely revamped RMA infrastructure providing scalability and performance improvements. This preview release includes all the features planned for the 3.2.x release series (complete list: https://trac.mpich.org/projects/mpich/roadmap).
  • A new preview release of MPICH, 3.2b3, is now available for download. This preview release adds several capabilities including support for the MPI-3.1 standard (contains nonblocking collective I/O), full Fortran 2008 support (enabled by default), support for the Mellanox MXM interface for InfiniBand, support for the Mellanox HCOLL interface for collective communication, support for OFED InfiniBand for Xeon and Xeon Phi architectures, significant improvements to the MPICH Portals 4 implementation, and a completely revamped RMA infrastructure providing scalability and performance improvements. This preview release includes all the features planned for the 3.2.x release series (complete list: https://trac.mpich.org/projects/mpich/roadmap).
  • A new preview release of MPICH, 3.2b2, is now available for download. This preview release adds several capabilities including support for the proposed MPI-3.1 standard (contains nonblocking collective I/O), full Fortran 2008 support (enabled by default), support for the Mellanox MXM interface for InfiniBand, support for the Mellanox HCOLL interface for collective communication, support for OFED InfiniBand for Xeon and Xeon Phi architectures, significant improvements to the MPICH Portals 4 implementation, and a completely revamped RMA infrastructure providing scalability and performance improvements. This preview release includes all the features planned for the 3.2.x release series (complete list: https://trac.mpich.org/projects/mpich/roadmap).
  • A new preview release of MPICH, 3.2b1, is now available for download. This preview release adds several capabilities including support for the proposed MPI-3.1 standard (contains nonblocking collective I/O), full Fortran 2008 support (enabled by default), support for the Mellanox MXM interface for InfiniBand, support for the Mellanox HCOLL interface for collective communication, support for OFED InfiniBand for Xeon and Xeon Phi architectures, significant improvements to the MPICH Portals 4 implementation, and a completely revamped RMA infrastructure providing scalability and performance improvements. This preview release includes all the features planned for the 3.2.x release series (complete list: https://trac.mpich.org/projects/mpich/roadmap).
  • We are glad to announce the release of MPICH 3.1.4. This is a stable maintenance release in the 3.1 series including a number of bug fixes and enhancements to our test suite. All production environments are encouraged to upgrade to this release.
  • Our annual MPICH BoF (Birds of a Feather) session at Supercomputing 2014 was a great chance for the MPICH team, vendor partners, and users to meet up and discuss recent developments and future plans. The slides from the BoF have now been posted here. Thanks to all who participated, including:
    • Pavan Balaji – Argonne
    • Jeff Hammond – Intel
    • Brad Benton – AMD
    • Heidi Poxon – Cray
    • Craig Stunkel – IBM
    • Bill Magro – Intel
    • Rich Graham – Mellanox
    • Chulho Kim – Lenovo
    • Fab Tillier – Microsoft
    • Yutaka Ishikawa – Riken
    • Ken Raffenetti – Argonne
    • Wesley Bland – Argonne
    We look forward to seeing everyone next year at SC ’15 in Austin, TX!
  • The MPICH group will have a series of events at SC ’14 (http://sc14.supercomputing.org), including talks, posters, BoFs, tutorials and demos. You can download the SC14-mpich-flyer here. Come and meet us at the following events:
    • Papers
      • Mon / 04:10pm – 04:40pm / 286-7 / Simplifying the Recovery Model of User-Level Failure Mitigation
      • Wed / 10:30am – 11:00am / 393-4-5 / Nonblocking Epochs in MPI One-Sided Communication (Best Paper Finalist)
      • Wed / 11:30am – 12:00pm / 393-4-5 / MC-Checker: Detecting Memory Consistency Errors in MPI One-Sided Applications
    • Posters
      • Tue / 05:15pm – 07:00pm / Lobby / Using Global View Resilience (GVR) to add Resilience to Exascale Applications (Best Poster Finalist)
    • BoFs
      • Tue / 05:30pm – 07:00pm / 386-7 / MPICH: A High-Performance Open-Source MPI Implementation
      • Wed / 05:30pm – 07:00pm / 293 / The Message Passing Interface: MPI 3.1 and Plans for MPI 4.0
    • Tutorials
      • Mon / 08:30am – 05:00pm / 389 / Advanced MPI Programming, by Pavan Balaji, William Gropp, Torsten Hoefler, Rajeev Thakur
      • Mon / 08:30am – 05:00pm / 386-7 / Parallel I/O In Practice, by Robert J. Latham, Robert Ross, Brent Welch, Katie Antypas
    • Demos
      • Tue / 04:20pm – 05:00pm / UTK/NICS Booth #2925 / Argo Runtime for Massive Concurrency
      • Wed / 11:00am – 01:00pm / DOE Booth #1939 / ARGO: An Exascale Operating System and Runtime
  • A new preview release of MPICH, 3.2a2, is now available for download. This preview release adds several capabilities including support for the proposed MPI-3.1 standard (contains nonblocking collective I/O), full Fortran 2008 support (enabled by default), support for the Mellanox MXM interface for InfiniBand, support for the Mellanox HCOLL interface for collective communication, support for OFED InfiniBand for Xeon and Xeon Phi architectures, and significant improvements to the MPICH/portals4 implementation. These features represent a subset of those planned for the 3.2.x release series (complete list: https://trac.mpich.org/projects/mpich/roadmap).
  • The MPICH team is pleased to announce the availability of a new stable release, mpich-3.1.3. This is a stable release that adds several enhancements to Portals4 support, PAMI, RMA, and ROMIO. It also contains a large number of bug fixes. All production environments are encouraged to upgrade to this release.
  • A new preview release of MPICH, 3.2a1, is now available for download. This preview release is the first in a new major version series in mpich (3.2.x), and adds several capabilities including full Fortran 2008 support (enabled by default), support for the Mellanox MXM interface for InfiniBand, and support for OFED InfiniBand for Xeon and Xeon Phi architectures. These features represent a subset of those planned for the 3.2.x release series (complete list: https://trac.mpich.org/projects/mpich/roadmap).
  • A new stable release of MPICH, 3.1.2, is now available for download. This release contains significant enhancements to the BG/Q device, especially for RMA and shared memory functionality. It also contains enhancements to ROMIO and upgrades hwloc to 1.9. In addition, it updates its weak alias support to align with gcc-4.x, provides a better implementation of MPI_Allreduce for intercommunicators, adds more F08 test cases, and fixes several bugs present in 3.1.1. All production environments are encouraged to upgrade to this release.
  • The MPICH team is pleased to announce the availability of a new stable release (mpich-3.1.1). This is a stable release that adds several capabilities including Blue Gene/Q implementation support for MPI-3, experimental Fortran 2008 bindings, a significant rework of MPICH library management, and a large number of bug fixes. All production environments are encouraged to upgrade to this release.
  • The MPICH team is pleased to announce the availability of a new stable release (mpich-3.1). This is a new major release that adds several capabilities including full binary (ABI) compatibility with Intel MPI 5.0, an integrated MPICH-PAMID code base for IBM BG/Q (contributed by IBM), several improvements to Intel Xeon Phi (contributed by Intel and NEC), a major revamp of the MPI_T tools interface, large improvements to one-sided communication in shared memory environments, a number of improvements to mpiexec process binding capabilities, various improvements to the support of large counts (more than 2GB data), and a large number of bug fixes. All production environments are encouraged to upgrade to this release.
  • The MPICH team is pleased to announce the availability of a new preview release (mpich-3.1rc4). This is a release candidate of the upcoming MPICH 3.1, and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, improvements to MPI RMA for shared memory communication, MPI large count 64-bit safety, and MPI Tools interface improvements. This release also fixes several bugs present in 3.0.4.
  • The MPICH team is pleased to announce the availability of a new preview release (mpich-3.1rc3). This is a release candidate of the upcoming MPICH 3.1, and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, improvements to MPI RMA for shared memory communication, MPI large count 64-bit safety, and MPI Tools interface improvements. This release also fixes several bugs present in 3.0.4.
  • We had a successful Birds-of-a-Feather session at SC13 this year. Rajeev Thakur, Pavan Balaji and Rusty Lusk from the MPICH group announced the MPICH ABI Compatibility Initiative. This BoF also provided a forum for users of MPICH as well as developers of MPI implementations derived from MPICH to discuss experiences and issues in using and porting MPICH. Future plans for MPICH were discussed. Representatives from Cray, IBM, Intel, Microsoft and University of Tokyo provided brief updates on the status of their efforts. Below are links to some of the slides.
  • The MPICH Birds-of-a-Feather session at SC13 saw the announcement of the MPICH ABI Compatibility Initiative. The goal of the initiative is for all participating implementations to be binary compatible, and to agree on a schedule for necessary ABI changes in future releases. More information can be found in the MCS news and on the MPICH ABI Page.
  • This November, the MPICH team celebrates the 21st anniversary of the project. The project began in November 1992, and its name was finalized in March 1993. It began as a reference implementation for the new MPI Standard. Since then, it has become the basis for numerous derivative implementations and has grown to be used on the largest and fastest machines in the world, including 9 of the top 10 supercomputers according to the most recent Top500 list. Thanks for continuing to use MPICH for all of this time!
  • The MPICH team is pleased to announce the availability of a new preview release (mpich-3.1rc2). This is a release candidate of the upcoming MPICH 3.1, and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, improvements to MPI RMA for shared memory communication, MPI large count 64-bit safety, and MPI Tools interface improvements. This release also fixes several bugs present in 3.0.4.
  • The MPICH team is pleased to announce the availability of a new preview release: 3.1rc1. This is a release candidate of the upcoming MPICH 3.1, and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, improvements to MPI RMA for shared memory communication, MPI large count 64-bit safety, and MPI Tools interface improvements. This release also fixes several bugs present in 3.0.4.
  • A new preview release of MPICH, 3.1b1, is now available for download. This preview release is the first in a new major version series in mpich (3.1.x), and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, and improvements to MPI RMA for shared memory communication. This release also fixes several bugs present in 3.0.4.
  • A new stable release of MPICH, 3.0.4, is now available for download. This release adds several performance features for the Hydra process manager, support for communicator-specific tuning of eager/rendezvous thresholds, and fixes several bugs present in 3.0.3. Please use this release instead of 3.0.3.
  • A new stable release of MPICH, 3.0.3, is now available for download. This release adds several performance features for MPI-RMA and fixes several bugs present in 3.0.2. Please use this release instead of 3.0.2.
  • A new stable release of MPICH, 3.0.2, is now available for download. This release fixes several bugs present in 3.0.1. Please use this release instead of 3.0.1.
  • Researchers at the University of British Columbia (UBC) have successfully executed MPI programs with over 100 million MPI processes on an MPICH derivative known as “Fine-Grain MPI”, or FG-MPI.
  • A new stable release of MPICH, 3.0.1, is now available for download. This release fixes a major Hydra bug that was present in MPICH 3.0. Please use this release instead of 3.0.
  • This release contains a bug in Hydra, which is fixed by 3.0.1. Please use 3.0.1 instead.

    A new stable release of MPICH, 3.0, is now available for download. The primary focus of this release is to provide full support for the MPI-3 standard. Other smaller features including support for ARM v7 native atomics and hwloc-1.6 are also included.

  • We had another successful Birds-of-a-Feather session at SC12 this year. Rusty Lusk and Pavan Balaji from the MPICH group gave presentations on the past and future of the MPICH project, followed by presentations by Bill Magro from Intel, Duncan Roweth from Cray, Mark Atkins from IBM and Fab Tillier from Microsoft. Below are links to some of the slides.
  • A new preview release of MPICH, 3.0rc1, is now available for download. The primary focus of this release is to provide full support for the MPI-3 standard. Other smaller features including support for ARM v7 native atomics are also included.
  • We will hold an MPICH Birds-of-a-Feather session at SC12 in Salt Lake City. The session will provide a forum for users of MPICH as well as developers of MPI implementations derived from MPICH to discuss experiences and issues in using and porting MPICH. Future plans for MPI-3 support will be discussed. Representatives from MPICH-derived implementations will provide brief updates on the status of their efforts. MPICH developers will also be present for an open forum discussion. The session will be held on Tuesday November 13, 2012 from 12:15-1:15 (MST) in room 155-B.
  • MPICH2 is up and running on the Raspberry Pi (a credit-card sized computer). Installation instructions are available.
  • A new major release of MPICH2, 1.5, is now available for download. This release adds many new features including support for much of the MPI-3 standard, support for IBM Blue Gene/Q and Intel MIC platforms, and a completely overhauled build system that supports parallel make. This release also fixes many bugs in the Hydra process manager and various other parts of the MPICH2 code. Continue reading →
  • A feature preview release of MPICH2, 1.5rc3, is now available to download. This release contains many new features. This release is not recommended for production systems at this time. Continue reading →
  • A feature preview release of MPICH2, 1.5rc2, is now available to download. This release contains many new features. This release is not recommended for production systems at this time. Continue reading →
  • A feature preview release of MPICH2, 1.5rc1, is now available to download. This release contains many new features. This release is not recommended for production systems at this time. Continue reading →
  • A feature preview release of MPICH2, 1.5b2, is now available to download. This release contains many new features. This release is not recommended for production systems at this time. Continue reading →
  • A feature preview release of MPICH2, 1.5b1, is now available to download. This release contains many new features. This release is not recommended for production systems at this time. Continue reading →
  • A feature preview release of MPICH2, 1.5a2, is now available to download. This release contains many new features. This release is not recommended for production systems at this time. Continue reading →
  • A feature preview release of MPICH2, 1.5a1, is now available to download. This release contains many new features. This release is not recommended for production systems at this time. Continue reading →
  • A patch release of MPICH2, 1.4.1p1, is now available to download. This release addresses a bug in the Windows version. Continue reading →
  • A new release of MPICH2, 1.4.1, is now available to download. This is primarily a bug-fix release with a few new features. Continue reading →
  • A new preview release of MPICH2, 1.4.1rc1, is now available to download. This is primarily a bug-fix release with a few new features. Continue reading →
  • A new major release of MPICH2, 1.4, is now available to download. This release adds several new features including improved support for fault tolerance, support for the ARMCI API, and non-collective group creation functionality. This release also fixes many bugs in the Hydra process manager and various other parts of the MPICH2 code. Continue reading →
  • The second release candidate for a new major release of MPICH2, 1.4rc2, is now available to download. This release fixes several bugs in the Hydra process manager and other parts of the MPICH2 code base. Continue reading →
  • The release candidate for a new major release of MPICH2, 1.4rc1, is now available to download. This release fixes several bugs in the Hydra process manager and other parts of the MPICH2 code base. Continue reading →
  • The release candidate for a new bug-fix release of MPICH2, 1.3.3rc1, is now available to download. This release fixes several bugs in the Hydra process manager and other parts of the MPICH2 code base. Continue reading →
  • A new patch release of MPICH2, 1.3.2p1, is now available to download. This release fixes two critical bugs in MPICH2 for older GNU compilers. Continue reading →
  • A new release of MPICH2, 1.3.2, is now available to download. This release fixes several bugs in MPICH2’s fault tolerance capability and the Hydra process manager. Continue reading →
  • A new release candidate of MPICH2, 1.3.2rc1, is now available to download. This release fixes several bugs in MPICH2’s fault tolerance capability and the Hydra process manager. Continue reading →
  • A new release of MPICH2, 1.3.1, is now available to download. This is primarily a bug-fix release. A few new features have also been added, including complete support for FTB MPI events, improvements to RMA operations, and the ability to modify collective algorithm selection thresholds using environment variables. Continue reading →
  • A new preview release of MPICH2, 1.3.1rc1, is now available to download. This is primarily a bug-fix release. A few new features have also been added. Continue reading →
  • A new stable version of MPICH2, 1.3, has been released. It includes various new features, including the Hydra process manager and asynchronous communication progress, as well as several bug fixes and code cleanup. We recommend that all users of older MPICH2 releases upgrade to this version. Continue reading →
  • A new feature preview release of MPICH2, 1.3rc2, is now available to download. Early adopters are encouraged to try it out. Continue reading →
  • A new feature preview release of MPICH2, 1.3rc1, is now available to download. This release marks the completion of all the major features planned for this release. Early adopters are encouraged to try it out. Continue reading →
  • A new feature preview release of MPICH2, 1.3b1, is now available to download. Major features in this release include fine-grained threading and process manager support for multiple resource managers, including SGE, LSF, and POE. This release is not recommended for production systems at this time. Continue reading →
  • A new feature preview release of MPICH2, 1.3a2, is now available to download. This is the second preview release in the 1.3.x series. Major features in this release include checkpoint-restart capability using BLCR. This release is not recommended for production systems at this time. Continue reading →
  • A new stable version of MPICH2, 1.2.1p1, has been released. This is primarily a bug-fix release that fixes several issues on PowerPC systems, in the build system, and in the MPD process manager. Continue reading →
  • A new feature preview release of MPICH2, 1.3a1, is now available to download. This marks the start of a new major release series that includes several new features and optimizations, including a new default process manager, Hydra (replacing MPD). This release is not recommended for production systems at this time. Continue reading →
  • A new stable version of MPICH2, 1.2.1, has been released. This is primarily a bug-fix release that fixes several issues with MX and Hydra, adds Valgrind support for debugging builds, and adds support for the hwloc process-binding library, among other changes. Continue reading →
  • A release candidate for the next version of MPICH2, 1.2.1, is now available. This is primarily a bug-fix release. The release candidate is meant for early trials. Continue reading →
  • A new stable version of MPICH2, 1.2, has been released. It includes MPI-2.2 support, several bug fixes, and code cleanup. We recommend that all users of older MPICH2 releases upgrade to this version. Continue reading →
  • Members of the MPICH2 group are authors/coauthors of a total of five papers and two posters selected for EuroPVM/MPI 2009, two of which were selected as “Outstanding Papers”. One of these two papers studies the implications of scaling MPI to a million processes and presents several scalability optimizations within MPICH2 that set the stage for this. The second paper studies improvements to derived datatypes within MPICH2. Continue reading →
  • A patch release of MPICH2, 1.1.1p1, has been released. This release fixes several bugs present in the previous stable release, 1.1.1. We recommend that all MPICH2 users upgrade to this version. Continue reading →
  • A new stable version of MPICH2, 1.1, has been released. It has several new features, bug fixes, and code cleanup. The new features include MPI 2.1 support, BG/P support, an entirely new TCP communication method, SMP-aware collective operations, and a new process management framework called Hydra. We recommend that all users of older MPICH2 releases upgrade to this version. Continue reading →
  • A release candidate for Version 1.1 has been released. Continue reading →
  • A new patch has been released for the current stable version of MPICH2, 1.0.8p1. It fixes an MPI-I/O build issue with the latest release of PVFS2 and includes bug fixes for IA64 platforms and for MPICH2 over Myrinet GM. Continue reading →
  • A new version of MPICH2, 1.1b1, has been released. This is a pre-release in the MPICH2-1.1 series, including support for the Myrinet MX network module, improvements to shared-memory and regular collective communication, and support for a new and improved Lustre MPI-IO driver. Continue reading →
  • A new version of MPICH2, 1.1a2, has been released. This is an experimental pre-release intended for developers and advanced MPICH2 users. It has a number of new features, including MPI 2.1 support, BG/P support, an entirely new TCP communication method, SMP-aware collective operations, and a new process management framework called Hydra. This release is not recommended for production systems at this time. Continue reading →
  • A new version of MPICH2, 1.0.8, has been released. It has several new features, bug fixes, and code cleanup. See the CHANGES file for details. We recommend that all users of older 1.0.x releases of MPICH2 upgrade to this version. Continue reading →
  • Members of the MPICH2 group are authors/coauthors of a total of six papers selected for EuroPVM/MPI 2008, one of which was selected as an “Outstanding Paper”. The paper, “Non-Data-Communication Overheads in MPI: Analysis on Blue Gene/P” by Pavan Balaji, Anthony Chan, William Gropp, Rajeev Thakur, and Ewing Lusk, presents non-data-communication overheads within the MPI stack and their impact on performance on large-scale Blue Gene/P systems. Continue reading →
  • A new version of MPICH2, 1.1a1, has been released. This is an experimental pre-release intended for developers and advanced MPICH2 users. It has a number of new features, including MPI 2.1 support, BG/P support, and an entirely new TCP communication method. This release is not recommended for production systems. Continue reading →
  • A new version of MPICH2, 1.0.7, has been released. It has several new features, bug fixes, and code cleanup. See the CHANGES file for details. We recommend that you upgrade to this version if you are using an older version of MPICH2. Continue reading →
  • MPICH2 (Argonne National Laboratory), in collaboration with mpiBLAST (Virginia Tech), won the Storage Challenge Award at SC|07 in Reno, NV, for their entry entitled ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing. The team included Pavan Balaji (ANL), Wu Feng and Jeremy Archuleta (VT), and Heshan Lin (NCSU). The official press release for the awardees can be found here. Continue reading →
  • Members of the MPICH2 group are authors/coauthors of a total of six papers selected for EuroPVM/MPI 2007, two of which were selected as “Outstanding Papers”. The first paper, “Self-consistent MPI Performance Requirements” by Jesper Larsson Traff, William Gropp, and Rajeev Thakur, presents conditions that can be used by benchmarks and tools to automatically verify whether a given MPI implementation fulfills basic performance requirements. The second paper, “Test Suite for Evaluating Performance of MPI Implementations That Support MPI_THREAD_MULTIPLE” by Rajeev Thakur and William Gropp, presents performance tests that can be used to measure the overhead of providing the MPI_THREAD_MULTIPLE level of thread safety for user programs. Continue reading →
  • MPICH2 (Argonne National Laboratory) and mpiBLAST (Virginia Tech) collaborated using the ParaMEDIC framework to land a finalist slot in the SC|07 Storage Challenge. MPICH2 powers ParaMEDIC (short for Parallel Metadata Environment for Distributed I/O and Computing), allowing it to accelerate mpiBLAST by as much as 25-fold in a distributed I/O and computing environment. For additional information, see the SC07 entry here. Continue reading →
