Changes between Version 16 and Version 17 of Public/ParaStationMPI


Timestamp: Apr 7, 2020, 8:32:59 AM
Author: Simon Pickartz
Comment: Discuss PSP_GW_MTU

== Heterogeneous Jobs using inter-module MPI communication ==
!ParaStation MPI provides support for inter-module communication in federated high-speed networks. To this end, so-called gateway (GW) daemons bridge the MPI traffic between the modules. This mechanism is transparent to the MPI application, i.e., the MPI ranks see a common `MPI_COMM_WORLD` across all modules within the job. However, the user has to account for these additional GW resources during job submission. An example Slurm batch script illustrating the submission of heterogeneous pack jobs, including the allocation of GW resources, can be found [wiki:User_Guide/Batch_system#HeterogeneousjobswithMPIcommunicationacrossmodules here].
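As a rough sketch, such a pack job might be submitted with a batch script along the following lines. This is an illustration only: the partition names `cluster` and `booster`, the node counts, and the application name are assumptions, and the site-specific options for allocating the GW resources are documented on the linked batch-system page.

```shell
#!/bin/bash
#SBATCH --job-name=modular-job
#SBATCH --partition=cluster      # first module (assumed partition name)
#SBATCH --nodes=2
#SBATCH hetjob
#SBATCH --partition=booster      # second module (assumed partition name)
#SBATCH --nodes=2

# Launch one binary per pack-job component; ParaStation MPI presents a
# common MPI_COMM_WORLD to all ranks across both modules.
srun ./mpi_app : ./mpi_app
```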

=== Application-dependent Tuning ===
The GW protocol supports the fragmentation of larger messages into smaller chunks of a given length, i.e., the Maximum Transfer Unit (MTU). This way, the GW daemon may benefit from pipelining effects, overlapping the message transfer from the source to the GW daemon with the transfer from the GW daemon to the destination. The chunk size can be influenced by setting the following environment variable:
{{{
PSP_GW_MTU=<chunk size in bytes>
}}}
The optimal chunk size depends strongly on the communication pattern of the application and therefore has to be chosen individually for each application.
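At launch time, the variable can be exported like any other environment variable before starting the job. The value of 65536 bytes below is purely illustrative, not a recommendation from this page:

```shell
# Illustrative only: choose the chunk size to evaluate (in bytes).
export PSP_GW_MTU=65536

# Verify the setting before launching the MPI job, e.g., via srun.
echo "PSP_GW_MTU=${PSP_GW_MTU}"
```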

== Modularity-aware Collectives ==