Changes between Version 6 and Version 7 of Public/ParaStationMPI
Timestamp: Oct 24, 2019, 12:06:56 PM
= CUDA Support by !ParaStation MPI =

=== What is CUDA-awareness for MPI? ===
In brief, ''CUDA-awareness'' in an MPI library means that a mixed CUDA + MPI application may pass pointers to CUDA buffers (memory regions located on the GPU, the so-called ''Device'' memory) directly to MPI functions such as `MPI_Send` or `MPI_Recv`. A non-CUDA-aware MPI library would fail in such a case because CUDA memory cannot be accessed directly, e.g. via load/store or `memcpy()`, but first has to be transferred to Host memory via special routines such as `cudaMemcpy()`. A CUDA-aware MPI library, in contrast, recognizes that a pointer is associated with a buffer in Device memory and can then temporarily copy this buffer into Host memory before communication -- the so-called ''Staging'' of the buffer. In addition, a CUDA-aware MPI library may apply further optimizations, for example by exploiting so-called ''GPUdirect'' capabilities that allow direct RDMA transfers from and to Device memory.

=== Some external Resources ===
 * [http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-linux/index.html#axzz44ZswsbEt Getting started with CUDA] (by NVIDIA)
 * [https://developer.nvidia.com/gpudirect NVIDIA GPUDirect Overview] (by NVIDIA)
…
(BTW: For !InfiniBand communication, !ParaStation MPI is already GPUdirect enabled.)

=== Usage on the DEEP system ===

**Warning:** ''This manual section is currently under development. Therefore, the following usage guidelines may not be flawless and are likely to change in some respects in the near future!''
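As a minimal sketch of what CUDA-awareness permits, the following C fragment passes a Device pointer straight to `MPI_Send`/`MPI_Recv`. It assumes a CUDA-aware MPI build (such as !ParaStation MPI with CUDA support enabled) and at least two ranks; the buffer size and data type are illustrative choices, not prescribed by the page.

```c
/* Sketch: passing a Device pointer directly to MPI_Send/MPI_Recv.
 * Assumes a CUDA-aware MPI library and at least two ranks; with a
 * non-CUDA-aware library, the MPI calls on d_buf would fail because
 * the library would try to read/write Device memory directly. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1024;
    double *d_buf;                              /* pointer to Device memory */
    cudaMalloc((void **)&d_buf, n * sizeof(double));

    if (rank == 0) {
        /* A CUDA-aware MPI may stage d_buf through Host memory or use
         * GPUdirect RDMA internally -- the application simply hands
         * over the Device pointer. */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

Without CUDA-awareness, the application itself would have to `cudaMemcpy()` the Device buffer into a Host buffer before `MPI_Send` and back into Device memory after `MPI_Recv`.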