Changes between Version 13 and Version 14 of Public/User_Guide/DEEP-EST_DAM


Timestamp: Jan 23, 2020, 10:00:55 AM
Author: Jochen Kreutz
Comment: 2020-01-23 JK: Links to CUDA workaround and to ParaStationMPI Cuda awareness added; info on local storage updated

Legend:

In the listing below, a line showing both a v13 and a v14 number is unmodified context; a line with only a v13 number was removed, a line with only a v14 number was added, and a modified line appears as the removed version followed by its replacement.
  • Public/User_Guide/DEEP-EST_DAM

   v13   v14
    32    32     H:             Hidden Module
    33    33  }}}
          34
          35  **Attention:** As of 23.01.2020 a workaround for loading the correct CUDA driver and module has to be used. Please see the [https://deeptrac.zam.kfa-juelich.de:8443/trac/wiki/Public/User_Guide/Information_on_software#UsingCuda Using CUDA] section.
    34    36
    35    37  == Using FPGAs ==
    …
    89    91  It's possible to access the local storage of the DEEP-ER SDV (`/sdv-work`), but you have to keep in mind that the file servers of that storage can only be accessed through 1 GbE! Hence, it should not be used for performance-relevant applications since it is much slower than the DEEP-EST local storage mounted to `/work`.
    90    92
    91        There is node local storage available for the DEEP-EST DAM node (2 x 1.5 TB NVMe SSD), but configuration is to be done for those devices.
          93  There is node local storage available for the DEEP-EST DAM node (2 x 1.5 TB NVMe SSD); it is mounted to `/nvme/scratch` and `/nvme/scratch2`. Additionally, there is a small (about 380 GB) scratch folder available in `/scratch`. Remember that the three scratch folders are not persistent and **will be cleaned after your job has finished**!
    92    94
    93    95  == Multi-node Jobs ==
    …
    95    97
    96    98  A release-candidate version of ParaStationMPI with CUDA awareness is also available on the system. It is installed under the GCC stack (run `ml spider ParaStationMPI` to find the relevant installation for CUDA). This version also automatically loads a CUDA-aware installation of `pscom`.
          99  Further information on CUDA awareness can be found in the [https://deeptrac.zam.kfa-juelich.de:8443/trac/wiki/Public/ParaStationMPI#CUDASupportbyParaStationMPI ParaStationMPI] section.
    97   100
    98   101  **Attention:** As of 16.10.2019, there is no support for GPUDirect over EXTOLL. As a temporary workaround, this version of ParaStationMPI automatically performs device-to-host, host-to-host and host-to-device copies transparently to the user, so it can be used to run applications requiring a CUDA-aware MPI implementation (with limited data transfer performance). Support for GPUDirect will be provided by EXTOLL in the near future.
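
The CUDA workaround note added in this change (new line 35) points to the Using CUDA section for the authoritative steps. As a rough, non-authoritative sketch only, checking and loading a matching CUDA module with the Lmod `ml` command used elsewhere in this guide could look like the following; the module names are placeholders, not the actual workaround:

{{{
# Hypothetical sketch -- follow the "Using CUDA" wiki section for the real workaround.
ml spider CUDA        # list the CUDA installations known to the module system
ml GCC CUDA           # placeholder: load a compiler stack plus a CUDA module
nvidia-smi            # check that the loaded toolkit matches the running driver
}}}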
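Since the node-local scratch areas mentioned in this change (`/nvme/scratch`, `/nvme/scratch2` and `/scratch`) are cleaned after the job has finished, data has to be staged in and out within the job itself. The following is a minimal Slurm batch sketch of that pattern, assuming a single-node job; the partition name, file names and application are placeholders:

{{{
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH --partition=dp-dam        # placeholder partition name

# Stage input onto the node-local NVMe scratch (not persistent, cleaned after the job).
cp $HOME/input.dat /nvme/scratch/
cd /nvme/scratch

srun ./my_app input.dat           # placeholder application

# Copy results back to persistent storage before the job ends.
cp results.dat $HOME/
}}}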
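To pick up the CUDA-aware ParaStationMPI release candidate described above, the installation can be located and loaded roughly as follows; the version string below is a placeholder, so use one actually reported by `ml spider`:

{{{
ml spider ParaStationMPI              # find the CUDA-aware installation under the GCC stack
ml GCC ParaStationMPI/5.4.0-1-CUDA    # placeholder version -- use one listed by the spider command
ml list                               # the CUDA-aware pscom installation should now be loaded as well
}}}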