Changes between Version 13 and Version 14 of Public/User_Guide/DEEP-EST_DAM
Timestamp: Jan 23, 2020, 10:00:55 AM
H: Hidden Module
}}}

**Attention:** As of 23.01.2020 a workaround for loading the correct CUDA driver and module has to be used. Please see the [https://deeptrac.zam.kfa-juelich.de:8443/trac/wiki/Public/User_Guide/Information_on_software#UsingCuda Using CUDA] section.

== Using FPGAs ==

…

It is possible to access the local storage of the DEEP-ER SDV (`/sdv-work`), but keep in mind that the file servers of that storage can only be reached via 1 GbE! Hence, it should not be used for performance-relevant applications, since it is much slower than the DEEP-EST local storage mounted at `/work`.

There is node-local storage available on the DEEP-EST DAM nodes (2 x 1.5 TB NVMe SSD); it is mounted at `/nvme/scratch` and `/nvme/scratch2`. Additionally, a small (about 380 GB) scratch folder is available at `/scratch`. Remember that these three scratch folders are not persistent and **will be cleaned after your job has finished**!

== Multi-node Jobs ==

…

A release-candidate version of ParaStationMPI with CUDA awareness is also available on the system. It is installed under the GCC stack (run `ml spider ParaStationMPI` to find the relevant installation for CUDA). This version also automatically loads a CUDA-aware installation of `pscom`.
Further information on CUDA awareness can be found in the [https://deeptrac.zam.kfa-juelich.de:8443/trac/wiki/Public/ParaStationMPI#CUDASupportbyParaStationMPI ParaStationMPI] section.

**Attention:** As of 16.10.2019, there is no support for GPUDirect over EXTOLL.
As a temporary workaround, this version of ParaStationMPI automatically performs device-to-host, host-to-host and host-to-device copies transparently to the user, so it can be used to run applications requiring a CUDA-aware MPI implementation (with limited data transfer performance). Support for GPUDirect will be provided by EXTOLL in the near future.
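Since the scratch folders described above are cleaned once a job finishes, results must be copied back to persistent storage before the job ends. A minimal stage-in/stage-out sketch is shown below; the file names and the `SCRATCH`/`RESULTS` variables are illustrative assumptions, not site conventions, and `SCRATCH` falls back to a temporary directory so the sketch can run outside a DAM node:

```shell
#!/bin/sh
# Stage-in/stage-out sketch for the non-persistent scratch folders.
# On a DAM node, SCRATCH would typically be /nvme/scratch, /nvme/scratch2
# or /scratch; here it defaults to a temporary directory.
SCRATCH="${SCRATCH:-$(mktemp -d)}"
RESULTS="${RESULTS:-$PWD}"

# Stage in: normally this would be a copy from $HOME or /work.
echo "example input" > "$SCRATCH/input.dat"

# Stand-in for the real application working on the fast local storage.
tr 'a-z' 'A-Z' < "$SCRATCH/input.dat" > "$SCRATCH/output.dat"

# Stage out BEFORE the job ends - the scratch folders are cleaned afterwards.
cp "$SCRATCH/output.dat" "$RESULTS/"

cat "$RESULTS/output.dat"   # prints: EXAMPLE INPUT
```

The same pattern applies to all three scratch locations; only the stage-out step protects data from the post-job cleanup.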