System usage
The system can be used through the Slurm-based batch system that is also used for the DEEP Cluster and Booster. You can request nodes on the SDV for an interactive session like this:
srun --partition=sdv -N 4 -n 8 --pty /bin/bash -i
srun ./hello_cluster
Hello world from process 6 of 8 on deeper-sdv07
Hello world from process 7 of 8 on deeper-sdv07
Hello world from process 3 of 8 on deeper-sdv05
Hello world from process 4 of 8 on deeper-sdv06
Hello world from process 0 of 8 on deeper-sdv04
Hello world from process 2 of 8 on deeper-sdv05
Hello world from process 5 of 8 on deeper-sdv06
Hello world from process 1 of 8 on deeper-sdv04
When using a batch script, you have to adapt the --partition option within your script: --partition=sdv
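As a sketch, a minimal batch script for the SDV could look like the following (node and task counts mirror the interactive example above; adjust them to your needs):

#!/bin/bash
# Run on the SDV partition with 4 nodes / 8 tasks, matching the interactive example
#SBATCH --partition=sdv
#SBATCH --nodes=4
#SBATCH --ntasks=8

srun ./hello_cluster

Submit the script with sbatch.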
Filesystems and local storage
The home filesystems on the SDV are provided via GPFS/NFS and are hence the same as on the DEEP System. The local storage system of the SDV, running BeeGFS, is available at
/sdv-work
This is NOT the same storage as used on the DEEP system: both the DEEP System and the DEEP-ER SDV have their own local storage. On the DEEP nodes it is mounted at
/work
while on the deeper-sdv nodes it can be found at
/sdv-work
In addition, both systems provide a work filesystem via GPFS, which is mounted at
/gpfs-work
and shared between both systems. It should not be used for performance-relevant applications, since it is much slower than the local storage.
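To check which of these filesystems are mounted on the node you are logged in to, and how much space is left on them, the standard df tool can be used, for example on a deeper-sdv node:

# Show mount points and free space of the SDV work filesystems
df -h /sdv-work /gpfs-work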
Using /nvme
During job startup, all files of non-privileged users within /nvme are removed. If you want to keep your files across consecutive jobs on a particular SDV node, add a list of filenames to $HOME/.nvme_keep:
/nvme/tmp/myfile.A
/nvme/tmp/myfile.B
This will keep the files /nvme/tmp/myfile.{A,B} across two or more job runs in a row.
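Assuming one absolute path per line (the exact format of $HOME/.nvme_keep beyond "a list of filenames" is not spelled out here), the file could be populated like this, reusing the example names from above:

# Append the files that should survive job cleanup on this node
cat >> $HOME/.nvme_keep <<EOF
/nvme/tmp/myfile.A
/nvme/tmp/myfile.B
EOF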
Multi-node Jobs
Please use
module load extoll
to run jobs on multiple nodes.
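Putting this together, a multi-node run might look like the following sketch (partition, node and task counts are taken from the example above):

# Load the EXTOLL environment first, then launch across several nodes
module load extoll
srun --partition=sdv -N 4 -n 8 ./hello_cluster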