Changes between Initial Version and Version 1 of Public/User_Guide/SDV_Cluster


Timestamp: Jul 6, 2016, 10:10:40 AM
Author: Anke Kreuzer

= System usage =

The system is accessed through the PBS-based batch system that also serves the DEEP Cluster and Booster. You can request cluster nodes on the SDV in an interactive session like this:

{{{
kreutz@deepl:~ > qsub -I -l nodes=2:ppn=24:sdv,walltime=01:00:00
qsub: waiting for job 76649.deepm to start
qsub: job 76649.deepm ready

kreutz@deeper-sdv16:~ >
}}}

When using a batch script, adapt the `-l` option within the script accordingly.
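For batch use, the same resource request goes into a `#PBS -l` directive at the top of the script. A minimal sketch of such a script follows; the job name, the `-j oe` output setting, and the application launch line are placeholders, not prescribed by this guide.

```shell
#!/bin/bash
# Hypothetical SDV job script; only the -l line mirrors the request above.
#PBS -N my-sdv-job
#PBS -l nodes=2:ppn=24:sdv,walltime=01:00:00
#PBS -j oe

# Start from the directory the job was submitted from.
cd $PBS_O_WORKDIR
# ... launch your application here
```

Assuming standard PBS usage, such a script would be submitted with `qsub my-sdv-job.sh`.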
== Filesystems and local storage ==

The home filesystems on the SDV are provided via GPFS/NFS and are hence the same as on the DEEP System.
The local storage system of the SDV, running BeeGFS, is available at
{{{
/sdv-work
}}}

This is NOT the same storage as used on the DEEP System. The DEEP System and the DEEP-ER SDV each have their own local storage: on the DEEP nodes it is mounted at `/work`, on the deeper-sdv nodes it can be found at `/sdv-work`. In addition, both systems provide a work filesystem via GPFS, called `/gpfs-work`, which is shared between the two systems. It should not be used for performance-relevant applications, however, since it is much slower than the local storages.
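A portable job script can pick the fast local storage when it is available and fall back to the shared GPFS filesystem otherwise. A small sketch, assuming a POSIX shell and the mount points named above; the per-user subdirectory is an assumption, not something this guide prescribes.

```shell
# Prefer the fast node-local BeeGFS storage when it is mounted,
# otherwise fall back to the (slower) shared GPFS work filesystem.
# The $USER subdirectory is an illustrative assumption.
if [ -d /sdv-work ]; then
    WORKDIR=/sdv-work/$USER
else
    WORKDIR=/gpfs-work/$USER
fi
echo "Using work directory: $WORKDIR"
```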
== Using the EXTOLL network ==

Since `LD_LIBRARY_PATH` may otherwise pick up incompatible EXTOLL libraries, it has to be adapted manually. The following setup should work:
{{{
module purge
module load gcc parastation/gcc-5.1.4-1_1_g064e3f7
export LD_LIBRARY_PATH=/usr/local/parastation/pscom/lib64:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=/direct/Software/Extoll/SDV/lib:$LD_LIBRARY_PATH
}}}
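The order of the two `export` lines matters: each one prepends its directory, so the Extoll directory ends up first in the linker's search order. A quick way to confirm this, using only plain shell tools:

```shell
# Reproduce the two exports from above and show the resulting search order;
# the dynamic linker searches these directories left to right.
export LD_LIBRARY_PATH=/usr/local/parastation/pscom/lib64:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=/direct/Software/Extoll/SDV/lib:$LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | sed -n '1,2p'
# prints:
#   /direct/Software/Extoll/SDV/lib
#   /usr/local/parastation/pscom/lib64
```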
== Using /nvme ==

During job startup, all files of non-privileged users within {{{/nvme}}} are removed. If you want to keep your files across consecutive jobs on a particular SDV node, add a list of filenames to {{{$HOME/.nvme_keep}}}:
{{{
/nvme/tmp/myfile.A
/nvme/tmp/myfile.B
}}}
This will keep the files {{{/nvme/tmp/myfile.{A,B} }}} across two or more job runs in a row.
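The keep list can be created or extended from the shell, one absolute path per line. A sketch using the example filenames from above:

```shell
# Append the example entries to the keep list read at job startup.
cat >> "$HOME/.nvme_keep" <<'EOF'
/nvme/tmp/myfile.A
/nvme/tmp/myfile.B
EOF
```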