Changes between Version 10 and Version 11 of Public/User_Guide/DEEP-EST_CM


Timestamp: Oct 15, 2021, 11:59:46 AM
Author: Jochen Kreutz

The home filesystem on the DEEP-EST Cluster Module is provided via GPFS/NFS and is hence the same as on (most of) the remaining compute nodes.
The all flash storage system (AFSM) running BeeGFS is available at
{{{
/work
}}}

In addition, the older SSSM storage system, which also runs BeeGFS, provides the `/usr/local` filesystem on the CM compute nodes.
There is a gateway bridging between the InfiniBand EDR fabric used for the CM and the 40 GbE network the SSSM file servers are connected to.

This is NOT the same storage being used on the DEEP-ER SDV system. Both the DEEP-EST prototype system and the DEEP-ER SDV have their own local storage.

There is also some node-local storage available on the DEEP-EST Cluster nodes, mounted to `/scratch` on each node (about 380 GB with XFS). Remember that this scratch space is not persistent and **will be cleaned after your job has finished**!

Please refer to the [wiki:Public/User_Guide/System_Overview system overview] and [wiki:Public/User_Guide/Filesystems filesystems] pages for further information on the CM hardware, available filesystems and network connections.

== Multi-node Jobs ==

The latest `pscom` version used in !ParaStation MPI provides support for the InfiniBand interconnect used in the DEEP-EST Cluster Module. Hence, loading the most recent !ParaStationMPI module is enough to run multi-node MPI jobs over InfiniBand:

{{{

}}}

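As a minimal sketch, a batch script for such a multi-node MPI run could look like the following. The partition name `dp-cn`, the exact module name `ParaStationMPI`, and the application name `my_mpi_app` are assumptions and need to be adapted to the actual installation:

{{{
#!/bin/bash
#SBATCH --partition=dp-cn       # CM partition name (assumption, check `sinfo`)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00

# Load the MPI environment (module name is an assumption,
# check `module avail` for the version installed on the system)
module load ParaStationMPI

# Launch the MPI application across all allocated nodes;
# pscom selects the InfiniBand transport automatically
srun ./my_mpi_app
}}}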
For using Cluster nodes in heterogeneous jobs together with DAM nodes, please see the information about [wiki:Public/User_Guide/Modular_jobs heterogeneous jobs]. Currently (as of 2020-04-03) ESB racks 2 and 3 are equipped with InfiniBand and directly connected to the CM nodes. Hence, no gateway has to be used when running on CM and ESB nodes. The first rack is planned to be modified to use InfiniBand as well (instead of the currently installed Extoll Fabri³ solution).
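As a hedged sketch of such a joint CM/ESB run, Slurm's heterogeneous job syntax (components separated by `:`) could be used as follows; the partition names `dp-cn` and `dp-esb` and the application `my_app` are placeholders:

{{{
# Heterogeneous Slurm job: first component on CM nodes, second on ESB nodes.
# Partition names are assumptions; check `sinfo` for the partitions
# actually configured on the system.
srun --partition=dp-cn  --nodes=1 --ntasks=4 ./my_app : \
     --partition=dp-esb --nodes=2 --ntasks=8 ./my_app
}}}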