[[TOC]]

= System usage =

The DEEP-EST Cluster Module (CM) can be used through the SLURM-based batch system that is also used for the DAM and ESB modules and (most of) the Software Development Vehicles (SDV). You can request a CM cluster node (`dp-cn[01-50]`) with an interactive session like this:

{{{
srun -A deepsea --partition=dp-cn -N 1 -n 8 --pty --interactive /bin/bash
srun ./hello_cluster
Hello World from processor dp-cn15, rank 2 out of 8
Hello World from processor dp-cn15, rank 3 out of 8
Hello World from processor dp-cn15, rank 6 out of 8
Hello World from processor dp-cn15, rank 7 out of 8
Hello World from processor dp-cn15, rank 0 out of 8
Hello World from processor dp-cn15, rank 4 out of 8
Hello World from processor dp-cn15, rank 1 out of 8
Hello World from processor dp-cn15, rank 5 out of 8
}}}

When using a batch script, you have to set the partition option within your script: `--partition=dp-cn` (short form: `-p dp-cn`).

== Filesystems and local storage ==

The home filesystem on the DEEP-EST Cluster Module is provided via GPFS/NFS and hence is the same as on (most of) the remaining compute nodes. The CM is connected to the All Flash Storage Module (AFSM) via Infiniband. The AFSM runs BeeGFS and provides a fast work filesystem at

{{{
/work
}}}

In addition, the older SSSM storage system, which also runs BeeGFS, provides the `/usr/local` filesystem on the CM compute nodes.

There is also some node-local storage available on the DEEP-EST Cluster nodes, mounted to `/scratch` on each node (about 380 GB with XFS). Remember that this scratch space is not persistent and **will be cleaned after your job has finished**!

Please refer to the [wiki:Public/User_Guide/System_overview system overview] and [wiki:Public/User_Guide/Filesystems filesystems] pages for further information on the CM hardware, the available filesystems and the network connections.

== Multi-node Jobs ==

The latest `pscom` version used in !ParaStation MPI provides support for the Infiniband interconnect used in the DEEP-EST Cluster Module. Hence, loading the most recent ParaStationMPI module is enough to run multi-node MPI jobs over Infiniband:

{{{
module load ParaStationMPI
}}}

For using Cluster nodes in heterogeneous jobs together with DAM and ESB nodes, no gateway has to be used (anymore), since all three compute modules (as well as the login and file servers) use EDR Infiniband as their interconnect.
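
As an illustration, a minimal batch script for a multi-node MPI run on the CM might look like the sketch below. It combines the `--partition=dp-cn` option mentioned above with the ParaStationMPI module; the account `deepsea` and the `hello_cluster` binary are taken from the interactive example, while the node count, tasks per node and wall time are only placeholders to adjust to your needs.

{{{
#!/bin/bash
#SBATCH --account=deepsea          # project account, as in the interactive example above
#SBATCH --partition=dp-cn          # Cluster Module partition
#SBATCH --nodes=2                  # placeholder: number of CM nodes
#SBATCH --ntasks-per-node=8        # placeholder: MPI ranks per node
#SBATCH --time=00:10:00            # placeholder: wall time limit

module load ParaStationMPI
srun ./hello_cluster
}}}

Submit the script with `sbatch <scriptname>`; the output of all ranks is written to the usual `slurm-<jobid>.out` file.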
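
Heterogeneous jobs spanning CM and DAM (or ESB) nodes can then be submitted with the standard Slurm heterogeneous job syntax, in which the job components are separated by a colon. The following is only a sketch: the partition name `dp-dam` for the DAM nodes and the binaries `prog_cm` and `prog_dam` are assumptions for illustration, not taken from this page.

{{{
# sketch: one CM component and one DAM component in a single heterogeneous job
srun -A deepsea -p dp-cn -N 1 -n 8 ./prog_cm : -p dp-dam -N 1 -n 8 ./prog_dam
}}}

With ParaStationMPI loaded, the ranks of both components communicate directly over the common EDR Infiniband fabric, which is why no gateway is needed.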