Changes between Version 34 and Version 35 of Public/User_Guide/Filesystems


Timestamp: Apr 3, 2023, 8:29:20 PM (2 years ago)
Author: Anke Kreuzer
Page: Public/User_Guide/Filesystems (v34 → v35)

Legend: lines prefixed with `-` were removed in v35, lines prefixed with `+` were added; all other lines are unchanged.
= File Systems =
== Available file systems ==
- On the DEEP-EST system, three different groups of file systems are available:
+ On the DEEP system, three different groups of file systems are available:

 * the [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html JSC GPFS file systems], provided via [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/JUST_node.html JUST] and mounted on all JSC systems;

- * the DEEP-EST parallel BeeGFS file systems, available on all the nodes of the DEEP-EST system;
+ * the DEEP parallel BeeGFS file systems, available on all the nodes of the DEEP system;

 * the file systems local to each node.

[…]
 * $HOME: each JSC user has a folder under `/p/home/jusers/`, in which different home folders are available, one per system the user has access to. These home folders have a low space quota and are reserved for configuration files, ssh keys, etc.

- * $PROJECT: In JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user can create folders within each of the projects they are part of (with either personal permissions or permissions shared with other project members). For the DEEP project, the project folder is located under `/p/project/cdeep/`. This is where the user should place data, and where the old files generated in the home folder before the JUMO transition can be found.
+ * $PROJECT: In JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user can create folders within each of the projects they are part of (with either personal permissions or permissions shared with other project members). For the DEEP-SEA project (for example), the project folder is located under `/p/project/deepsea/`. This is where the user should place data, and where the old files generated in the home folder before the JUMO transition can be found.

- The DEEP-EST system doesn't mount the $SCRATCH file systems from GPFS, as it is expected to provide similar functionality with its own parallel and local file systems.
+ The DEEP system doesn't mount the $SCRATCH file systems from GPFS, as it is expected to provide similar functionality with its own parallel and local file systems.

The `deepv` login node exposes the same file systems as the compute nodes, but it lacks a local scratch file system. Since `/tmp` is very limited in size on `deepv`, please use `$SCRATCH` instead (pointing to the project folder), or use e.g. `/pmem/scratch` on the dp-dam partition or `$LOCALSCRATCH` on any other compute node when performing SW installation activities. '''A quota has been introduced for `/tmp` on `deepv` to avoid clogging of this file system on the login node, which would lead to several issues. Additionally, files in `/dev/shm`, `/tmp` and `/var/tmp` older than 7 days will be removed regularly!'''
    2222
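A minimal shell sketch of the recommendation above: stage temporary files on a scratch file system rather than on `/tmp`. The fallback chain and the `build.XXXXXX` directory name are illustrative assumptions, not fixed conventions of the system.

```shell
# Illustrative sketch (assumption: $LOCALSCRATCH / $SCRATCH are set as
# described above; the fallback order and directory name are made up here).
TMPBASE="${LOCALSCRATCH:-${SCRATCH:-/tmp}}"    # prefer node-local scratch
TMPDIR="$(mktemp -d "$TMPBASE/build.XXXXXX")"  # per-job private directory
export TMPDIR                                  # many build tools honor TMPDIR
echo "temporary files go to: $TMPDIR"
# ... perform the SW installation / build here ...
rm -rf "$TMPDIR"                               # clean up afterwards
```

Exporting `TMPDIR` makes most compilers and build tools place their scratch files there automatically, keeping them off the login node's quota-limited `/tmp`.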
- The following table summarizes the characteristics of the file systems available in the DEEP-EST and DEEP-ER (SDV) systems. '''Please be aware that the `$project` (all lowercase) variable used in the table only represents any !JuDoor project the user might have access to, and that it is not actually exported in the system environment.''' For a list of all projects a user belongs to, please refer to the user's [https://judoor.fz-juelich.de/login JuDoor page]. Alternatively, users can check the projects they are part of with the `jutil` application:
+ The following table summarizes the characteristics of the file systems available in the DEEP and SDV systems. '''Please be aware that the `$project` (all lowercase) variable used in the table only represents any !JuDoor project the user might have access to, and that it is not actually exported in the system environment.''' For a list of all projects a user belongs to, please refer to the user's [https://judoor.fz-juelich.de/login JuDoor page]. Alternatively, users can check the projects they are part of with the `jutil` application:

{{{

}}}
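The block above was left empty in the diff, so the exact `jutil` invocation is not shown here. As a rough, hypothetical alternative (assuming, as is common on JSC systems, that project membership is reflected in the account's unix groups), the standard `id` tool gives a quick overview:

```shell
# Hypothetical fallback (assumption: projects show up as unix groups of the
# account). The authoritative sources remain JuDoor and `jutil`.
id -Gn | tr ' ' '\n' | sort -u
```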
|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see footnotes)''' || '''Description''' || '''Other''' ||
- || /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS Home directory;[[BR]]used only for configuration files. || ||
- || /p/project || /p/project/$project || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
+ || /p/home || /p/home/jusers/$USER || SDV, DEEP || GPFS exported via NFS || Global || || || || JUST GPFS Home directory;[[BR]]used only for configuration files. || ||
+ || /p/project || /p/project/$project || SDV, DEEP || GPFS exported via NFS || Global || || || || JUST GPFS Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
|| /arch || /arch/$project || login node only (deepv) || GPFS exported via NFS || Global || || || || JUST GPFS Archive directory;[[BR]]Long-term storage solution for data not used in a long time;[[BR]]Data migrated to tape - not intended for lots of small files. Recovery can take days. || If you plan to transfer data to/from the archive, e.g. to the project folder, please consider using JUDAC instead of working on `deepv`, in order to help avoid congestion on the DEEP <-> JUST connection. Get in contact with the system administrators (e.g. via the support mailing list) if you need assistance with archiving your data. ||
|| /arch2 || /arch2/$project || login node only (deepv) || GPFS exported via NFS || Global || || || || JUST GPFS Archive directory;[[BR]]Long-term storage solution for data not used in a long time;[[BR]]Data migrated to tape - not intended for lots of small files. Recovery can take days. || If you plan to transfer data to/from the archive, e.g. to the project folder, please consider using JUDAC instead of working on `deepv`, in order to help avoid congestion on the DEEP <-> JUST connection. Get in contact with the system administrators (e.g. via the support mailing list) if you need assistance with archiving your data. ||
- || /afsm || /afsm || DEEP-EST || BeeGFS || Global || BeeGFS 7.2.5 || || || Fast work file system, '''no backup''', hence not meant for permanent data storage || ||
- || /work_old || /work_old/$project || DEEP-EST || BeeGFS || Global || BeeGFS 7.2.5 || || || Work file system, '''no backup''', hence not meant for permanent data storage. '''Deprecated''' || ||
- || /scratch || /scratch || DEEP-EST || xfs local partition || Local* || || || || Node-local scratch file system for temporary data. Will be cleaned up after the job finishes. Size differs between the modules! || *Recommended to use instead of /tmp for storing temporary files ||
+ || /afsm || /afsm || DEEP || BeeGFS || Global || BeeGFS 7.2.5 || || || Fast work file system, '''no backup''', hence not meant for permanent data storage || ||
+ || /work_old || /work_old/$project || DEEP || BeeGFS || Global || BeeGFS 7.2.5 || || || Work file system, '''no backup''', hence not meant for permanent data storage. '''Deprecated''' || ||
+ || /scratch || /scratch || DEEP || xfs local partition || Local* || || || || Node-local scratch file system for temporary data. Will be cleaned up after the job finishes. Size differs between the modules! || *Recommended to use instead of /tmp for storing temporary files ||
|| /nvme/scratch || /nvme/scratch || DAM partition || local SSD (xfs) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
|| /nvme/scratch2 || /nvme/scratch2 || DAM partition || local SSD (ext4) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||