Changes between Version 20 and Version 21 of Public/User_Guide/Filesystems


Timestamp: Jun 22, 2020, 4:15:31 PM
Author: Jacopo de Amicis
Comment: Generalised paths for any possible JuDoor project (not only cdeep/deep).

Legend: unchanged lines have no prefix; lines prefixed with `-` were removed in v21; lines prefixed with `+` were added in v21.
  • Public/User_Guide/Filesystems

    v20 → v21
The DEEP-EST system doesn't mount the $SCRATCH and $ARCHIVE file systems from GPFS, as it is expected to provide similar functionality with its own parallel file systems.

- The following table summarizes the characteristics of the file systems available in the DEEP-EST and DEEP-ER (SDV) systems:
+ The following table summarizes the characteristics of the file systems available in the DEEP-EST and DEEP-ER (SDV) systems. '''Please be aware that the `$project` (all lowercase) variable used in the table only represents any !JuDoor project the user might have access to; it is not actually exported in the system environment.''' For a list of all projects a user belongs to, please refer to the user's [https://judoor.fz-juelich.de/login JuDoor page]. Alternatively, users can check the projects they are part of with the `jutil` application:
+ {{{
+ $ jutil user projects -o columns
+ }}}

|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see footnotes)''' || '''Description''' || '''Other''' ||
|| /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS Home directory;[[BR]]used only for configuration files. || ||
- || /p/project || /p/project/cdeep || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
- || /arch || /arch/deep || login node only (deepv) || GPFS exported via NFS || Global || || || || JUST GPFS Archive directory;[[BR]]long-term storage solution for data not used for a long time;[[BR]]data is migrated to tape - not intended for lots of small files; recovery can take days. || If you plan to use the archive, please get in contact with the system administrators (e.g. via the support mailing list). You can find further information and some hints on using the archive [https://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html here]. ||
- || /work || /work/cdeep || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system, '''no backup''', hence not meant for permanent data storage || *Also available in the SDV, but only through a 1 Gig network connection ||
+ || /p/project || /p/project/$project || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
+ || /arch || /arch/$project || login node only (deepv) || GPFS exported via NFS || Global || || || || JUST GPFS Archive directory;[[BR]]long-term storage solution for data not used for a long time;[[BR]]data is migrated to tape - not intended for lots of small files; recovery can take days. || If you plan to use the archive, please get in contact with the system administrators (e.g. via the support mailing list). You can find further information and some hints on using the archive [https://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html here]. ||
+ || /work || /work/$project || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system, '''no backup''', hence not meant for permanent data storage || *Also available in the SDV, but only through a 1 Gig network connection ||
|| /scratch || /scratch || DEEP-EST || xfs local partition || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *Recommended instead of /tmp for storing temporary files ||
|| /nvme/scratch || /nvme/scratch || DAM partition || local SSD (xfs) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
|| /nvme/scratch2 || /nvme/scratch2 || DAM partition || local SSD (ext4) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
|| /pmem/scratch || /pmem/scratch || DAM partition || DCPMM in App Direct mode || Local* || || || 2.2 GB/s in a simple dd test on dp-dam01 || || *3 TB in dp-dam[01,02], 2 TB in dp-dam[03-16]; Intel Optane DC Persistent Memory (DCPMM), 256 GB DIMMs, based on Intel’s 3D XPoint non-volatile memory technology ||
- || /sdv-work || /sdv-work/cdeep/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and ml-gpu via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Desired number of storage targets: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system, '''no backup''', hence not meant for permanent data storage || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
+ || /sdv-work || /sdv-work/$project/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and ml-gpu via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Desired number of storage targets: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system, '''no backup''', hence not meant for permanent data storage || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /nvme || /nvme/tmp || SDV || NVMe device || Local || BeeGFS 7.1.2 || Block size: 4K || 1145 MiB/s write, 3108 MiB/s read[[BR]]139148 ops/s create, 62587 ops/s remove* || 1 NVMe device available at each SDV compute node || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /mnt/beeond || /mnt/beeond || SDV || BeeGFS On Demand running on the NVMe || Local || BeeGFS 7.1.2 || Block size: 512K || 1130 MiB/s write, 2447 MiB/s read[[BR]]12511 ops/s create, 18424 ops/s remove* || 1 BeeOND instance running on each NVMe device || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
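
Since `$project` is only a placeholder and is not exported in the system environment, users have to substitute it themselves, for example by setting a shell variable once. The following is a minimal, unofficial sketch; the `awk` parsing assumes the project name appears in the first column of the `jutil` output below a header line, which may differ on the real system:

{{{
# Minimal sketch, not an official recipe: pick one of the projects this
# account belongs to and reuse it wherever the table says $project.
# Assumption: the project name is in the first column of the jutil output
# and the first line is a header; adjust the awk expression if not.
project=$(jutil user projects -o columns | awk 'NR>1 {print $1; exit}')
export project

ls /p/project/$project   # GPFS project directory
ls /work/$project        # BeeGFS work file system (DEEP-EST)
}}}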
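Because data on /arch is migrated to tape, storing many small files there is discouraged and retrieval can take days. One common approach, sketched here with illustrative paths rather than ones taken from this page, is to bundle the data into a single tar file first, working on the login node (deepv) since it is the only node that mounts /arch:

{{{
# Run on the login node (deepv). Paths are illustrative; substitute a
# real project for $project and a real directory for results/.
tar czf results.tar.gz /p/project/$project/$USER/results/
cp results.tar.gz /arch/$project/
}}}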
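The local scratch file systems are cleaned up once the job finishes, so any results must be copied back to a global file system before the job ends. A minimal Slurm batch-script sketch (the application name, its option, and the output file are placeholders, not taken from this page):

{{{
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# Use the node-local scratch instead of /tmp for temporary data.
TMPDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$TMPDIR"

./my_app --tmpdir "$TMPDIR"   # my_app and --tmpdir are placeholders

# The local scratch is wiped when the job ends: copy results to a
# global file system (substitute a real project for $project) first.
cp "$TMPDIR"/results.dat /p/project/$project/$USER/
}}}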