Changes between Version 18 and Version 19 of Public/User_Guide/Filesystems


Timestamp: May 5, 2020, 3:48:09 PM
Author: Jacopo de Amicis
Comment: Added information about /arch

Legend:

  "-"          line removed in v19
  "+"          line added in v19
  (no prefix)  line unchanged between v18 and v19

  • Public/User_Guide/Filesystems (v18 -> v19)
@@ v18 lines 2-6 / v19 lines 2-5 @@

= File Systems =
-
== Available file systems ==
On the DEEP-EST system, three different groups of file systems are available:

@@ v18 lines 23-32 / v19 lines 22-32 @@

|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see footnotes)''' || '''Description''' || '''Other''' ||
-|| /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Home directory;[[BR]]used only for configuration files. || ||
-|| /p/project || /p/project/cdeep || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance relevant applications or benchmarking || ||
+|| /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS Home directory;[[BR]]used only for configuration files. || ||
+|| /p/project || /p/project/cdeep || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance relevant applications or benchmarking || ||
+|| /arch || /arch/deep || login node only (deepv) || GPFS exported via NFS || Global || || || || JUST GPFS Archive directory;[[BR]]Long-term storage solution for data not used in a long time;[[BR]]Data migrated to tape - not intended for lots of small files. Recovery can take days. || ||
|| /work || /work/cdeep || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system, **no backup**, hence not meant for permanent data storage || *Also available in the SDV but only through 1 Gig network connection ||
-|| /scratch || /scratch || DEEP-EST || xfs local partition || Local* || || || || Scratch file system for temporary data. Will be cleaned up after job finishes! || *Recommended to use instead of /tmp for storing temporary files ||
-|| /nvme/scratch || /nvme/scratch || DAM partition || local SSD (xfs) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
-|| /nvme/scratch2 || /nvme/scratch2 || DAM partition || local SSD (ext4) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
+|| /scratch || /scratch || DEEP-EST || xfs local partition || Local* || || || || Scratch file system for temporary data. Will be cleaned up after job finishes!|| *Recommended to use instead of /tmp for storing temporary files ||
+|| /nvme/scratch || /nvme/scratch || DAM partition || local SSD (xfs) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after job finishes!|| *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
+|| /nvme/scratch2 || /nvme/scratch2 || DAM partition || local SSD (ext4) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after job finishes!|| *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
|| /pmem/scratch || /pmem/scratch || DAM partition || DCPMM in appdirect mode || Local* || || || 2.2 GB/s simple dd test in dp-dam01 || || *3 TB in dp-dam[01,02], 2 TB in dp-dam[03-16] Intel Optane DC Persistent Memory (DCPMM) 256GB DIMMs based on Intel’s 3D XPoint non-volatile memory technology ||
|| /sdv-work || /sdv-work/cdeep/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and ml-gpu via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system, **no backup**, hence not meant for permanent data storage || *Test results and parameters used stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
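The table above recommends the node-local scratch file systems (rather than /tmp) for temporary job data, and the newly documented /arch directory for long-term, tape-backed archiving from the login node. The following sketches illustrate that workflow; the partition name, directory layout and file names (dp-cn, my_app, results paths) are illustrative assumptions, not site defaults.

{{{
#!/bin/bash
# Sketch of a batch job that keeps temporary data on the node-local
# /scratch (which is cleaned up after the job finishes) and copies the
# results to the global project file system before the job ends.
#SBATCH --nodes=1
#SBATCH --partition=dp-cn        # partition name is an assumption
#SBATCH --time=01:00:00

TMP=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$TMP"

srun ./my_app --tmpdir "$TMP"    # hypothetical application

# /scratch is wiped after the job, so save anything worth keeping
cp -r "$TMP/results" /p/project/cdeep/$USER/
}}}

Since /arch is mounted on the login node (deepv) only and its contents are migrated to tape, it is best fed with a few large files rather than many small ones, for example by packing results into a tar archive first (the archive name and target directory below are placeholders):

{{{
user@deepv $ tar czf results_2020.tar.gz -C /p/project/cdeep/$USER results
user@deepv $ cp results_2020.tar.gz /arch/deep/$USER/
}}}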
     
@@ v18 lines 95-100 / v19 lines 95-99 @@
4294967296 bytes (4.3 GB) copied, 1.94668 s, 2.2 GB/s
}}}
-
- * The /work file system, which is available in the DEEP-EST prototype, is also reachable from the nodes in the SDV (including KNLs and ml-gpu nodes), but only through a slower 1 Gig connection. The file system is therefore not suitable for benchmarking or I/O-intensive jobs run from those nodes.
+ * The /work file system, which is available in the DEEP-EST prototype, is also reachable from the nodes in the SDV (including KNLs and ml-gpu nodes), but only through a slower 1 Gb/s connection. The file system is therefore not suitable for benchmarking or I/O-intensive jobs run from those nodes.

 * Performance test (IOR and mdtest) reports are available in the BSCW under DEEP-ER -> Work Packages (WPs) -> WP4 -> T4.5 - Performance measurement and evaluation of I/O software -> Jülich DEEP Cluster -> Benchmarking reports: https://bscw.zam.kfa-juelich.de/bscw/bscw.cgi/1382059
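For context, the 2.2 GB/s figure quoted for /pmem/scratch in the table corresponds to the simple dd test on dp-dam01 whose output tail appears above (wiki line 95). The exact command is not shown in this diff; a comparable measurement could look like the sketch below, with the block size and count chosen to match the 4294967296 bytes (4 GiB) reported, and conv=fsync added here as an assumption so that dd reports flushed rather than page-cache throughput:

{{{
# Hypothetical reconstruction of the streaming-write test on a DAM node:
# write 4 GiB (4096 x 1 MiB blocks) to the DCPMM-backed scratch file system
# and let dd print the achieved bandwidth, then clean up the test file.
user@dp-dam01 $ dd if=/dev/zero of=/pmem/scratch/ddtest.bin bs=1M count=4096 conv=fsync
user@dp-dam01 $ rm /pmem/scratch/ddtest.bin
}}}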