Changes between Version 22 and Version 23 of Public/User_Guide/Filesystems


Timestamp: Sep 29, 2020, 3:37:25 PM
Author: Jochen Kreutz
Comment: adapted table with regards to /work and /sdv-work

  • Public/User_Guide/Filesystems

    v22 v23
    17 17 * $PROJECT: In JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user can create folders within each of the projects he/she is part of (with either personal permissions or permissions that allow sharing with other project members). For the DEEP project, the project folder is located under `/p/project/cdeep/`. This is where users should place their data, and where the old files generated in the home folder before the JUMO transition can be found.
    18 18
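For instance, a project member could organize personal and shared data inside the project folder like this (a minimal sketch; the folder names are illustrative, only the `cdeep` path comes from the text above):

{{{
# Personal folder inside the DEEP project space
mkdir -p /p/project/cdeep/$USER

# Folder meant to be shared with other project members:
# grant the project group read/write/traverse permissions
mkdir -p /p/project/cdeep/shared_data
chmod g+rwx /p/project/cdeep/shared_data
}}}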
    19    The DEEP-EST system doesn't mount the $SCRATCH and $ARCHIVE file systems from GPFS, as it is expected to provide similar functionalities with its own parallel file systems.
       19 The DEEP-EST system doesn't mount the $SCRATCH file system from GPFS, as it is expected to provide similar functionality with its own parallel and local file systems.
    20 20
    21 21 The following table summarizes the characteristics of the file systems available in the DEEP-EST and DEEP-ER (SDV) systems. '''Please be aware that the `$project` (all lowercase) variable used in the table only represents any !JuDoor project the user might have access to; it is not actually exported in the system environment.''' For a list of all projects a user belongs to, please refer to the user's [https://judoor.fz-juelich.de/login JuDoor page]. Alternatively, users can check the projects they are part of with the `jutil` application:
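For example (a minimal sketch; the subcommand names follow the JSC `jutil` documentation and may differ between installed versions):

{{{
# List the projects the current user belongs to
jutil user projects

# Activate a project environment (sets project-related variables
# such as the budget); "cdeep" is the DEEP project used above
jutil env activate -p cdeep
}}}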
     
    28 28 || /p/project || /p/project/$project || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
    29 29 || /arch || /arch/$project || login node only (deepv) || GPFS exported via NFS || Global || || || || JUST GPFS Archive directory;[[BR]]Long-term storage for data not needed for a long time;[[BR]]Data is migrated to tape - not intended for lots of small files. Recovery can take days. || If you plan to use the archive, please get in contact with the system administrators (e.g. via the support mailing list). You can find further information and some hints on using the archive [https://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html here]. ||
    30    || /work || /work/$project || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system, **no backup**, hence not meant for permanent data storage || *Also available in the SDV but only through 1 Gig network connection ||
       30 || /work || /work/$project || DEEP-EST || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system, **no backup**, hence not meant for permanent data storage || ||
    31 31 || /scratch || /scratch || DEEP-EST || xfs local partition || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *Recommended instead of /tmp for storing temporary files (see the job script sketch after this table) ||
    32 32 || /nvme/scratch || /nvme/scratch || DAM partition || local SSD (xfs) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
    33 33 || /nvme/scratch2 || /nvme/scratch2 || DAM partition || local SSD (ext4) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
    34 34 || /pmem/scratch || /pmem/scratch || DAM partition || DCPMM in App Direct mode || Local* || || || 2.2 GB/s in a simple dd test on dp-dam01 (see the dd sketch after this table) || || *3 TB in dp-dam[01,02], 2 TB in dp-dam[03-16]; Intel Optane DC Persistent Memory (DCPMM), 256 GB DIMMs based on Intel’s 3D XPoint non-volatile memory technology ||
    35    || /sdv-work || /sdv-work/$project/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and ml-gpu via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system, **no backup**, hence not meant for permanent data storage.[[BR]][[BR]]Fast EXTOLL connectivity is available only with the `deeper-sdv` partition (1 Gbps connectivity from other partitions).[[BR]][[BR]]**Please use `/work` from the DEEP-EST compute nodes (`dp-cn`, `dp-esb` and `dp-dam` partitions).** || *Test results and parameters used stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
       35 || /sdv-work || /sdv-work/$project/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and ml-gpu via GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system, **no backup**, hence not meant for permanent data storage.[[BR]][[BR]]Fast EXTOLL connectivity is available only with the `deeper-sdv` partition (1 Gbps connectivity from other partitions). || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
    36 36 || /nvme || /nvme/tmp || SDV || NVMe device || Local || BeeGFS 7.1.2 || Block size: 4K || 1145 MiB/s write, 3108 MiB/s read[[BR]]139148 ops/s create, 62587 ops/s remove* || 1 NVMe device available at each SDV compute node || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
    37 37 || /mnt/beeond || /mnt/beeond || SDV || BeeGFS On Demand running on the NVMe || Local || BeeGFS 7.1.2 || Block size: 512K || 1130 MiB/s write, 2447 MiB/s read[[BR]]12511 ops/s create, 18424 ops/s remove* || 1 BeeOND instance running on each NVMe device || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
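As recommended in the /scratch row above, node-local scratch should be used instead of /tmp for temporary files. A minimal sketch of a batch job doing so (the job-ID subdirectory layout and application name are illustrative assumptions, not a documented interface):

{{{
#!/bin/bash
#SBATCH --nodes=1

# Redirect temporary files to the node-local /scratch instead of
# /tmp; /scratch is cleaned up after the job finishes, so nothing
# needs to be removed manually.
export TMPDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$TMPDIR"

srun ./my_app --tmpdir="$TMPDIR"
}}}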
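The 2.2 GB/s figure quoted for /pmem/scratch stems from a "simple dd test" on dp-dam01 whose exact invocation is not recorded here; the following is one plausible reconstruction of such a streaming-write measurement (block size, count and file name are assumptions):

{{{
# Write ~10 GiB sequentially to the DCPMM-backed scratch; oflag=direct
# bypasses the page cache so the device bandwidth is what gets measured
dd if=/dev/zero of=/pmem/scratch/ddtest.bin bs=1M count=10240 oflag=direct

# Remove the test file afterwards
rm /pmem/scratch/ddtest.bin
}}}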