Changes between Version 2 and Version 3 of Public/User_Guide/Filesystems


Timestamp: Apr 3, 2019, 2:55:41 PM
Author: Cristina Manzano

= File Systems =
On the DEEP-EST system, three different groups of filesystems are available:

 * the [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html JSC GPFS filesystems], provided via [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/JUST_node.html JUST] and mounted on all JSC systems;

 * the DEEP-EST (and SDV) parallel BeeGFS filesystems, available on all the nodes of the DEEP-EST system;

 * the filesystems local to each node.
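To get a quick overview of which of these filesystems are visible on the node you are currently logged into, the mount table can be queried. The following is only a minimal sketch (the mount points follow the table further down; which of them exist depends on the node type):

{{{
# shared GPFS filesystems exported from JUST
df -h /p/home /p/project

# BeeGFS and node-local filesystems (availability depends on the node)
mount | grep -E 'beegfs|nvme'
}}}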
The users' home folders are placed on the shared GPFS filesystems. With the advent of the new user model at JSC ([http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/NewUsageModel/NewUsageModel_node.html JUMO]), the shared filesystems are structured as follows:

 * $HOME: each JSC user has a folder under `/p/home/jusers/`, in which different home folders are available, one per system he/she has access to. These home folders have a low space quota and are reserved for configuration files, ssh keys, etc.

 * $PROJECT: In JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user has a folder within each of the projects he/she is part of. For the DEEP project, this folder is located under `/p/project/cdeep/`. This is where the user should place data, and where the old files generated in the home folder before the JUMO transition can be found. A short example of this layout is sketched below.

The DEEP-EST system doesn't mount the $SCRATCH and $ARCHIVE filesystems, as it is expected to provide similar functionality with its own parallel filesystems.
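As an illustration of this structure, a minimal example session is sketched here (the entries mentioned in the comments are placeholders; the actual folders depend on the systems and projects you have access to):

{{{
# one home folder per system you have access to, e.g. a 'deep' folder
ls /p/home/jusers/$USER

# project data for the DEEP project belongs here
cd /p/project/cdeep/$USER
}}}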
The following table summarizes the characteristics of the file systems available in the DEEP and DEEP-ER systems:
|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see notes)''' || '''Description''' || '''Other''' ||
|| /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Home directory;[[BR]]used only for configuration files. || ||
|| /p/project || /p/project/cdeep/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance relevant applications or benchmarking || ||
|| /work || /work/cdeep || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system || *Also available in the SDV, but only through a 1 Gbit/s network connection ||
|| /sdv-work || /sdv-work/cdeep/$USER || SDV || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 425 MiB/s write, 67 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /nvme || /nvme/tmp || SDV || NVMe device || Local || BeeGFS 7.1.2 || Block size: 4K || 1145 MiB/s write, 3108 MiB/s read[[BR]]139148 ops/s create, 62587 ops/s remove* || 1 NVMe device available at each SDV compute node || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /mnt/beeond || /mnt/beeond || SDV || BeeGFS On Demand running on the NVMe || Local || BeeGFS 7.1.2 || Block size: 512K || 1130 MiB/s write, 2447 MiB/s read[[BR]]12511 ops/s create, 18424 ops/s remove* || 1 BeeOND instance running on each NVMe device || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
== Stripe Pattern Details ==

It is possible to query this information from the deep login node, for instance:
     
{{{
...
+ Number of storage targets: desired: 4
}}}
Or like this:
     
{{{
...
Inodes: Total: 0          Free: 0
}}}
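For reference, the stripe settings of a specific directory can also be queried directly with the BeeGFS command line tool. This is only a sketch: the path is an example, and the exact output depends on the BeeGFS version and the file system configuration:

{{{
user@deep $ beegfs-ctl --getentryinfo /work/cdeep/$USER
# prints the entry ID, the responsible metadata node and the stripe
# pattern details (type, chunksize, number of storage targets)
}}}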
See http://www.beegfs.com/wiki/Striping for more information.
== Additional information ==

Detailed information on the '''BeeGFS Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/BeeGFS here].
     
Detailed information on the '''Storage Performance''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/SDV_AdminGuide/3_Benchmarks here].
== Notes ==
 * The /work file system, which is available in the DEEP-EST prototype, is also reachable from the nodes in the SDV (including the KNL and KNM nodes), but only through a slower 1 Gbit/s connection. It is therefore not suitable for benchmarking or I/O-intensive jobs from those nodes.
 * Performance tests (IOR and mdtest) reports are available in the BSCW under DEEP-ER -> Work Packages (WPs) -> WP4 -> T4.5 - Performance measurement and evaluation of I/O software -> Jülich DEEP Cluster -> Benchmarking reports: https://bscw.zam.kfa-juelich.de/bscw/bscw.cgi/1382059
 * Test results and parameters used are stored in JUBE:

{{{