Changes between Version 1 and Version 2 of Public/User_Guide/Filesystems


Timestamp: Mar 12, 2019, 5:15:26 PM
Author: Jacopo de Amicis
Comment: Added information about the JUST filesystems after the introduction of the new JUMO usage model.

  • Public/User_Guide/Filesystems

= File Systems =

On the DEEP-EST system, three different groups of filesystems are available (a short sketch for telling them apart follows the list):
  - the
    [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html JSC GPFS filesystems],
    provided via [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/JUST_node.html JUST] and mounted on all JSC systems;

  - the DEEP-EST parallel BeeGFS filesystems, available on all the nodes of the DEEP-EST system;

  - the filesystems local to each node.
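
As a rough way to tell these groups apart from a login or compute node, the mount entry of a given path can be inspected. The following is a minimal Python sketch, not an official tool: it only assumes a Linux `/proc/self/mounts`, and the mount points and filesystem type strings it reports depend on the node it runs on.

{{{#!python
#!/usr/bin/env python3
"""Minimal sketch: print the mount point and filesystem type of a path.

Assumes a Linux /proc/self/mounts; the actual mount points and type strings
(GPFS/NFS, BeeGFS, local) depend on the node this runs on.
"""
import os
import sys

def mount_info(path):
    """Return (mount_point, fstype) of the mount containing 'path'."""
    path = os.path.realpath(path)
    best = ("", "unknown")
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            mount_point, fstype = line.split()[1:3]
            # Keep the longest mount point that is a prefix of the path.
            if path == mount_point or path.startswith(mount_point.rstrip("/") + "/"):
                if len(mount_point) > len(best[0]):
                    best = (mount_point, fstype)
    return best

if __name__ == "__main__":
    for p in sys.argv[1:] or [os.path.expanduser("~")]:
        mp, fstype = mount_info(p)
        print(f"{p}: mounted on {mp} (type {fstype})")
}}}

Running it with a path under `/p/project/cdeep/` or `/work/` should report the corresponding group from the list above.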

The users' home folders are placed on the shared GPFS filesystems. With the advent of the new user
model at JSC ([http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/NewUsageModel/NewUsageModel_node.html JUMO]),
the shared filesystems are structured as follows (a short path sketch follows the list):
- $HOME: each JSC user has a folder under `/p/home/jusers/`, in which different
  home folders are available, one per system they have access to.  These home
  folders have a low space quota and are reserved for configuration files, ssh
  keys, etc.

- $PROJECT: in JUMO, data and computational resources are assigned to projects:
  users can request access to a project and use the resources associated with it.
  As a consequence, each user has a folder within each of the projects they are part
  of. For the DEEP project, this folder is located under `/p/project/cdeep/`.
  This is where users should place their data, and where the old files generated
  in the home folder before the JUMO transition can be found.
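
To make this layout concrete, here is a minimal Python sketch that assembles the paths described above. Only `/p/home/jusers/` and `/p/project/cdeep/` come from this page; the per-system home subfolder name and the fallback user name are illustrative assumptions.

{{{#!python
#!/usr/bin/env python3
"""Minimal sketch of the JUMO directory layout described above.

Only /p/home/jusers/ and /p/project/cdeep/ are taken from this page; the
per-system home subfolder name ("deep") and the fallback user name are
illustrative assumptions.
"""
import os
from pathlib import Path

user = os.environ.get("USER", "jdoe")           # "jdoe" is a placeholder
system = "deep"                                 # assumed per-system home subfolder

home_base = Path("/p/home/jusers") / user       # one home folder per system lives here
system_home = home_base / system                # low quota: configs, ssh keys, ...
project_dir = Path("/p/project/cdeep") / user   # data for the DEEP project goes here

for label, path in [("home base", home_base),
                    ("system home", system_home),
                    ("project dir", project_dir)]:
    status = "exists" if path.exists() else "not found on this node"
    print(f"{label:12} {path}  [{status}]")
}}}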

The DEEP-EST system doesn't mount the $SCRATCH and $ARCHIVE filesystems, as it is
expected to provide similar functionality with its own parallel filesystems.

The following table summarizes the characteristics of the file systems available in the DEEP and DEEP-ER systems:

|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see footnotes)''' || '''Other''' ||
|| /home[a-b] || /home[a-b]/$USER || DEEP, SDV || GPFS exported via NFS || Global || || || || Home directory;[[BR]]not suitable for performance relevant applications[[BR]]or benchmarking ||
|| /p/home || /p/home/jusers/$USER || SDV || GPFS exported via NFS || Global || || || || Home directory;[[BR]]used only for configuration files. ||
|| /p/project || /p/project/cdeep/$USER || SDV || GPFS exported via NFS || Global || || || || Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance relevant applications or benchmarking ||
|| /gpfs-work || /gpfs-work/$USER || DEEP, SDV || GPFS exported via NFS || Global || || || || GPFS work file system;[[BR]]not suitable for performance relevant applications[[BR]]or benchmarking ||
|| /work || /work/$USER || DEEP || BeeGFS || Global || 2015.03.!r11 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 2170 MiB/s write, 2111 MiB/s read[[BR]]~21000 ops/s create[[BR]]![1] || Work file system ||
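
Following the remarks in the table, performance-relevant or bulk I/O belongs on the BeeGFS /work filesystem rather than the GPFS home or project directories. The sketch below illustrates this under stated assumptions: the /work/$USER location comes from the table, while the fallback user name and the folder and file names are placeholders only.

{{{#!python
#!/usr/bin/env python3
"""Minimal sketch: send bulk scratch output to the BeeGFS work filesystem.

The /work/$USER location comes from the table above; the fallback user name
and the output folder/file names are placeholders for illustration.
"""
import os
from pathlib import Path

user = os.environ.get("USER", "jdoe")            # placeholder fallback
run_dir = Path("/work") / user / "scratch_run"   # hypothetical output folder on BeeGFS

run_dir.mkdir(parents=True, exist_ok=True)

# Write some dummy bulk data here instead of the GPFS home/project folders,
# which the table marks as unsuitable for performance-relevant I/O.
with open(run_dir / "output.dat", "wb") as f:
    f.write(os.urandom(1 << 20))                 # 1 MiB of random bytes

print(f"wrote scratch data to {run_dir}")
}}}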