Changes between Version 2 and Version 3 of Public/User_Guide/OmpSs-2


Timestamp: Jun 11, 2019, 9:56:32 AM
Author: Pedro Martinez-Ferror
= Programming with OmpSs-2 =

 * Introduction
 * Quick User Guide

== Introduction ==
OmpSs-2 is a programming model composed of a set of directives and library routines that can be used together with a high-level programming language to develop concurrent applications.

== File Systems ==
On the DEEP-EST system, three different groups of filesystems are available:

 * the [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html JSC GPFS filesystems], provided via [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/JUST_node.html JUST] and mounted on all JSC systems;

 * the DEEP-EST (and SDV) parallel BeeGFS filesystems, available on all the nodes of the DEEP-EST system;

 * the filesystems local to each node.

The users' home folders are placed on the shared GPFS filesystems. With the advent of the new user model at JSC ([http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/NewUsageModel/NewUsageModel_node.html JUMO]), the shared filesystems are structured as follows:

 * $HOME: each JSC user has a folder under `/p/home/jusers/`, in which different home folders are available, one per system the user has access to. These home folders have a low space quota and are reserved for configuration files, ssh keys, etc.

 * $PROJECT: in JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user has a folder within each of the projects they are part of. For the DEEP project, this folder is located under `/p/project/cdeep/`. This is where users should place their data, and where the old files generated in the home folder before the JUMO transition can be found.

The DEEP-EST system does not mount the $SCRATCH and $ARCHIVE filesystems, as it is expected to provide similar functionality with its own parallel filesystems.

The following table summarizes the characteristics of the file systems available in the DEEP-EST and DEEP-ER (SDV) systems:

|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see footnotes)''' || '''Description''' || '''Other''' ||
|| /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Home directory;[[BR]]used only for configuration files. || ||
|| /p/project || /p/project/cdeep/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
|| /work || /work/cdeep || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system || *Also available in the SDV, but only through a 1 GbE network connection ||
|| /sdv-work || /sdv-work/cdeep/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and KNM via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system || *Test results and parameters used stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /nvme || /nvme/tmp || SDV || NVMe device || Local || BeeGFS 7.1.2 || Block size: 4K || 1145 MiB/s write, 3108 MiB/s read[[BR]]139148 ops/s create, 62587 ops/s remove* || 1 NVMe device available at each SDV compute node || *Test results and parameters used stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /mnt/beeond || /mnt/beeond || SDV || BeeGFS On Demand running on the NVMe || Local || BeeGFS 7.1.2 || Block size: 512K || 1130 MiB/s write, 2447 MiB/s read[[BR]]12511 ops/s create, 18424 ops/s remove* || 1 BeeOND instance running on each NVMe device || *Test results and parameters used stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||

== Stripe Pattern Details ==
It is possible to query this information from the deep login node, for instance:

{{{
manzano@deep $ fhgfs-ctl --getentryinfo /work/manzano
Path: /manzano
Mount: /work
EntryID: 1D-53BA4FF8-3BD3
Metadata node: deep-fs02 [ID: 15315]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4

manzano@deep $ beegfs-ctl --getentryinfo /sdv-work/manzano
Path: /manzano
Mount: /sdv-work
EntryID: 0-565C499C-1
Metadata node: deeper-fs01 [ID: 1]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4
}}}
Or like this:

{{{
manzano@deep $ stat -f /work/manzano
  File: "/work/manzano"
    ID: 0        Namelen: 255     Type: fhgfs
Block size: 524288     Fundamental block size: 524288
Blocks: Total: 120178676  Free: 65045470   Available: 65045470
Inodes: Total: 0          Free: 0

manzano@deep $ stat -f /sdv-work/manzano
  File: "/sdv-work/manzano"
    ID: 0        Namelen: 255     Type: fhgfs
Block size: 524288     Fundamental block size: 524288
Blocks: Total: 120154793  Free: 110378947  Available: 110378947
Inodes: Total: 0          Free: 0
}}}
See http://www.beegfs.com/wiki/Striping for more information.

== Additional info ==
Detailed information on the '''BeeGFS Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/BeeGFS here].

Detailed information on the '''BeeOND Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/BeeOND here].

Detailed information on the '''Storage Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/local_storage here].

Detailed information on the '''Storage Performance''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/SDV_AdminGuide/3_Benchmarks here].

== Notes ==
 * The /work file system available in the DEEP-EST prototype is also reachable from the nodes in the SDV (including KNLs and KNMs), but only through a slower 1 GbE connection. The file system is therefore not suitable for benchmarking or I/O-intensive jobs from those nodes.

 * Performance test (IOR and mdtest) reports are available in the BSCW under DEEP-ER -> Work Packages (WPs) -> WP4 -> T4.5 - Performance measurement and evaluation of I/O software -> Jülich DEEP Cluster -> Benchmarking reports: https://bscw.zam.kfa-juelich.de/bscw/bscw.cgi/1382059

 * Test results and the parameters used are stored in JUBE:

{{{
user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior
user@deep $ jube2 result benchmarks

user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest
user@deep $ jube2 result benchmarks
}}}