Changes between Version 6 and Version 7 of Public/User_Guide/OmpSs-2


Timestamp: Jun 11, 2019, 11:03:23 AM (5 years ago)
Author: Pedro Martinez-Ferror
Comment: (none)

  • Public/User_Guide/OmpSs-2

    v6 v7  
    1010OmpSs-2 is a programming model composed of a set of directives and library routines that can be used in conjunction with a high-level programming language in order to develop concurrent applications. Its name originally comes from two other programming models: **OpenMP** and **StarSs**. The design principles of these two programming models constitute the fundamental ideas used to conceive the OmpSs philosophy.
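To illustrate the directive-based style, the following is a minimal sketch of an OmpSs-2 task in C; it is not taken from this guide, and the array name, its size, and the printed output are purely illustrative:

{{{#!c
#include <stdio.h>

#define N 4

int main(void)
{
    int block[N];

    for (int i = 0; i < N; i++) {
        /* Each iteration becomes a task; out() declares the data the
           task produces so the runtime can track dependences on it. */
        #pragma oss task out(block[i]) firstprivate(i)
        block[i] = i * i;
    }

    /* Wait for all previously created tasks to finish. */
    #pragma oss taskwait

    for (int i = 0; i < N; i++)
        printf("block[%d] = %d\n", i, block[i]);

    return 0;
}
}}}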
    1111
    12 [[Image(pools.png)]]
     12[[Image(pools.png, 20%)]]
    1313
    1414The OmpSs-2 **thread-pool** execution model differs from the **fork-join** parallelism implemented in OpenMP.
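For contrast, the fork-join counterpart in OpenMP looks roughly like the sketch below (an illustrative fragment, with placeholder loop bounds). In OmpSs-2 there is no equivalent `parallel` construct: the runtime keeps its pool of worker threads alive for the whole execution, and the program only creates tasks and waits for them, as in the example above.

{{{#!c
#include <stdio.h>
#include <omp.h>

#define N 8

int main(void)
{
    /* Fork-join: a team of threads is forked when the parallel region
       starts and joined (synchronized and released) when it ends. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        printf("iteration %d run by thread %d\n", i, omp_get_thread_num());

    return 0;
}
}}}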
     
    3535The following table summarizes the characteristics of the file systems available in the DEEP-EST and DEEP-ER (SDV) systems:
    3636
    37 || '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see footnotes)''' || '''Description''' || '''Other''' ||
    38 || /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Home directory;[[BR]]used only for configuration files. || ||
    39 || /p/project || /p/project/cdeep/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
    40 || /work || /work/cdeep || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system || *Also available in the SDV, but only through a 1 GbE network connection ||
    41 || /sdv-work || /sdv-work/cdeep/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and KNM via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system || *Test results and the parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
    42 || /nvme || /nvme/tmp || SDV || NVMe device || Local || BeeGFS 7.1.2 || Block size: 4K || 1145 MiB/s write, 3108 MiB/s read[[BR]]139148 ops/s create, 62587 ops/s remove* || 1 NVMe device available at each SDV compute node || *Test results and the parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
    43 || /mnt/beeond || /mnt/beeond || SDV || BeeGFS On Demand running on the NVMe || Local || BeeGFS 7.1.2 || Block size: 512K || 1130 MiB/s write, 2447 MiB/s read[[BR]]12511 ops/s create, 18424 ops/s remove* || 1 BeeOND instance running on each NVMe device || *Test results and the parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
    4437
    4538== Stripe Pattern Details ==