= Programming with OmpSs-2 =

* Introduction
* Quick User Guide

== Introduction ==
OmpSs-2 is a programming model composed of a set of directives and library routines that, used in conjunction with a high-level programming language, enable the development of concurrent applications.

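As a minimal sketch of how this looks in practice (assuming the Mercurium compiler `mcc` and the Nanos6 runtime are available on the system; the source file name `hello_task.c` is purely illustrative), an OmpSs-2 program annotated with `#pragma oss task` directives is compiled and run as follows:

{{{
# Compile an OmpSs-2 source file with Mercurium; --ompss-2 selects the OmpSs-2 model
user@deep $ mcc --ompss-2 -o hello_task hello_task.c

# Run it: the Nanos6 runtime executes the annotated tasks concurrently
user@deep $ ./hello_task
}}}
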
== File Systems ==
On the DEEP-EST system, three different groups of filesystems are available:

* the [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html JSC GPFS filesystems], provided via [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/JUST_node.html JUST] and mounted on all JSC systems;

* the DEEP-EST (and SDV) parallel BeeGFS filesystems, available on all the nodes of the DEEP-EST system;

* the filesystems local to each node.

The users' home folders are placed on the shared GPFS filesystems. With the advent of the new usage model at JSC ([http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/NewUsageModel/NewUsageModel_node.html JUMO]), the shared filesystems are structured as follows:

* $HOME: each JSC user has a folder under `/p/home/jusers/`, in which different home folders are available, one per system he/she has access to. These home folders have a low space quota and are reserved for configuration files, ssh keys, etc.

* $PROJECT: in JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user has a folder within each of the projects he/she is part of. For the DEEP project, this folder is located under `/p/project/cdeep/`. This is where users should place their data, and where the old files generated in the home folder before the JUMO transition can be found (see the sketch after this list).

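As a quick check of both locations (a sketch using the paths given above; the exact sub-folder layout may differ per user and project), you can list them directly:

{{{
# Home folders: one sub-folder per system the user has access to
user@deep $ ls /p/home/jusers/$USER

# Project folder for the DEEP project: place your working data here
user@deep $ ls /p/project/cdeep/$USER
}}}
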
The DEEP-EST system doesn't mount the $SCRATCH and $ARCHIVE filesystems, as it is expected to provide similar functionality with its own parallel filesystems.

The following table summarizes the characteristics of the file systems available in the DEEP-EST and DEEP-ER (SDV) systems; a quick way to check them on a given node is shown after the table:

|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see footnotes)''' || '''Description''' || '''Other''' ||
|| /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Home directory;[[BR]]used only for configuration files. || ||
|| /p/project || /p/project/cdeep/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
|| /work || /work/cdeep || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system || *Also available in the SDV, but only through a 1 GbE network connection ||
|| /sdv-work || /sdv-work/cdeep/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and KNM via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system || *Test results and the parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /nvme || /nvme/tmp || SDV || NVMe device || Local || BeeGFS 7.1.2 || Block size: 4K || 1145 MiB/s write, 3108 MiB/s read[[BR]]139148 ops/s create, 62587 ops/s remove* || 1 NVMe device available at each SDV compute node || *Test results and the parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /mnt/beeond || /mnt/beeond || SDV || BeeGFS On Demand running on the NVMe || Local || BeeGFS 7.1.2 || Block size: 512K || 1130 MiB/s write, 2447 MiB/s read[[BR]]12511 ops/s create, 18424 ops/s remove* || 1 BeeOND instance running on each NVMe device || *Test results and the parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
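
To see which of these mount points are present on the node you are currently working on, and how full they are, standard tools are enough (a sketch; query only the mount points that exist on your node, as the local ones are not available everywhere):

{{{
# Global filesystems (available on all DEEP-EST nodes)
user@deep $ df -h /work /sdv-work

# Local NVMe storage (SDV compute nodes only)
user@deep $ df -h /nvme/tmp
}}}
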
== Stripe Pattern Details ==
It is possible to query the stripe pattern details of a given directory from the deep login node, for instance:

{{{
manzano@deep $ fhgfs-ctl --getentryinfo /work/manzano
Path: /manzano
Mount: /work
EntryID: 1D-53BA4FF8-3BD3
Metadata node: deep-fs02 [ID: 15315]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4

manzano@deep $ beegfs-ctl --getentryinfo /sdv-work/manzano
Path: /manzano
Mount: /sdv-work
EntryID: 0-565C499C-1
Metadata node: deeper-fs01 [ID: 1]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4
}}}
Or like this:

{{{
manzano@deep $ stat -f /work/manzano
File: "/work/manzano"
ID: 0 Namelen: 255 Type: fhgfs
Block size: 524288 Fundamental block size: 524288
Blocks: Total: 120178676 Free: 65045470 Available: 65045470
Inodes: Total: 0 Free: 0

manzano@deep $ stat -f /sdv-work/manzano
File: "/sdv-work/manzano"
ID: 0 Namelen: 255 Type: fhgfs
Block size: 524288 Fundamental block size: 524288
Blocks: Total: 120154793 Free: 110378947 Available: 110378947
Inodes: Total: 0 Free: 0
}}}
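Note that the block size reported by `stat -f` matches the 512K stripe chunk size shown above, and the block counts multiply out to the filesystem capacity: for /work, 120178676 blocks of 524288 bytes each correspond to roughly 57 TiB in total.
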
See http://www.beegfs.com/wiki/Striping for more information.

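The stripe settings of a directory you own can also be changed with `beegfs-ctl` (a sketch; the chunk size, target count, and directory name `mydir` are only examples, and on many BeeGFS installations changing the stripe pattern requires administrator rights):

{{{
# Stripe new files in this directory over 4 storage targets with 1M chunks
user@deep $ beegfs-ctl --setpattern --chunksize=1m --numtargets=4 /work/cdeep/$USER/mydir
}}}
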
== Additional information ==
Detailed information on the '''BeeGFS Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/BeeGFS here].

Detailed information on the '''BeeOND Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/BeeOND here].

Detailed information on the '''Storage Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/local_storage here].

Detailed information on the '''Storage Performance''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/SDV_AdminGuide/3_Benchmarks here].

== Notes ==
* The /work file system, which is available in the DEEP-EST prototype, is also reachable from the nodes in the SDV (including KNLs and KNMs), but only through a slower 1 GbE connection. It is therefore not suitable for benchmarking or I/O-intensive jobs run from those nodes.

* Performance test reports (IOR and mdtest) are available in the BSCW under DEEP-ER -> Work Packages (WPs) -> WP4 -> T4.5 - Performance measurement and evaluation of I/O software -> Jülich DEEP Cluster -> Benchmarking reports: https://bscw.zam.kfa-juelich.de/bscw/bscw.cgi/1382059

* The test results and the parameters used are stored in JUBE:

{{{
user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior
user@deep $ jube2 result benchmarks

user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest
user@deep $ jube2 result benchmarks
}}}