[[Image(OmpSs2_logo_full.png)]]

= Programming with OmpSs-2 =

Table of contents:

 * [#QuickOverview Quick Overview]
 * [#FileSystems File Systems]
 * [#StripePatternDetails Stripe Pattern Details]
 * [#Additionalinformation Additional information]
 * [#Notes Notes]

== Quick Overview ==

OmpSs-2 is a programming model composed of a set of directives and library routines that can be used in conjunction with a high-level programming language (such as C, C++ or Fortran) to develop concurrent applications. Its name originally comes from two other programming models: **OpenMP** and **StarSs**. The design principles of these two programming models constitute the fundamental ideas used to conceive the OmpSs philosophy.

[[Image(OmpSsOpenMP.png, 30%)]]

The OmpSs-2 **thread-pool** execution model differs from the **fork-join** parallelism implemented in OpenMP.

[[Image(pools.png, 30%)]]

A **task** is the minimum execution entity that can be managed independently by the runtime scheduler. **Task dependences** let the user annotate the data flow of the program and are used to determine, at runtime, whether the parallel execution of two tasks may cause data races. A minimal code sketch illustrating these concepts is shown below.

[[Image(taskGraph.png, 15%)]]

The reference implementation of OmpSs-2 is based on the **Mercurium** source-to-source compiler and the **Nanos6** runtime library:

 * The Mercurium source-to-source compiler provides the necessary support for transforming the high-level directives into a parallelized version of the application.
 * The Nanos6 runtime library provides services to manage all the parallelism in the user application, including task creation, synchronization and data movement, as well as support for resource heterogeneity.

[[Image(MercuriumNanos.png, 35%)]]
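To make the notions of tasks and dependences concrete, here is a minimal sketch of an OmpSs-2 program in C. It is illustrative rather than taken from the official examples: the file name (`vector_sum.c`), the array size and the region split are assumptions made for this page, while the `task`, `in`/`out`/`inout` and `taskwait` constructs follow the OmpSs-2 specification linked below.

{{{
#!c
/* vector_sum.c -- illustrative OmpSs-2 sketch (not an official example).
 * Two tasks initialize the halves of an array; a third task, ordered
 * by its "in" dependences, reduces the result. */
#include <stdio.h>

#define N 1024

int main(void)
{
    long v[N];
    long sum = 0;

    /* Each initialization task declares the array region it writes
     * (OmpSs section syntax: v[first;size]), so the runtime may run
     * both halves concurrently. */
    #pragma oss task out(v[0;N/2])
    for (int i = 0; i < N/2; i++) v[i] = i;

    #pragma oss task out(v[N/2;N/2])
    for (int i = N/2; i < N; i++) v[i] = i;

    /* This task reads the whole array, so the runtime makes it wait
     * for the two initializer tasks before it starts. */
    #pragma oss task in(v[0;N]) inout(sum)
    for (int i = 0; i < N; i++) sum += v[i];

    /* Wait for all tasks before using the result. */
    #pragma oss taskwait

    printf("sum = %ld\n", sum); /* expected: N*(N-1)/2 = 523776 */
    return 0;
}
}}}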
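Assuming the reference toolchain is installed and Mercurium's C driver `mcc` is available in the `PATH`, the sketch above could be compiled and run along these lines (the `--ompss-2` flag selects OmpSs-2 support; exact flags and module setup may differ between installations, so check the user guide linked below):

{{{
user@deep $ mcc --ompss-2 -o vector_sum vector_sum.c
user@deep $ ./vector_sum
sum = 523776
}}}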
The reader is encouraged to visit the following links for additional information:

 * OmpSs-2 official website: [https://pm.bsc.es/ompss-2]
 * OmpSs-2 specification: [https://pm.bsc.es/ftp/ompss-2/doc/spec]
 * OmpSs-2 user guide: [https://pm.bsc.es/ftp/ompss-2/doc/user-guide]
 * OmpSs-2 examples and exercises: [https://pm.bsc.es/ftp/ompss-2/doc/examples/index.html]
 * Mercurium official website: [https://www.bsc.es/research-and-development/software-and-apps/software-list/mercurium-ccfortran-source-source-compiler Link 1], [https://pm.bsc.es/mcxx Link 2]
 * Nanos official website: [https://www.bsc.es/research-and-development/software-and-apps/software-list/nanos-rtl Link 1], [https://pm.bsc.es/nanox Link 2]

== File Systems ==

On the DEEP-EST system, three different groups of filesystems are available:

 * the [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/Filesystems/JUST_filesystems_node.html JSC GPFS filesystems], provided via [http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/OnlineStorage/JUST/JUST_node.html JUST] and mounted on all JSC systems;
 * the DEEP-EST (and SDV) parallel BeeGFS filesystems, available on all the nodes of the DEEP-EST system;
 * the filesystems local to each node.

The users' home folders are placed on the shared GPFS filesystems. With the advent of the new user model at JSC ([http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/NewUsageModel/NewUsageModel_node.html JUMO]), the shared filesystems are structured as follows:

 * $HOME: each JSC user has a folder under `/p/home/jusers/`, in which different home folders are available, one per system the user has access to. These home folders have a low space quota and are reserved for configuration files, ssh keys, etc.
 * $PROJECT: in JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user has a folder within each of the projects they are part of. For the DEEP project, this folder is located under `/p/project/cdeep/`. This is where users should place their data, and where the files generated in the home folder before the JUMO transition can be found.

The DEEP-EST system doesn't mount the $SCRATCH and $ARCHIVE filesystems, as it is expected to provide similar functionality with its own parallel filesystems.

The following table summarizes the characteristics of the file systems available in the DEEP-EST and DEEP-ER (SDV) systems:

== Stripe Pattern Details ==

It is possible to query the stripe pattern details of a directory from the deep login node, for instance:

{{{
manzano@deep $ fhgfs-ctl --getentryinfo /work/manzano
Path: /manzano
Mount: /work
EntryID: 1D-53BA4FF8-3BD3
Metadata node: deep-fs02 [ID: 15315]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4

manzano@deep $ beegfs-ctl --getentryinfo /sdv-work/manzano
Path: /manzano
Mount: /sdv-work
EntryID: 0-565C499C-1
Metadata node: deeper-fs01 [ID: 1]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4
}}}

Or like this:

{{{
manzano@deep $ stat -f /work/manzano
  File: "/work/manzano"
    ID: 0        Namelen: 255     Type: fhgfs
Block size: 524288     Fundamental block size: 524288
Blocks: Total: 120178676  Free: 65045470   Available: 65045470
Inodes: Total: 0          Free: 0

manzano@deep $ stat -f /sdv-work/manzano
  File: "/sdv-work/manzano"
    ID: 0        Namelen: 255     Type: fhgfs
Block size: 524288     Fundamental block size: 524288
Blocks: Total: 120154793  Free: 110378947  Available: 110378947
Inodes: Total: 0          Free: 0
}}}

See http://www.beegfs.com/wiki/Striping for more information.

== Additional information ==

Detailed information on the '''BeeGFS Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/BeeGFS here].

Detailed information on the '''BeeOND Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/BeeOND here].

Detailed information on the '''Storage Configuration''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/local_storage here].

Detailed information on the '''Storage Performance''' can be found [https://trac.version.fz-juelich.de/deep-er/wiki/SDV_AdminGuide/3_Benchmarks here].

== Notes ==

 * The /work file system available in the DEEP-EST prototype is also reachable from the nodes in the SDV (including the KNL and KNM nodes), but only through a slower 1 Gbit/s connection. It is therefore not suitable for benchmarking or I/O-intensive jobs run from those nodes.
 * Reports on the performance tests (IOR and mdtest) are available in the BSCW under DEEP-ER -> Work Packages (WPs) -> WP4 -> T4.5 - Performance measurement and evaluation of I/O software -> Jülich DEEP Cluster -> Benchmarking reports: https://bscw.zam.kfa-juelich.de/bscw/bscw.cgi/1382059
 * The test results and the parameters used are stored in JUBE:
{{{
user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior
user@deep $ jube2 result benchmarks
user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest
user@deep $ jube2 result benchmarks
}}}