Remember that OmpSs-2 uses a **thread-pool** execution model, which means that it permanently **uses all the threads** present on the system. The reader can check the **system affinity** by running the **NUMA command** `numactl --show`:
{{{
$ numactl --show
policy: bind
preferred node: 0
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35
cpubind: 0
nodebind: 0
membind: 0
}}}
as well as the **Nanos6 command** `nanos6-info --runtime-details | grep List`:
{{{
$ nanos6-info --runtime-details | grep List
Initial CPU List 0-11,24-35
NUMA Node 0 CPU List 0-35
NUMA Node 1 CPU List
}}}

Notice that both commands return consistent outputs. However, even though an entire node with two sockets has been requested, the process has been bound only to the first NUMA node (i.e. socket). As a result, only the 24 hardware threads listed above (0-11 and 24-35), all belonging to the first socket, are going to be utilised, whilst the remaining CPUs, including the whole second socket, will remain idle (hyper-threading is enabled, so each socket exposes 36 logical CPUs). Therefore, **the system affinity shown above is not correct.**

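For comparison, a correct binding for a full two-socket node would include both NUMA nodes. A purely hypothetical `numactl --show` output for such a job (the exact CPU numbering depends on the machine) might look like:
{{{
$ numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 ... 70 71
cpubind: 0 1
nodebind: 0 1
membind: 0 1
}}}
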
System affinity can be used, for example, to set the ratio of MPI ranks to OmpSs-2 worker threads in a hybrid application, and it can be modified by the user in several ways (see the sketch after this list):
* Via SLURM: if the affinity does not correspond to the requested resources, as in the example above, contact the system administrators.
* Via the command `numactl`.
* Via the command `taskset`.
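
As an illustration, affinity can also be set manually when launching an application. In the following sketch the binary name `./app` and the CPU ranges are placeholders, and the MPMD syntax in the last command is specific to Open MPI:
{{{
# Bind the process and its memory allocations to both NUMA nodes:
$ numactl --cpunodebind=0,1 --membind=0,1 ./app

# Restrict the process to an explicit list of CPUs:
$ taskset -c 0-71 ./app

# Hybrid MPI + OmpSs-2: one rank per socket, each with its own memory:
$ mpirun -np 1 numactl --cpunodebind=0 --membind=0 ./app : \
         -np 1 numactl --cpunodebind=1 --membind=1 ./app
}}}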