The users' home folders are placed on the shared GPFS file systems. With the advent of the new usage model at JSC ([http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/NewUsageModel/NewUsageModel_node.html JUMO]), the shared file systems are structured as follows:
- $HOME: each JSC user has a folder under `/p/home/jusers/`, in which different home folders are available, one per system the user has access to. These home folders have a low space quota and are reserved for configuration files, SSH keys, etc.
- $PROJECT: in JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user has a folder within each of the projects they are part of. For the DEEP project, this folder is located under `/p/project/cdeep/`. This is where users should place their data, and where the old files generated in the home folder before the JUMO transition can be found.
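For illustration, the two paths above combine as follows (the user name `jdoe` is a made-up example, not a real account):

```shell
# Hypothetical example user name -- substitute your own JSC account.
USER_NAME=jdoe

# Per-system home folder (low quota: configuration files, SSH keys, ...).
HOME_DIR=/p/home/jusers/$USER_NAME

# Per-user folder inside the DEEP project (place your data here).
PROJECT_DIR=/p/project/cdeep/$USER_NAME

echo "$HOME_DIR"     # → /p/home/jusers/jdoe
echo "$PROJECT_DIR"  # → /p/project/cdeep/jdoe
```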
|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see notes)''' || '''Description''' || '''Other''' ||
|| /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Home directory;[[BR]]used only for configuration files. || ||
|| /p/project || /p/project/cdeep/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || Project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
|| /work || /work/cdeep || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system || *Also available in the SDV, but only through a 1 Gbit/s network connection ||
|| /sdv-work || /sdv-work/cdeep/$USER || SDV || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 425 MiB/s write, 67 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system || *Test results and parameters used stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /nvme || /nvme/tmp || SDV || NVMe device || Local || BeeGFS 7.1.2 || Block size: 4K || 1145 MiB/s write, 3108 MiB/s read[[BR]]139148 ops/s create, 62587 ops/s remove* || 1 NVMe device available at each SDV compute node || *Test results and parameters used stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /mnt/beeond || /mnt/beeond || SDV || BeeGFS On Demand running on the NVMe || Local || BeeGFS 7.1.2 || Block size: 512K || 1130 MiB/s write, 2447 MiB/s read[[BR]]12511 ops/s create, 18424 ops/s remove* || 1 BeeOND instance running on each NVMe device || *Test results and parameters used stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
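Since `/nvme` and `/mnt/beeond` are node-local, files written there are only visible on that compute node and disappear from reach once the job leaves the node, so data should be staged in and out explicitly. A minimal sketch of that stage-in/stage-out pattern (the working-directory name and the `mktemp` fallback are illustrative assumptions, not part of the system configuration):

```shell
# Node-local NVMe scratch as listed in the table above; fall back to a
# temporary directory so this sketch also runs on machines without /nvme/tmp.
SCRATCH=/nvme/tmp
[ -d "$SCRATCH" ] && [ -w "$SCRATCH" ] || SCRATCH=$(mktemp -d)

# Hypothetical per-job working directory on the local device.
WORKDIR=$SCRATCH/job_scratch.$$
mkdir -p "$WORKDIR"

# Stage input in, run the application, stage results out (sketched here
# by writing and reading back a small file).
echo "input data" > "$WORKDIR/input.dat"
RESULT=$(cat "$WORKDIR/input.dat")

# Clean up: files left on a node-local filesystem are not reachable
# from other nodes after the job ends.
rm -rf "$WORKDIR"
echo "$RESULT"
```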