Changes between Version 14 and Version 15 of Public/User_Guide/Filesystems
Timestamp: Feb 13, 2020, 4:06:31 PM
The only difference between v14 and v15 is in row 31 of the file-system table (a benchmark result was recorded) and a matching dd test added to the Notes section.

Rows 29–33 of the table:

|| /nvme/scratch || /nvme/scratch || DAM partition || local SSD (xfs) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||
|| /nvme/scratch2 || /nvme/scratch2 || DAM partition || local SSD (ext4) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5”, 3D XPoint) ||

Row 31, v14:
|| /pmem/scratch || /pmem/scratch || DAM partition (odd nodes) || DCPMM in appdirect mode || Local* || || || || || *3 TB in dp-dam01, 2 TB in dp-dam[03,05,07,09,11,13,15]; Intel Optane DC Persistent Memory (DCPMM) 256 GB DIMMs based on Intel’s 3D XPoint non-volatile memory technology ||

Row 31, v15:
|| /pmem/scratch || /pmem/scratch || DAM partition (odd nodes) || DCPMM in appdirect mode || Local* || || || 2.2 GB/s simple dd test in dp-dam01* || || *3 TB in dp-dam01, 2 TB in dp-dam[03,05,07,09,11,13,15]; Intel Optane DC Persistent Memory (DCPMM) 256 GB DIMMs based on Intel’s 3D XPoint non-volatile memory technology ||

|| /sdv-work || /sdv-work/cdeep/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and ml-gpu via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets: desired: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
|| /nvme || /nvme/tmp || SDV || NVMe device || Local || BeeGFS 7.1.2 || Block size: 4K || 1145 MiB/s write, 3108 MiB/s read[[BR]]139148 ops/s create, 62587 ops/s remove* || 1 NVMe device available at each SDV compute node || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||

== Notes ==
 * dd test @dp-dam01 of the DCPMM in appdirect mode (added in v15):

{{{
[root@dp-dam01 scratch]# dd if=/dev/zero of=./delme bs=4M count=1024 conv=sync
1024+0 records in
1024+0 records out
4294967296 bytes (4.3 GB) copied, 1.94668 s, 2.2 GB/s
}}}

 * The /work file system available in the DEEP-EST prototype is also reachable from the nodes in the SDV (including the KNL and ml-gpu nodes), but only through a slower 1 GbE connection. It is therefore not suitable for benchmarking or I/O-intensive jobs from those nodes.
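For reference, the simple dd streaming-write test quoted in the Notes can be rerun on any of the local scratch mounts. A minimal sketch follows; the `TARGET_DIR` variable and the reduced `count` are illustrative assumptions (on a DAM node you would set `TARGET_DIR=/pmem/scratch` or `/nvme/scratch` and use the original `count=1024`):

```shell
#!/bin/sh
# Sketch of the dd streaming-write test from the Notes section.
# TARGET_DIR is hypothetical: it defaults to /tmp so the script runs
# anywhere; point it at the scratch mount you want to measure.
TARGET_DIR="${TARGET_DIR:-/tmp}"
TESTFILE="$TARGET_DIR/delme"

# 4 MiB blocks as in the original test; count reduced from 1024 (4 GiB)
# to 16 (64 MiB) to keep the illustration cheap. conv=sync pads any
# partial input block to the full block size. dd prints the achieved
# throughput on stderr when it finishes.
dd if=/dev/zero of="$TESTFILE" bs=4M count=16 conv=sync

# The scratch file systems are cleaned after the job finishes, but
# remove test files yourself anyway.
rm -f "$TESTFILE"
```

Note that a single sequential dd stream is only a rough upper-bound indicator; the IOR/mdtest JUBE results in the table are the more systematic numbers.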