Changes between Version 18 and Version 19 of Public/User_Guide/Filesystems
Timestamp: May 5, 2020, 3:48:09 PM
= File Systems =

== Available file systems ==

On the DEEP-EST system, three different groups of file systems are available:

...

|| '''Mount Point''' || '''User can write/read to/from''' || '''Cluster''' || '''Type''' || '''Global / Local''' || '''SW Version''' || '''Stripe Pattern Details''' || '''Maximum Measured Performance[[BR]](see footnotes)''' || '''Description''' || '''Other''' ||
|| /p/home || /p/home/jusers/$USER || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS home directory;[[BR]]used only for configuration files. || ||
|| /p/project || /p/project/cdeep || SDV, DEEP-EST || GPFS exported via NFS || Global || || || || JUST GPFS project directory;[[BR]]GPFS main storage file system;[[BR]]not suitable for performance-relevant applications or benchmarking || ||
|| /arch || /arch/deep || login node only (deepv) || GPFS exported via NFS || Global || || || || JUST GPFS archive directory;[[BR]]long-term storage for data that has not been used in a long time;[[BR]]data is migrated to tape, so it is not intended for lots of small files; recovery can take days. || ||
|| /work || /work/cdeep || DEEP-EST* || BeeGFS || Global || BeeGFS 7.1.2 || || || Work file system, **no backup**, hence not meant for permanent data storage || *Also available in the SDV, but only through a 1 Gb/s network connection ||
|| /scratch || /scratch || DEEP-EST || xfs local partition || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *Recommended instead of /tmp for storing temporary files ||
|| /nvme/scratch || /nvme/scratch || DAM partition || local SSD (xfs) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5", 3D XPoint) ||
|| /nvme/scratch2 || /nvme/scratch2 || DAM partition || local SSD (ext4) || Local* || || || || Scratch file system for temporary data. Will be cleaned up after the job finishes! || *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5", 3D XPoint) ||
|| /pmem/scratch || /pmem/scratch || DAM partition || DCPMM in App Direct mode || Local* || || || 2.2 GB/s in a simple dd test on dp-dam01 || || *3 TB in dp-dam[01,02], 2 TB in dp-dam[03-16]; Intel Optane DC Persistent Memory (DCPMM), 256 GB DIMMs based on Intel's 3D XPoint non-volatile memory technology ||
|| /sdv-work || /sdv-work/cdeep/$USER || SDV (deeper-sdv nodes via EXTOLL, KNL and ml-gpu via GbE only), DEEP-EST (1 GbE only) || BeeGFS || Global || BeeGFS 7.1.2 || Type: RAID0,[[BR]]Chunksize: 512K,[[BR]]Number of storage targets desired: 4 || 1831.85 MiB/s write, 1308.62 MiB/s read[[BR]]15202 ops/s create, 5111 ops/s remove* || Work file system, **no backup**, hence not meant for permanent data storage || *Test results and parameters used are stored in JUBE:[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/ior`[[BR]]`user@deep $ jube2 result benchmarks`[[BR]][[BR]]`user@deep $ cd /usr/local/deep-er/sdv-benchmarks/synthetic/mdtest`[[BR]]`user@deep $ jube2 result benchmarks` ||
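Several of the entries above are node-local scratch file systems that are wiped once the job ends, with /scratch recommended over /tmp. A common pattern is therefore to stage input into the local scratch at the start of a batch job and to copy results back to a global file system before the job finishes. The following is a minimal sketch of such a job script; the partition name, application name, and file names are illustrative placeholders, not values taken from this page:

{{{
#!/bin/bash
#SBATCH --partition=dp-dam     # placeholder partition name
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Stage input from the global project file system to the node-local scratch
cp /p/project/cdeep/$USER/input.dat /nvme/scratch/

# Run the application against the fast local copy
srun ./my_app /nvme/scratch/input.dat /nvme/scratch/output.dat

# Copy results back before the job ends: the local scratch is cleaned afterwards
cp /nvme/scratch/output.dat /p/project/cdeep/$USER/
}}}

Note that a local file system is only visible on the node it belongs to, so in a multi-node job each node sees its own, separate /nvme/scratch.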
{{{
...
4294967296 bytes (4.3 GB) copied, 1.94668 s, 2.2 GB/s
}}}

* The /work file system, which is available in the DEEP-EST prototype, is also reachable from the nodes in the SDV (including the KNL and ml-gpu nodes), but only through a slower 1 Gb/s connection. It is therefore not suitable for benchmarking or I/O-intensive jobs run from those nodes.

* Reports on the performance tests (IOR and mdtest) are available in BSCW under DEEP-ER -> Work Packages (WPs) -> WP4 -> T4.5 - Performance measurement and evaluation of I/O software -> Jülich DEEP Cluster -> Benchmarking reports: https://bscw.zam.kfa-juelich.de/bscw/bscw.cgi/1382059
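The dd output fragment above corresponds to the 2.2 GB/s figure quoted for /pmem/scratch in the table. The exact invocation is not part of this diff; a streaming-write test of roughly the following shape (placeholder file name; 4 GiB written in 1 MiB blocks, matching the 4294967296 bytes reported above) would produce output of that form:

{{{
# A sketch, not the original command from the page: write 4 GiB of zeros
# to the DCPMM-backed scratch and let dd report the achieved bandwidth.
user@dp-dam01 $ dd if=/dev/zero of=/pmem/scratch/ddtest bs=1M count=4096
}}}

Keep in mind that dd measures only sequential streaming bandwidth; it says nothing about small-file or random-access performance.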