File Systems
Available file systems
On the DEEP system, three different groups of file systems are available:
- the JSC GPFS file systems, provided via JUST and mounted on all JSC systems;
- the DEEP parallel BeeGFS file systems, available on all the nodes of the DEEP system;
- the file systems local to each node.
The users' home folders are placed on the shared GPFS file systems. With the advent of the new user model at JSC (JUMO), the shared file systems are structured as follows:
- $HOME: each JSC user has a folder under /p/home/jusers/, in which different home folders are available, one per system he/she has access to. These home folders have a low space quota and are reserved for configuration files, ssh keys, etc.
- $PROJECT: in JUMO, data and computational resources are assigned to projects: users can request access to a project and use the resources associated with it. As a consequence, each user can create folders within each of the projects he/she is part of (with either personal permissions or permissions shared with other project members). For the DEEP-SEA project (for example), the project folder is located under /p/project/deepsea/. This is where the user should place data, and where the old files generated in the home folder before the JUMO transition can be found (see the example below).
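A minimal sketch of how this layout is typically used (the per-user subfolder and the data folder name below are only examples, not something JUMO creates automatically):

$ ls /p/home/jusers/$USER                      # one home folder per system you have access to
$ mkdir -p /p/project/deepsea/$USER            # personal folder inside the DEEP-SEA project
$ mv ~/old_results /p/project/deepsea/$USER/   # bulky data belongs in the project folder, not in $HOME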
The DEEP system doesn't mount the $SCRATCH file systems from GPFS, as it is expected to provide similar functionalities with its own parallel and local file systems.
The deepv login node exposes the same file systems as the compute nodes, but it lacks a local scratch file system. Since /tmp is very limited in size on deepv, please use $SCRATCH instead (pointing to the project folder), or use e.g. /pmem/scratch on the dp-dam partition or $LOCALSCRATCH on any other compute node when performing SW installation activities. A quota has been introduced for /tmp on deepv to avoid clogging this file system on the login node, which would lead to several issues. Additionally, files in /dev/shm, /tmp and /var/tmp that are older than 7 days will be removed regularly.
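As a sketch of what this means in practice, assuming an installation tool that honours the TMPDIR environment variable (most do, but individual tools may differ) and assuming a personal subfolder under $SCRATCH:

$ export TMPDIR=$SCRATCH/$USER/tmp    # redirect temporary files away from /tmp on deepv
$ mkdir -p $TMPDIR
$ pip install --user <some-package>   # example SW installation; pip places its build files under TMPDIR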
The following table summarizes the characteristics of the file systems available on the DEEP and SDV systems. Please be aware that the $project (all lowercase) variable used in the table only represents any JuDoor project the user might have access to, and that it is not actually exported in the system environment. For a list of all projects a user belongs to, please refer to the user's JuDoor page. Alternatively, users can check the projects they are part of with the jutil application:
$ jutil user projects -o columns
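Since $project is not exported automatically, a minimal example of setting it by hand for one of the projects listed by jutil (deepsea is used here purely as an example project name):

$ project=deepsea
$ ls /p/project/$project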
| Mount Point | User can write/read to/from | Cluster | Type | Global / Local | SW Version | Stripe Pattern Details | Maximum Measured Performance (see footnotes) | Description | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /p/home | /p/home/jusers/$USER | SDV, DEEP | GPFS exported via NFS | Global | JUST GPFS | | | Home directory; used only for configuration files. | |
| /p/project | /p/project/$project | SDV, DEEP | GPFS exported via NFS | Global | JUST GPFS | | | Project directory; GPFS main storage file system; not suitable for performance-relevant applications or benchmarking | |
| /arch | /arch/$project | Login node only (deepv) | GPFS exported via NFS | Global | JUST GPFS | | | Archive directory; long-term storage solution for data not used in a long time; data is migrated to tape, so it is not intended for lots of small files. Recovery can take days. | If you plan to transfer data to/from the archive (e.g. to the project folder), please consider using JUDAC instead of working on deepv, in order to help avoid congestion on the DEEP ↔ JUST connection. Get in contact with the system administrators (e.g. via the support mailing list) if you need assistance with archiving your data. |
| /arch2 | /arch2/$project | Login node only (deepv) | GPFS exported via NFS | Global | JUST GPFS | | | Archive directory; long-term storage solution for data not used in a long time; data is migrated to tape, so it is not intended for lots of small files. Recovery can take days. | If you plan to transfer data to/from the archive (e.g. to the project folder), please consider using JUDAC instead of working on deepv, in order to help avoid congestion on the DEEP ↔ JUST connection. For the DEEP-SEA project, please apply for the datadeepsea project within JuDoor. Get in contact with the system administrators (e.g. via the support mailing list) if you need assistance with archiving your data. |
| /afsm | /afsm | DEEP | BeeGFS | Global | BeeGFS 7.2.5 | | | Fast work file system; no backup, hence not meant for permanent data storage | |
| /work_old | /work_old/$project | DEEP | BeeGFS | Global | BeeGFS 7.2.5 | | | Work file system; no backup, hence not meant for permanent data storage. Deprecated | |
| /scratch | /scratch | DEEP | xfs local partition | Local* | | | | Node-local scratch file system for temporary data. Will be cleaned up after the job finishes. Size differs between the modules | *Recommended to use instead of /tmp for storing temporary files |
| /nvme/scratch | /nvme/scratch | DAM partition | local SSD (xfs) | Local* | | | | Scratch file system for temporary data. Will be cleaned up after the job finishes | *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5", 3D XPoint) |
| /nvme/scratch2 | /nvme/scratch2 | DAM partition | local SSD (ext4) | Local* | | | | Scratch file system for temporary data. Will be cleaned up after the job finishes | *1.5 TB Intel Optane SSD Data Center (DC) P4800X (NVMe PCIe3 x4, 2.5", 3D XPoint) |
| /pmem/scratch | /pmem/scratch | DAM partition | DCPMM in appdirect mode | Local* | | | 2.2 GB/s (simple dd test on dp-dam01) | | *3 TB in dp-dam[01,02], 2 TB in dp-dam[03-16]; Intel Optane DC Persistent Memory (DCPMM) 256 GB DIMMs based on Intel's 3D XPoint non-volatile memory technology |
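Since the size of the node-local scratch file systems differs between modules, it can be worth checking the available space from inside a job allocation before relying on them. A hedged sketch using Slurm (the dp-dam partition name is taken from the text above; /nvme/scratch and /pmem/scratch exist only on the DAM nodes):

$ srun -N1 -p dp-dam df -h /scratch /nvme/scratch /pmem/scratch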
Notes
- dd test @dp-dam01 of the DCPMM in appdirect mode:
[root@dp-dam01 scratch]# dd if=/dev/zero of=./delme bs=4M count=1024 conv=sync
1024+0 records in
1024+0 records out
4294967296 bytes (4.3 GB) copied, 1.94668 s, 2.2 GB/s
- The /work file system, which is available in the DEEP-EST prototype, is also reachable from the nodes in the SDV (including the KNL and ml-gpu nodes), but through a slower 1 Gb/s connection. The file system is therefore not suitable for benchmarking or I/O-intensive jobs from those nodes.
- For moving data between /p/* and /arch, please use JUDAC instead of performing these actions on the login node (deepv). This helps avoid congestion on the JUST connection:
$ ssh -l <username> judac mv /p/... /arch/...
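If you prefer to copy rather than move the data, a hedged variant of the same approach is to log in to JUDAC and run rsync there (the source and destination folders are placeholders and assume the deepsea project):

$ ssh -l <username> judac
$ rsync -av /p/project/deepsea/<folder>/ /arch/deepsea/<folder>/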