Overview of our systems
This page gives a short overview of the available systems from a hardware point of view. All hardware can be reached through a login node via SSH: deep@fz-juelich.de. The login node is implemented as a virtual machine hosted by the master nodes (in a failover mode). Please also see the information about getting an account and using the batch system.
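For illustration, a minimal login sketch; jdoe is a placeholder for your own project username, and the exact host name is an assumption derived from the address above:

  # Log in to the DEEP login node (replace jdoe with your own username)
  ssh jdoe@deep.fz-juelich.de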
Rack plan
Below is a sketch of the available hardware and a short description of the components relevant for system users (the nodes you can run your jobs on or use for testing).
SSSM rack
This rack hosts the master nodes, file servers and storage, as well as network components for the Gigabit Ethernet administration and service networks. Users can access the login node via deep@fz-juelich.de (implemented as a virtual machine running on the master nodes).
CM rack
This rack contains the hardware of the DEEP-EST Cluster Module, including compute nodes, management nodes, network components and the liquid cooling unit. A minimal batch script sketch for these nodes follows the list below.
- Cluster [50 nodes]:
dp-cn[01-50]
- 2 x Intel Xeon 'Skylake' Gold 6146 (12 cores / 24 threads each, 3.2 GHz)
- 192 GB RAM
- 1 x 400GB NVMe SSD (not exposed to users)
- network: IB EDR (100 Gb/s)
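As mentioned above, jobs are submitted through the batch system. A minimal batch script sketch for the Cluster Module nodes, assuming Slurm is in use; the partition name dp-cn is an assumption, please check the batch system page:

  #!/bin/bash
  #SBATCH --partition=dp-cn        # partition name is an assumption
  #SBATCH --nodes=2                # two of the dp-cn[01-50] nodes
  #SBATCH --ntasks-per-node=24     # 2 sockets x 12 cores per node
  #SBATCH --time=00:30:00
  srun ./my_app                    # my_app is a placeholder for your binary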
DAM rack
This rack will host the nodes of the Data Analytics Module of the DEEP-EST prototype. Currently it contains four test servers:
- Prototype DAM [4 nodes]:
protodam[01-04]
- 2 x Intel Xeon 'Skylake' (26 cores per socket)
- 192 GB RAM
- network: Gigabit Ethernet
SDV rack
Along with the prototype systems, several test nodes and so-called software development vehicles (SDVs) have been installed within the scope of the DEEP(-ER/-EST) projects. These are located in the SDV rack. The following components can be accessed by users:
- Cluster [16 nodes]:
deeper-sdv[01-16]
- 2 x Intel Xeon 'Haswell' E5-2680 v3 (12 cores each, 2.5 GHz)
- 128 GB RAM
- 1 x 400 GB NVMe SSD per node (accessible through BeeGFS on demand; see the sketch after this list)
- network: Extoll Tourmalet (100 Gb/s)
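A sketch of how the on-demand BeeGFS instance on the node-local NVMe devices might be started within a job; the beeond helper and all paths shown are assumptions for illustration, please check the filesystem documentation:

  # Create an on-demand BeeGFS across the NVMe drives of the allocated nodes
  # (all paths are assumptions; nodefile lists the allocated hosts)
  beeond start -n nodefile -d /mnt/nvme -c /mnt/beeond
  # ... run your job using /mnt/beeond as fast scratch space ...
  beeond stop -n nodefile -L -d    # tear down the instance (verify flags locally)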
- KNLs [8 nodes] (an MCDRAM usage sketch follows this list):
knl[01-08]
- 1 x Intel Xeon Phi 'Knights Landing' (64-68 cores)
- 1 x 400 GB NVMe SSD per node (accessible through BeeGFS on demand)
- 16 GB MCDRAM plus 96 GB RAM per KNL
- network: Gigabit Ethernet
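When a KNL is booted in flat memory mode, the 16 GB of MCDRAM are exposed as a separate NUMA node (commonly node 1) and can be targeted with numactl; a sketch, where the node numbering follows the usual convention rather than a verified detail of this system:

  numactl --hardware               # inspect the NUMA layout first
  numactl --membind=1 ./my_app     # allocate only from MCDRAM (my_app is a placeholder)
  numactl --preferred=1 ./my_app   # prefer MCDRAM, fall back to DDR4 when full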
- KNMs [2 nodes]:
knm[01-02]
- 1 x Intel Xeon Phi 'Knights Mill' (72 cores)
- 16 GB MCDRAM plus 96 GB RAM per KNM
- network: Gigabit Ethernet
- GPU nodes for ML [3 nodes] (a quick GPU check sketch follows this list):
ml-gpu[01-03]
- 2 x Intel Xeon 'Skylake' Silver 4112 (2.6 GHz)
- 192 GB RAM
- 4 x Nvidia Tesla V100 GPU (PCIe Gen3), 16 GB HBM2
- network: 40 Gb/s Ethernet
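A quick sketch for inspecting the GPUs on these nodes; the partition and GRES names are assumptions, please check the batch system page:

  # Allocate one ml-gpu node and list its four V100s
  srun --partition=ml-gpu --gres=gpu:4 nvidia-smi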
- NAM (Network Attached Memory):
- size: 2 GB
- network: Extoll
- details: https://www.deep-projects.eu/hardware/memory-hierarchies/49-nam
FPGA test server
In addition to the seven racks hosting the SDV and prototype hardware, an FPGA workstation is available for testing. Please get in contact with j.kreutz@… if you would like to get access. A first check sketch follows the node description below.
- FPGA [1 node]:
fpga01
- 2 x Intel CPU (8 cores)
- 64 GB RAM
- 1 x Intel Arria 10 PAC
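Once access is granted, a first functional check of the Arria 10 PAC could look like the following, assuming the Intel FPGA SDK for OpenCL is installed on fpga01 (this tooling is an assumption, not confirmed by this page):

  # Run the board diagnostic of the Intel FPGA SDK for OpenCL
  # (availability of the toolchain on fpga01 is an assumption)
  aocl diagnose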
Network overview
Different types of interconnects are in use alongside the Gigabit Ethernet connectivity (used for the administration and service networks) that is available on all nodes. The following sketch gives a rough overview. The network details are of particular interest for storage access; please also refer to the description of the filesystems.
Further information