wiki:Public/User_Guide/System_overview


Overview of our systems

This page gives a short overview of the available systems from a hardware point of view. All hardware can be reached through a login node via SSH: deep@fz-juelich.de. The login node is implemented as a virtual machine hosted by the master nodes (in failover mode). Please also see the information about getting an account and using the batch system.
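
A minimal login sketch is given below. It assumes that the address above denotes the login host deep.fz-juelich.de and that <userid> is replaced by your own account name; adjust both if your account information states otherwise.

    # Log in to the DEEP login node (host name assumed from the address above).
    ssh <userid>@deep.fz-juelich.de

    # Forward the SSH agent if you intend to hop on to other nodes afterwards.
    ssh -A <userid>@deep.fz-juelich.de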

Rack plan

This is a sketch of the available hardware, including a short description of the components relevant to system users (the nodes you can use for running your jobs and for testing).

No image "Prototype_plus_SSSM_and_SDV_Rackplan_47U--2019-03.png" attached to Public/User_Guide/System_overview

SSSM rack

This rack hosts the master nodes, file servers and storage, as well as the network components for the Gigabit Ethernet administration and service networks. Users can access the login node via deep@fz-juelich.de (implemented as a virtual machine running on the master nodes).

CM rack

Contains the hardware of the DEEP-EST Cluster Module, including compute nodes, management nodes, network components and the liquid cooling unit. A batch-job sketch follows the node list below.

  • Cluster [50 nodes]: dp-cn[01-50]
    • 2 x Intel Xeon 'Skylake' Gold 6146 (12 cores / 24 threads each, 3.2 GHz)
    • 192 GB RAM
    • 1 x 400 GB NVMe SSD (for OS only, not exposed to users)
    • network: IB EDR (100 Gb/s)
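
The Cluster Module nodes are meant to be used through the batch system mentioned above. Below is a minimal batch-script sketch, assuming SLURM and a partition named dp-cn for these nodes; the partition name and resource numbers are assumptions, so check them against sinfo before submitting.

    #!/bin/bash
    # Minimal SLURM job sketch for the Cluster Module (partition name assumed).
    #SBATCH --partition=dp-cn
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=24   # 2 x 12 cores per dp-cn node
    #SBATCH --time=00:30:00

    srun hostname

Submit the script with sbatch and monitor your jobs with squeue -u $USER.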

DAM rack

This rack will host the nodes of the Data Analytics Module of the DEEP-EST prototype. Currently it contains four test servers (an interactive-access sketch follows the list below):

  • Prototype DAM [4 nodes]: protodam[01-04]
    • 2 x Intel Xeon 'Skylake' (26 cores per socket)
    • 192 GB RAM
    • network: Gigabit Ethernet
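
If you want to inspect one of these test servers interactively, the sketch below may serve as a starting point. The partition name dam is purely an assumption (the protodam nodes may instead be reachable only via SSH from the login node), so check sinfo first.

    # Interactive session on one of the DAM test servers (partition name assumed).
    srun --partition=dam --nodes=1 --pty /bin/bash -i

    # On the node, compare the actual configuration with the numbers listed above.
    lscpu | grep -E 'Model name|Socket|Core'
    free -h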

SDV rack

Along with the prototype systems, several test nodes and so-called software development vehicles (SDVs) have been installed within the scope of the DEEP(-ER, -EST) projects. These are located in the SDV rack. The following components can be accessed by users (a usage sketch follows the list below):

  • Cluster [16 nodes]: deeper-sdv[01-16]
    • 2 x Intel Xeon E5-2680 v3 'Haswell' (2.5 GHz)
    • 128 GB RAM
    • 1 NVMe with 400 GB per node (accessible through BeeGFS on demand)
    • network: 100 Gb/s Extoll Tourmalet
  • KNLs [8 nodes]: knl[01-08]
    • 1 Intel Xeon Phi (64-68 cores)
    • 1 NVMe with 400 GB per node (accessible through BeeGFS on demand)
    • 16 GB MCDRAM plus 96 GB RAM per KNL
    • network: Gigabit Ethernet
  • KNMs [2 nodes]: knm[01-02]
    • 1 Intel Xeon Phi 'Knights Mill' (72 cores)
    • 16 GB MCDRAM plus 96 GB RAM per node
    • network: Gigabit Ethernet
  • GPU nodes for ML [3 nodes]: ml-gpu[01-03]
    • 2 x Intel Xeon 'Skylake' Silver 4112 (2.6 GHz)
    • 192 GB RAM
    • 4 x Nvidia Tesla V100 GPU (PCIe Gen3), 16 GB HBM2
    • network: 40 GbE connection
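
Because the SDV hosts several different node types, jobs are usually pinned to a specific flavour. The sketch below assumes SLURM partitions named after the node groups (ml-gpu, knl) and uses explicit node lists; verify the real partition names with sinfo.

    # Check that all four V100 GPUs are visible on one of the ML GPU nodes
    # (partition and node names assumed).
    srun --partition=ml-gpu --nodelist=ml-gpu01 nvidia-smi

    # On a KNL node the 16 GB MCDRAM shows up as an additional NUMA node
    # when the processor is configured in flat mode.
    srun --partition=knl --nodelist=knl01 numactl --hardware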

FPGA test server

In addition to the seven racks hosting the SDV and prototype hardware, an FPGA workstation is available for testing. Please contact j.kreutz@… if you would like to get access.

  • FPGA [1 node]: fpga01
    • 2 x Intel CPU (8 cores)
    • 64 GB RAM
    • 1 x Intel Arria 10 PAC

Network overview

In addition to the Gigabit Ethernet connectivity (used for the administration and service networks) available on all nodes, different types of high-speed interconnects are in use. The following sketch gives a rough overview. The network details are of particular interest for storage access; please also refer to the description of the filesystems.

No image "DEEP-EST_Networks_Schematic_Overview.png" attached to Public/User_Guide/System_overview
