Changes between Version 3 and Version 4 of Public/User_Guide/System_overview
- Timestamp: May 17, 2019, 3:56:01 PM
= Overview of our systems =

This page gives a short overview of the available systems from a hardware point of view. All hardware can be reached through a login node via SSH: '''!deep@fz-juelich.de'''.
The login node is implemented as a virtual machine hosted by the master nodes (in a failover mode).
Please also see the information about [wiki:Public/User_Guide/Account getting an account] and using the [wiki:Public/User_Guide/Batch_system batch system].

== Rack plan ==

This is a sketch of the available hardware, together with a short description of the hardware relevant to system users (the nodes you can run your jobs on or use for testing).

[[Image(Prototype_plus_SSSM_and_SDV_Rackplan_47U--2019-03.png, 50%)]]

…
}}}

=== SSSM rack ===

This rack hosts the master nodes, file servers and storage, as well as network components for the Gigabit Ethernet administration and service networks. Users can access the login node via '''!deep@fz-juelich.de''' (implemented as a virtual machine running on the master nodes).

=== CM rack ===

Contains the hardware of the DEEP-EST Cluster Module, including compute nodes, management nodes, network components and the liquid cooling unit.

 * Cluster [50 nodes]: `dp-cn[01-50]`
   * 2 Intel Xeon 'Skylake' Gold 6146 (12 cores (24 threads), 3.2 GHz)
   * 192 GB RAM
   * 1 x 400 GB NVMe SSD (not exposed to users)
   * network: IB EDR (100 Gb/s)

=== DAM rack ===

This rack will host the nodes of the Data Analytics Module of the DEEP-EST prototype.
Currently it contains four test servers:

 * Prototype DAM [4 nodes]: `protodam[01-04]`
   * 2 x Intel Xeon 'Skylake' (26 cores per socket)
   * 192 GB RAM

=== SDV rack ===

Along with the prototype systems, several test nodes and so-called software development vehicles (SDVs) have been installed in the scope of the DEEP(-ER, -EST) projects. These are located in the SDV rack. The following components can be accessed by the users:

 * Cluster [16 nodes]: `deeper-sdv[01-16]`
   * 2 Intel Xeon 'Haswell' E5-2680 v3 (2.5 GHz)
   * 128 GB RAM
   * 1 NVMe with 400 GB per node (accessible through BeeGFS on demand)
   * network: 100 Gb/s Extoll Tourmalet

 * KNLs [8 nodes]: `knl[01-08]`
   * 1 Intel Xeon Phi 'Knights Landing' (64-68 cores)
   * 1 NVMe with 400 GB per node (accessible through BeeGFS on demand)
   * 16 GB MCDRAM plus 96 GB RAM per node
   * network: Gigabit Ethernet

 * KNMs [2 nodes]: `knm[01-02]`
   * 1 Intel Xeon Phi 'Knights Mill' (72 cores)
   * 16 GB MCDRAM plus 96 GB RAM per node
   * network: Gigabit Ethernet

 * GPU nodes for ML [3 nodes]: `ml-gpu[01-03]`
…

== Network overview ==

Different types of interconnects are in use, alongside the Gigabit Ethernet connectivity (used for the administration and service networks) that is available on all nodes.
The following sketch should give a rough overview. The network details are of particular interest for storage access.
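The node names on this page use a bracketed range shorthand: `dp-cn[01-50]` stands for the zero-padded hostnames `dp-cn01` through `dp-cn50`. As an illustration only (this helper is hypothetical and not part of the system tooling), a minimal Python sketch that expands such a range:

```python
import re

def expand_hostlist(pattern):
    """Expand a simple bracketed host range such as 'deeper-sdv[01-16]'
    into the individual node names, keeping the zero padding as written.
    Hypothetical helper for illustration; not part of the system tooling."""
    m = re.fullmatch(r"(.*)\[(\d+)-(\d+)\](.*)", pattern)
    if m is None:
        return [pattern]  # no bracketed range: a single host name
    prefix, lo, hi, suffix = m.groups()
    width = len(lo)  # preserve the zero padding of the lower bound
    return [f"{prefix}{i:0{width}d}{suffix}" for i in range(int(lo), int(hi) + 1)]

print(expand_hostlist("knl[01-08]"))      # knl01 ... knl08
print(len(expand_hostlist("dp-cn[01-50]")))  # 50
```

On systems using Slurm, the same expansion is typically done by the batch system itself (e.g. when listing the nodes of a job), so this sketch is only meant to clarify the notation.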