Changes between Version 33 and Version 34 of Public/User_Guide/System_overview
- Timestamp: Nov 23, 2021, 11:15:05 AM
Public/User_Guide/System_overview
In addition to the previous compute modules, a Scalable Storage Service Module (SSSM) provides shared storage infrastructure for the DEEP-EST prototype (`/usr/local`) and is accompanied by the All-Flash Storage Module (AFSM), which provides a fast work filesystem (mounted to `/work` on the compute nodes).

The modules are connected together by the Network Federation (NF), which is composed of different types of interconnects and is briefly described below.

**The setup will change into an "all IB EDR network" in the coming months.**

=== Cluster Module ===
…

{{{#!td
 * Cluster [50 nodes]: `dp-cn[01-50]`:
   * 2 Intel Xeon 'Skylake' Gold 6146 (12 cores (24 threads), 3.2 GHz)
   …
   * 1 x 400 GB NVMe SSD
   * network: !InfiniBand EDR (100 Gb/s)
}}}
{{{#!td
[[Image(CM_node_hardware.png, width=600px, align=center)]]
}}}

=== Extreme Scale Booster ===
…
   * 1 x 512 GB SSD
   * network: IB EDR (100 Gb/s) (nodes `dp-esb[01-25]` to be converted from Extoll to IB EDR)
}}}
{{{#!td
[[Image(ESB_node_hardware.png, width=400px, align=center)]]
}}}

{{{#!comment
[[span(style=color: #FF0000, **Attention:** )]] the Extreme Scale Booster will become available in March 2020.
}}}

=== Data Analytics Module ===
…
   * network: EXTOLL (100 Gb/s) + 40 Gb Ethernet (to be converted to IB EDR)
}}}
{{{#!td
[[Image(DAM_node_hardware.png, width=620px, align=center)]]
}}}

=== Scalable Storage Service Module ===
The SSSM is based on spinning disks. It is composed of 4 volume data server systems, 2 metadata servers and 2 RAID enclosures. Each RAID enclosure hosts 24 spinning disks with a capacity of 8 TB each. Both RAIDs expose two 16 Gb/s Fibre Channel connections, each connecting to one of the four file servers. There are 2 volumes per RAID, driven in a RAID-6 configuration. The BeeGFS global parallel file system is used to make 292 TB of data storage capacity available.
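As a rough plausibility check of the quoted capacity, the sketch below estimates the usable space, assuming that each of the 4 RAID-6 volumes spans 12 of the 48 disks (2 parity disks per volume) and ignoring filesystem overhead; the per-volume disk layout is an assumption, as it is not spelled out above.

{{{#!python
# Rough capacity estimate for the SSSM.
# Assumption: each of the 4 RAID-6 volumes spans 12 disks (2 of them parity).
disks = 2 * 24                                  # 2 RAID enclosures, 24 disks each
disk_tb = 8                                     # vendor terabytes (10**12 bytes) per disk
volumes = 4                                     # 2 volumes per RAID enclosure
data_disks_per_volume = disks // volumes - 2    # RAID-6: 2 parity disks per volume

usable_tb = volumes * data_disks_per_volume * disk_tb    # 320 TB (decimal)
usable_tib = usable_tb * 10**12 / 2**40                  # ~291 TiB
print(usable_tb, round(usable_tib))
}}}

Under these assumptions this gives roughly 320 decimal TB, or about 291 TiB, which is consistent with the ~292 TB exported via BeeGFS if that figure is reported in binary units.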
Here are the specifications of the main hardware components in more detail:

=== All-Flash Storage Module ===
The AFSM is based on PCIe3 NVMe SSD storage devices. It is composed of 6 volume data server systems and 2 metadata servers interconnected with a 100 Gbps EDR !InfiniBand fabric. The AFSM is integrated into the DEEP-EST Prototype EDR fabric topology of the CM and ESB EDR partition. The BeeGFS global parallel file system is used to make 1.3 PB of data storage capacity available.

Here are the specifications of the main hardware components in more detail:

== Network overview ==

Currently, different types of interconnects are in use along with the Gigabit Ethernet connectivity that is available for all the nodes (used for the administration and service networks). The following sketch should give a rough overview. Network details will be of particular interest for storage access. Please also refer to the description of the [wiki:Public/User_Guide/Filesystems filesystems].

**The network is going to be converted to an "all IB EDR" setup soon!**
{{{#!comment
network overview to be updated once all IB solution is in place
}}}
[[Image(DEEP-EST_Prototype_Network_Overview.png, width=850px, align=center)]]

== Rack plan ==
This is a sketch of the available hardware, including a short description of the hardware of interest to system users (the nodes you can use for running your jobs and for testing).

{{{#!comment
Rack plan to be updated once IB HW is installed
}}}
[[Image(Prototype_plus_SSSM_and_SDV_Rackplan_47U.png, 60%, align=center)]]

=== SSSM rack ===
…
 * network: 40GbE connection

{{{#!comment Not available anymore