Changes between Version 32 and Version 33 of Public/User_Guide/System_overview
Timestamp: Nov 23, 2021, 11:10:34 AM
In addition to the previous compute modules, a Scalable Storage Service Module (SSSM) provides shared storage infrastructure for the DEEP-EST prototype (`/usr/local`) and is accompanied by the All Flash Storage Module (AFSM) leveraging a fast local work filesystem (mounted to `/work` on the compute nodes).

The modules are connected together by the Network Federation (NF), composed of different types of interconnects and briefly described below. **The setup will change into an "all IB EDR network" in the next months.**

=== Cluster Module ===

{{{#!td
 * Cluster [50 nodes]: `dp-cn[01-50]`:
   * 2 Intel Xeon 'Skylake' Gold 6146 (12 cores (24 threads), 3.2 GHz)
   * …
   * 1 x 400 GB NVMe SSD
   * network: !InfiniBand EDR (100 Gb/s)
}}}
{{{#!td
[[Image(CM_node_hardware.png, width=600px, align=center)]]
}}}

=== Extreme Scale Booster ===

{{{#!td
 …
 * 1 x 512 GB SSD
 * network: IB EDR (100 Gb/s) (nodes `dp-esb[01-25]` to be converted from Extoll to IB EDR)
}}}
{{{#!td
[[Image(ESB_node_hardware.png, width=400px, align=center)]]
}}}

{{{#!comment
[[span(style=color: #FF0000, **Attention:** )]] the Extreme Scale Booster will become available in March 2020.
}}}
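The node lists above use the bracketed range shorthand common on clusters (e.g. `dp-cn[01-50]` for the 50 Cluster Module nodes). As a small illustration only, the hypothetical helper below (not a tool provided on the system) shows how such a range expands into individual hostnames:

```python
import re

def expand(nodes: str) -> list[str]:
    # Expand e.g. "dp-cn[01-50]" into ["dp-cn01", ..., "dp-cn50"],
    # keeping the zero padding of the lower bound.
    m = re.fullmatch(r"(.+)\[(\d+)-(\d+)\]", nodes)
    if m is None:
        return [nodes]  # no range part: already a single hostname
    prefix, lo, hi = m.group(1), m.group(2), m.group(3)
    width = len(lo)
    return [f"{prefix}{i:0{width}d}" for i in range(int(lo), int(hi) + 1)]

hosts = expand("dp-cn[01-50]")
print(hosts[0], hosts[-1], len(hosts))  # dp-cn01 dp-cn50 50
```

The same shorthand is used for the other modules, e.g. `dp-esb[01-25]` for the Extreme Scale Booster nodes mentioned above.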
=== Data Analytics Module ===

{{{#!td
 …
 * network: EXTOLL (100 Gb/s) + 40 Gb Ethernet (to be converted to IB EDR)
}}}
{{{#!td
[[Image(DAM_node_hardware.png, width=620px, align=center)]]
}}}

=== Scalable Storage Service Module ===
It is based on spinning disks. It is composed of 4 volume data server systems, 2 metadata servers and 2 RAID enclosures. The RAID enclosures each host 24 spinning disks with a capacity of 8 TB each. Both RAIDs expose two 16 Gb/s fibre channel connections, each connecting to one of the four file servers. There are 2 volumes per RAID setup. The volumes are driven in a RAID-6 configuration. The BeeGFS global parallel file system is used to make 292 TB of data storage capacity available.

Here are the specifications of the main hardware components in more detail:

=== All Flash Storage Module ===
It is based on PCIe3 NVMe SSD storage devices.
It is composed of 6 volume data server systems and 2 metadata servers interconnected with a 100 Gbps EDR-!InfiniBand fabric. The AFSM is integrated into the DEEP-EST Prototype EDR fabric topology of the CM and ESB EDR partition. The BeeGFS global parallel file system is used to make 1.3 PB of data storage capacity available.

Here are the specifications of the main hardware components in more detail:

== Network overview ==
Currently, different types of interconnects are in use along with the Gigabit Ethernet connectivity that is available for all the nodes (used for the administration and service network). The following sketch should give a rough overview. Network details are of particular interest for storage access. Please also refer to the description of the [wiki:Public/User_Guide/Filesystems filesystems].

**The network is going to be transformed into an "all IB EDR" setup soon!**

{{{#!comment
network overview to be updated once all IB solution is in place
}}}
[[Image(DEEP-EST_Prototype_Network_Overview.png, width=850px, align=center)]]

== Rack plan ==
This is a sketch of the available hardware including a short description of the hardware interesting for the system users (the nodes you can use for running your jobs and that can be used for testing).
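Before moving on to the racks, the SSSM capacity quoted earlier can be roughly sanity-checked with a little arithmetic. This is only an illustration, and it assumes (hypothetically, based on "2 volumes per RAID" and "RAID-6" in the text) that each 24-disk enclosure is split into two 12-disk RAID-6 volumes, each losing two disks' worth of capacity to parity:

```python
# Rough sanity check of the SSSM storage figures (illustrative only).
# Assumption (not stated explicitly above): each 24-disk enclosure
# hosts two 12-disk RAID-6 volumes.
enclosures = 2
disks_per_enclosure = 24
volumes_per_enclosure = 2
disks_per_volume = disks_per_enclosure // volumes_per_enclosure  # 12
disk_tb = 8
raid6_parity = 2  # RAID-6 sacrifices two disks per volume for parity

raw_tb = enclosures * disks_per_enclosure * disk_tb  # total raw capacity
usable_tb = (enclosures * volumes_per_enclosure
             * (disks_per_volume - raid6_parity) * disk_tb)
tib = usable_tb * 1e12 / 2**40  # same capacity expressed in TiB

print(raw_tb, usable_tb, round(tib))  # 384 320 291
```

Under this reading, 320 TB of post-RAID capacity comes out close to the 292 TB that BeeGFS exposes; file-system overhead and the TB/TiB distinction plausibly account for the difference, though the exact volume layout is an assumption here.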
{{{#!comment
Rack plan to be updated once IB HW is installed
}}}
[[Image(Prototype_plus_SSSM_and_SDV_Rackplan_47U.png, 60%, align=center)]]

=== SSSM rack ===
 …
 * network: 40GbE connection

{{{#!comment Not available anymore