
HSE University HPC Cluster "cHARISMa"

At the beginning of 2019, a high-performance computing cluster was launched at HSE University. The cluster placed 6th in the Top50 ranking of high-performance computer systems in the CIS (edition No. 31 of the Top50 list, dated 09/23/2019).

Initially, the high-performance computing cluster at HSE University consisted of 26 specialized computing nodes, each with large RAM and four NVIDIA Tesla V100 32 GB graphics accelerators.

The resources of the computing cluster are intended to support basic research and teaching at the university, as well as to carry out research projects that require high-performance systems. Cluster resources are allocated upon request for specific projects and for a limited period of time.


Cluster Specifications
Number of nodes / CPUs / Cores / GPUs: 40 / 80 / 1816 / 116
Processor Model: Intel Xeon Gold
GPU Model: NVIDIA Tesla V100 32 GB NVLink
Total RAM: 34 TB
Storage System: Lustre parallel file system (840 TB usable space)
Computing Network: InfiniBand EDR (2 x 100 Gbit/s)
Management Network: Gigabit Ethernet
Peak Performance: 1 Petaflops
LINPACK Performance: 653.7 Teraflops
Operating System: Linux CentOS 7.6
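The peak and LINPACK figures above can be used to sanity-check the cluster's HPL efficiency, i.e. the ratio of measured LINPACK performance to theoretical peak, a standard metric for Top50/Top500 systems. A minimal sketch in Python:

```python
# HPL efficiency from the figures in the specification table above.
peak_tflops = 1000.0      # 1 Petaflops theoretical peak
linpack_tflops = 653.7    # measured LINPACK (HPL) performance

efficiency = linpack_tflops / peak_tflops * 100
print(f"HPL efficiency: {efficiency:.1f}%")  # → HPL efficiency: 65.4%
```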

Types of computing nodes

The HSE supercomputer uses several types of computing nodes. Type B nodes are used to solve problems with very large amounts of data. Type C nodes have a faster interconnect between the central and graphics processors (NVLink) than types A and B. Type D nodes are optimal for tasks that use only central processors. Type E nodes are best suited for training large neural networks and processing large amounts of data. See the documentation on choosing a node type for running your task.
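The node type is normally selected when a job is submitted to the cluster's scheduler. Assuming a SLURM-based scheduler and assuming node types are exposed via constraints (the names `type_e` and `train.py` below are illustrative, not the cluster's actual configuration), a GPU job request might look like:

```shell
#!/bin/bash
# Hypothetical SLURM job script. The constraint name "type_e" and the
# script name "train.py" are illustrative assumptions, not actual
# cluster configuration.
#SBATCH --job-name=train_net
#SBATCH --gres=gpu:4            # request 4 GPUs on one node
#SBATCH --constraint=type_e     # hypothetical node-type constraint
#SBATCH --cpus-per-task=16
#SBATCH --time=12:00:00

srun python train.py
```

Consult the cluster's own user documentation for the actual partition and constraint names before submitting.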

16 Type A Computing Nodes
Processor Model: 2 x Intel Xeon Gold 6152, 2.1-3.7 GHz (22 cores)
GPU Model: 4 x NVIDIA Tesla V100 32 GB
RAM: 768 GB
Solid-State Drives: 2 x 240 GB SSD (RAID 1)
InfiniBand Adapter: 2 x Mellanox 100 Gbit/s InfiniBand Dual Port
Ethernet Adapter: Intel Ethernet 10G 4P X710/I350
10 Type B Computing Nodes
Processor Model: 2 x Intel Xeon Gold 6152, 2.1-3.7 GHz (22 cores)
GPU Model: 4 x NVIDIA Tesla V100 32 GB
RAM: 1536 GB
Solid-State Drives: 2 x 240 GB SSD (RAID 1)
InfiniBand Adapter: 2 x Mellanox 100 Gbit/s InfiniBand Dual Port
Ethernet Adapter: Intel Ethernet 10G 4P X710/I350
Type C Computing Nodes (C4140M)
Processor Model: 2 x Intel Xeon Gold 6240R, 2.4-4.0 GHz (24 cores)
GPU Model: 4 x NVIDIA Tesla V100 32 GB NVLink
RAM: 768 GB
Solid-State Drives: 2 x 240 GB SSD (RAID 1)
InfiniBand Adapter: Mellanox 100 Gbit/s InfiniBand
Ethernet Adapter: Intel Ethernet 10G 4P X710/I350
11 Type D Computing Nodes (R640)
Processor Model: 2 x Intel Xeon Gold 6248R, 3.0-4.0 GHz (24 cores)
GPU Model: N/A
RAM: 384 GB
Solid-State Drives: 2 x 240 GB SSD (RAID 1)
InfiniBand Adapter: Mellanox 100 Gbit/s InfiniBand
Ethernet Adapter: Intel Gigabit 4P I350-T
Type E Computing Nodes (HPE XL675d Gen10+)
Processor Model: 2 x AMD EPYC 7702, 2.0-3.35 GHz (64 cores)
GPU Model: 8 x NVIDIA A100 80 GB SXM (NVLink)
GPU Platform: NVIDIA HGX A100 8-GPU
RAM: 1 TB
Solid-State Drives: 2 x 960 GB SSD (RAID 1)
InfiniBand Adapter: Mellanox 200 Gbit/s InfiniBand
Ethernet Adapter: Intel Gigabit 4P I350-T
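For comparing the node types at a glance, the tables above can be collapsed into a small lookup structure. In this sketch the specs are transcribed from the tables (core counts are per node, i.e. 2 sockets), while the selection helper itself is purely illustrative:

```python
# Per-node specs transcribed from the node-type tables above.
# Cores are per node (2 sockets); Type E RAM (1 TB) is taken as 1024 GB.
NODE_TYPES = {
    "A": {"cores": 44,  "ram_gb": 768,  "gpus": 4, "gpu": "V100 32 GB"},
    "B": {"cores": 44,  "ram_gb": 1536, "gpus": 4, "gpu": "V100 32 GB"},
    "C": {"cores": 48,  "ram_gb": 768,  "gpus": 4, "gpu": "V100 32 GB NVLink"},
    "D": {"cores": 48,  "ram_gb": 384,  "gpus": 0, "gpu": None},
    "E": {"cores": 128, "ram_gb": 1024, "gpus": 8, "gpu": "A100 80 GB SXM"},
}

def candidate_nodes(min_ram_gb=0, need_gpus=0):
    """Return node types meeting minimum RAM and GPU-count requirements."""
    return [t for t, spec in NODE_TYPES.items()
            if spec["ram_gb"] >= min_ram_gb and spec["gpus"] >= need_gpus]

# Example: a GPU job that needs at least 1 TB of RAM on the node.
print(candidate_nodes(min_ram_gb=1000, need_gpus=1))  # → ['B', 'E']
```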
