
HSE University HPC Cluster "cHARISMa"

At the beginning of 2019, a high-performance computing cluster was launched at HSE University.

The high-performance cluster consists of 46 computing nodes: 6 of the newest nodes for neural network training, each with eight NVIDIA A100 80 GB SXM GPUs; 29 specialized nodes, each with a large amount of RAM (768-1536 GB) and four NVIDIA Tesla V100 32 GB graphics accelerators; and 11 computing nodes with powerful central processors.

The resources of the computing cluster are intended to support basic research and teaching at the university, as well as research projects that require high-performance systems. Cluster resources are allocated upon request for specific projects and for a limited period of time.

Currently, the HSE HPC cluster is ranked 10th in the Top50 ranking of the most powerful computer systems in the CIS.

Cluster Specifications
Number of nodes / CPUs / cores / GPUs: 46 / 92 / 2584 / 164
Processor models: Intel Xeon Gold, AMD EPYC
GPU models: NVIDIA Tesla V100 32 GB NVLink; NVIDIA A100 80 GB SXM NVLink
Total RAM: 52.4 TB
Storage system: Lustre parallel file system (848 TB usable space)
Computing network: InfiniBand EDR (2 x 100 Gbit/s)
Management network: Gigabit Ethernet
Peak performance: 2 Petaflops
LINPACK performance: 972.4 Teraflops
Peak AI performance (FP16): 30 AI Petaflops
Job scheduling system: Slurm
Operating system: Linux CentOS
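The quoted peak AI (FP16) figure can be sanity-checked against NVIDIA's published per-GPU tensor-core peaks. A minimal sketch, assuming roughly 125 TFLOPS FP16 per Tesla V100 and 312 TFLOPS FP16 per A100 (vendor figures, not stated on this page):

```python
# Rough cross-check of the quoted 30 AI Petaflops (FP16) figure.
# Per-GPU FP16 tensor-core peaks below are assumed vendor values:
# ~125 TFLOPS for Tesla V100, ~312 TFLOPS for A100.
v100_count = (16 + 10 + 3) * 4   # Type A, B and C nodes: 4 GPUs each
a100_count = 6 * 8               # Type E nodes: 8 GPUs each

total_petaflops = (v100_count * 125 + a100_count * 312) / 1000
print(f"{total_petaflops:.1f} AI Petaflops")  # 29.5, close to the quoted 30
```

The newer Type F and H nodes are not counted here, consistent with the summary table's 164-GPU total covering only the original 46 nodes.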

Types of computing nodes

The HSE supercomputer uses seven types of high-performance computing nodes. Type B nodes are used for jobs with very large datasets. Type C nodes provide tighter CPU-GPU connectivity than Types A and B. Type D nodes are optimal for CPU-centric workloads. Type E nodes are preferable for deep neural network training and large-scale data processing. Type F nodes are universal GPU nodes for computational experiments, as well as for training and inference of small and medium-sized models. Type H nodes are intended for large-scale distributed training, including large language and multimodal models, and other resource-intensive workloads. See the cluster documentation for more on choosing a node type for your job.
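Since jobs are scheduled with Slurm, the node type is typically selected at submission time. A minimal hypothetical batch-script sketch for one Type E node; the partition name and GRES label here are illustrative assumptions, not taken from this page, so consult the cluster documentation for the actual names:

```shell
#!/bin/bash
# Hypothetical Slurm batch script for one Type E node (8 x A100).
# The partition and GRES names below are assumptions, not from this page.
#SBATCH --job-name=train-demo
#SBATCH --partition=a100        # assumed name of the A100 partition
#SBATCH --nodes=1
#SBATCH --gres=gpu:8            # request all 8 GPUs on the node
#SBATCH --cpus-per-task=16
#SBATCH --time=04:00:00

srun python train.py
```

Such a script would be submitted with `sbatch job.sh`; the `#SBATCH` lines are scheduler directives, not executed shell commands.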

16 Type A computing nodes (C4140K)
Processor model: 2 x Intel Xeon Gold 6152, 2.1-3.7 GHz (2 x 22 cores)
GPU model: 4 x NVIDIA Tesla V100 32 GB NVLink
RAM: 768 GB
Solid-state drives: 2 x SSD 240 GB (RAID 1)
InfiniBand network adapter: 2 x 100 Gbit/s
Ethernet network adapter: Intel Ethernet 10G 4P X710/I350

10 Type B computing nodes (C4140K)
Processor model: 2 x Intel Xeon Gold 6152, 2.1-3.7 GHz (2 x 22 cores)
GPU model: 4 x NVIDIA Tesla V100 32 GB NVLink
RAM: 1.5 TB
Solid-state drives: 2 x SSD 240 GB (RAID 1)
InfiniBand network adapter: 2 x 100 Gbit/s
Ethernet network adapter: Intel Ethernet 10G 4P X710/I350

3 Type C computing nodes (C4140M)
Processor model: 2 x Intel Xeon Gold 6240R, 2.4-4.0 GHz (2 x 24 cores)
GPU model: 4 x NVIDIA Tesla V100 32 GB NVLink
RAM: 768 GB
Solid-state drives: 2 x SSD 240 GB (RAID 1)
InfiniBand network adapter: 2 x 100 Gbit/s
Ethernet network adapter: Intel Ethernet 10G 4P X710/I350

11 Type D computing nodes (R640)
Processor model: 2 x Intel Xeon Gold 6248R, 3.0-4.0 GHz (2 x 24 cores)
GPU model: N/A
RAM: 768 GB
Solid-state drives: 2 x SSD 240 GB (RAID 1)
InfiniBand network adapter: 100 Gbit/s
Ethernet network adapter: Intel Gigabit 4P I350-T

6 Type E computing nodes (HPE XL675d Gen10+)
Processor model: 2 x AMD EPYC 7702, 2.0-3.35 GHz (2 x 64 cores)
GPU model: 8 x NVIDIA A100 80 GB SXM (HGX, NVLink)
RAM: 1 TB
Solid-state drives: 2 x SSD 960 GB (RAID 1)
InfiniBand network adapter: 2 x 100 Gbit/s
Ethernet network adapter: Intel Gigabit 4P I350-T

2 Type F computing nodes (R760XA)
Processor model: 2 x Intel Xeon Gold 6426Y, 2.5-4.1 GHz (2 x 16 cores)
GPU model: 2 x NVIDIA H100 80 GB PCIe (NVLink Bridge)
RAM: 512 GB
Solid-state drives: 2 x SSD NVMe 960 GB
InfiniBand network adapter: 100 Gbit/s
Ethernet network adapter: 1 Gbit/s

1 Type H' computing node (4 GPUs)
Processor model: 2 x Intel Xeon 6730P, 2.5-3.8 GHz (2 x 32 cores)
GPU model: 4 x NVIDIA H200 141 GB NVL (2 x NVLink Bridge 4-way)
RAM: 1.5 TB
Solid-state drives: 2 x SSD NVMe 1.92 TB (RAID 1)
InfiniBand network adapter: 4 x 100 Gbit/s
Ethernet network adapter: 1 Gbit/s

2 Type H computing nodes (8 GPUs)
Processor model: 2 x Intel Xeon 6747P, 2.5-3.8 GHz (2 x 48 cores)
GPU model: 8 x NVIDIA H200 141 GB NVL (2 x NVLink Bridge 4-way)
RAM: 2 TB
Solid-state drives: 2 x SSD NVMe 1.92 TB (RAID 1)
InfiniBand network adapter: 4 x 100 Gbit/s
Ethernet network adapter: 1 Gbit/s


 
