HSE University HPC Cluster "cHARISMa"

At HSE University, the high-performance computing cluster "cHARISMa" (Computer of HSE for Artificial Intelligence and Supercomputer Modelling) is in operation. The system ranked 6th in edition No. 31 of the Top-50 supercomputer ranking (September 23, 2019).
The supercomputer consists of 51 computing nodes based on high-performance GPU accelerators, including NVIDIA H200 141GB, H100 80GB, A100 80GB, and V100 32GB, as well as powerful central processing units. The different types of computing nodes are designed to address a wide range of computational tasks carried out by university researchers.
User data is stored on a parallel Lustre file system with a total capacity of 848 TB. The computing network is based on InfiniBand EDR with a bandwidth of 2×100 Gbit/s.
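
On Lustre, I/O throughput for large files depends on how they are striped across storage targets. Below is a minimal sketch using the standard lfs utility; the mount point /lustre and the directory name are hypothetical, and actual paths and striping defaults are set by the cluster administrators:

    # Show capacity and usage of the Lustre file system
    lfs df -h /lustre

    # Inspect the current striping of a directory or file
    lfs getstripe ~/big_dataset

    # Stripe new files in this directory across 4 storage targets (OSTs)
    # to improve parallel read/write throughput for large files
    lfs setstripe -c 4 ~/big_dataset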

Cluster Specifications

Nodes / CPUs / Cores / GPUs / GPU Cores: 51 / 102 / 2,904 / 184 / 1,438,848
Processor Models: Intel Xeon / AMD EPYC
GPU Models: 20 x NVIDIA H200 141 GB NVL
  4 x NVIDIA H100 80 GB NVL
  48 x NVIDIA A100 80 GB SXM
  116 x NVIDIA V100 32 GB SXM
Total RAM: 53.5 TB
Storage System: Lustre parallel file system (848 TB usable space)
Computing Network: InfiniBand EDR (2 x 100 Gbit/s)
Management Network: Gigabit Ethernet
Peak Performance (FP64): 3.52 Petaflops
LINPACK Performance: 975.6 Teraflops (A100 + V100) + 807.2 Teraflops (H200)
Peak AI Performance (FP16): 91 AI Petaflops
Job Scheduling System: Slurm
Operating System: Rocky Linux / CentOS Linux
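
Jobs are submitted to the cluster through the Slurm scheduler. The following is a minimal sketch of a batch script; the partition name and the application binary are hypothetical placeholders, as the real names depend on the cluster configuration:

    #!/bin/bash
    #SBATCH --job-name=test_job      # name shown in the queue
    #SBATCH --partition=gpu          # hypothetical partition name
    #SBATCH --gres=gpu:1             # request one GPU
    #SBATCH --cpus-per-task=4        # CPU cores for the task
    #SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)

    srun ./my_app                    # hypothetical application binary

The script would be submitted with sbatch job.sh, and the queue inspected with squeue -u $USER.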

Types of Computing Nodes

The HSE supercomputer uses seven types of high-performance computing nodes. Type B nodes are intended for tasks involving very large datasets. Type C nodes provide higher connectivity between CPUs and GPUs than Types A and B. Type D nodes are optimized for CPU-intensive workloads. Type E nodes are preferable for deep neural network training and large-scale data processing. Type F nodes are universal GPU nodes suitable for computational experiments, as well as for training and inference of small and medium-sized models. Type H nodes are designed for large-scale distributed training, including large language and multimodal models, and other highly resource-intensive workloads. See the documentation for more on choosing the type of computing node for running a job.
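
How a specific node type is requested depends on the local Slurm configuration. As one hedged possibility, if the administrators tag nodes with feature labels, a job can be pinned to a type using Slurm's standard --constraint option (the feature name type_e below is a hypothetical example):

    # List nodes together with their configured feature tags
    sinfo -o "%N %f"

    # Request a Type E node via a feature constraint (name is hypothetical)
    sbatch --constraint=type_e --gres=gpu:8 train.sh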

16 Type A Computing Nodes (C4140K)
Processor Model: 2 x Intel Xeon Gold 6152, 2.1-3.7 GHz (2 x 22 cores)
GPU Model: 4 x NVIDIA Tesla V100 32 GB NVLink
RAM: 768 GB
Solid-State Drives: 2 x SSD 240 GB (RAID 1)
InfiniBand Network Adapter: 2 x 100 Gbit/s

10 Type B Computing Nodes (C4140K)
Processor Model: 2 x Intel Xeon Gold 6152, 2.1-3.7 GHz (2 x 22 cores)
GPU Model: 4 x NVIDIA Tesla V100 32 GB NVLink
RAM: 1.5 TB
Solid-State Drives: 2 x SSD 240 GB (RAID 1)
InfiniBand Network Adapter: 2 x 100 Gbit/s

3 Type C Computing Nodes (C4140M)
Processor Model: 2 x Intel Xeon Gold 6240R, 2.4-4.0 GHz (2 x 24 cores)
GPU Model: 4 x NVIDIA Tesla V100 32 GB NVLink
RAM: 768 GB
Solid-State Drives: 2 x SSD 240 GB (RAID 1)
InfiniBand Network Adapter: 2 x 100 Gbit/s

11 Type D Computing Nodes (R640)
Processor Model: 2 x Intel Xeon Gold 6248R, 3.0-4.0 GHz (2 x 24 cores)
GPU Model: N/A
RAM: 768 GB
Solid-State Drives: 2 x SSD 240 GB (RAID 1)
InfiniBand Network Adapter: 100 Gbit/s

6 Type E Computing Nodes (HPE XL675d Gen10+)
Processor Model: 2 x AMD EPYC 7702, 2.0-3.35 GHz (2 x 64 cores)
GPU Model: 8 x NVIDIA A100 80 GB SXM (HGX, NVLink)
RAM: 1 TB
Solid-State Drives: 2 x SSD 960 GB (RAID 1)
InfiniBand Network Adapter: 2 x 100 Gbit/s

2 Type F Computing Nodes (R760XA)
Processor Model: 2 x Intel Xeon Gold 6426Y, 2.5-4.1 GHz (2 x 16 cores)
GPU Model: 2 x NVIDIA H100 80 GB PCIe (NVLink Bridge)
RAM: 512 GB
Solid-State Drives: 2 x SSD NVMe 960 GB
InfiniBand Network Adapter: 100 Gbit/s

1 Type H' Computing Node (4 GPUs)
Processor Model: 2 x Intel Xeon 6730P, 2.5-3.8 GHz (2 x 32 cores)
GPU Model: 4 x NVIDIA H200 141 GB NVL (2 x NVLink Bridge)
RAM: 1.5 TB
Solid-State Drives: 2 x SSD NVMe 1.92 TB (RAID 1)
InfiniBand Network Adapter: 4 x 100 Gbit/s

2 Type H Computing Nodes (8 GPUs)
Processor Model: 2 x Intel Xeon 6747P, 2.5-3.8 GHz (2 x 48 cores)
GPU Model: 8 x NVIDIA H200 141 GB NVL (2 x NVLink Bridge)
RAM: 2 TB
Solid-State Drives: 2 x SSD NVMe 1.92 TB (RAID 1)
InfiniBand Network Adapter: 4 x 100 Gbit/s

GPU Interaction Schemes in Computing Nodes

[Diagram: GPU interconnect, Types A and B (4 x V100)]
[Diagram: GPU interconnect, Type C (4 x V100)]
[Diagram: GPU interconnect, Type E (8 x A100)]
[Diagram: GPU interconnect, Type F (2 x H100)]
[Diagram: GPU interconnect, Type H' (4 x H200)]
[Diagram: GPU interconnect, Type H (8 x H200)]
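
The actual interconnect layout of an allocated node can be checked at run time with standard NVIDIA tooling; NV# entries in the printed matrix indicate NVLink connections (the resource request below is illustrative):

    # Open an interactive shell on a GPU node and print the
    # GPU-to-GPU / GPU-to-NIC interconnect matrix
    srun --gres=gpu:4 --pty nvidia-smi topo -m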


 
