HSE University HPC Cluster "cHARISMa"
HSE University operates the high-performance computing cluster cHARISMa (Computer of HSE for Artificial Intelligence and Supercomputer Modelling). The system ranked 6th in the Top-50 list of supercomputers (edition No. 31 of the Top-50 ranking, dated September 23, 2019).
The supercomputer consists of 51 computing nodes based on high-performance GPU accelerators, including NVIDIA H200 141GB, H100 80GB, A100 80GB, and V100 32GB, as well as powerful central processing units. The different types of computing nodes are designed to address a wide range of computational tasks carried out by university researchers.
User data is stored on a parallel Lustre file system with a total capacity of 848 TB. The computing network is based on InfiniBand EDR with a bandwidth of 2×100 Gbit/s.
Types of Computing Nodes
The HSE supercomputer uses seven types of high-performance computing nodes:

- Type B nodes are used for tasks involving very large datasets.
- Type C nodes provide higher connectivity between CPUs and GPUs compared to Types A and B.
- Type D nodes are optimized for CPU-intensive workloads.
- Type E nodes are preferable for deep neural network training and large-scale data processing.
- Type F nodes are universal GPU nodes suitable for computational experiments, as well as training and inference of small- and medium-sized models.
- Type H nodes are designed for large-scale distributed training, including large language and multimodal models, as well as other highly resource-intensive workloads.

See more about choosing the type of computing nodes for running a job.
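On clusters of this kind, a workload manager such as Slurm typically routes jobs to the appropriate node type. Assuming cHARISMa follows this pattern, a minimal GPU job script might look like the sketch below; the job name, partition name, resource values, and `train.py` are illustrative placeholders, not confirmed cluster settings:

```shell
#!/bin/bash
# Hypothetical Slurm job script (a sketch, assuming the cluster uses Slurm).
# The partition name and script path are placeholders for illustration only.
#SBATCH --job-name=train_model     # illustrative job name
#SBATCH --gres=gpu:1               # request one GPU on the allocated node
#SBATCH --cpus-per-task=8          # CPU cores for data loading, etc.
#SBATCH --time=04:00:00            # wall-clock limit

srun python train.py               # placeholder workload
```

The script would be submitted with `sbatch script.sh`; the choice of partition or node type determines which of the node classes above serves the job.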
GPU Interaction Schemes in Computing Nodes