Supercomputing
Our Parallel and Distributed Computing Division evaluates, procures, and operates supercomputers and large-scale on-line storage resources, enabling advanced simulation and data analysis workflows for science.
Introduction
The most powerful system currently operated is part of the National High-Performance Computing Alliance (NHR). Consultants from physics and materials science, chemistry and bioinformatics, as well as earth system research and the engineering sciences advise the researchers. They support the efficient implementation of projects in both large-scale computing and data analysis, facilitating innovative research.
The HLRN-IV Complex at ZIB
“Lise”, a Bull Intel cluster system ranked at position 40 in the TOP500 list of November 2019, represents the HLRN-IV complex operated at ZIB. It is complemented by peta-scale file systems for on-line storage. For data archiving, a magnetic tape library with several tape robots is provided in a separate high-security room.
The HLRN-IV system “Lise” at a glance:
| Figure | Description |
| --- | --- |
| 14 compute racks | with a mixed warm-water and air cooling infrastructure. |
| 1270 nodes | in total, each with two 48-core Intel Xeon Cascade Lake AP processors, i.e. 96 Xeon cores per node: 1236 standard nodes with 384 GByte main memory, 32 large-memory nodes with 768 GByte, and 2 huge-memory nodes with 1.5 TByte per node. |
| 2540 CPUs | Intel Xeon Platinum 9242 (2.3 GHz) with 48 cores each. |
| 121,920 cores | is the total number of physical cores; as Hyper-Threading is enabled, twice as many logical cores are available to support highly parallel workloads. |
| Fat tree | is the interconnect topology of the Intel Omni-Path network, providing low-latency, high-bandwidth communication across the entire system. |
| 8 PFlop/s | is the theoretical peak performance of the “Lise” system, which demonstrated a LINPACK performance of 5.4 PFlop/s. |
| 502 TByte | of distributed main memory is installed across all compute nodes. |
| 8.4 PByte | of on-line storage capacity is provided by global parallel file systems (DDN Lustre and GRIDScaler). |
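As a quick consistency check, the aggregate core and memory figures follow directly from the per-node specification in the table. The short Python sketch below reproduces them; node counts and memory sizes are taken from the table, and 1 TByte = 1000 GByte to match the convention of the quoted figures.

```python
# Sketch: derive the aggregate core and memory figures of "Lise"
# from the per-node specification listed in the table above.

standard_nodes, large_nodes, huge_nodes = 1236, 32, 2
cores_per_node = 96  # two 48-core Intel Xeon Platinum 9242 CPUs per node

nodes = standard_nodes + large_nodes + huge_nodes   # 1270 nodes
physical_cores = nodes * cores_per_node             # 121,920 physical cores
logical_cores = 2 * physical_cores                  # 243,840 logical cores (Hyper-Threading)

# Memory per node class in GByte (1.5 TByte = 1536 GByte for the huge-memory nodes).
memory_gbyte = standard_nodes * 384 + large_nodes * 768 + huge_nodes * 1536

print(f"{nodes} nodes, {physical_cores} physical / {logical_cores} logical cores")
print(f"{memory_gbyte / 1000:.0f} TByte distributed main memory")  # ~502 TByte
```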
Large Storage Capacities for HPC & Big Data Analytics
In HPC, today’s complex workflows, comprising pre-processing, computational, and post-processing steps, require large on-line storage capacities. Even with increasingly on-the-fly computational methods, the input and output data sets of individual jobs on the “Lise” system reach the terabyte scale.
As requirements in the HPC and Big Data worlds continue to converge, large-scale storage capacities that provide high bandwidth to data sets become vital for current and future workloads.
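To illustrate why bandwidth matters as much as capacity, the sketch below estimates how long reading or writing a terabyte-scale data set would take at different sustained bandwidths. The bandwidth values are illustrative assumptions, not published figures for the HLRN-IV file systems.

```python
# Sketch: rough transfer-time estimate for terabyte-scale job data.
# The bandwidth values below are illustrative assumptions only.

def transfer_time_s(data_tbyte: float, bandwidth_gbyte_s: float) -> float:
    """Seconds needed to move `data_tbyte` at a sustained rate of `bandwidth_gbyte_s`."""
    return data_tbyte * 1000 / bandwidth_gbyte_s

for bandwidth in (5, 50, 200):  # assumed sustained bandwidths in GByte/s
    print(f"2 TByte at {bandwidth:>3} GByte/s: {transfer_time_s(2, bandwidth):7.1f} s")
```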
Storage Infrastructure for HLRN-IV at a glance:
| Figure | Description |
| --- | --- |
| 8.1 PByte | of on-line storage capacity is available in globally accessible parallel file systems (Lustre), offering high-bandwidth access to application data in batch jobs. |
| 340 TByte | of on-line storage capacity is provided in a globally accessible NAS appliance for permanent project data and program code development. |
| Peta-scale | tape library for archiving large data sets is operated independently by ZIB. |