Our Parallel and Distributed Computing Division evaluates, procures and operates supercomputer and large-scale on-line storage resources, enabling advanced simulation and data analysis workflows for science.
The most powerful system currently operated at ZIB is part of the North-German Supercomputing Alliance (HLRN). Consultants from physics and materials science, chemistry and bioinformatics, as well as earth system research and the engineering sciences advise researchers from Berlin and the other HLRN member states. The consultants support the efficient implementation of projects in both large-scale computing and data analysis, facilitating innovative research.
The HLRN-III Complex at ZIB
"Konrad", a Cray XC40/XC30 massively parallel supercomputer, is the main compute component of the HLRN-III complex operated at ZIB. It is complemented by peta-scale file systems for on-line storage. For data archiving, a magnetic tape library with several tape robots is provided in a separate high-security room.
The HLRN-III system “Konrad” at a glance:
| Key figure | Description |
|---|---|
| 10 cabinets | plus 11 additional slim blower cabinets provide a high-density compute solution. |
| 1,872 nodes | are installed in total, each with 64 GByte main memory and two Intel Xeon processors; 1,128 Cray XC40 nodes contain Intel Haswell CPUs and 744 Cray XC30 nodes contain Intel Ivy Bridge CPUs. |
| 3,744 CPUs | with 12 cores each, of which 2,256 are Intel Haswell CPUs and 1,488 are Intel Ivy Bridge CPUs. |
| 44,928 cores | is the total number of physical cores. With Hyper-Threading, twice as many logical cores are available to support highly parallel workloads. |
| Dragonfly | is the interconnect topology of the Cray Aries network, providing low-latency and high-bandwidth communication across the entire system. |
| 1.4 PFlop/s | is the theoretical peak performance of the "Konrad" system, which demonstrated a LINPACK performance of 991 TFlop/s. |
| 120 TByte | of distributed main memory is installed across all compute nodes. |
| 4.2 PByte | of on-line storage capacity is provided by global file systems. |
Large Storage Capacities for HPC & Big Data Analytics
In HPC, today’s complex computational workflows, comprising pre-processing, computation, and post-processing steps, require large on-line storage capacities. Even with increasingly on-the-fly computational methods, the input and output data sets of individual jobs on HLRN-III reach the terabyte scale.
With the ongoing unification of requirements in the HPC and Big Data worlds, large-scale storage capacities that provide optimal bandwidth to data sets become vital for current and future workloads.
Storage Infrastructure for HLRN-III at a glance:
| Key figure | Description |
|---|---|
| 3.7 PByte | of on-line storage capacity is available in two globally accessible parallel file systems (Lustre), offering high-bandwidth access to application data in batch jobs. |
| 0.5 PByte | of on-line storage capacity is provided in a globally accessible NAS appliance for permanent project data and program code development. |
| Peta-scale | tape library for archiving large data sets is operated independently by ZIB. |
Transition Towards New Technologies
Next-generation supercomputer systems will provide innovative technologies such as:
- many-core CPUs, which introduce a new scale of parallelism per node, and
- large-capacity non-volatile memory.
These technologies will offer opportunities for vastly more complex and resource-demanding computational workflows, but their full benefits can only be exploited through radical code modernization.
To support the transition of HLRN applications towards these foreseeable technologies, we operate a Cray Test and Development System (TDS) that provides the latest Intel Xeon Phi many-core compute capabilities as well as Cray DataWarp nodes with non-volatile storage tightly integrated into the Cray Aries network.
The Cray Test & Development System (TDS) at a glance:
| Key figure | Description |
|---|---|
| 16 KNC nodes | each with an Intel Xeon Phi 5120D coprocessor (KNC) and a 10-core Intel Ivy Bridge host CPU with 32 GB memory. |
| 80 KNL nodes | with the next generation of Intel MIC processors, "Knights Landing" (KNL), the first stand-alone many-core CPU (expected 1H/2016). |
| 8 DataWarp nodes | each providing 3.2 TiB of non-volatile memory (SSD) and a sustained I/O bandwidth of 3 GB/s. |
| 16 KNC coprocessors | each Xeon Phi coprocessor offers 60 compute cores on a single chip and 8 GiB of GDDR5 local memory. |
| 960 KNC cores | are provided in total by the Intel Xeon Phi coprocessors. |
| Dragonfly | interconnect topology is used for the Cray Aries network. |
| 19 TFlop/s | theoretical peak performance can be reached, of which the Xeon Phi coprocessors contribute 16 TFlop/s (double precision). |
| 25.6 TiB | of non-volatile memory on SSDs is available in total. |
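To give a flavor of how the DataWarp nodes appear to applications, the batch-script fragment below requests a striped burst-buffer scratch allocation via a `#DW` directive. The capacity value, process count, and application name are illustrative assumptions, not a site-specific recipe:

```shell
#!/bin/bash
# Illustrative batch job using a Cray DataWarp scratch allocation.
# The #DW directive asks the workload manager for burst-buffer space;
# DW_JOB_STRIPED is set to the mount point of that allocation.
#DW jobdw type=scratch access_mode=striped capacity=200GiB

# Run a hypothetical application with its working directory on DataWarp.
aprun -n 48 ./my_app --workdir "$DW_JOB_STRIPED"

# Stage results back to the parallel file system before the job ends.
cp -r "$DW_JOB_STRIPED/output" "$HOME/results/"
```

Because the DataWarp SSDs sit directly on the Aries network, jobs can absorb bursty I/O at SSD speed and drain results to the Lustre file systems afterwards.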