What do such diverse fields as climate modeling, renewable energy, artificial intelligence, and precision medicine have in common? They all rely on high-performance computing (HPC), the art of doing innovative research with the help of extremely powerful computers.

Today's HPC systems consist of tens of thousands of individual computers (compute nodes) with very fast processors, connected by a dedicated high-speed network so that they can work together on a single problem. Taken as a whole, such a system is often a million times more powerful than a standard laptop.
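
To give a flavor of how such distributed computing looks in practice, below is a minimal sketch in Python using the mpi4py package (one possible choice for illustration; production HPC codes are more commonly written in C, C++, or Fortran). Every process computes part of a numerical integral, and the network combines the partial results:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # ID of this process (0 ... size-1)
    size = comm.Get_size()  # total number of cooperating processes

    # Estimate pi as the integral of 4/(1+x^2) over [0, 1],
    # using the midpoint rule with n subintervals.
    n = 10_000_000
    h = 1.0 / n

    # Each process handles only every size-th subinterval.
    local = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
                for i in range(rank, n, size))

    # Combine all partial sums into the final result on process 0.
    pi = comm.reduce(local * h, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"pi is approximately {pi:.10f}")

Launched with, e.g., "mpirun -n 4 python pi_mpi.py" (the file name is arbitrary), the same program runs on every process; only the portion of work assigned to each rank differs.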

Here at ZIB we have been operating HPC systems for science for 40 years. Presently, we operate the HPC system “Lise”, with a total computing capacity of approximately 10 petaflops, i.e., ten quadrillion floating-point operations per second. To put this into perspective: in a single second, our system performs roughly as many calculations as the entire global population of 8 billion people, working around the clock, could complete in about two years.
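
As a back-of-the-envelope check of this comparison, consider the following small Python sketch; the assumed rate of one calculation per minute per person is our assumption, not stated above:

    # Hypothetical sanity check: how long would 8 billion people need
    # to match one second of a 10-petaflops system, assuming each
    # person completes one calculation per minute (our assumption)?
    ops = 10e15                       # 10 petaflops: operations in one second
    population = 8e9                  # people on Earth
    rate = 1 / 60                     # calculations per person per second

    seconds = ops / (population * rate)
    years = seconds / (365 * 24 * 3600)
    print(f"about {years:.1f} years")  # prints: about 2.4 years

Under this assumption, the answer comes out at roughly two years, consistent with the figure above.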

Since 2021, ZIB has been part of the National High-Performance Computing Alliance (NHR), which pools top-level HPC resources and expertise for German science. In addition to building and operating HPC systems, the alliance focuses on helping scientists and engineers run their HPC workloads, and on optimizing energy efficiency, that is, decreasing energy consumption while increasing computing power.

Recently, artificial intelligence (AI) and big data have revolutionized the HPC world. Machine learning, for example, the training of AI models on large amounts of data, requires enormous computing power. This power can only be provided by sophisticated computer architectures that combine accelerators such as GPUs, a high-performance communication network, and fast memory and storage, all working together to deliver the best possible computing performance and energy efficiency. At ZIB, we not only work hard to make our HPC system fit for AI and to unleash the potential of AI technologies, but we also consider their transparency and societal implications, particularly in critical applications.

Our HPC system holds immense potential for researchers. We actively assist our users in unlocking this potential through scientific consulting services and also open up new horizons for computational research: ZIB researchers are leveraging the system to drive innovation and address research challenges. Examples include advancements in renewable energy [14, 17], sustainable mobility and city planning [9], cutting-edge drug design [7, 10, 11, 13, 15], and enhancing the reliability and security of artificial intelligence [12, 16]. Additional research focuses on unlocking the potential of new compute architectures, e.g., GPUs [1, 5] and reconfigurable devices such as FPGAs [2, 3, 8], as well as new memory technologies [4].

HPC is more accessible and valuable today than it has ever been. At ZIB, we are working with many other scientists in Berlin, in Germany, and around the world to harness its capabilities and address the pressing challenges our society faces.


References:

[1] S. Alhaddad, J. Förstner, S. Groth, D. Grünewald, Y. Grynko, F. Hannig, T. Kenter, F. Pfreundt, C. Plessl, M. Schotte, T. Steinke, J. Teich, M. Weiser, and F. Wende. The HighPerMeshes Framework for Numerical Algorithms on Unstructured Grids. Concurrency and Computation: Practice and Experience 34.14 (2022). DOI: 10.1002/cpe.6616.

[2] S. Christgau, D. Everingham, F. Mikolajczak, N. Schelten, B. Schnor, M. Schroetter, B. Stabernack, and F. Steinert. Enabling Communication with FPGA-based Network-attached Accelerators for HPC Workloads. In: Proceedings of the SC’23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, SC-W 2023, Denver, CO, USA, November 12-17, 2023. 2023, 530–538. DOI: 10.1145/3624062.3624540.

[3] S. Christgau, M. Knaust, and T. Steinke. A First Step towards Support for MPI Partitioned Communication on SYCL-programmed FPGAs. In: IEEE/ACM International Workshop on Heterogeneous High-performance Reconfigurable Computing, H2RC@SC 2022, Dallas, TX, USA, November 13-18, 2022. 2022, 9–17. DOI: 10.1109/H2RC56700.2022.00007.

[4] S. Christgau and T. Steinke. Leveraging a Heterogeneous Memory System for a Legacy Fortran Code: The Interplay of Storage Class Memory, DRAM and OS. In: 2020 IEEE/ACM Workshop on Memory Centric High Performance Computing (MCHPC). 2020, 17–24. DOI: 10.1109/MCHPC51950.2020.00008.

[5] S. Christgau and T. Steinke. Porting a Legacy CUDA Stencil Code to oneAPI. In: 2020 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2020, New Orleans, LA, USA, May 18-22, 2020. 2020, 359–367. DOI: 10.1109/IPDPSW50202.2020.00070.

[6] L. Donati, M. Weber, and B. G. Keller. A review of Girsanov Reweighting and of Square Root Approximation for building molecular Markov State Models. Journal of Mathematical Physics 63.12 (2022), 123306-1–123306-21. DOI: 10.1063/5.0127227.

[7] C. Gorgulla, A. Boeszoermenyi, Z.-F. Wang, P. D. Fischer, P. W. Coote, K. M. P. Das, Y. S. Malets, D. S. Radchenko, Y. S. Moroz, D. A. Scott, K. Fackeldey, M. Hoffmann, I. Iavniuk, G. Wagner, and H. Arthanari. An open-source drug discovery platform enables ultra-large virtual screens. Nature 580.5 (2020), 663–668. DOI: 10.1038/s41586-020-2117-z.

[8] M. Knaust, E. Seiler, K. Reinert, and T. Steinke. Co-Design for Energy Efficient and Fast Genomic Search: Interleaved Bloom Filter on FPGA. In: FPGA ’22: Proceedings of the 2022 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. 2022, 180–189. DOI: 10.1145/3490422.3502366.

[9] B. Maronga, M. Winkler, and D. Li. Can Areawide Building Retrofitting Affect the Urban Microclimate? An LES Study for Berlin, Germany. Journal of Applied Meteorology and Climatology 61.7 (2022), 800–817. DOI: 10.1175/JAMC-D-21-0216.1.

[10] R. J. Rabben, S. Ray, and M. Weber. ISOKANN: Invariant subspaces of Koopman operators learned by a neural network. The Journal of Chemical Physics 153.11 (2020), 114109. DOI: 10.1063/5.0015132.

[11] S. Ray, K. Fackeldey, C. Stein, and M. Weber. Coarse Grained MD Simulations of Opioid interactions with the mu-opioid receptor and the surrounding lipid membrane. Biophysica 3.2 (2023), 263–275. DOI: 10.3390/biophysica3020017.

[12] S. Sadiku, M. Wagner, and S. Pokutta. Group-wise sparse and explainable adversarial attacks. 2023. arXiv: 2311.17434 [cs.CV].

[13] C. Secker, K. Fackeldey, M. Weber, S. Ray, C. Gorgulla, and C. Schütte. Novel multi-objective affinity approach allows to identify pH-specific mu-opioid receptor agonists. Journal of Cheminformatics 15 (2023). DOI: 10.1186/s13321-023-00746-4.

[14] N. Tateiwa, Y. Shinano, M. Yasuda, S. Kaji, K. Yamamura, and K. Fujisawa. Development and analysis of massive parallelization of a lattice basis reduction algorithm. Japan Journal of Industrial and Applied Mathematics 40.1 (2023). DOI: 10.1007/s13160-023-00580-z.

[15] P. Trepte, C. Secker, S. Kostova, S. B. Maseko, S. G. Choi, J. Blavier, I. Minia, E. S. Ramos, P. Cassonnet, S. Golusik, M. Zenkner, S. Beetz, M. J. Liebich, N. Scharek, A. Schütz, M. Sperling, M. Lisurek, Y. Wang, K. Spirohn, T. Hao, M. A. Calderwood, D. E. Hill, M. Landthaler, J. Olivet, J.-C. Twizere, M. Vidal, and E. E. Wanker. AI-guided pipeline for protein-protein interaction drug discovery identifies a SARS-CoV-2 inhibitor. Molecular Systems Biology (2023). DOI: 10.1101/2023.06.14.544560.

[16] S. Wäldchen, K. Sharma, B. Turan, M. Zimmer, and S. Pokutta. Interpretability guarantees with Merlin-Arthur classifiers. To appear in Proceedings of AISTATS (2024).

[17] R. Yokoyama, Y. Shinano, and T. Wakui. An effective approach for deriving and evaluating approximate optimal design solutions of energy supply systems by time series aggregation. Frontiers in Energy Research 11 (2023), 1–14. DOI: 10.3389/fenrg.2023.1128681.