In this video from ISC 2019, Thomas Lippert from the Jülich Supercomputing Centre describes how modular supercomputing is paving the way forward for HPC in Europe. After that, Bernhard Frohwitter and Hugo Falter from ParTec discuss how the company supports modular supercomputing for enhanced user productivity.
The Modular Supercomputer Architecture (MSA) is an innovative approach to building High-Performance Computing (HPC) and High-Performance Data Analytics (HPDA) systems by coupling various compute modules, following a building-block principle. Each module is tailored to the needs of a specific group of applications, and all modules together behave as a single machine. This is ensured by connecting them through a high-speed network federation and operating them with a uniform system software and programming environment. As a result, a single application or workflow can be distributed over several modules, running each part of its code on the best-suited hardware module.
Creating a modular supercomputer that best fits the requirements of diverse, increasingly complex, and newly emerging applications is the objective of DEEP-EST, an EU project launched on July 1, 2017, led and coordinated by the Jülich Supercomputing Centre. The DEEP-EST project has built a prototype with three compute modules: the Cluster Module (CM), the Extreme Scale Booster (ESB), and the Data Analytics Module (DAM). The CM is a general-purpose cluster targeting low- to medium-scalability applications, while the ESB is built as a cluster of accelerators to provide energy-efficient computing power to highly scalable codes. Last, but not least, the DAM addresses the specific needs of Machine/Deep Learning, Artificial Intelligence, and Big Data applications and workloads.
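To make this concrete: one common way to span a single application across modules of this kind is a Slurm heterogeneous job, where each component of the job requests resources from a different partition and the parts are launched together as one MPI application. The sketch below is illustrative only; the partition names (`dp-cm`, `dp-esb`), node counts, and binaries are assumptions, not taken from the DEEP-EST system documentation, and the `hetjob` syntax assumes a recent Slurm release.

```shell
#!/bin/bash
# Hypothetical heterogeneous job spanning two modules of a modular system.
# Partition names and binaries below are placeholders for illustration.

#SBATCH --job-name=modular-demo
#SBATCH --partition=dp-cm --nodes=2 --ntasks-per-node=16   # Cluster Module part
#SBATCH hetjob
#SBATCH --partition=dp-esb --nodes=4 --ntasks-per-node=1   # Booster part

# Launch both components as one MPI job; ranks in the two het-groups
# can communicate over the federated high-speed network.
srun --het-group=0 ./solver_cluster_part : --het-group=1 ./solver_booster_part
```

In this pattern, the less scalable portion of the code (e.g. setup, I/O, or latency-sensitive kernels) runs on the general-purpose module, while the highly parallel portion runs on the accelerator module, matching each code part to the hardware best suited for it.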
Looking at the challenges future supercomputing architectures have to overcome, the question arises whether the leap to Exascale computing can only be achieved by special-purpose next-generation systems with diverging properties. The novel concepts the DEEP and DEEP-ER projects put forward challenge this idea and contrast it with a modular approach to future supercomputers: the system features different types of processors that accommodate different concurrency levels. It also employs a fine-grained memory hierarchy with new technologies such as non-volatile and network-attached memory, and a massively parallel file system, to mention just a few building blocks. All these components can be seen as a pool of resources from which HPC application scientists can choose. Designing this sophisticated architecture was only possible through a continuous co-design effort and vivid exchange between hardware, system software, and application developers. All in all, this novel architecture allows for diverging properties within a single system, and its modular concept could also allow for integrating completely new technologies, such as bio- or quantum computing, once available.