HPC System Hornet Ready to Serve Highest Computational Demands
Supercomputer Hornet at the High Performance Computing Center Stuttgart (HLRS) is ready for extreme-scale computing challenges. The newly installed high performance computing (HPC) system successfully completed extensive simulation projects that far exceeded the scale of any simulation runs previously performed at HLRS:
Six so-called XXL-Projects from computationally demanding scientific fields such as planetary research, climatology, environmental chemistry, aerospace, and scientific engineering were recently run on the HLRS supercomputer. With applications scaling up to Hornet’s full complement of 94,646 compute cores, the machine was put through a demanding endurance test. The results more than satisfied the HLRS HPC experts as well as the scientific users: Hornet lived up to the challenge and passed these simulation “burn-in runs” with flying colors.
The new HLRS supercomputer Hornet, a Cray XC40 system, which in its current configuration delivers a peak performance of 3.8 PetaFlops (1 PetaFlops = 1 quadrillion floating point operations per second), was declared “up and running” in late 2014.
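For scale, the quoted peak performance is consistent with the machine’s core count. The minimal sketch below assumes roughly 40 GigaFlops of double-precision peak per core, a typical value for the Haswell-generation Intel Xeon processors used in Cray XC40 systems; the clock rate and per-core figure are assumptions, not taken from this announcement.

```python
# Back-of-envelope check of Hornet's 3.8 PetaFlops peak performance.
# Assumption (not from the announcement): ~40 GFlops double-precision peak
# per core, typical for Haswell-era Xeons (2.5 GHz x 16 flops per cycle).
CORES = 94_646
FLOPS_PER_CORE = 2.5e9 * 16  # 4.0e10 flops/s, i.e. 40 GFlops per core

peak_pflops = CORES * FLOPS_PER_CORE / 1e15
print(f"Estimated peak: {peak_pflops:.2f} PetaFlops")  # ~3.79 PetaFlops
```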
In its early installation phase, before making the machine available for general use, HLRS invited national scientists and researchers from various fields to run large-scale simulation projects on Hornet. The goal was to demonstrate that all HPC hardware and software components required to run highly complex, extreme-scale compute jobs smoothly were ready for top-notch challenges. Six well-suited XXL-Projects were identified and implemented on the HLRS supercomputer (their resource figures are tallied in a short sketch following the list):
(1) “Convection Permitting Channel Simulation”, Institute of Physics and Meteorology, Universität Hohenheim
(84,000 compute cores, 84 machine hours, 330 TB of data + 120 TB for pre-processing)
Objective: To run a latitude-belt simulation around the Earth at a resolution of a few kilometers for a period long enough to cover various extreme events in the Northern Hemisphere and to study the model’s performance.
(2) “Direct Numerical Simulation of a Spatially-Developing Turbulent Boundary Layer Along a Flat Plate”, Institute of Aerodynamics and Gas Dynamics (IAG), Universität Stuttgart
(93,840 compute cores, 70 machine hours, 30 TB of data)
Objective: To conduct a direct numerical simulation of the complete transition of a boundary layer flow to fully-developed turbulence along a flat plate up to high Reynolds numbers.
(3) “Prediction of the Turbulent Flow Field Around a Ducted Axial Fan”, Institute of Aerodynamics, RWTH Aachen University
(92,000 compute cores, 110 machine hours, 80 TB of data)
Objective: To better understand the development of vortical flow structures and the turbulence intensity in the tip-gap of a ducted axial fan.
(4) “Large-Eddy Simulation of a Helicopter Engine Jet”, Institute of Aerodynamics, RWTH Aachen University
(94,646 compute cores, 300 machine hours, 120 TB of data)
Objective: To analyze the impact of internal perturbations due to geometric variations on the flow field and the acoustic field of a helicopter engine jet.
(5) “Ion Transport by Convection and Diffusion”, Institute of Simulation Techniques and Scientific Computing, Universität Siegen
(94,080 compute cores, 5 machine hours, 1.1 TB of data)
Objective: To better understand and optimize the electrodialysis desalination process.
(6) “Large Scale Numerical Simulation of Planetary Interiors”, German Aerospace Center/Technische Universität Berlin
(54,000 compute cores, 3 machine hours, 2 TB of data)
Objective: To study how heat-driven convection within planets affects their evolution (how the surface is influenced, how conditions for life are maintained, how plate tectonics work, and how quickly a planet can cool).
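Taken together, the resource figures above give a sense of the endurance test’s scale. Below is a minimal tally, assuming each project’s “machine hours” denote wall-clock hours at the stated core count; this interpretation is ours and is not spelled out in the announcement.

```python
# Rough core-hour tally for the six XXL-Projects, assuming the listed
# "machine hours" are wall-clock hours at the stated core counts.
projects = {
    "Channel simulation":    (84_000,  84),
    "Flat-plate DNS":        (93_840,  70),
    "Ducted axial fan":      (92_000, 110),
    "Helicopter engine jet": (94_646, 300),
    "Ion transport":         (94_080,   5),
    "Planetary interiors":   (54_000,   3),
}

total = 0
for name, (cores, hours) in projects.items():
    core_hours = cores * hours
    total += core_hours
    print(f"{name:<22} {core_hours / 1e6:6.2f} million core-hours")

print(f"{'Total':<22} {total / 1e6:6.2f} million core-hours")  # ~52.77
```

Under that assumption, the six projects together consumed on the order of 50 million core-hours.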
Demand for High Performance Computing on the Rise
Demand for High Performance Computing remains unabated. Scientists continue to crave ever-increasing computing power and are eagerly awaiting even faster systems and more scalable software that will enable them to tackle the most challenging scientific and engineering problems. “Supply generates demand,” states Prof. Dr.-Ing. Michael M. Resch, Director of HLRS. “With the abilities of ultra-fast machines like Hornet, both industry and researchers are quickly realizing that fully leveraging the vast capabilities of such a supercomputer opens unprecedented opportunities and helps them deliver results previously impossible to obtain. We are confident that our HPC infrastructure will be leveraged to its full extent. Hornet will be an invaluable tool in supporting researchers in their pursuit of answers to the most pressing questions of our time, leading to scientific findings and knowledge of great and enduring value,” adds Professor Resch.
Outlook
Following its ambitious technology roadmap, HLRS is currently implementing a system expansion scheduled for completion by the end of 2015. The HLRS supercomputing infrastructure will then deliver a peak performance of more than seven PetaFlops and feature an additional 2.3 petabytes of file system storage.
More information about the HLRS XXL-Projects can be found at http://www.gauss-centre.eu/gauss-centre/EN/Projects/XXL_Projects_Hornet/XXL_Proj…
About HLRS: The High Performance Computing Center Stuttgart (HLRS) of the University of Stuttgart is one of the three German supercomputing institutions forming the national Gauss Centre for Supercomputing. HLRS supports German and pan-European researchers as well as industrial users with leading-edge supercomputing technology, HPC training, and support.
About GCS: The Gauss Centre for Supercomputing (GCS) combines the three national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany’s Tier-0 supercomputing institution. Together, the three centres provide the largest and most powerful supercomputing infrastructure in Europe, serving a wide range of industrial and research activities across various disciplines. They also provide top-class training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association of 25 member countries whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level.
GCS is headquartered in Berlin, Germany.
More Information: http://www.uni-stuttgart.de/