ALCF's ESP Projects Report Scientific Progress at Investigators Meeting

Researchers at the ESP meeting

Researchers from the ALCF’s 16 Early Science Program (ESP) projects convened at the ESP Investigators Meeting, held May 15-16, to discuss outcomes enabled by their participation in this preparatory program, which was designed to ready Mira, the ALCF’s new 10-petaflops IBM Blue Gene/Q supercomputer, for the transition to full production.

Among the presentations were efforts that included simulations unprecedented in scale, scope, or complexity. Summaries of select presentations follow. The full agenda from the May event, with links to presentation slides, is available on the ALCF website.

Researching High-Speed Combustion and Detonation

Scientific Domain: Chemistry

ESP PI Alexei Khokhlov and his research team from The University of Chicago are exploring the physical mechanisms of the burning and detonation of hydrogen-oxygen mixtures—information of critical importance in the design of safe hydrogen fuel systems.

While the associated phenomena of flame acceleration and deflagration-to-detonation transition (DDT) have been studied for 50 years, researchers like Khokhlov have only recently had access to computers powerful enough to model DDT in hydrogen-oxygen mixtures from first principles. Flame acceleration and DDT involve multiple physical processes—chemical reactions, microscopic transport, and rapidly evolving compressible, turbulent fluid flow—across a wide range of temporal and spatial scales, which makes numerical modeling of these phenomena extremely computationally intensive.

Early access to ALCF Blue Gene/Q resources has enabled modeling with greater accuracy, along with a 2.5x per-core performance improvement over earlier efforts on Intrepid. As a first step, researchers simulated the phenomena of strong and weak ignition of hydrogen-oxygen in reflected shock tube experiments. Strong ignition (the onset of detonation near the end wall, which occurs for sufficiently strong reflected shocks) has long been understood in terms of a simple one-dimensional theory. By contrast, understanding DDT behind weaker shocks has remained a significant challenge in combustion theory for the last 50 years.

With decreasing shock strength, ignition moves away from the end wall, occurs in multiple spots, and happens up to three orders of magnitude faster than theoretical predictions. The group’s latest three-dimensional simulations were able to resolve the reflected shock bifurcation phenomenon, in which the boundary layer is lifted off the wall and a near-sonic turbulent recirculation jet is created with order-one pressure fluctuations, strong vortices, and multiple shocklets. The simulations were first validated against non-reactive experiments in CO2, showing excellent agreement with experiment, and then carried out for reactive hydrogen-oxygen mixtures. Violent turbulent pulsations in the recirculation jet of the reactive system appear to be ultimately responsible for moving the ignition spots from the end wall to regions of strong turbulence and for significantly shortening the ignition process, in good quantitative agreement with available experimental data.

With groundwork complete, the team embarked on high-resolution simulations of DDT in a hydrogen-oxygen mixture in long pipes. Researchers hope this will allow them to view and study phenomena not observable in experiments.

Climate-Weather Modeling Studies Using a Prototype Global Cloud-System Resolving Model

Scientific Domain: Earth Science

As co-PI to Venkatramani Balaji with NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL), Chris Kerr reported on the ESP team’s efforts to provide a fundamental understanding of the role of clouds in the dynamics of climate. The group aims to develop a high-resolution global climate-weather model to improve our understanding and prediction of the behavior of the atmosphere, ocean, and climate, including information to aid in hurricane research and prediction, seasonal forecasting, and understanding and projecting climate change.

Work underway focuses on the Atmospheric Model Intercomparison Project (AMIP), an experiment that combines full atmospheric model dynamics and physics, land, and prescribed sea-surface temperatures. Because of the complexity of modeling multiple model components at high resolution, this groundbreaking work is highly computationally intensive and can only be conducted on high-performance computing resources like Mira.

In this revolutionary model, researchers solve the fully compressible non-hydrostatic equations to simulate climate on a cubed-sphere grid. Scientists can then stretch, compress, and nest the grid on parts of the sphere to provide finer resolution for studying specific areas of interest.
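The presentation does not detail the grid construction itself, but as a rough illustration of the geometry involved, the sketch below generates points on one face of an equiangular gnomonic cubed-sphere grid. It is a minimal, hypothetical example (the function and values are illustrative, not taken from the GFDL model) and omits the stretching and nesting machinery the team actually uses.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative only: points on one face of an equiangular gnomonic
 * cubed-sphere grid (the face centered on the +x axis). The GFDL model's
 * actual grid generation, stretching, and nesting are far more involved. */
static const double QUARTER_PI = 0.78539816339744830962;

static void face_point(double alpha, double beta, double p[3])
{
    /* Gnomonic projection: the cube face lies in the plane x = 1. */
    double x = 1.0, y = tan(alpha), z = tan(beta);
    double r = sqrt(x * x + y * y + z * z);
    p[0] = x / r;  /* project radially onto the unit sphere */
    p[1] = y / r;
    p[2] = z / r;
}

int main(void)
{
    const int n = 8;  /* points per face edge (toy value) */
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            /* Equiangular coordinates span [-pi/4, pi/4] on each face. */
            double alpha = -QUARTER_PI + (2.0 * QUARTER_PI * i) / (n - 1);
            double beta  = -QUARTER_PI + (2.0 * QUARTER_PI * j) / (n - 1);
            double p[3];
            face_point(alpha, beta, p);
            printf("%2d %2d  % .4f % .4f % .4f\n", i, j, p[0], p[1], p[2]);
        }
    }
    return 0;
}
```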

To tune the model, researchers performed experiments based on the active 2005 hurricane season in the North Atlantic and Pacific basins and used those diagnostics to improve the model—a process that took six months and resulted in extensive changes to the model’s dynamics and physics schemes.

With early access to resources and expertise provided through the ESP, the team has refined the model resolution used in the early AMIP studies from 25 km to 3.5 km. The team has also realized an 8x improvement in per-node performance from the Blue Gene/P to the Blue Gene/Q, with excellent strong scaling. In addition, the code now scales to 1 million hardware threads and is tuned to run on other leadership-class systems. “Because of the computational work we were able to accomplish on Mira, porting to other platforms has been made significantly easier,” Kerr noted.

Global Simulation of Plasma Microturbulence at the Petascale & Beyond

Scientific Domain: Physics

ESP PI William Tang and his research group at Princeton University and Princeton Plasma Physics Laboratory are conducting HPC-enabled simulations of plasma microturbulence targeting transformational scientific insights to help accelerate progress in worldwide efforts to harness the power of nuclear fusion as an alternative to fossil fuels.

Because fusion energy physics spans an extreme range of temporal and spatial scales, the associated simulations must exploit local concurrency to take advantage of the world’s most powerful supercomputing systems, including Mira. The group has accordingly collaborated with computer scientists to develop a C version of its lead code, GTC-P, which was originally written in Fortran-90. The current, highly optimized C version is much better able to exploit computer science (CS) community advances in multi-threading for low memory-per-core systems, enabling significantly increased speed and reliability on Blue Gene/Q.
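As a rough, hypothetical illustration of what multi-threading for a low memory-per-core system can look like in a particle-in-cell setting, the sketch below uses OpenMP threads within a node to deposit particle charge onto a single shared grid array, rather than replicating the grid per process. The function, names, and weighting scheme are assumptions for illustration only; this is not GTC-P source code.

```c
#include <omp.h>

/* Illustrative sketch (not GTC-P source): depositing particle charge onto a
 * 1D grid with OpenMP threads inside a node. Sharing one grid array among
 * threads, instead of replicating it per MPI process, is the kind of
 * strategy suited to low memory-per-core systems such as Blue Gene/Q. */
void deposit_charge(int nparticles, const double *x, const double *w,
                    int ngrid, double dx, double *rho)
{
    #pragma omp parallel for schedule(static)
    for (int p = 0; p < nparticles; ++p) {
        double xi = x[p] / dx;   /* particle position in cell units */
        int    i  = (int)xi;     /* left grid index */
        double f  = xi - i;      /* fractional offset within the cell */
        if (i < 0 || i >= ngrid - 1) continue;

        /* Linear (cloud-in-cell) weighting to the two nearest grid points.
         * Atomic updates keep the shared array correct; a production code
         * would typically use smarter scatter strategies to reduce the
         * synchronization cost. */
        #pragma omp atomic
        rho[i]     += w[p] * (1.0 - f);
        #pragma omp atomic
        rho[i + 1] += w[p] * f;
    }
}
```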

“In addition to the impressive greater than 10x gain in ‘time to solution’ enabled by moving GTC-P from Blue Gene/P to Blue Gene/Q, additional software improvements have been achieved in a timely way by working with the C version of GTC-P to produce an additional 50 percent gain,” explains Tang. He adds that “not only is this new code more efficient, but having a modern C version allows us to better interact with a much wider group of CS experts whose research advances (e.g., in multi-threading methods) are usually carried out using C instead of Fortran.”

In addition to code improvements, the group cites other key accomplishments through the ESP, including:

  • Convergence studies on Intrepid, and especially Mira, that yielded important scientific findings regarding the particle resolution required to ensure physics fidelity in simulations of much larger plasma systems such as ITER.
  • Results on how key kinetic turbulence dynamics behave over long temporal scales, now including collisional effects at unprecedented resolution, that provide a glimpse of what researchers may be able to accomplish with fusion reactor-relevant simulations on future exascale systems.
  • Excellent scalability of global PIC codes demonstrated on modern HPC platforms such as Mira, providing a solid foundation for incorporating additional realistic effects to enable greater physics fidelity and further improve understanding.

Tang describes the group’s ESP experience as a priceless “training-wheel” phase, saying, “The advancements made—in part as a result of our early access to ALCF systems and computational scientist resources—have produced an efficient and portable modern HPC application code that does not require ‘hand holding’ when porting to other advanced systems. This has helped enable us to gain access to, and take advantage of, time provided on some of the top supercomputers worldwide, such as Sequoia, the Blue Gene/Q at Lawrence Livermore National Laboratory, and the Fujitsu K computer in Japan.”

Using Multi-scale Dynamic Rupture Models to Improve Ground Motion Estimates

Scientific Domain: Earth Science

An interdisciplinary research team led by Thomas Jordan at the Southern California Earthquake Center (SCEC) is designing, conducting, and analyzing high-resolution simulations of earthquakes to inform building designers and emergency planners. As part of this work, SCEC’s ESP project focused on the use of multi-scale dynamic rupture models to improve estimates of ground motion in earthquakes by more accurately modeling the slip that occurs on a fault in complex three-dimensional geometry.

ESP efforts allowed the team to achieve nearly ideal weak scaling of the SORD dynamic rupture code on Mira in pure MPI mode (with no multi-threading), along with a factor-of-two wall-clock speedup in moving from Blue Gene/P to Blue Gene/Q. They also introduced OpenMP multi-threading in SORD and achieved nearly ideal strong scaling up to 16 threads per node.
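The hybrid approach described here, with MPI ranks across nodes and OpenMP threads sharing the work inside each node, is a standard pattern; the sketch below illustrates it with a generic finite-difference-style update loop. It is a minimal example with hypothetical names, not SORD code, and it assumes a simple domain decomposition with halo exchanges handled elsewhere.

```c
#include <mpi.h>
#include <omp.h>

/* Illustrative only (not SORD source): the hybrid MPI+OpenMP pattern the
 * text describes. Each MPI rank owns a subdomain; OpenMP threads within
 * the rank share the grid-point loop, which is how strong scaling across
 * the 16 cores of a Blue Gene/Q node is typically pursued. */
void update_velocity(int nz, int ny, int nx, double dt,
                     double *v, const double *stress_div)
{
    /* Threads split the outer loops; each iteration touches disjoint
     * memory, so no synchronization is needed inside the loop. */
    #pragma omp parallel for collapse(2) schedule(static)
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                long idx = ((long)k * ny + j) * nx + i;
                v[idx] += dt * stress_div[idx];
            }
}

int main(int argc, char **argv)
{
    int provided;
    /* Request MPI_THREAD_FUNNELED: only the main thread makes MPI calls,
     * which suffices when communication (e.g., halo exchanges of subdomain
     * boundaries) happens between threaded compute phases. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    /* ... decompose the global grid across ranks, time-step, exchange halos ... */
    MPI_Finalize();
    return 0;
}
```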

As with many ESP projects, the ease of transitioning from Blue Gene/P to Blue Gene/Q has benefited the SCEC team’s efforts. This is especially evident in the group’s work to improve a 200-meter velocity model by adding inversion studies. Says co-PI Philip Maechling, an information technology architect at SCEC, “We ran 18 iterations of the velocity code on Intrepid, and the last two on Mira. The similarity between the two machines has really helped to facilitate these improvements.”

Using a highly optimized version of the SORD dynamic rupture simulation code developed as part of their ESP project, the SCEC group simulated an earthquake rupture on a rough, complex fault surface. The output of the dynamic rupture simulation was then used as input to prototype wave propagation software that ran a deterministic earthquake wave propagation simulation at frequencies up to 10 Hz.

What Next for ALCF’s ESP Projects?

ESP projects with time remaining in their original allocations will continue to access ALCF resources through the end of 2013, or until they exhaust those allocations. Most will have months or years of work ahead analyzing the results of their ESP runs. Many will extend and refine those results using allocations available through other programs, including INCITE, while others are starting new runs on new problems.

When asked if he would characterize the ALCF’s Early Science Program as a success, ESP Manager Timothy Williams remarked, “Over the course of the program, a wide variety of applications were ported to Blue Gene/Q, all with significantly improved performance relative to Blue Gene/P or other previous-generation machines. Twelve of the 16 ESP projects are current INCITE projects, and the remaining four plan to submit 2014 INCITE proposals. The eleven postdocs assigned to ESP projects have advanced their careers as computational scientists significantly as a result of their work under this program. Based on all this, yes, I would definitely classify the Early Science Program as a success.”

For more information about the ALCF’s ESP projects, visit: http://www.alcf.anl.gov/programs/esp

 
