Performance Analysis, Modeling and Scaling of HPC Applications and Tools

PI Abhinav Bhatele, Lawrence Livermore National Laboratory
Project Description

Efficient use of supercomputers at Department of Energy centers is vital for maximizing system throughput, minimizing energy costs, and accelerating scientific breakthroughs. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. It also requires scalable performance analysis tools and modeling techniques that can provide feedback to the physicists and computer scientists developing the simulation codes and runtimes, respectively.

This project supports time allocations on supercomputers at the Argonne Leadership Computing Facility (ALCF, Argonne National Laboratory) and the Oak Ridge Leadership Computing Facility (OLCF, Oak Ridge National Laboratory) to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC Applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. This project will enable investigations into big data issues that arise when analyzing performance data on leadership-class computing systems and will assist the High Performance Computing (HPC) community in making the most effective use of these resources.

The allocation time supports computer science research and development in the areas of computational science applications, programming models, runtimes, and performance tools and models that will help prepare the HPC community for exascale.

Allocations