ATPESC helps groom a new generation of supercomputer users

[Photos: 2014 ATPESC group photo; Ray Loy assists an attendee; attendees collaborate on hands-on exercises.]

For researchers in the sciences, access to supercomputing resources can open significant opportunities for their work.

Using these resources effectively, however, presents numerous challenges: researchers must navigate hardware architectures, programming models and languages, numerical algorithms, toolkits and frameworks, and visualization and data analysis, among other areas.

To help bridge some of these training and knowledge gaps, Paul Messina, director of science for the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, spearheaded the organization of the Argonne Training Program on Extreme Scale Computing (ATPESC). According to Messina, “The use of systems like Mira can enable breakthroughs in science, but to use them productively requires significant expertise in a number of disciplines. Our training program exposes the participants to those topics and provides hands-on exercises for experimenting with most of them.”

Recap of the 2014 ATPESC

This year’s program took place August 3–15, 2014, in St. Charles, Illinois, and marked the second year of ATPESC. Organized and hosted by Argonne as an intensive two-week training program for future users of leadership-class machines, ATPESC was again considered a resounding success, filling a need that participants articulated repeatedly.

The program gave the attendees a unique opportunity to access hundreds of thousands of cores of computing power on some of today’s most powerful supercomputing resources, including Argonne’s IBM Blue Gene/Q Vesta and Mira systems, the Oak Ridge Leadership Computing Facility’s Titan system, and the National Energy Research Scientific Computing Center’s Edison system.

Days at the 2014 ATPESC began early and ran well into the evening with full slates of lectures, talks by invited dinner speakers on current topics and trends in HPC, and hands-on exercises that started after dinner each evening and ran for as long as three hours.

A total of 62 researchers from around the world were chosen from an applicant pool of 150 individuals vying for the opportunity to learn from renowned HPC experts about designing, implementing, and executing large-scale computer science and engineering applications effectively across a variety of supercomputing platforms, including methodologies expected to be applicable to future systems in 2017 and beyond.

The ATPESC Backbone: Seven Topical Tracks Presented by Leaders in the HPC Field

ATPESC has three goals: first, to give participants in-depth knowledge of the topics and skills needed to conduct computational science and engineering research on today’s and tomorrow’s high-end computers; second, to make them aware of the software and techniques available for each topic, so that when their research requires a particular skill or tool they know where to look rather than reinventing it; and third, through exposure to trends in HPC architectures and software, to indicate approaches likely to provide performance portability over the next decade or more.

An additional intent of the program is that these participants will share what they have learned with their research groups and colleagues back home, extending the ATPESC’s reach even further.

Participants arrived at the training on Sunday, August 3, diving right into the material with a crash course on logging into the ALCF systems and running simple jobs (a minimal sketch of such a job follows the list below). From there, the event cycled through seven core program tracks relevant to the use of supercomputing systems for large-scale science and engineering research:

  • Hardware architectures
  • Programming models and languages
  • Numerical algorithms and software
  • Toolkits and frameworks
  • Visualization and data analysis
  • Data intensive computing and I/O
  • Community codes and software engineering
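
For readers unfamiliar with those first “simple jobs,” the following is a minimal sketch of the kind of MPI program attendees might compile and run in the opening exercises. It is illustrative only, not taken from the ATPESC materials, and the submission flags in the comment reflect typical Cobalt scheduler usage on ALCF’s Blue Gene/Q systems rather than a prescription.

    /* hello_mpi.c -- a minimal first MPI job (illustrative only).
     * Build:   mpicc -o hello_mpi hello_mpi.c
     * Submit (Cobalt scheduler on ALCF Blue Gene/Q; flags are
     * typical but assumed): qsub -n 64 -t 10 -A <project> ./hello_mpi */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total ranks in job    */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down cleanly     */
        return 0;
    }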

Sessions were conceptualized, developed, and presented by recognized HPC experts from national laboratories, computing institutions, academia, and industry. In fact, there were as many presenters—63 in all—as attendees. James Reinders, parallel programming evangelist at Intel Corp., had this to say after delivering two sessions during Week 1:

“It was a real honor to share my thoughts with the talent assembled. As I interacted with the participants, it really struck me that I was talking with the future of high-performance computing. I look forward to many years ahead in HPC myself, but I met attendees who will take it even farther. These researchers are working to create an amazing future that most people in the world do not even understand is possible. Based on my experiences at ATPESC, I can say the world of HPC will be in good hands.”

Much of Week 1 covered advanced features of the most widely used programming models—MPI and OpenMP—as well as hybrid and accelerator programming and emerging languages and frameworks such as Charm++, Chapel, and the partitioned global address space (PGAS) languages.
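
To give a concrete flavor of that hybrid material, here is a minimal sketch that combines MPI ranks with OpenMP threads. It is an illustration under common assumptions, not an excerpt from the course.

    /* hybrid.c -- minimal MPI + OpenMP hybrid sketch (illustrative).
     * Build:  mpicc -fopenmp -o hybrid hybrid.c                      */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        /* Request an MPI library that tolerates threaded callers. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel   /* each rank spawns a team of threads */
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }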

Another feature of Week 1 was immersion in critical math-related aspects of HPC, with sessions led by presenters from two of the DOE Office of Science’s SciDAC (Scientific Discovery through Advanced Computing) Institutes: the FASTMath (Frameworks, Algorithms, and Scalable Technologies for Mathematics) and SDAV (Scalable Data Management, Analysis, and Visualization) teams. The SciDAC Institutes provide intellectual resources in applied mathematics and computer science, expertise in algorithms and methods, and scientific software tools to advance scientific discovery through modeling and simulation.

In Week 2 of the ATPESC, speakers ranged from Jack Dongarra, who contributed to the design and implementation of open source packages such as LAPACK (Linear Algebra PACKage) and ScaLAPACK, which were among the original critical software libraries for distributed-memory machines, to Aron Ahmadia of the U.S. Army Engineer Research and Development Center, an attendee at last year’s inaugural ATPESC who returned as a presenter this year. Ahmadia travels extensively to teach scientists best practices in writing software code, helping them bypass programming hurdles quickly and move on to their scientific pursuits.
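
To make the role of such libraries concrete, here is a minimal sketch that solves a small linear system Ax = b with LAPACK’s dgesv routine through the LAPACKE C interface; the build line and the matrices are assumptions for illustration, not drawn from the article.

    /* solve.c -- solve a 3x3 system Ax = b with LAPACK's dgesv via
     * the LAPACKE C interface (illustrative; link line varies),
     * e.g.:  cc solve.c -llapacke -llapack -lblas                   */
    #include <lapacke.h>
    #include <stdio.h>

    int main(void)
    {
        /* Row-major 3x3 matrix A and right-hand side b. */
        double a[9] = { 4, 1, 0,
                        1, 3, 1,
                        0, 1, 2 };
        double b[3] = { 5, 5, 3 };
        lapack_int ipiv[3];  /* pivot indices from the LU factorization */

        /* dgesv factors A = P*L*U and solves in place; b becomes x. */
        lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1,
                                        a, 3, ipiv, b, 1);
        if (info != 0) {
            printf("dgesv failed: info = %d\n", (int)info);
            return 1;
        }
        printf("x = (%g, %g, %g)\n", b[0], b[1], b[2]);  /* (1, 1, 1) */
        return 0;
    }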

Week 2 lectures also covered how to build successful community codes and software engineering approaches for scientific codes. These topics have become increasingly important in computational science and engineering research and are seldom covered in university courses. Rounding out Week 2 were presentations and exercises on software tools for debugging and tuning applications on massively parallel computers, on visualization, and on data-intensive computing.

In addition, a number of Argonne’s top computer and computational scientists supported the training aims of the ATPESC directly by leading sessions and developing exercises for attendees. A complete list of lecturers and their affiliations is available at the ATPESC website, along with the presentation slides; videos of all lectures will be posted online in fall 2014.

2015 Will Be the Third Year of the ATPESC

The DOE’s Office of Science has provided funding for the ATPESC for three consecutive summers, beginning with the 2013 program. To receive information about the 2015 ATPESC and other upcoming ALCF events, visit www.alcf.anl.gov and sign up for our monthly newsletter (scroll down to the bottom right and enter your email in the box provided). Or join the ATPESC mailing list by visiting http://extremecomputingtraining.anl.gov/.

Organizations That Participated at the 2014 ATPESC

The 2014 ATPESC featured world-renowned presenters from these organizations:

Academic Institutions: Boston University, ETH (Swiss Federal Institute of Technology), the SciDAC Institutes (FASTMath and SDAV), King Abdullah University of Science and Technology, Northwestern University, Rensselaer Polytechnic Institute, Rice University, Stanford University, UC Berkeley, University of Chicago, University of Houston, University of Illinois at Urbana-Champaign, University of Tennessee, and University of Utah.

National Laboratories/Federal Agencies: Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Sandia National Laboratories, and U.S. Army Engineer Research and Development Center.

Private Industry: Allinea Software, Ltd., Cray, Inc., The HDF Group, IBM, Intel, Kitware, Inc., ParaTools, and Rogue Wave.

This work is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.