The Argonne Training Program for Extreme-Scale Computing (ATPESC) culminated on August 9, 2013, with a final exam, distribution of certificates of completion, and rounds of handshakes from the 63 bleary-eyed but enthused attendees of the two-week program designed to train the next generation of supercomputer users. Citing “intensity” and “caliber of instructors” as key differentiators between this and other training programs they had attended, students overwhelmingly agreed that they had received a solid overview of trends and topics of importance in extreme-scale computing today.
The 2013 ATPESC class was composed primarily of PhD students and postdocs from a variety of disciplines and institutions, from across the country and around the world. Students were chosen from an applicant pool of more than 150 individuals vying for the opportunity to learn from renowned HPC experts how to design, implement, and execute large-scale computational science and engineering applications that run effectively across a variety of supercomputing platforms—including methodologies expected to be applicable to future exascale systems. As part of their training, students were given access to some of today’s most powerful supercomputing resources, such as Argonne’s IBM Blue Gene/Q systems Vesta and Mira, Oak Ridge’s Cray system, Titan, and Georgia Tech’s 264-node cluster, Keeneland.
The event was organized into core program tracks relevant to the use of supercomputing systems for large-scale science and engineering research, including:
- Supercomputing Architecture Trends
- Programming Languages, Programming Models
- FASTMath – Mathematical Software and Numerical Algorithms
- Toolkits, Frameworks, and Community Tools
- I/O and Big Data
- Community Codes and Case Studies
Program track content was conceptualized, developed, and presented by teams of recognized HPC experts from national laboratories, computing institutions, academia, and industry. A list of speakers and their presentations is available at the ATPESC event website.
The 2013 ATPESC days were filled with back-to-back lectures and hands-on exercises. Talks by invited dinner speakers gave students a respite from the days’ intensity, while providing them with an overview of current HPC trends and topics.
Paul Messina, Director of Science for the Argonne Leadership Computing Facility (ALCF), conceived and secured funding for the training program from the U.S. Department of Energy’s Office of Science as a way to grow the user community of today’s high-end systems and of those expected to be available in 2017 and beyond.
Said Messina, “This program fills a gap that exists today in the training that computational scientists receive through formal education or other shorter courses. Preparing our next generation of users is critical to maximizing the breakthrough science that our leadership-class resources can facilitate.”
The Argonne Training Program for Extreme-Scale Computing received funding from the DOE's Office of Science for three consecutive summers—beginning in 2013. To receive information about ATPESC 2014 and other upcoming Argonne Leadership Computing Facility events, visit www.alcf.anl.gov and sign up for our monthly newsletter (scroll down to the bottom right and enter your email in the box provided). Or, join the ATPESC mailing list by visiting http://extremecomputingtraining.anl.gov/
ATPESC Student Snapshots
Jeff is a fourth-year PhD candidate in Mechanical Engineering at the California Institute of Technology studying coarse-grained molecular dynamics, specifically the atomistic-to-continuum transition through the quasicontinuum method. His work places a heavy emphasis on the application of high-performance computing to systems of engineering interest.
“I study an area of physics that would be out of reach without HPC. I applied for the program because I love HPC, program HPC systems all day, taught an HPC class, and yet I hadn’t had formal training in it for lack of opportunity and availability. The class was exactly what I was looking for: background and foundational knowledge, how to leverage the current state of the art, and a glimpse at what the future of HPC looks like–all from leaders in the field. It was invaluable.”
Nikela is a first-year PhD student in Parallel Computing Systems and is part of the PDSG research group in the Computing Systems Laboratory at the School of Electrical Engineering of the National Technical University of Athens, Greece. Her research focuses on performance modeling, optimization, and auto-tuning for large-scale applications. One of her recent projects involved optimizing parallel lattice QCD codes, running under PRACE-2IP with access to the Cray XK7 supercomputer at ETH Zurich.
“Within the two weeks, I was introduced to novel and trending parallel software and hardware in classes given by the very same outstanding researchers who conceived and developed them, and had the chance to meet, discuss and exchange ideas with prominent students and exceptional researchers of the HPC community.”
Fredrik is a second-year PhD student in Computer Science at the MIT Computer Science and Artificial Intelligence Laboratory, where he is a member of the D-TEC X-Stack project, working on declarative and operational programming model constructs and compiler techniques to support practical higher-level, tunable, and performance-portable programming on extreme-scale machines.
“As a computer scientist, the Programming Model and Application tracks were extremely helpful in giving me a greater understanding of how the algorithms used on extreme-scale machines fit together, and what the challenges of implementing them using contemporary programming models are. I plan to use what I learned at ATPESC to inform my research and to work to transfer the knowledge to the MIT X-Stack team. I hope this will help us develop technologies that solve real problems faced by the HPC community."
Jordan works as a research engineer in the DOE’s National Energy Technology Laboratory’s (NETL) Computational and Basic Sciences Division, where he is a user and developer of MFIX, an open-source reacting multiphase CFD code.
“My primary goal for applying to attend ATPESC was to gain knowledge that will help me refine MFIX so we can fully utilize the NETL-SBEUC, a 24K-core system used for energy-related simulations. I intend to incorporate many of the strategies I learned at ATPESC into my regular development activities.”