Argonne training program provides crash course in supercomputing

Kevin Fotso (left) and Lipi Gupta (right) take in a technical lecture at the 2023 Argonne Training Program on Extreme-Scale Computing. (Image by Argonne National Laboratory)

Over the past 11 years, 768 researchers have passed through the annual Argonne Training Program on Extreme-Scale Computing.

Argonne computer scientist Yanfei Guo leads a session on Message Passing Interface at the 2023 Argonne Training Program on Extreme-Scale Computing. (Image by Argonne National Laboratory)

Using supercomputers for scientific research comes with a steep learning curve.

There’s specialized hardware that is constantly evolving, scientific codes that must be developed or updated to run efficiently on that ever-changing hardware, and an array of tools and techniques that can help you make the most of your time on a supercomputer.

To help build a new generation of researchers who can use supercomputers for science, the U.S. Department of Energy’s (DOE) Argonne National Laboratory hosts the annual Argonne Training Program on Extreme-Scale Computing (ATPESC). With support from DOE’s Exascale Computing Project, ATPESC brings in some of the world’s top experts in high performance computing (HPC) to teach attendees the ins and outs of using current and next-generation supercomputers.

“ATPESC is all about equipping researchers with the skills and knowledge they need to harness the world’s most powerful supercomputers for groundbreaking science and engineering,” said Ray Loy, ATPESC program director and lead for training, debugging and math libraries at the Argonne Leadership Computing Facility, a DOE Office of Science user facility. “We’ve seen past ATPESC attendees go on to lead their own HPC research projects and play key roles in advancing the development of supercomputing codes and technologies.”

Seventy-five participants from around the world attended the 2023 training program at the Q Center in St. Charles, Illinois. (Image by Argonne National Laboratory)

Aimed at researchers who have some experience with HPC, the program is designed to take their skills to the next level.

“Supercomputing is a very broad topic,” Loy said. “ATPESC was created to fill in the knowledge gaps that often exist in traditional computational science courses and training events offered by universities and other institutions.”

The intensive two-week program includes lectures, hands-on sessions using DOE supercomputers and evening talks. The curriculum covers everything from emerging hardware technologies and software development to code debugging and artificial intelligence (AI) methods.

This summer, ATPESC wrapped up its 11th year, adding another 75 “graduates” to the now 768 who have passed through the program since its launch in 2013. Below, we highlight five of the 2023 attendees’ experiences and how the program will impact their careers moving forward.

Fatima Bagheri

Fatima Bagheri is a National Science Foundation postdoctoral fellow at the University of Texas at Arlington. Her research is focused on using computers to model the magnetic fields of exoplanets — planets that orbit stars outside of our solar system.

“The planets’ magnetic fields are essential for life as we know it on Earth,” Bagheri said. “Understanding their origins and their interactions with their stellar hosts helps us better assess the possibility of extraterrestrial life in the universe.”

Bagheri came to ATPESC to expand her HPC knowledge and learn about methods that could help advance her team’s research into exoplanets. This includes transitioning their modeling efforts to next-generation exascale supercomputers — systems capable of performing a billion billion calculations per second.

“Deploying our code to exascale machines requires revamping our codebase and adapting it to take advantage of hardware accelerators,” Bagheri said. “I know that is no easy task and could require multiple years of teamwork. I plan to communicate the ideas and tools I have learned about at ATPESC to my collaborators to lay out a plan toward reaching this goal.”

Bagheri is also passionate about bringing more researchers from underrepresented communities into scientific computing. By sharing the latest advancements and opportunities in HPC with other women and minority groups, she hopes to help improve access to the educational and computing resources that are needed to excel in computational and data sciences.

“This program was an immensely valuable venue for me to start bridging our underrepresented community of students to the advanced world of HPC,” she said.

Kevin Fotso

Kevin Fotso’s ATPESC experience will benefit both his day job and his Ph.D. research.

As a bioinformatics technical analyst at the University of Colorado Anschutz Medical Campus, he supports the community of researchers who use the university’s Alpine HPC system. Fotso is also finishing his Ph.D. in biomedical engineering, with a focus on using HPC and machine learning for multiple sclerosis lesion modeling and disability prediction.

“I decided to apply for ATPESC to get a global view of scientific computing,” Fotso said. “I was hoping to improve my GPU (graphics processing unit) programming skills while also learning the latest trends and techniques in machine learning, software sustainability, data movement and hardware architectures.”

With his newfound knowledge, Fotso looks forward to helping enhance the workflows used by bioinformatics researchers at the university. He also plans to apply what he learned about machine learning to his efforts to develop methods for predicting the level of physical disability in multiple sclerosis patients.

“ATPESC was a life-changing experience,” Fotso said. “There was always a very constructive dialogue between the lecturers and the participants, which made the class quite interactive and interesting. The program opened up new perspectives for me in terms of efficient HPC computation and software development.”

Lipi Gupta

Lipi Gupta is no stranger to DOE’s powerful supercomputers. While working on a Ph.D. in physics at the University of Chicago, she began using supercomputers at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility at DOE’s Lawrence Berkeley National Laboratory.

Her work to develop a pipeline for near-real-time analysis of experimental data at NERSC spurred an interest in building and supporting research communities. This led to her current role of science engagement engineer at NERSC, where she works with the facility’s user community to provide technical support and design HPC training content for new users.

“Because I pivoted to HPC from physics, there are many parts of traditional HPC that I have not studied or tried,” Gupta said. “I applied to ATPESC to gain the context needed for learning more about all of the different aspects of scientific computing at scale.”

“Getting the lay of the land in terms of the various facets of HPC and how they connect to each other has prepared me to continue learning about HPC myself, and to be able to support other HPC users,” she added.

For anyone considering applying for ATPESC in the future, Gupta offered some words of advice: “Prepare for a fire hose of information, but don’t worry about being a sponge. The material will persist after the two weeks, but the opportunity to discuss HPC topics, use cases and trends in scientific computing with future colleagues will only be available while you attend.”

Caitlin Whitter

Caitlin Whitter, a Ph.D. student in computer science at Purdue University, also came to ATPESC with some previous experience working at a DOE lab. As a DOE Computational Science Graduate Fellow, she spent time at DOE’s Lawrence Berkeley National Laboratory, where she applied a form of AI called graph convolutional networks to the prediction of molecular properties.

“My research lies at the intersection of machine learning and computational chemistry,” she said. “We often use high performance computing because of the size of the datasets involved in our work.”

For Whitter, ATPESC presented an opportunity to learn the fundamentals of HPC from leading experts in the field. She also valued the opportunity to make connections and share ideas with her fellow attendees as well as the ATPESC lecturers.

“I was exposed to topics I had never encountered before and had conversations with people whom I might not have ever met,” she said. “ATPESC is a unique environment from which you will benefit through learning a great deal, gaining hands-on experience that will influence your current and future projects, and meeting interesting people who are working on incredible things.”

Chonglin Zhang

Chonglin Zhang came to ATPESC at a pivotal moment in his career. At the time, he was a research scientist at Rensselaer Polytechnic Institute, developing a simulation code to model plasma turbulence in fusion energy devices. Soon after ATPESC ended, Zhang began a new position as an assistant professor of mechanical engineering at the University of North Dakota.

“Both my previous and current research focus on computer modeling and simulation,” he said. “High performance computing is needed to carry out such research.”

While Zhang already had some experience with supercomputers, he was looking for an all-encompassing training course to bolster his skills in the rapidly evolving field of HPC.

“Since I am not a computer scientist by training, I felt there was a need for me to learn all things HPC in an organized and systematic way,” he said. “I was hoping to gain a deeper understanding of the different components needed for high performance computing and use this knowledge to help me to be a better computational scientist.”

Looking ahead, Zhang said he learned several things that will help propel his work in simulating high-speed aerodynamics, space propulsion and plasma physics.

“I got so many new ideas and thoughts related to computational science and my research, which I plan to explore in my future work,” Zhang said.

ATPESC is funded by the Exascale Computing Project, a collaborative effort of the DOE Office of Science and the National Nuclear Security Administration, and organized by staff from the ALCF.

==========

The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.