ALCF’s new Polaris supercomputer to help pave the way to exascale


The new system will help accelerate efforts to prepare applications and workloads for Aurora.

With the arrival of the Polaris supercomputer at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, researchers will have a powerful new tool to prepare for science in the exascale era.

Developed in collaboration with Hewlett Packard Enterprise (HPE), Polaris is a leading-edge system that will give scientists and application developers a platform to test and optimize codes for Aurora, the upcoming Intel-HPE exascale supercomputer to be housed at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Like Aurora, Polaris is a hybrid system equipped with both graphics processing units (GPUs) and central processing units (CPUs).

Aurora and DOE’s other upcoming exascale machines will be capable of performing a billion billion calculations per second. As some of the world’s first exascale-class supercomputers, the DOE systems will combine unprecedented processing power with advanced capabilities for artificial intelligence (AI) and data analysis. This will enable researchers to tackle important scientific challenges at a scale not possible today, such as discovering new materials for clean energy applications, deepening our understanding of the global impacts of climate change, and exploring the increasingly large datasets generated at DOE experimental facilities.

But conducting science on machines that are orders of magnitude more powerful than today’s top supercomputers requires significant preparatory work. With DOE’s Exascale Computing Project (ECP) and initiatives like the ALCF’s Aurora Early Science Program, researchers have been working behind the scenes for years to ensure exascale applications, software, and hardware will be ready for science as soon as the first exascale systems hit the floor. The new Polaris system will be a valuable resource for Argonne researchers and the entire scientific community as they continue to prepare for the next generation of DOE’s high-performance computing (HPC) resources.

“With a CPU-GPU hybrid configuration that is similar to future exascale systems, Polaris will be critical to advancing our exascale readiness efforts,” said Katherine Riley, ALCF director of science. “Polaris gives Argonne its largest GPU-accelerated system to date and will provide a solid foundation for our user community to get ready for Aurora.”

The HPE Apollo Gen10+-based supercomputer is equipped with 560 2nd and 3rd Gen AMD EPYC processors and 2,240 NVIDIA A100 Tensor Core GPUs. The system will deliver approximately 44 petaflops of peak double-precision performance and nearly 1.4 exaops of theoretical AI performance, based on its mixed-precision compute capabilities. Like Aurora, Polaris uses the Slingshot interconnect technology, which is designed to support the simulation, data, and machine learning workloads that will drive science in the exascale era.

“The Polaris hardware is not exactly the same as Aurora, but it has a multi-GPU node architecture,” said Kalyan Kumaran, ALCF director of technology. “Polaris will be a very useful resource for people to test and port their applications on large GPU nodes with exascale-level interconnects.”

The Polaris software environment is equipped with the HPE Cray programming environment, HPE Performance Cluster Manager (HPCM) system software, and the ability to test programming models, such as OpenMP and SYCL, that will be available on Aurora and the next generation of DOE supercomputers. Polaris users will also benefit from NVIDIA’s HPC software development kit, a suite of compilers, libraries, and tools for GPU code development.

The delivery and installation of Polaris began in August. Initially, the testbed system will be dedicated to research teams participating in the ECP, the Aurora Early Science Program, and the ALCF Data Science Program. In 2022, Polaris will be made available to the broader HPC community for a wide range of science and engineering projects. The system is particularly well suited to handle data-intensive research campaigns that utilize machine learning and other AI techniques.

At Argonne, Polaris will also be a key resource for further integrating supercomputing resources with DOE experimental facilities. Initial efforts will be focused on the co-located Advanced Photon Source (APS) and Center for Nanoscale Materials (CNM), both DOE Office of Science User Facilities.

“A lot of the experimental facilities are facing real challenges in terms of the amount of data they’re generating, how they manage it, and even how they use it to steer new experiments,” said ALCF Director Michael Papka. “We’ve been working toward coupling the Argonne user facilities together more closely for the past few years, and Polaris will really help us to accelerate that effort. We’re going to learn a lot about the resources and capabilities that APS and CNM users need and how we can better support their data-intensive research moving forward.”


The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.