Argonne’s pioneering computing program pivots to exascale

Authors: Laura Wolf and Gail Pieper


When it comes to the breadth and range of contributions to the field of high-performance computing (HPC), few if any organizations come close to the U.S. Department of Energy's (DOE) Argonne National Laboratory. Argonne has been building advanced parallel computing environments and tools since the 1970s. Today, the laboratory serves as both a center of expertise and a world-renowned source of cutting-edge computing resources used by researchers to tackle the most pressing challenges in science and engineering.

Since its digital automatic computer days in the early 1950s, Argonne has designed and developed algorithms and mathematical software for science, including the Argonne Subroutine Library in the 1960s and, in the 1970s, the so-called "PACKs" (e.g., EISPACK, LINPACK, MINPACK and FUNPACK) as well as the Basic Linear Algebra Subprograms (BLAS). In the 1980s, Argonne established a parallel computing program, nearly a decade before computational science was explicitly recognized as the new paradigm for scientific investigation and the government inaugurated the first major federal program to develop the hardware, software and workforce needed to solve "grand challenge" problems.

A place for experimenting and community building

By the late 1980s, the Argonne Computing Research Facility (ACRF) housed as many as 10 radically different parallel computer designs (nearly every emerging parallel architecture) on which applied mathematicians and computer scientists could explore algorithm interaction, program portability and parallel programming tools and languages. By 1987, Argonne was host