2017 Summer Students’ Presentations

Event Sponsor: 
Argonne Leadership Computing Facility Students' Presentation
Start Date: 
Aug 30 2017 - 1:00pm
Building/Room: 
Building 240/Room 1405
Location: 
Argonne National Laboratory
Speaker(s): 
Paul Gressier
Ivana Marcinic
Anna Kim

Time: 1:00 – 1:10 pm
Presenter: Paul Gressier
Education: MS 2018, Engineering Sciences, École Nationale Supérieure d'Électronique, Informatique,
Télécommunications, Mathématique et Mécanique de Bordeaux.
Mentor: Venkat Vishwanath

Optimizing Data Movement with Data Transformation
For years, the computing power of supercomputers has increased continuously, to the extent that the
exascale era is expected around 2019. At the same time, the ratio between computing power and I/O
performance has significantly decreased: data can be generated much faster than it can be transported.
Optimizing data movement is therefore critical for improved performance. A large-scale application such
as HACC, which simulates the mass evolution of the universe, can handle petabytes of data per
experimental campaign. In terms of hardware, current and upcoming architectures feature a very deep
memory hierarchy and multiple tiers of storage with varying performance (MCDRAM, on-node SSDs,
burst buffers). Taking those levels of hierarchy into account is essential. My mission during this
internship at Argonne National Laboratory was to explore ways to optimize data movement through
the different tiers of memory and storage using data transformations. This technique changes the
characteristics of a dataset with no loss of information, or with as little loss as possible. Data
compression, for instance, reduces the amount of data moved from the application to a memory tier,
thereby decreasing the communication cost. During these three months, I implemented a memory
abstraction layer and a framework that performs various transformations, and I tested them on different
leadership-class supercomputers with a benchmark that simulates data movement.
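
The compression idea can be illustrated with a minimal sketch (this is not the actual ALCF framework; it simply uses Python's standard zlib to show a lossless transformation reducing the bytes that must cross a tier boundary):

```python
import zlib

# Repetitive, simulation-like payload standing in for application output.
payload = bytes(range(256)) * 4096  # 1 MiB

# Transform the data before "moving" it to a slower memory/storage tier.
compressed = zlib.compress(payload, level=6)
ratio = len(payload) / len(compressed)

# Fewer bytes travel across the interconnect; decompression on the other
# side restores the dataset exactly, so no information is lost.
restored = zlib.decompress(compressed)
assert restored == payload  # lossless round trip

print(f"moved {len(compressed)} bytes instead of {len(payload)} "
      f"(ratio {ratio:.1f}x)")
```

Real HPC workloads often use lossy floating-point compressors instead, trading a bounded error for much higher ratios; that is the "as little loss as possible" case mentioned above.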

Time: 1:15 – 1:25 pm
Presenter: Ivana Marcinic
Education: PhD 2021, Computer Science, The University of Chicago
Mentor: Venkat Vishwanath

Power Monitoring and Control for Large-Scale HPC Applications on Theta
Power and energy consumption of applications on high-performance computing (HPC) systems is of
paramount importance as we march toward exascale systems and beyond. HPC systems are now being
architected with a diverse set of power monitoring capabilities. At the application and user level, there
are several tools available for basic power profiling of applications on a single node. However, profiling
large-scale multi-node applications is a challenge, as most tools are not designed for these use cases. To
address this, we developed a power monitoring and control library that enables HPC users of all
backgrounds to profile their applications with as little as a few lines of simple code. The library fully
exploits the power monitoring and capping capabilities on Cray and Intel systems and provides detailed
insights into energy and power consumption. It also enables users to control power consumption of
their application at runtime via power limiting. We will discuss the efficacy of our library on the ALCF
Theta system with a range of applications. This work is an important platform for further research on
the topic of power consumption and optimization in HPC systems.
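
To give a sense of the "few lines of simple code" style of interface described above, here is a hypothetical sketch (the names `power_profile` and `read_node_power_watts` are illustrative inventions, not the library's actual API; a real implementation would read Cray PM counters or Intel RAPL and sample in a background thread):

```python
import time
from contextlib import contextmanager

def read_node_power_watts():
    """Stub for a per-node power sensor. On a real system this would read
    a hardware counter; it returns a constant here for illustration."""
    return 200.0

@contextmanager
def power_profile(label):
    """Wrap a code region, sample node power around it, and report an
    energy estimate (average power x elapsed time)."""
    t0 = time.perf_counter()
    samples = [read_node_power_watts()]  # reading before the region
    yield
    samples.append(read_node_power_watts())  # reading after the region
    elapsed = time.perf_counter() - t0
    avg_power = sum(samples) / len(samples)
    print(f"{label}: {elapsed:.3f} s, ~{avg_power:.0f} W, "
          f"~{avg_power * elapsed:.2f} J")

# Usage: instrument an application region with two added lines.
with power_profile("stencil kernel"):
    sum(i * i for i in range(100_000))  # stand-in for application work
```

Power capping, the control side of the library, would analogously set a node- or job-level power limit through the vendor interface before the region runs.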

Time: 1:30 – 1:40 pm
Presenter: Anna Kim
Education: BS 2019, Computer Science & Mathematics, The University of Chicago
Mentor: Hal Finkel

Proxy Scientific Applications for the Future of Supercomputing
Supercomputers are used in situations where vast numbers of operations are needed, such as
approximating weather phenomena and simulating supernovae. In the same way that petascale
computing opened up a new realm of problem solving, exascale computing will do the same by providing
computing speeds 50 times faster than what’s possible today, carrying out quintillions of operations per
second. To aid in the development of the exascale ecosystem, we gathered information on forty proxy
applications, integrated them into Spack, a popular package manager in the HPC world, as well as into
the LLVM test suite, and began profiling these applications.