Parallel Programming with MPI

Event Sponsor: 
Mathematics and Computer Science Division Tutorial
Start Date: 
Jun 26 2017 - 10:00am
Building/Room: 
Building 240/Room 1416
Location: 
Argonne National Laboratory
Host: 
Pavan Balaji, Ken Raffenetti, Halim Amer, and Yanfei Guo

Introduction to MPI

Abstract: The Message Passing Interface (MPI) has been the de facto standard for parallel programming for nearly two decades, and knowledge of MPI is considered a prerequisite for most people aiming for a career in parallel programming. This beginner-level tutorial introduces parallel programming with MPI. It will provide an overview of MPI, the features it offers, current MPI implementations, and its suitability for parallel computing environments. Alongside this overview, the tutorial will also discuss good programming practices and pitfalls to watch out for in MPI programming. Finally, it will present several application case studies, including examples from nuclear physics, combustion, and quantum chemistry, and show how they use MPI.

Tutorial Goals: MPI is widely recognized as the de facto standard for parallel programming. Even though knowledge of MPI is increasingly becoming a prerequisite for researchers and developers involved in parallel programming at institutions including universities, research labs, and industry, very few of these institutions offer formal training in MPI. The goal of this tutorial is to teach MPI to users with basic programming knowledge and to equip them to get started with MPI programming. Based on these emerging trends and the associated challenges, the goals of this tutorial are:

  • Making the attendees familiar with MPI programming and its associated benefits
  • Providing an overview of available MPI implementations and the status of their capabilities with respect to the MPI standard
  • Illustrating MPI usage models from example application domains, including nuclear physics, computational chemistry, and combustion

Targeted Audience: This tutorial is aimed at people working in the areas of high-performance communication and I/O, storage, networking, middleware, programming models, and applications related to high-end systems. Specific audiences include:

  • Newcomers to the field of distributed memory programming models who are interested in familiarizing themselves with MPI
  • Managers and administrators responsible for setting up next generation high-end systems and facilities in their organizations/laboratories
  • Scientists, engineers, and researchers working on the design and development of next generation high-end systems including clusters, data centers, and storage centers
  • System administrators of large-scale clusters
  • Developers of next generation parallel middleware and applications

Advanced Parallel Programming with MPI-3

Abstract: The Message Passing Interface (MPI) has been the de facto standard for parallel programming for nearly two decades. However, the vast majority of applications rely only on basic MPI-1 features without taking advantage of the rich functionality the rest of the standard provides. Furthermore, MPI-3 (released September 2012) introduced a large number of new features, including efficient one-sided communication, support for external tools, non-blocking collective operations, and improved support for topology-aware data movement. This advanced-level tutorial will provide an overview of these powerful features, especially those in MPI-2 and MPI-3.

Tutorial Goals: MPI is widely recognized as the de facto standard for parallel programming. Even though knowledge of MPI is increasingly becoming a prerequisite for researchers and developers involved in parallel programming at institutions including universities, research labs, and industry, very few of these institutions offer formal training in MPI. The goal of this tutorial is to educate users who already have MPI programming experience and to equip them with knowledge of the powerful techniques available in the various MPI versions, including the MPI-3 standard. Based on these emerging trends and the associated challenges, the goals of this tutorial are:

  • Providing an overview of current large-scale applications and data movement efficiency issues they are facing
  • Providing an overview of the advanced powerful features available in MPI-2 and MPI-3
  • Illustrating how scientists, researchers, and developers can use these features to design new applications

Targeted Audience: This tutorial is aimed at people working in the areas of high-performance communication and I/O, storage, networking, middleware, programming models, and applications related to high-end systems. Specific audiences include:

  • Scientists, engineers, and researchers working on the design and development of next generation high-end systems, including clusters, data centers, and storage centers
  • System administrators of large-scale clusters
  • Developers of next generation middleware and applications


Miscellaneous Information: 

Tutorial registration is free, and the event is open to the public. An Argonne Gate Pass is not required. To help plan for room size and light refreshments, please RSVP to Mary Dzielski by June 22nd (Thursday).

NOTE: This year's tutorials are part of the Scaling to Petascale Institute. From June 26-30, 2017, the free week-long institute will prepare participants to scale simulations and data analytics programs to petascale-class computing systems. Participants must register to attend one of the host sites or to watch the sessions live on YouTube. For details, visit: https://bluewaters.ncsa.illinois.edu/petascale-summer-institute. Only register for the Scaling to Petascale Institute if you plan to attend more than just the MPI tutorials on Monday.