The challenge of exascale supercomputing has led to major changes in the architectures of the top supercomputers. Exascale supercomputers have become more parallel and heterogeneous than ever, which brings new challenges to communication libraries such as MPI. The MPI library today not only needs to provide high-performance communication at large scale, but also needs to interact efficiently with different programming models. In this talk, I will present recent improvements and progress in the MPICH project for addressing those challenges. I will also discuss future directions for MPI as it continues to evolve to support emerging applications such as AI for Science.
Dr. Yanfei Guo holds an appointment as a Computer Scientist at Argonne National Laboratory, where he is a member of the Programming Models and Runtime Systems group. He has been working on multiple software projects including MPICH, Yaksa, and OSHMPI. His research interests include parallel programming models and runtime systems for extreme-scale supercomputing systems, data-intensive computing, and cloud computing systems. Yanfei received the best paper award at the USENIX International Conference on Autonomic Computing 2013 (ICAC'13). His work on programming models and runtime systems has been published in peer-reviewed conferences and journals including the ACM/IEEE Supercomputing Conference (SC'14, SC'15) and IEEE Transactions on Parallel and Distributed Systems (TPDS). Yanfei has delivered eight tutorials on MPI to audiences ranging from university students to researchers. He has served as a reviewer and technical program committee member for many journals and conferences, and is a member of the IEEE and the ACM.