Message Passing on Data-Parallel Architectures

Event Sponsor: 
Mathematics and Computer Science Division Seminar
Start Date: 
May 16 2008 (All day)
Building/Room: 
Building 221, Conference Room A216
Location: 
Argonne National Laboratory
Speaker(s): 
Jeff Stuart
Speaker(s) Title: 
University of California, Davis
Host: 
Rob Ross

The challenges in implementing a message passing interface usable by data-parallel processors are many. To explore these challenges, we design and implement the "DCGN" (pronounced "decagon") API on NVIDIA GPUs; it is nearly identical to MPI and allows full access to the underlying architecture. We introduce the notion of data-parallel thread-groups as a way to map resources to MPI ranks, using a method that also allows the data-parallel processors to run autonomously from user-written CPU code. To facilitate communication, we use a sleep-based polling system to store and retrieve messages. Unlike previous systems, our method provides both performance and flexibility. By running a test suite of applications with different communication requirements, we find that a tolerable amount of overhead is incurred, between one and five percent depending on the application, and we indicate where this overhead accumulates. We conclude that with innovations in chipsets and drivers, this overhead will be mitigated, yielding performance similar to, if not better than, that of typical CPU-based MPI implementations.
