Friday, February 22, 2008

Challenges For The Message-Passing Interface In The PetaFLOPS Era

William D. Gropp

MPI has been a successful parallel programming model. The combination of performance, scalability, composability, and support for libraries has made it relatively easy to build complex parallel applications. Further, these applications scale well, with some applications already running on systems with over 128,000 processors. However, MPI is by no means the perfect parallel programming model. This talk will review the strengths of MPI with respect to other parallel programming models and discuss some of the weaknesses and limitations of MPI in the areas of performance, productivity, scalability, and interoperability. The impact of recent developments in computing, such as multicore (and manycore) processors, better networks, and global-view programming models on both MPI and applications that use MPI will be covered, as well as lessons from the success of MPI that are relevant to future progress in parallel computing. The talk will conclude with a discussion of what extensions (or even changes) may be needed in MPI, and what issues should be addressed by combining MPI with other parallel programming models.
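For readers less familiar with MPI, here is a minimal sketch (not from the talk) of the message-passing model the abstract refers to: two processes exchanging a single value with point-to-point send and receive calls. It assumes a standard MPI installation; compile with mpicc and run with mpiexec -n 2.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "Run with at least 2 processes\n");
            MPI_Finalize();
            return 1;
        }

        if (rank == 0) {
            int value = 42;
            /* Rank 0 sends one integer (tag 0) to rank 1. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            /* Rank 1 receives the integer from rank 0. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Explicit sends and receives like these are the building blocks whose performance, scalability, and composability the talk examines.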


Challenges for the Message Passing Interface in the Petaflops Era, University of Illinois at Urbana-Champaign, March 26, 2007.
