Research article
DOI: 10.1145/2488551.2488553

Enabling MPI interoperability through flexible communication endpoints

Published: 15 September 2013

Abstract

The current MPI model defines a one-to-one relationship between MPI processes and MPI ranks. This model captures many use cases effectively, such as one MPI process per core and one MPI process per node. However, this semantic limits interoperability between MPI and other programming models that use threads within a node. In this paper, we describe an extension to MPI that introduces communication endpoints as a means to relax the one-to-one relationship between processes and threads. Endpoints enable a greater degree of interoperability between MPI and other programming models, and we illustrate their potential for additional performance and computation management benefits through the decoupling of ranks from processes.

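To make the idea concrete, the following is a minimal sketch (not taken from the paper) of how the proposed interface might be used from an MPI+OpenMP program. The call MPI_Comm_create_endpoints and its argument list follow the proposal discussed in this paper; it is not part of the MPI standard, so the exact name and signature should be treated as assumptions, and the surrounding code is illustrative only.

    /*
     * Minimal sketch (illustrative, NOT standard MPI): each process creates one
     * endpoint per OpenMP thread, so every thread obtains its own MPI rank.
     * MPI_Comm_create_endpoints is the extension proposed in this paper; its
     * name and signature here follow the proposal and may differ in detail.
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        int num_threads = omp_get_max_threads();
        MPI_Comm ep_comm[num_threads];          /* one endpoint handle per thread */

        /* Proposed call: collectively creates num_threads endpoints on this
         * process and returns one communicator handle per endpoint. */
        MPI_Comm_create_endpoints(MPI_COMM_WORLD, num_threads,
                                  MPI_INFO_NULL, ep_comm);

        #pragma omp parallel
        {
            int tid = omp_get_thread_num();
            int ep_rank, ep_size;

            /* Ranks are assigned per endpoint, not per process, so each thread
             * sees a distinct rank in the endpoints communicator. */
            MPI_Comm_rank(ep_comm[tid], &ep_rank);
            MPI_Comm_size(ep_comm[tid], &ep_size);
            printf("thread %d has endpoint rank %d of %d\n", tid, ep_rank, ep_size);

            /* Each thread communicates on its own rank, e.g. a simple ring. */
            int token = ep_rank;
            int next  = (ep_rank + 1) % ep_size;
            int prev  = (ep_rank + ep_size - 1) % ep_size;
            MPI_Sendrecv_replace(&token, 1, MPI_INT, next, 0, prev, 0,
                                 ep_comm[tid], MPI_STATUS_IGNORE);

            MPI_Comm_free(&ep_comm[tid]);       /* each thread frees its endpoint */
        }

        MPI_Finalize();
        return 0;
    }

The point of the sketch is that a rank belongs to an endpoint rather than to a process: other processes can address individual threads directly instead of funneling all communication through a single per-process rank, which is the decoupling of ranks from processes described in the abstract.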

Published In

EuroMPI '13: Proceedings of the 20th European MPI Users' Group Meeting
September 2013
289 pages
ISBN:9781450319034
DOI:10.1145/2488551
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Sponsors

  • ARCOS: Computer Architecture and Technology Area, Universidad Carlos III de Madrid

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. MPI
  2. endpoints
  3. hybrid parallel programming
  4. interoperability

Qualifiers

  • Research-article

Conference

EuroMPI '13: 20th European MPI Users' Group Meeting
Sponsor: ARCOS
September 15-18, 2013
Madrid, Spain

Acceptance Rates

EuroMPI '13 paper acceptance rate: 22 of 47 submissions (47%)
Overall acceptance rate: 66 of 139 submissions (47%)
