DOI: 10.1145/2488551.2488565
Research article

Energy estimation for MPI broadcasting algorithms in large scale HPC systems

Published: 15 September 2013

Abstract

Future supercomputers will gather hundreds of millions of communicating cores, and moving data across such systems will consume substantial energy. In this paper we address the energy consumption of data broadcasting in such large-scale systems. To this end, we propose a framework that estimates the energy consumed by different MPI broadcasting algorithms under various execution settings. Validation results show that our estimations are highly accurate and make it possible to select the least energy-consuming broadcasting algorithm.
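The paper's actual energy model is not reproduced on this page, but a minimal sketch can illustrate how such an estimation framework might work: combine a classic alpha-beta communication cost model for a binomial-tree MPI_Bcast with a per-node power draw. All function names and parameter values below are illustrative assumptions, not taken from the paper.

```python
import math

def bcast_energy_estimate(n_nodes, msg_bytes, p_node_watts, alpha, beta):
    """Hypothetical energy estimate (joules) for a binomial-tree MPI_Bcast.

    Assumed model: the broadcast takes ceil(log2(n_nodes)) communication
    steps, each costing alpha (latency, s) + beta * msg_bytes (s/byte),
    and every node draws p_node_watts for the whole broadcast duration.
    """
    steps = math.ceil(math.log2(n_nodes))
    t_bcast = steps * (alpha + beta * msg_bytes)  # alpha-beta cost model
    return n_nodes * p_node_watts * t_bcast       # energy = power x time, all nodes

# Illustrative parameters: 1024 nodes, 1 MiB message, 100 W per node,
# alpha = 5 us, beta = 0.1 ns/byte
e = bcast_energy_estimate(1024, 2**20, 100.0, 5e-6, 1e-10)  # ~112.5 J here
```

Comparing such estimates across algorithms (binomial tree, scatter-allgather, pipelined) and settings is what would allow picking the least energy-consuming variant, which is the selection the abstract describes.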



Published In

EuroMPI '13: Proceedings of the 20th European MPI Users' Group Meeting
September 2013
289 pages
ISBN:9781450319034
DOI:10.1145/2488551

Sponsors

  • ARCOS: Computer Architecture and Technology Area, Universidad Carlos III de Madrid

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. HPC applications
  2. MPI broadcasting
  3. hybrid programming
  4. power and energy consumption

Conference

EuroMPI '13: 20th European MPI Users' Group Meeting
September 15 - 18, 2013
Madrid, Spain

Acceptance Rates

EuroMPI '13 Paper Acceptance Rate: 22 of 47 submissions, 47%
Overall Acceptance Rate: 66 of 139 submissions, 47%
