Improving GPU Performance Through Resource Sharing

Published: 31 May 2016

Abstract

Graphics Processing Units (GPUs) consist of Streaming Multiprocessors (SMs) that achieve high throughput by running a large number of threads and context switching among them to hide execution latencies. The number of thread blocks, and hence the number of threads, that can be launched on an SM depends on the resource usage of the thread blocks, e.g., the number of registers and the amount of shared memory they require. Since threads are allocated to an SM at thread block granularity, some of the resources may not be used up completely and are therefore wasted.
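
To make the block-granularity effect concrete, here is a small host-side sketch (ours, not from the paper; the per-SM limits and the kernel's per-block usage are hypothetical, loosely modeled on a Fermi-class SM) that computes how many whole thread blocks fit on an SM and how much of each resource is left idle:

    // Hypothetical occupancy calculation: blocks are admitted only at
    // whole-block granularity, so leftover resources sit idle.
    #include <cstdio>
    #include <algorithm>

    int main() {
        // Hypothetical per-SM hardware limits.
        const int regs_per_sm       = 32768;   // 32K 32-bit registers
        const int smem_per_sm       = 49152;   // 48 KB scratchpad (shared memory)
        const int max_blocks_per_sm = 8;

        // Hypothetical per-block usage of some kernel.
        const int threads_per_block = 256;
        const int regs_per_thread   = 40;
        const int smem_per_block    = 12288;   // 12 KB

        const int regs_per_block = threads_per_block * regs_per_thread; // 10240

        int by_regs  = regs_per_sm / regs_per_block;                    // 3
        int by_smem  = smem_per_sm / smem_per_block;                    // 4
        int resident = std::min({by_regs, by_smem, max_blocks_per_sm}); // 3

        int idle_regs = regs_per_sm - resident * regs_per_block;        // 2048
        int idle_smem = smem_per_sm - resident * smem_per_block;        // 12288

        printf("resident blocks: %d\n", resident);
        printf("idle registers:  %d (%.1f%%)\n", idle_regs,
               100.0 * idle_regs / regs_per_sm);
        printf("idle scratchpad: %d bytes (%.1f%%)\n", idle_smem,
               100.0 * idle_smem / smem_per_sm);
        return 0;
    }

With these numbers the register file is the binding constraint: only three blocks are resident, 2048 registers sit idle, and a quarter of the scratchpad is wasted even though it could accommodate a fourth block.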
We propose an approach that utilizes these wasted resources by sharing them across thread blocks, allowing more thread blocks to be launched on an SM. We demonstrate the effectiveness of the approach for two resources: register sharing and scratchpad (shared memory) sharing. We further propose optimizations that hide long execution latencies, reducing the number of stall cycles. We implemented our approach in the GPGPU-Sim simulator and validated it experimentally on 19 applications from 4 benchmark suites: GPGPU-Sim, Rodinia, CUDA-SDK, and Parboil. Applications that underutilize registers show a maximum improvement of 24% and an average improvement of 11% with register sharing; applications that underutilize scratchpad show a maximum improvement of 30% and an average improvement of 12.5% with scratchpad sharing. The remaining applications, which waste no resources, perform similarly to the baseline.
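
The effect of sharing can be sketched in the same style (again with made-up numbers; the exact admission and scheduling rules are the paper's, not this sketch's). An extra block is admitted whose warps run on the idle registers where possible and otherwise wait on registers shared with a co-resident block:

    // Hypothetical continuation of the sketch above: admit a fourth block
    // and count how many of its warps can run on idle registers right away.
    #include <cstdio>

    int main() {
        const int idle_regs       = 2048;      // leftover registers from above
        const int regs_per_warp   = 32 * 40;   // 32 threads/warp * 40 regs/thread
        const int warps_per_block = 256 / 32;  // 8 warps in the extra block

        // Warps that fit entirely in the idle registers can issue immediately;
        // the rest stall until shared registers are released by another block.
        int eager_warps   = idle_regs / regs_per_warp;     // 1
        int sharing_warps = warps_per_block - eager_warps; // 7

        printf("extra block: %d warp(s) on idle registers, %d warp(s) "
               "waiting on shared registers\n", eager_warps, sharing_warps);
        return 0;
    }

Even one extra block per SM gives the warp scheduler more warps to switch to, which is where the latency-hiding benefit comes from; the scheduling optimizations mentioned above aim to keep the warps waiting on shared resources from introducing new stalls.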

References

[1] CUDA C Programming Guide. https://docs.nvidia.com/cuda/cuda-c-programming-guide/.
[2] CUDA-SDK. http://docs.nvidia.com/cuda/cuda-samples.
[3] GPGPU-Sim. http://www.gpgpu-sim.org.
[4] Parboil Benchmarks. http://impact.crhc.illinois.edu/Parboil/parboil.aspx.
[5] M. Abdel-Majeed and M. Annavaram. Warped Register File: A Power Efficient Register File for GPGPUs. In HPCA, 2013.
[6] J. Anantpur and R. Govindarajan. Taming Control Divergence in GPUs through Control Flow Linearization. In CC, 2014.
[7] A. Bakhoda, G. Yuan, W. Fung, H. Wong, and T. Aamodt. Analyzing CUDA Workloads Using a Detailed GPU Simulator. In ISPASS, 2009.
[8] N. Brunie, S. Collange, and G. Diamos. Simultaneous Branch and Warp Interweaving for Sustained GPU Performance. In ISCA, 2012.
[9] S. Che, M. Boyer, J. Meng, D. Tarjan, J. Sheaffer, S.-H. Lee, and K. Skadron. Rodinia: A Benchmark Suite for Heterogeneous Computing. In IISWC, 2009.
[10] G. Diamos, B. Ashbaugh, S. Maiyuran, A. Kerr, H. Wu, and S. Yalamanchili. SIMD Re-convergence at Thread Frontiers. In MICRO, 2011.
[11] W. W. L. Fung and T. M. Aamodt. Thread Block Compaction for Efficient SIMT Control Flow. In HPCA, 2011.
[12] W. W. L. Fung, I. Sham, G. Yuan, and T. M. Aamodt. Dynamic Warp Formation and Scheduling for Efficient GPU Control Flow. In MICRO, 2007.
[13] M. Gebhart, D. R. Johnson, D. Tarjan, S. W. Keckler, W. J. Dally, E. Lindholm, and K. Skadron. A Hierarchical Thread Scheduler and Register File for Energy-Efficient Throughput Processors. ACM Trans. Comput. Syst., 2012.
[14] M. Gebhart, S. W. Keckler, B. Khailany, R. Krashinsky, and W. J. Dally. Unifying Primary Cache, Scratch, and Register File Memories in a Throughput Processor. In MICRO, 2012.
[15] T. D. Han and T. S. Abdelrahman. Reducing Branch Divergence in GPU Programs. In GPGPU-4, 2011.
[16] V. Jatala, J. Anantpur, and A. Karkare. Improving GPU Performance Through Resource Sharing. CoRR, http://arxiv.org/abs/1503.05694, 2015.
[17] H. Jeon, G. S. Ravi, N. S. Kim, and M. Annavaram. GPU Register File Virtualization. In MICRO, 2015.
[18] A. Jog, O. Kayiran, N. Chidambaram Nachiappan, A. K. Mishra, M. T. Kandemir, O. Mutlu, R. Iyer, and C. R. Das. OWL: Cooperative Thread Array Aware Scheduling Techniques for Improving GPGPU Performance. In ASPLOS, 2013.
[19] O. Kayiran, A. Jog, M. Kandemir, and C. Das. Neither More Nor Less: Optimizing Thread-level Parallelism for GPGPUs. In PACT, 2013.
[20] M. Lee, S. Song, J. Moon, J. Kim, W. Seo, Y. Cho, and S. Ryu. Improving GPGPU Resource Utilization Through Alternative Thread Block Scheduling. In HPCA, 2014.
[21] S.-Y. Lee, A. Arunkumar, and C.-J. Wu. CAWA: Coordinated Warp Scheduling and Cache Prioritization for Critical Warp Acceleration of GPGPU Workloads. In ISCA, 2015.
[22] S.-Y. Lee and C.-J. Wu. CAWS: Criticality-aware Warp Scheduling for GPGPU Workloads. In PACT, 2014.
[23] D. Li, M. Rhu, D. R. Johnson, M. O'Connor, M. Erez, D. Burger, D. S. Fussell, and S. W. Keckler. Priority-based Cache Allocation in Throughput Processors. In HPCA, 2015.
[24] T. Li, V. K. Narayana, E. El-Araby, and T. El-Ghazawi. GPU Resource Sharing and Virtualization on High Performance Computing Systems. In ICPP, 2011.
[25] W. Ma and G. Agrawal. An Integer Programming Framework for Optimizing Shared Memory Use on GPUs. In PACT, 2010.
[26] J. Meng, D. Tarjan, and K. Skadron. Dynamic Warp Subdivision for Integrated Branch and Memory Divergence Tolerance. In ISCA, 2010.
[27] V. Narasiman, M. Shebanow, C. J. Lee, R. Miftakhutdinov, O. Mutlu, and Y. N. Patt. Improving GPU Performance via Large Warps and Two-level Warp Scheduling. In MICRO, 2011.
[28] M. Rhu and M. Erez. CAPRI: Prediction of Compaction-adequacy for Handling Control-divergence in GPGPU Architectures. In ISCA, 2012.
[29] T. G. Rogers, M. O'Connor, and T. M. Aamodt. Cache-Conscious Wavefront Scheduling. In MICRO, 2012.
[30] A. Sethia, D. A. Jamshidi, and S. Mahlke. Mascar: Speeding Up GPU Warps by Reducing Memory Pitstops. In HPCA, 2015.
[31] D. Tarjan and K. Skadron. On Demand Register Allocation and Deallocation for a Multithreaded Processor. US Patent App. 12/649,238, 2011.
[32] P. Xiang, Y. Yang, and H. Zhou. Warp-level Divergence in GPUs: Characterization, Impact, and Mitigation. In HPCA, 2014.
[33] X. Xie, Y. Liang, X. Li, Y. Wu, G. Sun, T. Wang, and D. Fan. Enabling Coordinated Register Allocation and Thread-level Parallelism Optimization for GPUs. In MICRO, 2015.
[34] Y. Yang, P. Xiang, M. Mantor, N. Rubin, and H. Zhou. Shared Memory Multiplexing: A Novel Way to Improve GPGPU Throughput. In PACT, 2012.

Published In

HPDC '16: Proceedings of the 25th ACM International Symposium on High-Performance Parallel and Distributed Computing
May 2016
302 pages
ISBN: 9781450343145
DOI: 10.1145/2907294

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. register sharing
  2. scratchpad sharing
  3. thread level parallelism
  4. warp scheduling

Qualifiers

  • Research-article

Funding Sources

  • Google India Private Limited
  • TCS

Conference

HPDC '16

Acceptance Rates

HPDC '16 paper acceptance rate: 20 of 129 submissions (16%)
Overall acceptance rate: 166 of 966 submissions (17%)

