
Performance evaluation of convolutional neural network on Tianhe-3 prototype

Published: 01 November 2021

Abstract

Exascale supercomputers will greatly expand the computational resources available to convolutional neural networks (CNNs). The prototype cluster of the Tianhe-3 supercomputer, built on the Chinese-made many-core Phytium-2000+ (FTP) and Matrix-2000+ (MTP) processors, has now gone into service. We evaluated the training performance of CNNs on the Tianhe-3 prototype: image convolution and matrix multiplication were benchmarked on the FTP and MTP to assess single-node performance, and the Allreduce collective was tested to assess the scalability of distributed training on the prototype cluster. We also qualitatively analyzed the performance bottlenecks of CNNs on the FTP and MTP processors using the Roofline model and offer optimization suggestions for improving CNN performance on the Tianhe-3 prototype.
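The Roofline analysis mentioned above bounds a kernel's attainable throughput by the lesser of the machine's compute peak and its memory bandwidth multiplied by the kernel's arithmetic intensity. A minimal sketch of that calculation follows; the peak and bandwidth numbers are purely illustrative placeholders, not measured FTP or MTP figures.

```python
def roofline(peak_gflops: float, peak_bw_gbs: float, intensity: float) -> float:
    """Attainable performance (GFLOP/s) under the Roofline model:
    min(compute roof, memory bandwidth * arithmetic intensity),
    where intensity is in FLOPs per byte of DRAM traffic."""
    return min(peak_gflops, peak_bw_gbs * intensity)

# Hypothetical node parameters (illustrative only).
PEAK = 500.0        # GFLOP/s compute roof
BW = 100.0          # GB/s memory bandwidth
ridge = PEAK / BW   # intensity (FLOPs/byte) where the two roofs meet -> 5.0

# A low-intensity kernel (e.g. element-wise ops) is memory-bound ...
low = roofline(PEAK, BW, 1.0)    # -> 100.0 GFLOP/s
# ... while a high-intensity kernel (e.g. a large GEMM) hits the compute roof.
high = roofline(PEAK, BW, 10.0)  # -> 500.0 GFLOP/s
```

Kernels landing left of the ridge point benefit most from bandwidth-side optimizations (blocking, data reuse), while those to the right benefit from compute-side ones (vectorization, e.g. ARM NEON on the FTP).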



Published In

The Journal of Supercomputing  Volume 77, Issue 11
Nov 2021
1392 pages

Publisher

Kluwer Academic Publishers

United States

Publication History

Published: 01 November 2021
Accepted: 18 March 2021

Author Tags

  1. Tianhe-3 prototype
  2. Convolutional neural network
  3. Performance evaluation
  4. Roofline model

Qualifiers

  • Research-article

Funding Sources

  • the National Key R&D Program of China
