DOI: 10.1145/2897937.2898011

Dynamic energy-accuracy trade-off using stochastic computing in deep neural networks

Published: 05 June 2016

Abstract

This paper presents an efficient DNN design based on stochastic computing. Observing that directly adopting stochastic computing in DNNs poses several challenges, including random error fluctuation, range limitation, and accumulation overhead, we address these problems by removing near-zero weights, applying weight scaling, and integrating the activation function into the accumulator. The approach also enables early decision termination on a fixed hardware design by exploiting the progressive precision characteristic of stochastic computing, which was difficult to achieve with existing approaches. Experimental results show that our approach outperforms conventional binary logic in gate area, latency, and power consumption.
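The progressive precision property the abstract exploits can be illustrated with a small sketch (not the paper's implementation, and the function names are illustrative): in stochastic computing, a value in [-1, 1] is encoded as the bias of a random bit-stream, bipolar multiplication reduces to a bitwise XNOR, and the estimate of the result sharpens as more stream bits are observed, which is what makes early decision termination possible on fixed hardware.

```python
import random

def to_bipolar_stream(x, length, rng):
    """Encode x in [-1, 1] as a bipolar stochastic bit-stream:
    P(bit = 1) = (x + 1) / 2."""
    p = (x + 1.0) / 2.0
    return [1 if rng.random() < p else 0 for _ in range(length)]

def decode_bipolar(bits):
    """Recover the encoded value: x = 2 * P(1) - 1."""
    return 2.0 * sum(bits) / len(bits) - 1.0

def xnor_multiply(a_bits, b_bits):
    """Bipolar stochastic multiplication is a bitwise XNOR of the streams."""
    return [1 - (a ^ b) for a, b in zip(a_bits, b_bits)]

rng = random.Random(0)
n = 4096
a, b = 0.5, -0.6
prod = xnor_multiply(to_bipolar_stream(a, n, rng),
                     to_bipolar_stream(b, n, rng))

# Progressive precision: the running estimate converges toward a * b = -0.3
# as the prefix grows, so a classification decision can be terminated early
# once the estimate is stable enough.
for k in (64, 512, 4096):
    print(k, round(decode_bipolar(prod[:k]), 3))
```

Because the accuracy of the decoded value grows with stream length, truncating the stream trades accuracy for latency and energy without any change to the datapath, which is the dynamic trade-off the title refers to.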



      Published In

DAC '16: Proceedings of the 53rd Annual Design Automation Conference
June 2016, 1048 pages
ISBN: 9781450342360
DOI: 10.1145/2897937

Publisher

Association for Computing Machinery, New York, NY, United States

      Author Tags

      1. deep learning
      2. deep neural networks
      3. energy efficiency
      4. stochastic computing

      Qualifiers

      • Research-article

      Conference

      DAC '16

      Acceptance Rates

Overall acceptance rate: 1,770 of 5,499 submissions (32%)
