
SyncNN: Evaluating and Accelerating Spiking Neural Networks on FPGAs

Published: 09 December 2022

Abstract

Compared to conventional artificial neural networks, spiking neural networks (SNNs) are more biologically plausible and require less computation, thanks to the event-driven nature of their spiking neurons. However, the default asynchronous execution of SNNs also poses great challenges to accelerating them on FPGAs.
In this work, we present a novel synchronous approach for rate-encoding-based SNNs, which is more hardware-friendly than conventional asynchronous approaches. We first quantitatively evaluate, and mathematically prove, that the proposed synchronous approach and asynchronous implementation alternatives of rate-encoding-based SNNs achieve similar inference accuracy, and we highlight the computational performance advantage of SyncNN over an asynchronous approach. We then design and implement the SyncNN framework to accelerate SNNs on Xilinx ARM-FPGA SoCs in a synchronous fashion. To improve computation and memory access efficiency, we quantize the network weights to 16-bit, 8-bit, and 4-bit fixed-point values using SNN-friendly quantization techniques. Moreover, to fully exploit the event-driven nature of SNNs, we encode only the activated neurons by recording their positions and corresponding numbers of spikes, instead of using the common binary encoding (i.e., 1 for a spike and 0 for no spike).
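
As a minimal sketch of this sparse spike encoding, the C++ fragment below converts a dense per-neuron spike-count vector into (position, spike count) pairs, so that downstream work becomes proportional to the number of activated neurons only. The SpikeEvent struct and encode_spikes function are illustrative names for this article, not the paper's actual data structures.

#include <cstdint>
#include <vector>

// One activated neuron in a synchronous time step.
struct SpikeEvent {
    uint32_t position;  // index of the activated neuron
    uint16_t count;     // number of spikes it fired in this step
};

// Convert a dense spike-count vector into the sparse (position, count) encoding.
std::vector<SpikeEvent> encode_spikes(const std::vector<uint16_t>& spike_counts) {
    std::vector<SpikeEvent> events;
    for (uint32_t i = 0; i < spike_counts.size(); ++i) {
        if (spike_counts[i] > 0) {          // silent neurons are skipped entirely
            events.push_back({i, spike_counts[i]});
        }
    }
    return events;
}
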
For the encoded neurons, whose access patterns are dynamic and irregular, we design parameterized compute engines to accelerate their processing on the FPGA, exploring various parallelization strategies and memory access optimizations. Our experimental results on multiple Xilinx ARM-FPGA SoC boards demonstrate that SyncNN scales to multiple networks, such as LeNet, Network in Network, and VGG, on datasets such as MNIST, SVHN, and CIFAR-10. SyncNN achieves both competitive accuracy (99.6%) and state-of-the-art performance (13,086 frames per second) on the MNIST dataset. Finally, we compare SyncNN with conventional CNNs deployed through Vitis AI and find that SyncNN achieves similar accuracy and better performance for image classification with small networks.
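
To make the synchronous, event-driven computation concrete, the sketch below shows how a fully connected layer might consume the sparse encoding in one synchronous step: each active input contributes (weight × spike count) to every downstream membrane potential, and neurons crossing the firing threshold emit output events for the next layer. It reuses the SpikeEvent struct from the sketch above; the quantized weight type, the simple integrate-and-fire rule, and the soft reset are assumptions for illustration, not the paper's exact compute-engine design.

#include <cstdint>
#include <vector>

std::vector<SpikeEvent> layer_forward(
        const std::vector<SpikeEvent>& in_events,
        const std::vector<std::vector<int16_t>>& weights,  // [input][output], fixed-point
        std::vector<int32_t>& membrane,                    // persistent membrane potentials
        int32_t threshold) {
    // Accumulate contributions from activated inputs only (the event-driven saving).
    for (const SpikeEvent& e : in_events) {
        const std::vector<int16_t>& w = weights[e.position];
        for (uint32_t j = 0; j < membrane.size(); ++j) {
            membrane[j] += static_cast<int32_t>(w[j]) * e.count;
        }
    }
    // Fire once per threshold crossing and keep the residual charge (soft reset).
    std::vector<SpikeEvent> out_events;
    for (uint32_t j = 0; j < membrane.size(); ++j) {
        if (membrane[j] >= threshold) {
            uint16_t n = static_cast<uint16_t>(membrane[j] / threshold);
            out_events.push_back({j, n});
            membrane[j] -= static_cast<int32_t>(n) * threshold;
        }
    }
    return out_events;
}
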



Published In

ACM Transactions on Reconfigurable Technology and Systems, Volume 15, Issue 4
December 2022
476 pages
ISSN: 1936-7406
EISSN: 1936-7414
DOI: 10.1145/3540252
Editor: Deming Chen

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 09 December 2022
Online AM: 09 February 2022
Accepted: 25 January 2022
Revised: 18 December 2021
Received: 17 August 2021
Published in TRETS Volume 15, Issue 4


Author Tags

  1. Spiking neural network
  2. deep learning
  3. hardware acceleration
  4. FPGA
  5. synchronous execution

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant
  • Canada Foundation for Innovation John R. Evans Leaders Fund
  • British Columbia Knowledge Development Fund
  • Simon Fraser University New Faculty Start-up
