Research Article
Open Access

Spiking Neural Networks in Spintronic Computational RAM

Published: 29 September 2021

Abstract

Spiking Neural Networks (SNNs) represent a biologically inspired computation model capable of emulating neural computation in the human brain and brain-like structures. Their main promise is very low energy consumption. Classic von Neumann architecture based SNN accelerators in hardware, however, often fall short of addressing demanding computation and data transfer requirements efficiently at scale. In this article, we propose a promising alternative to overcome scalability limitations, based on a network of in-memory SNN accelerators, which can reduce the energy consumption by up to 150.25× when compared to a representative ASIC solution. The significant reduction in energy comes from two key aspects of the hardware design that minimize data communication overheads: (1) each node represents an in-memory SNN accelerator based on a spintronic Computational RAM (CRAM) array, and (2) a novel De Bruijn graph based architecture establishes the SNN array connectivity.
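
The scalability appeal of the De Bruijn topology is that node degree stays constant while network diameter grows only logarithmically with node count: in a generalized De Bruijn graph on N nodes with degree d, node x links to nodes (d·x + k) mod N for k = 0, ..., d−1, so any node is reachable within ⌈log_d N⌉ hops. The following minimal sketch of this successor rule is illustrative only (not code from the article; the function name and parameters are our own assumptions):

    # Generalized De Bruijn connectivity sketch (illustrative assumption,
    # not taken from the article): node x in a d-ary De Bruijn graph on
    # n_nodes nodes has successors (d*x + k) mod n_nodes for k = 0..d-1,
    # giving constant out-degree d and diameter <= ceil(log_d(n_nodes)).

    def de_bruijn_successors(x: int, d: int, n_nodes: int) -> list[int]:
        """Return the d successor node IDs of node x."""
        return [(d * x + k) % n_nodes for k in range(d)]

    if __name__ == "__main__":
        d, n_nodes = 2, 8  # a binary De Bruijn graph on 8 accelerator nodes
        for x in range(n_nodes):
            print(f"node {x} -> {de_bruijn_successors(x, d, n_nodes)}")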

    Published In

    ACM Transactions on Architecture and Code Optimization, Volume 18, Issue 4
    December 2021
    497 pages
    ISSN: 1544-3566
    EISSN: 1544-3973
    DOI: 10.1145/3476575

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 29 September 2021
    Accepted: 01 July 2021
    Revised: 01 May 2021
    Received: 01 November 2020
    Published in TACO Volume 18, Issue 4

    Author Tags

    1. Processing in memory
    2. Computational random access memory
    3. Non-volatile memory
    4. Spiking neural networks

    Qualifiers

    • Research-article
    • Research
    • Refereed
