Extraction of Significant Features by Fixed-Weight Layer of Processing Elements for the Development of an Efficient Spiking Neural Network Classifier
Abstract
1. Introduction
- We demonstrate the ability of the proposed layer to perform effective reduction in the dimension of the input data vectors without loss of classification performance;
- We explore the tradeoff between the number of spikes needed for encoding the input information and classification performance to find input encoding parameters that minimize the number of spikes while maintaining competitive classification performance;
- We compare different methods for initializing the weights of the proposed layer.
- Layers with random or logistic function-generated weights can efficiently extract meaningful features from input data;
- Logistic functions enable achieving high accuracy with less result dispersion.
2. Datasets
- The MNIST dataset contains 60,000 training and 10,000 testing grayscale images of size 28 × 28 pixels, representing handwritten digits from 0 to 9. Each pixel's brightness ranges from 0 to 255, where 0 corresponds to a black pixel and 255 to a white pixel. This dataset has become a standard benchmark for evaluating classification algorithms. Examples from the dataset are depicted in Figure 1.
- The Fisher’s Iris dataset contains 150 samples of iris flowers, with 50 samples for each of the three species. Each sample consists of four numeric features describing the length and width of the sepals and the length and width of the petals. The data visualization is presented in Figure 2A, illustrating the non-linearity of the task using only two features.
- The Breast Cancer Wisconsin (Diagnostic) dataset consists of 569 samples containing information on cell characteristics from breast biopsy samples and their corresponding diagnosis: malignant or benign tumor, with 212 and 357 samples, respectively. The features are numeric and describe the morphological and structural characteristics of the cells, such as nucleus size, radius, area, and others. The data visualization, shown in Figure 2B, employs only two features, as in the case of Fisher's Iris.
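For reference, the first two datasets ship with scikit-learn; MNIST can be fetched from OpenML. A minimal loading sketch (the variable names are ours; the MNIST download is only indicated in a comment because it requires a network connection):

```python
from sklearn.datasets import load_iris, load_breast_cancer

# Fisher's Iris: 150 samples, 4 numeric features, 3 species
iris = load_iris()
print(iris.data.shape)  # (150, 4)

# Breast Cancer Wisconsin (Diagnostic): 569 samples, 2 classes
cancer = load_breast_cancer()
print(cancer.data.shape)  # (569, 30)
# Class balance: 212 malignant (label 0), 357 benign (label 1)
print((cancer.target == 0).sum(), (cancer.target == 1).sum())  # 212 357

# MNIST (70,000 images of 28 x 28 = 784 pixels) can be fetched from OpenML,
# e.g. via sklearn.datasets.fetch_openml("mnist_784"); omitted here to
# avoid the large download.
```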
3. Spiking Neural Network
3.1. General Architecture
3.2. Spike Generators
3.3. Processing Elements
3.4. Weight Initialization
- Random values—the weights are generated from a uniform distribution within the range of −1 to 1;
- Logistic functions—the weights are determined by the values of logistic functions, the general form of which looks as follows:
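As an illustration of the two initialization schemes, here is a minimal sketch. The random scheme follows the uniform-distribution description above; the logistic scheme assumes the standard logistic curve 1/(1 + exp(−k(x − x0))) sampled over the input indices, with per-element steepness k and midpoint x0 — this parameterization is our assumption for illustration, not necessarily the authors' exact form:

```python
import numpy as np

def init_random(n_inputs: int, n_pe: int, seed: int = 0) -> np.ndarray:
    """Random scheme: weights drawn uniformly from [-1, 1]."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=(n_pe, n_inputs))

def init_logistic(n_inputs: int, n_pe: int) -> np.ndarray:
    """Logistic scheme (assumed form): row i samples the sigmoid
    1 / (1 + exp(-k_i * (x - x0_i))) on a grid over the input indices,
    with steepness k_i and midpoint x0_i varied per processing element."""
    x = np.linspace(0.0, 1.0, n_inputs)   # normalized input-index grid
    k = np.linspace(2.0, 20.0, n_pe)      # per-PE steepness (assumed range)
    x0 = np.linspace(0.1, 0.9, n_pe)      # per-PE midpoint (assumed range)
    return 1.0 / (1.0 + np.exp(-k[:, None] * (x[None, :] - x0[:, None])))

# e.g. Fisher's Iris: 4 input features, 3 processing elements
W_rand = init_random(4, 3)
W_log = init_logistic(4, 3)
print(W_rand.shape, W_log.shape)  # (3, 4) (3, 4)
```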
3.5. Decoding
4. Experiments and Results
4.1. Agenda of Experiments
- The criteria for selecting the feed time window;
- The accuracy dependence on the number of processing elements;
- The accuracy dependence on the maximal number of spikes, using the most effective number of processing elements found in the previous experiment;
- The accuracy dependence on the number of output spikes for a given number of threshold neurons;
- The influence of stochastic input signal on the accuracy.
4.2. Analyzing the Time Window Size for Each Dataset
4.3. Searching for the Optimal Number of Processing Elements
4.4. Searching for the Optimal Number of Generated Spikes with the Number of PEs Taken from the Previous Experiment
4.5. Searching for the Optimal Threshold, Which Determines the Number of Output Spikes, with the Processing Elements Replaced by Neurons and the Number of Generated Spikes Taken from the Previous Experiments
4.6. The Influence of Stochastic Input Signal on the Accuracy
4.7. Efficiency of the Proposed Approach and Comparison with Other Existing Methods
4.8. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| BP | Backpropagation |
| LR | Logistic regression |
| MNIST | Modified National Institute of Standards and Technology |
| NC | Nanocomposite |
| NEST | Neural Simulation Tool |
| PE | Processing elements |
| PPX | Poly-p-xylylene |
| SNN | Spiking neural network |
| STDP | Spike-timing-dependent plasticity |
References
- Diehl, P.U.; Pedroni, B.U.; Cassidy, A.; Merolla, P.; Neftci, E.; Zarrella, G. TrueHappiness: Neuromorphic emotion recognition on TrueNorth. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 4278–4285.
- Davies, M.; Srinivasa, N.; Lin, T.H.; Chinya, G.; Cao, Y.; Choday, S.H.; Dimou, G.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 2018, 38, 82–99.
- Pei, J.; Deng, L.; Song, S.; Zhao, M.; Zhang, Y.; Wu, S.; Wang, G.; Zou, Z.; Wu, Z.; He, W.; et al. Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature 2019, 572, 106–111.
- Wan, W.; Kubendran, R.; Schaefer, C.; Eryilmaz, S.B.; Zhang, W.; Wu, D.; Deiss, S.; Raina, P.; Qian, H.; Gao, B.; et al. A compute-in-memory chip based on resistive random-access memory. Nature 2022, 608, 504–512.
- Kim, D.; Shin, J.; Kim, S. Implementation of reservoir computing using volatile WOx-based memristor. Appl. Surf. Sci. 2022, 599, 153876.
- Yang, J.; Cho, H.; Ryu, H.; Ismail, M.; Mahata, C.; Kim, S. Tunable synaptic characteristics of a Ti/TiO2/Si memory device for reservoir computing. ACS Appl. Mater. Interfaces 2021, 13, 33244–33252.
- Lin, Y.; Liu, J.; Shi, J.; Zeng, T.; Shan, X.; Wang, Z.; Zhao, X.; Xu, H.; Liu, Y. Nitrogen-induced ultralow power switching in flexible ZnO-based memristor for artificial synaptic learning. Appl. Phys. Lett. 2021, 118, 103502.
- Díaz-Pernil, D.; Pena-Cantillana, F.; Gutiérrez-Naranjo, M.A. A parallel algorithm for skeletonizing images by using spiking neural P systems. Neurocomputing 2013, 115, 81–91.
- Ryu, H.; Kim, S. Implementation of a reservoir computing system using the short-term effects of Pt/HfO2/TaOx/TiN memristors with self-rectification. Chaos Solitons Fractals 2021, 150, 111223.
- Mikhaylov, A.; Pimashkin, A.; Pigareva, Y.; Gerasimova, S.; Gryaznov, E.; Shchanikov, S.; Zuev, A.; Talanov, M.; Lavrov, I.; Demin, V.; et al. Neurohybrid memristive CMOS-integrated systems for biosensors and neuroprosthetics. Front. Neurosci. 2020, 14, 358.
- Wang, Q.; Pan, G.; Jiang, Y. An Ultra-Low Power Threshold Voltage Variable Artificial Retina Neuron. Electronics 2022, 11, 365.
- Lee, Y.; Mahata, C.; Kang, M.; Kim, S. Short-term and long-term synaptic plasticity in Ag/HfO2/SiO2/Si stack by controlling conducting filament strength. Appl. Surf. Sci. 2021, 565, 150563.
- Wu, X.; Dang, B.; Wang, H.; Wu, X.; Yang, Y. Spike-Enabled Audio Learning in Multilevel Synaptic Memristor Array-Based Spiking Neural Network. Adv. Intell. Syst. 2022, 4, 2100151.
- Cramer, B.; Stradmann, Y.; Schemmel, J.; Zenke, F. The Heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 33, 2744–2757.
- Tang, G.; Kumar, N.; Yoo, R.; Michmizos, K. Deep reinforcement learning with population-coded spiking neural network for continuous control. In Proceedings of the 2020 Conference on Robot Learning, Virtual Event/Cambridge, MA, USA, 16–18 November 2020; pp. 2016–2029.
- Matsukatova, A.N.; Iliasov, A.I.; Nikiruy, K.E.; Kukueva, E.V.; Vasiliev, A.L.; Goncharov, B.V.; Sitnikov, A.V.; Zanaveskin, M.L.; Bugaev, A.S.; Demin, V.A.; et al. Convolutional Neural Network Based on Crossbar Arrays of (Co-Fe-B)x(LiNbO3)100-x Nanocomposite Memristors. Nanomaterials 2022, 12, 3455.
- Shahsavari, M.; Boulet, P. Parameter exploration to improve performance of memristor-based neuromorphic architectures. IEEE Trans. Multi-Scale Comput. Syst. 2017, 4, 833–846.
- Lukoševičius, M.; Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 2009, 3, 127–149.
- Velichko, A. Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map. Electronics 2020, 9, 1432.
- Sboev, A.G.; Serenko, A.V.; Kunitsyn, D.E.; Rybka, R.B.; Putrolaynen, V.V. Towards Solving Classification Tasks Using Spiking Neurons with Fixed Weights. In Proceedings of the International Conference on Neuroinformatics, Moscow, Russia, 23–27 October 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 102–110.
- Orhan, E. The Leaky Integrate-and-Fire Neuron Model. 2012, Volume 3, pp. 1–6. Available online: http://www.cns.nyu.edu/~eorhan/notes/lif-neuron.pdf (accessed on 13 December 2023).
- Hosmer, D.W., Jr.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley & Sons: New York, NY, USA, 2013; Volume 398.
- Sboev, A.; Serenko, A.; Rybka, R.; Vlasov, D. Solving a classification task by spiking neural network with STDP based on rate and temporal input encoding. Math. Methods Appl. Sci. 2020, 43, 7802–7814.
- Paugam-Moisy, H.; Bohte, S.M. Computing with spiking neuron networks. Handb. Nat. Comput. 2012, 1, 1–47.
- Sboev, A.; Vlasov, D.; Rybka, R.; Davydov, Y.; Serenko, A.; Demin, V. Modeling the Dynamics of Spiking Networks with Memristor-Based STDP to Solve Classification Tasks. Mathematics 2021, 9, 3237.
- Li, J.; Xu, H.; Sun, S.Y.; Li, N.; Li, Q.; Li, Z.; Liu, H. In Situ Learning in Hardware Compatible Multilayer Memristive Spiking Neural Network. IEEE Trans. Cogn. Dev. Syst. 2021, 14, 448–461.
- Gerlinghoff, D.; Luo, T.; Goh, R.S.M.; Wong, W.F. Desire backpropagation: A lightweight training algorithm for multi-layer spiking neural networks based on spike-timing-dependent plasticity. Neurocomputing 2023, 560, 126773.
- Demin, V.A.; Nekhaev, D.V.; Surazhevsky, I.A.; Nikiruy, K.E.; Emelyanov, A.V.; Nikolaev, S.N.; Rylkov, V.V.; Kovalchuk, M.V. Necessary conditions for STDP-based pattern recognition learning in a memristive spiking neural network. Neural Netw. 2021, 134, 64–75.
- Minnekhanov, A.A.; Shvetsov, B.S.; Martyshov, M.M.; Nikiruy, K.E.; Kukueva, E.V.; Presnyakov, M.Y.; Forsh, P.; Rylkov, V.V.; Erokhin, V.V.; Demin, V.A.; et al. On the resistive switching mechanism of parylene-based memristive devices. Org. Electron. 2019, 74, 89–95.
- Sboev, A.; Kunitsyn, D.; Balykov, M.A. Spoken Digits Classification Using a Spiking Neural Network with Fixed Synaptic Weights. In Proceedings of the 2023 Annual International Conference on Brain-Inspired Cognitive Architectures for Artificial Intelligence, the 14th Annual Meeting of the BICA Society, Ningbo, China, 13–15 October 2023; in press.
| Weights | Spike Count (Minimum) | Spike Count (Desired) | Performance (Min) | Performance (Max) |
|---|---|---|---|---|
| **Fisher's Iris** | | | | |
| Logistic functions | 26 | 312 | 0.95 | 0.96 |
| Random values | 52 | 312 | 0.93 | 0.97 |
| Logistic regression | | | 0.93 | 0.97 |
| STDP-based approach on rate and temporal input encoding [23] | | | 0.95 | 1.0 |
| SpikeProp and Theta Neuron BP [24] | | | 0.96 | 0.98 |
| 2-layer SNN with NC or PPX plasticity [25] | | | 0.93 | 1.0 |
| **Wisconsin Breast Cancer** | | | | |
| Logistic functions | 3 | 8 | 0.94 | 0.97 |
| Random values | 2 | 24 | 0.94 | 0.97 |
| Logistic regression | | | 0.92 | 0.95 |
| STDP-based approach on rate and temporal input encoding [23] | | | 0.88 | 0.92 |
| SpikeProp and Theta Neuron BP [24] | | | 0.97 | 0.99 |
| 2-layer SNN with NC or PPX plasticity [25] | | | 0.88 | 0.96 |
| **MNIST** | | | | |
| Logistic functions | 160 | 160 | 0.92 | 0.92 |
| Random values | 160 | 160 | 0.92 | 0.92 |
| Logistic regression | | | 0.92 | 0.92 |
| 3-layer SNN with STDP [26] | | | 0.95 | 0.95 |
| 3-layer SNN with STDP and BP [27] | | | 0.98 | 0.98 |
| 2-layer SNN (100 neurons) with NC plasticity [28] | | | 0.89 | 0.89 |
| Weights | Number of PE | Spike Count | Threshold, mV | Time Window, ms |
|---|---|---|---|---|
| **Fisher's Iris** | | | | |
| Logistic functions | 3 | 312 | −70 | 5200 |
| Random values | 3 | 312 | −70 | 5200 |
| **Wisconsin Breast Cancer** | | | | |
| Logistic functions | 16 | 8 | −69.94 | 200 |
| Random values | 16 | 24 | −69.8 | 200 |
| **MNIST** | | | | |
| Logistic functions | 350 | 160 | −70 | 200 |
| Random values | 350 | 160 | −70 | 200 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sboev, A.; Rybka, R.; Kunitsyn, D.; Serenko, A.; Ilyin, V.; Putrolaynen, V. Extraction of Significant Features by Fixed-Weight Layer of Processing Elements for the Development of an Efficient Spiking Neural Network Classifier. Big Data Cogn. Comput. 2023, 7, 184. https://doi.org/10.3390/bdcc7040184