DOI: 10.1145/3613424.3623771
SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices

Published: 08 December 2023 Publication History

Abstract

Adiabatic Quantum-Flux-Parametron (AQFP) is a superconducting logic family with extremely high energy efficiency. By using the two polarities of current to denote logic '0' and '1', AQFP devices are excellent carriers for binary neural network (BNN) computation. Although recent research has made initial strides toward an AQFP-based BNN accelerator, several critical challenges still prevent the design from being a comprehensive solution. In this paper, we propose SupeRBNN, an AQFP-based randomized BNN acceleration framework that leverages software-hardware co-optimization to make AQFP devices a feasible platform for BNN acceleration. Specifically, we investigate the randomized switching behavior of AQFP devices and analyze the impact of crossbar size on current attenuation, then map the current amplitude to values suitable for BNN computation. To tackle the accumulation problem and improve overall hardware performance, we propose a stochastic computing-based accumulation module and a clocking-scheme-adjustment-based circuit optimization method. To effectively train BNN models compatible with the distinctive characteristics of AQFP devices, we further propose a novel randomized BNN training solution that uses algorithm-hardware co-optimization, enabling simultaneous optimization of hardware configurations. In addition, we implement batch normalization matching and a weight rectified-clamp method to further improve overall performance. We validate the SupeRBNN framework across various datasets and network architectures, comparing it with implementations based on different technologies, including CMOS, ReRAM, and superconducting RSFQ/ERSFQ. Experimental results demonstrate that our design achieves an energy efficiency approximately 7.8 × 10^4 times higher than that of a ReRAM-based BNN framework while maintaining a similar level of model accuracy. Furthermore, compared with superconductor-based counterparts, our framework demonstrates at least two orders of magnitude higher energy efficiency.
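Two ideas in the abstract lend themselves to a short sketch: randomized ±1 binarization driven by a device's probabilistic switching near threshold, and the popcount-style accumulation of binary products that the stochastic-computing module must realize. The logistic switching model, the `noise_scale` parameter, and the layer size below are illustrative assumptions for a software simulation, not the paper's fitted AQFP device model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binarize(x, noise_scale=0.1):
    """Randomized binarization. An AQFP buffer driven near its switching
    threshold flips probabilistically; here that behavior is modeled (an
    assumption, not a measured device curve) by a logistic function of the
    input amplitude. Outputs are +/-1, matching the two current polarities."""
    p_plus = 1.0 / (1.0 + np.exp(-np.asarray(x) / noise_scale))
    return np.where(rng.random(np.shape(x)) < p_plus, 1.0, -1.0)

def bnn_dot(weights, activations):
    """Binary dot product: with all operands in {-1, +1}, multiply-accumulate
    reduces to a popcount-style sum, the step an accumulation module
    (stochastic-computing or otherwise) must implement in hardware."""
    return float(np.dot(weights, activations))

# Toy layer slice: 64 trained binary weights against 64 randomized activations.
pre_act = rng.standard_normal(64)
w = np.where(rng.standard_normal(64) >= 0, 1.0, -1.0)
a = stochastic_binarize(pre_act)
print(bnn_dot(w, a))
```

Because the activations are sampled, repeated inference of the same input yields different ±1 patterns for pre-activations near zero, which is why training must account for the randomized device behavior rather than assume a deterministic sign function.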


Published In

MICRO '23: Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture
October 2023, 1528 pages
ISBN: 9798400703294
DOI: 10.1145/3613424

Publisher

Association for Computing Machinery, New York, NY, United States
Author Tags

AQFP, BNN, Deep Learning, Stochastic Computing, Superconducting
