
ATCN: Resource-efficient Processing of Time Series on Edge

Published: 08 October 2022

Abstract

This article presents Agile Temporal Convolutional Network (ATCN), a scalable deep learning model for fast, highly accurate classification and prediction of time series on resource-constrained embedded systems. ATCN is a family of compact networks with formalized hyperparameters that allow the model architecture to be adjusted for a specific application. It is designed primarily for embedded edge devices with very limited compute performance and memory, such as wearable biomedical devices and real-time reliability monitoring systems. ATCN makes fundamental improvements over mainstream temporal convolutional networks, including residual connections to increase network depth and accuracy, and depthwise separable convolutions to reduce the computational complexity of the model. As part of the present work, two ATCN families, T0 and T1, are presented and evaluated on two classes of embedded processors: Cortex-M7 and Cortex-A57. An evaluation of the ATCN models against the best-in-class InceptionTime and MiniRocket shows that ATCN largely maintains accuracy while improving execution time across a broad range of embedded and cyber-physical applications that demand real-time processing at the edge. At the same time, in contrast to existing solutions, ATCN is the first deep-learning-based time series classifier that can run bare-metal on embedded microcontrollers (Cortex-M7) with limited computational performance and memory capacity while delivering state-of-the-art accuracy.
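As a rough illustration of the two building blocks the abstract names, the following plain-Python sketch (an assumption for exposition, not the authors' implementation) shows a causal dilated depthwise-separable 1-D convolution wrapped in an identity residual connection; all function and parameter names here are hypothetical:

```python
def depthwise_separable_causal_conv(x, dw_kernels, pw_weights, dilation=1):
    """x: list of input channels, each a list of T samples.
    dw_kernels: one kernel (list of taps) per input channel (depthwise step).
    pw_weights: out_channels x in_channels matrix (pointwise 1x1 step)."""
    T = len(x[0])
    # Depthwise: filter each channel independently with a causal, dilated kernel.
    dw_out = []
    for ch, kernel in zip(x, dw_kernels):
        out = []
        for t in range(T):
            acc = 0.0
            for i, tap in enumerate(kernel):
                j = t - i * dilation  # causal: taps look only at past samples
                if j >= 0:
                    acc += tap * ch[j]
            out.append(acc)
        dw_out.append(out)
    # Pointwise: mix channels with a 1x1 convolution.
    return [[sum(w * dw_out[c][t] for c, w in enumerate(row)) for t in range(T)]
            for row in pw_weights]

def residual_block(x, dw_kernels, pw_weights, dilation=1):
    """Identity residual connection: add the block input back to its output."""
    y = depthwise_separable_causal_conv(x, dw_kernels, pw_weights, dilation)
    return [[yi + xi for yi, xi in zip(yc, xc)] for yc, xc in zip(y, x)]
```

The separable factorization is what saves compute: a standard convolution costs roughly C_in · C_out · K multiplies per output sample, while the depthwise-plus-pointwise pair costs C_in · K + C_in · C_out, which is why such blocks fit within microcontroller budgets.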
Appendix

A Bar Graph Comparison of Models

Figure 14 compares the accuracy and latency of the T0, T1, and MiniRocket models across all 70 benchmarks. As explained in Section 5, the latency values were obtained by running the benchmarks on the Cortex-A57. In Figure 14, the results are sorted by T0 latency.
Fig. 14. Comparison of accuracy and latency of three models on a Cortex-A57 processor. Results are sorted by T0 latency.

References

[1]
M. Baharani, M. Biglarbegian, B. Parkhideh, and H. Tabkhi. 2019. Real-time deep learning at the edge for scalable reliability modeling of Si-MOSFET power electronics converters. IEEE Internet of Things Journal 6, 5 (2019), 7375–7385. DOI:
[2]
Mohammadreza Baharani, Ushma Sunil, Kaustubh Manohar, Steven Furgurson, and Hamed Tabkhi. 2021. DeepDive: An integrative algorithm/architecture co-design for deep separable convolutional neural networks. In Proceedings of the 2021 on Great Lakes Symposium on VLSI (GLSVLSI’21). Association for Computing Machinery, New York, NY, 247–252. DOI:
[3]
Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. CoRR abs/1803.01271 (2018). arXiv:1803.01271 http://arxiv.org/abs/1803.01271.
[4]
M. Biglarbegian, M. Baharani, N. Kim, H. Tabkhi, and B. Parkhideh. 2018. Scalable reliability monitoring of GaN power converter through recurrent neural networks. In 2018 IEEE Energy Conversion Congress and Exposition (ECCE’18). 7271–7277. DOI:
[5]
M. Carreras, G. Deriu, L. Raffo, L. Benini, and P. Meloni. 2020. Optimizing temporal convolutional network inference on FPGA-based accelerators. IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2020), 1–1.
[6]
J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning Workshop.
[7]
Hoang Anh Dau, Anthony Bagnall, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, and Eamonn Keogh. 2019. The UCR time series archive. IEEE/CAA Journal of Automatica Sinica 6, 6 (2019), 1293–1305. DOI:
[8]
Angus Dempster, François Petitjean, and Geoffrey I. Webb. 2020. ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. Data Mining and Knowledge Discovery 34, 5 (Sept. 2020), 1454–1495. DOI:
[9]
Angus Dempster, Daniel F. Schmidt, and Geoffrey I. Webb. 2021. MiniRocket: A very fast (almost) deterministic transform for time series classification. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD’21). Association for Computing Machinery, New York, NY, 248–257. DOI:
[10]
Janez Demšar. 2006. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research 7 (2006), 1–30.
[11]
Nachiket Deo and Mohan M. Trivedi. 2018. Convolutional social pooling for vehicle trajectory prediction. In 2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops’18). IEEE Computer Society, 1468–1476. DOI:
[12]
Milton Friedman. 1940. A comparison of alternative tests of significance for the problem of m rankings. Annals of Mathematical Statistics 11, 1 (1940), 86–92.
[13]
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. CoRR abs/1705.03122 (2017). arXiv:1705.03122 http://arxiv.org/abs/1705.03122.
[14]
Sebastian D. Goodfellow, Andrew Goodwin, Robert Greer, Peter C. Laussen, Mjaye Mazwi, and Danny Eytan. 2018. Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings. In Proceedings of Machine Learning Research, Finale Doshi-Velez, Jim Fackler, Ken Jung, David Kale, Rajesh Ranganath, Byron Wallace, and Jenna Wiens (Eds.), Vol. 85. PMLR, Palo Alto, CA, 83–101. http://proceedings.mlr.press/v85/goodfellow18a.html.
[15]
Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations (ICLR’16).
[16]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. CoRR abs/1512.03385 (2015). arXiv:1512.03385 http://arxiv.org/abs/1512.03385.
[17]
Yihui He, Xiangyu Zhang, and Jian Sun. 2017. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV’17).
[18]
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2014. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop. arXiv:1503.02531.
[19]
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9, 8 (Nov. 1997), 1735–1780. DOI:
[20]
Sture Holm. 1979. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 2 (1979), 65–70. http://www.jstor.org/stable/4615733.
[21]
Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. 2019. Deep learning for time series classification: A review. Data Mining and Knowledge Discovery 33, 4 (2019), 917–963.
[22]
Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F. Schmidt, Jonathan Weber, Geoffrey I. Webb, Lhassane Idoumghar, Pierre-Alain Muller, and François Petitjean. 2020. InceptionTime: Finding AlexNet for time series classification. Data Mining and Knowledge Discovery 34, 6 (Nov. 2020), 1936–1962. DOI:
[23]
Brian Kenji Iwana and Seiichi Uchida. 2021. An empirical survey of data augmentation for time series classification with neural networks. PLOS ONE 16, 7 (July 2021), e0254841. DOI:
[24]
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18).
[25]
Arthur Le Guennec, Simon Malinowski, and Romain Tavenard. 2016. Data augmentation for time series classification using convolutional neural networks. In ECML/PKDD Workshop on Advanced Analytics and Learning on Temporal Data.
[26]
C. Lea, M. D. Flynn, R. Vidal, A. Reiter, and G. D. Hager. 2017. Temporal convolutional networks for action segmentation and detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17). 1003–1012.
[27]
Qin Li, Xiaofan Zhang, JinJun Xiong, Wen-mei Hwu, and Deming Chen. 2019. Implementing neural machine translation with bi-directional GRU and attention mechanism on FPGAs using HLS. In Proceedings of the 24th Asia and South Pacific Design Automation Conference (ASPDAC’19). Association for Computing Machinery, New York, NY, 693–698. DOI:
[28]
Y. Li, Z. Xia, and Y. Zhang. 2020. Standalone systolic profile detection of non-contact SCG signal with LSTM network. IEEE Sensors Journal 20, 6 (2020), 3123–3131. DOI:
[29]
Jason Lines, Sarah Taylor, and Anthony Bagnall. 2016. HIVE-COTE: The hierarchical vote collective of transformation-based ensembles for time series classification. In 2016 IEEE 16th International Conference on Data Mining (ICDM’16). 1041–1046. DOI:
[30]
Jean Mercat, Thomas Gilles, Nicole El Zoghby, Guillaume Sandou, Dominique Beauvois, and Guillermo Pita Gil. 2020. Multi-head attention for multi-modal joint vehicle motion forecasting. In 2020 IEEE International Conference on Robotics and Automation (ICRA’20). IEEE, 9638–9644. DOI:
[31]
Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. 2017. Pruning convolutional neural networks for resource efficient transfer learning. In International Conference on Learning Representations (ICLR’17).
[32]
Kewei Ouyang, Yi Hou, Shilin Zhou, and Ye Zhang. 2021. Convolutional neural network with an elastic matching mechanism for time series classification. Algorithms 14, 7 (2021), 6875–6879. DOI:
[33]
A. Pandey and D. Wang. 2019. TCNN: Temporal convolutional neural network for real-time speech enhancement in the time domain. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’19). 6875–6879. DOI:
[34]
S. Saadatnejad, M. Oveisi, and M. Hashemi. 2020. LSTM-based ECG classification for continuous monitoring on personal wearable devices. IEEE Journal of Biomedical and Health Informatics 24, 2 (2020), 515–523. DOI:
[35]
Rajat Sen, Hsiang-Fu Yu, and Inderjit S. Dhillon. 2019. Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting. In Advances in Neural Information Processing Systems. 4837–4846.
[36]
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. 2017. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI’17). AAAI Press, 4278–4284.
[37]
Terry T. Um, Franz M. J. Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, and Dana Kulić. 2017. Data augmentation of wearable sensor data for Parkinson’s disease monitoring using convolutional neural networks. In Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI’17). Association for Computing Machinery, New York, NY, 216–220. DOI:
[38]
Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. 2016. WaveNet: A generative model for raw audio. ArXiv abs/1609.03499 (2016).
[39]
D. van Kuppevelt, C. Meijer, F. Huber, A. van der Ploeg, S. Georgievska, and V. T. van Hees. 2020. Mcfly: Automated deep learning on time series. SoftwareX 12 (2020), 100548. DOI:
[40]
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang. 1989. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech, and Signal Processing 37, 3 (1989), 328–339. DOI:
[41]
Frank Wilcoxon. 1992. Individual comparisons by ranking methods. In Breakthroughs in Statistics. Springer, 196–202.
[42]
Xu Xie, Chi Zhang, Yixin Zhu, Ying Nian Wu, and Song-Chun Zhu. 2021. Congestion-aware multi-agent trajectory prediction for collision avoidance. CoRR abs/2103.14231 (2021). arXiv:2103.14231 https://arxiv.org/abs/2103.14231.
[43]
B. Zhang, D. Xiong, J. Xie, and J. Su. 2020. Neural machine translation with GRU-gated attention model. IEEE Transactions on Neural Networks and Learning Systems (2020), 1–11. DOI:
[44]
Y. Zhang, R. Xiong, H. He, and M. G. Pecht. 2018. Long short-term memory recurrent neural network for remaining useful life prediction of lithium-Ion batteries. IEEE Transactions on Vehicular Technology 67, 7 (2018), 5695–5705. DOI:
[45]
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. 2016. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2921–2929.


Published In

ACM Transactions on Embedded Computing Systems, Volume 21, Issue 5
September 2022
526 pages
ISSN: 1539-9087
EISSN: 1558-3465
DOI: 10.1145/3561947
Editor: Tulika Mitra

Publisher

Association for Computing Machinery

New York, NY, United States


Publication History

Published: 08 October 2022
Online AM: 21 March 2022
Accepted: 03 March 2022
Revised: 25 February 2022
Received: 15 July 2021
Published in TECS Volume 21, Issue 5


Author Tags

  1. Temporal Convolutional Networks (TCN)
  2. Recurrent Neural Networks (RNN)
  3. real-time edge computing

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • National Science Foundation (NSF)
