Research article · GoodIT conference proceedings · DOI: 10.1145/3677525.3678671

Utilizing Self-Supervised Learning for Recognizing Human Activity in Older Adults through Labeling Applications in Real-World Smart Homes

Published: 04 September 2024

Abstract

Deep learning models have significantly contributed to recognizing older adults’ daily activities for telemonitoring and assistance. However, recognizing human activities in real-world smart homes over the long term presents substantial challenges. Obtaining ground truth is time-consuming and costly, yet it is crucial for training and improving deep learning models. Inspired by the impressive performance of self-supervised learning models, this paper utilizes a model based on the SimCLR framework and a self-attention mechanism for downstream human activity recognition. The model leverages the limited and intermittent activity labels collected by the Label Older Adults’ Daily Activities (LOADA) application, which was deployed to acquire activity labels in the real-world, uncontrolled smart homes of three young people and two older adults over more than one month. The experimental results demonstrate strong activity-recognition performance in both semi-supervised learning with limited labels and transfer learning scenarios, where representations learned in one smart home are transferred to another. This research may help other researchers in the human activity recognition community overcome labeling challenges when monitoring older adults in real-world settings.
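The paper's own implementation is not reproduced here; as a minimal illustration of the contrastive objective that SimCLR-style pre-training optimizes before fine-tuning on the limited LOADA labels, the NT-Xent loss can be sketched in NumPy. All names and shapes below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss from SimCLR.

    z1, z2: (N, d) arrays of embeddings for two augmented "views" of the same
    N sensor windows; row i of z1 and row i of z2 form a positive pair, and
    all other rows in the batch serve as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, d) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit norm -> dot = cosine sim
    sim = (z @ z.T) / temperature                     # (2N, 2N) scaled similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive for row i is row i+N (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))       # log-sum-exp over candidates
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))
```

Minimizing this loss pulls the two views of each window together and pushes apart different windows, which is what lets the encoder learn useful representations from unlabeled sensor streams before any activity labels are used.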


Published In

GoodIT '24: Proceedings of the 2024 International Conference on Information Technology for Social Good
September 2024
481 pages
ISBN:9798400710940
DOI:10.1145/3677525

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. SimCLR framework
  2. human activity recognition
  3. self-supervised learning
  4. semi-supervised learning
  5. smart home
  6. transfer learning

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Videotron Inc
  • AGE-WELL NCE
  • Fondation Luc Maurice
  • MEDTEQ
  • NSERC
  • the Réseau Québécois de Recherche sur le Vieillissement (RQRV)
  • Fonds de recherche du Québec - Santé

Conference

GoodIT '24
