Research Article
DOI: 10.1145/3552484.3555753

AI-Assisted Food Intake Activity Recognition Using 3D mmWave Radars

Published: 24 October 2022

Abstract

The automatic recognition of when, and for how long, a person is eating a certain food or drinking has applications in telecare, smart-home data monetization, and diet control. Existing food recognition systems either recognize the type of food, but not when and for how long the person was eating or drinking, or rely on invasive sensors or privacy-intruding cameras that users are hesitant to install in their homes. In this paper, we propose a non-invasive system that uses Artificial Intelligence to process 3D point-cloud data collected from a 3D mmWave radar and distinguishes a person's eating and drinking activities from other daily activities. This is challenging because eating and drinking are far more fine-grained than the activities existing systems can detect, such as sitting, running, and walking. Performance evaluations show that our proposed system outperforms a representative state-of-the-art activity recognition system, RadHAR, by at least 27% and reaches 96.56% and 96.73% accuracy under two different training/testing split setups.

References

[1]
N. Ahmed, J. Rafiq, and M. Islam. 2020. Enhanced human activity recognition based on smartphone sensor data using hybrid feature selection model. Sensors 20, 1 (2020), 317:1--317:19.
[2]
S. An and U. Ogras. 2021. MARS: mmWave-based Assistive Rehabilitation System for Smart Healthcare. ACM Transactions on Embedded Computing Systems 20, 5s (2021), 1--22.
[3]
S. Balli, E. Sağbaş, and M. Peker. 2019. Human activity recognition from smart watch sensor data using a hybrid of principal component analysis and random forest algorithm. Measurement and Control 52, 1--2 (2019), 37--45.
[4]
F. Baradel, C. Wolf, and J. Mille. 2018. Human activity recognition with pose driven attention to RGB. In Proc. of British Machine Vision Conference (BMVC). 1--14.
[5]
S. Bhalla, M. Goel, and R. Khurana. 2021. IMU2Doppler: Cross-Modal Domain Adaptation for Doppler-based Activity Recognition Using IMU Data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 4 (2021), 145:1--145:20.
[6]
D. Cook, K. Feuz, and N. Krishnan. 2013. Transfer learning for activity recognition: A survey. Knowledge and Information Systems 36, 3 (2013), 537--556.
[7]
Dfintech. 2022. Cisco Visual Networking Index: Forecast and Methodology, 2016--2021. Retrieved March 19, 2022 from https://dfintech.ch/en/
[8]
M. Farooq and E. Sazonov. 2018. Accelerometer-based detection of food intake in free-living individuals. IEEE Sensors Journal 18, 9 (2018), 3752--3758.
[9]
A. Franco, A. Magnani, and D. Maio. 2020. A multimodal approach for human activity recognition based on skeleton and RGB data. Pattern Recognition Letters 131 (2020), 293--299.
[10]
F. Fuchs, D. Worrall, V. Fischer, and M. Welling. 2020. SE(3)-Transformers: 3D roto-translation equivariant attention networks. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Vol. 33. 1970--1981.
[11]
P. Gong, C. Wang, and L. Zhang. 2021. MMPoint-GNN: Graph neural network with dynamic edges for human activity recognition through a millimeter-wave radar. In Proc. of International Joint Conference on Neural Networks (IJCNN). 1--7.
[12]
L. Harnack, L. Steffen, D. Arnett, S. Gao, and R. Luepker. 2004. Accuracy of estimation of large food portions. Journal of the American Dietetic Association 104, 5 (2004), 804--806.
[13]
M. Hassan, M. Uddin, A. Mohamed, and A. Almogren. 2018. A robust human activity recognition system using smartphone sensors and deep learning. Future Generation Computer Systems 81 (2018), 307--313.
[14]
S. He, S. Li, A. Nag, S. Feng, T. Han, S. Mukhopadhyay, and W. Powell. 2020. A comprehensive review of the use of sensors for food intake detection. Sensors and Actuators A: Physical 315 (2020), 112318:1--112318:16.
[15]
J. Hu, W. Zheng, J. Lai, and J. Zhang. 2015. Jointly learning heterogeneous features for RGB-D activity recognition. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 5344--5352.
[16]
A. Iosifidis, E. Marami, A. Tefas, and I. Pitas. 2012. Eating and drinking activity recognition based on discriminant analysis of fuzzy distances and activity volumes. In Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2201--2204.
[17]
H. Liu and T. Schultz. 2019. A wearable real-time human activity recognition system using biosensors integrated into a knee bandage. In Proc. of International Conference on Biomedical Electronics and Devices. 47--55.
[18]
S. Mekruksavanich and A. Jitpattanakul. 2020. Smartwatch-based human activity recognition using hybrid LSTM network. (2020), 1--4.
[19]
W. Min, S. Jiang, L. Liu, Y. Rui, and R. Jain. 2019. A survey on food computing. Comput. Surveys 52, 5 (2019), 1--36.
[20]
W. Min, L. Liu, Z. Luo, and S. Jiang. 2019. Ingredient-guided cascaded multi-attention network for food recognition. In Proc. of ACM International Conference on Multimedia (MM). 1331--1339.
[21]
A. Moin, A. Zhou, A. Rahimi, A. Menon, S. Benatti, G. Alexandrov, S. Tamakloe, J. Ting, N. Yamamoto, Y. Khan, et al. 2021. A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Nature Electronics 4, 1 (2021), 54--63.
[22]
C. Qi, L. Yi, H. Su, and L. Guibas. 2017. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, Vol. 30.
[23]
J. Qi, G. Jiang, G. Li, Y. Sun, and B. Tao. 2019. Intelligent human-computer interaction based on surface EMG gesture recognition. IEEE Access 7 (2019), 61378--61387.
[24]
N. Rashid, M. Dautta, P. Tseng, and M. Faruque. 2020. HEAR: Fog-enabled energy-aware online human eating activity recognition. IEEE Internet of Things Journal 8, 2 (2020), 860--868.
[25]
G. Riegler, A. Osman Ulusoy, and A. Geiger. 2017. Octnet: Learning deep 3D representations at high resolutions. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 3577--3586.
[26]
A. Salehzadeh, A. Calitz, and J. Greyling. 2020. Human activity recognition using deep electroencephalography learning. Biomedical Signal Processing and Control 62 (2020), 102094.
[27]
N. Selamat and S. Ali. 2020. Automatic food intake monitoring based on chewing activity: A survey. IEEE Access 8 (2020), 48846--48869.
[28]
A. Singh, S. Sandha, L. Garcia, and M. Srivastava. 2019. RadHAR: Human activity recognition from point clouds generated through a millimeter-wave radar. In Proc. of the ACM Workshop on Millimeter-wave Networks and Sensing Systems (mmNets). 51--56.
[29]
T. Singh and D. Vishwakarma. 2021. A deeply coupled ConvNet for human activity recognition using dynamic and RGB images. Neural Computing and Applications 33, 1 (2021), 469--485.
[30]
A. Stisen, H. Blunck, S. Bhattacharya, T. Prentow, M. Kjaergaard, A. Dey, T. Sonne, and M. Jensen. 2015. Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition. In Proc. of ACM Conference on Embedded Networked Sensor Systems (SenSys). 127--140.
[31]
K. Verma and B. Singh. 2021. Deep multi-model fusion for human activity recognition using evolutionary algorithms. International Journal of Interactive Multimedia & Artificial Intelligence 7, 2 (2021).
[32]
C. Wang, Z. Lin, Y. Xie, X. Guo, Y. Ren, and Y. Chen. 2020. WiEat: Fine-grained device-free eating monitoring leveraging Wi-Fi signals. (2020), 1--9.
[33]
Y. Wang, H. Liu, K. Cui, A. Zhou, W. Li, and H. Ma. 2021. m-Activity: Accurate and real-time human activity recognition via millimeter wave radar. In Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 8298--8302.
[34]
G. Weiss, K. Yoneda, and T. Hayajneh. 2019. Smartphone and smartwatch-based biometrics using activities of daily living. IEEE Access 7 (2019), 133190--133202.
[35]
A. Wellnitz, J. Wolff, C. Haubelt, and T. Kirste. 2019. Fluid intake recognition using inertial sensors. In Proc. of International Workshop on Sensor-based Activity Recognition and Interaction (iWOAR). 1--7.
[36]
Z. Wharton, A. Behera, Y. Liu, and N. Bessis. 2021. Coarse temporal attention network (CTA-Net) for driver's activity recognition. In Proc. of IEEE Winter Conference on Applications of Computer Vision (WACV). 1279--1289.
[37]
Y. Xie, R. Jiang, X. Guo, Y. Wang, J. Cheng, and Y. Chen. 2022. mmEat: Millimeter wave-enabled environment-invariant eating behavior monitoring. Smart Health 23 (2022), 10023:1--10023:8.
[38]
K. Yatani and K. Truong. 2012. BodyScope: A wearable acoustic sensor for activity recognition. In Proc. of ACM Conference on Ubiquitous Computing (UbiComp). 341--350.

    Published In

    MADiMa '22: Proceedings of the 7th International Workshop on Multimedia Assisted Dietary Management
    October 2022
    97 pages
    ISBN: 9781450395021
    DOI: 10.1145/3552484


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. human activity recognition
    2. mmwave radar
    3. point cloud


    Conference

    MM '22

    Acceptance Rates

    MADiMa '22 paper acceptance rate: 9 of 10 submissions (90%)
    Overall acceptance rate: 16 of 24 submissions (67%)

