Research Article | Open Access

M3Sense: Affect-Agnostic Multitask Representation Learning Using Multimodal Wearable Sensors

Published: 07 July 2022

Abstract

Modern smartwatches and wrist wearables with multiple physiological sensing modalities have emerged as an unobtrusive way to detect mental health conditions such as anxiety, stress, and emotional states. However, affect detection models that rely on wrist sensor data often perform poorly because the signals are inconsistent or inaccurate and labeled data for each condition are scarce. Learning representations that exploit the physiological similarities among affective tasks offers a way to address this problem, but existing approaches fail to generate representations that transfer effectively across these tasks. The problem is made harder by the large domain gap among affective applications and the discrepancies among the sensing modalities. We present M3Sense, a multitask, multimodal representation learning framework that learns affect-agnostic physiological representations from limited labeled data and uses a novel domain alignment technique to exploit unlabeled data from other affective tasks, enabling accurate detection of these mental health conditions from wrist sensors alone. We apply M3Sense to three mental health applications and quantify the performance gains over the state of the art through extensive evaluations and ablation studies on publicly available and collected datasets. We also investigate which combinations of tasks and modalities yield a robust multitask learning model for affect recognition. Our analysis shows that incorporating emotion detection into the learning models degrades anxiety and stress detection, whereas stress detection boosts emotion detection performance. Our results further show that M3Sense performs consistently across all affective tasks and available modalities, and improves the performance of representation learning models on unseen affective tasks by 5% to 60%.
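To make the general recipe in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation): a shared encoder over wrist-sensor channels, task-specific classification heads, and a simple embedding-alignment penalty through which unlabeled windows from a related affective task shape the shared representation. All names, dimensions, and the linear-kernel MMD penalty are assumptions chosen for brevity; the paper's own domain alignment technique and task weighting would replace these placeholders.

# Illustrative sketch only (not the authors' code): shared multimodal encoder,
# per-task heads, and an MMD-style alignment penalty. All names are hypothetical.
import torch
import torch.nn as nn

class WristEncoder(nn.Module):
    """Shared encoder for wrist-sensor channels (e.g., PPG, EDA, skin temperature)."""
    def __init__(self, in_channels=3, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, embed_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):               # x: (batch, channels, time)
        return self.net(x).squeeze(-1)  # -> (batch, embed_dim)

def mmd(a, b):
    """Linear-kernel maximum mean discrepancy between two embedding batches."""
    return (a.mean(dim=0) - b.mean(dim=0)).pow(2).sum()

encoder = WristEncoder()
heads = nn.ModuleDict({                 # one classification head per affective task
    "stress": nn.Linear(64, 2),
    "emotion": nn.Linear(64, 4),
    "anxiety": nn.Linear(64, 2),
})
ce = nn.CrossEntropyLoss()

# One hypothetical training step: labeled stress windows plus unlabeled emotion windows.
x_stress, y_stress = torch.randn(8, 3, 256), torch.randint(0, 2, (8,))
x_emotion_unlabeled = torch.randn(8, 3, 256)

z_stress = encoder(x_stress)
z_emotion = encoder(x_emotion_unlabeled)
loss = ce(heads["stress"](z_stress), y_stress) + 0.1 * mmd(z_stress, z_emotion)
loss.backward()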

Supplemental Material

ZIP File (samyoun): Supplemental movie, appendix, image, and software files for "M3Sense: Affect-Agnostic Multitask Representation Learning Using Multimodal Wearable Sensors".



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 6, Issue 2
June 2022
1551 pages
EISSN: 2474-9567
DOI: 10.1145/3547347
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 07 July 2022
Published in IMWUT Volume 6, Issue 2


Author Tags

  1. Affect Recognition
  2. Domain Adaptation
  3. Health Informatics
  4. Mental Health
  5. Multimodal Learning
  6. Multitask Learning
  7. Representation Learning
  8. Wearable Sensors

Qualifiers

  • Research-article
  • Research
  • Refereed
