
A multimodal affective computing approach for children companion robots

Published: 27 June 2019

Abstract

This paper describes a multimodal affective computing fusion approach for children's companion robots. Our approach presents a fusion model that processes both verbal and nonverbal information from users; this methodology can improve the emotion recognition and classification ability of social robots, particularly children's companion robots, and enhance the immediacy of visual interaction between children and robots.
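The fusion of verbal and nonverbal channels can be illustrated with a minimal decision-level (late) fusion sketch: each modality emits a probability distribution over a small emotion set, and the fused score is a confidence-weighted average. All names, weights, and the emotion set below are illustrative assumptions, not taken from the paper's actual model.

```python
# Hypothetical late-fusion sketch: combine per-modality emotion
# distributions by weighted averaging, then pick the top label.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(verbal_probs, nonverbal_probs, verbal_weight=0.6):
    """Confidence-weighted average of two emotion distributions."""
    w = verbal_weight
    fused = [w * v + (1 - w) * n
             for v, n in zip(verbal_probs, nonverbal_probs)]
    total = sum(fused)  # renormalize so the scores sum to 1
    return [x / total for x in fused]

def classify(probs):
    """Return the emotion label with the highest fused score."""
    return EMOTIONS[max(range(len(probs)), key=probs.__getitem__)]

# Example: speech strongly suggests "happy", while the facial
# channel leans "neutral"; the weighted fusion resolves the conflict.
verbal = [0.7, 0.1, 0.05, 0.15]
nonverbal = [0.3, 0.1, 0.1, 0.5]
label = classify(fuse(verbal, nonverbal))
```

In practice a fusion model would learn the modality weights rather than fix them, and feature-level (early) fusion is a common alternative; this sketch only shows the decision-level variant.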



Published In

Chinese CHI '19: Proceedings of the Seventh International Symposium of Chinese CHI
June 2019
128 pages
ISBN:9781450372473
DOI:10.1145/3332169

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. affective computing
  2. companion robots
  3. multimodal fusion

Qualifiers

  • Research-article

Conference

Chinese CHI 2019

Acceptance Rates

Overall Acceptance Rate 17 of 40 submissions, 43%
