DOI: 10.1145/2157689.2157797

Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction

Published: 05 March 2012

Abstract

Head motion occurs naturally and in synchrony with speech during human dialogue, and may carry paralinguistic information such as intentions, attitudes and emotions. Natural-looking head motion by a robot is therefore important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model on three types of humanoid robot: a very human-like android, "Geminoid F"; a typical humanoid robot with fewer facial degrees of freedom, "Robovie R2"; and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2". Analysis of subjective scores shows that the proposed model, which combines head tilting and nodding, generates head motion that is perceived as more natural than nodding alone or than directly mapping people's original motions without gaze information. We also find that an upward motion of the robot's face can be used by robots that lack a mouth to give the appearance that an utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by the robots, and verify that our generation model performs as well as directly mapping people's original motions with gaze information in terms of perceived naturalness.
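The rule-based generation described in the abstract can be pictured as a mapping from dialogue-act tags to head-motion commands. The following Python sketch is purely illustrative and is not taken from the paper: the dialogue-act labels, angles, timing values, and function names are assumptions chosen to show the overall structure of such a model, not the authors' actual rules.

```python
# Illustrative sketch only -- the dialogue-act labels, motion parameters, and
# the HeadMotion representation below are assumptions for exposition, not the
# rules proposed in the paper.

from dataclasses import dataclass


@dataclass
class HeadMotion:
    nod_angle_deg: float   # downward pitch for a nod (0 = no nod)
    tilt_angle_deg: float  # lateral roll for a head tilt (0 = no tilt)
    duration_s: float      # how long the motion takes


# Hypothetical mapping from dialogue-act tags to head-motion commands.
RULES = {
    "affirmation": HeadMotion(nod_angle_deg=15.0, tilt_angle_deg=0.0, duration_s=0.4),
    "backchannel": HeadMotion(nod_angle_deg=8.0, tilt_angle_deg=0.0, duration_s=0.3),
    "question": HeadMotion(nod_angle_deg=0.0, tilt_angle_deg=10.0, duration_s=0.6),
    "hesitation": HeadMotion(nod_angle_deg=0.0, tilt_angle_deg=6.0, duration_s=0.5),
}


def generate_head_motion(dialogue_act: str) -> HeadMotion:
    """Return a head-motion command for a dialogue-act tag.

    Unknown acts fall back to keeping the head still.
    """
    return RULES.get(dialogue_act, HeadMotion(0.0, 0.0, 0.0))


if __name__ == "__main__":
    for act in ("affirmation", "question", "statement"):
        print(act, generate_head_motion(act))
```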

Supplementary Material

JPG File (hri200.jpg)
AVI File (hri200.avi)

    Published In

    HRI '12: Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction
    March 2012
    518 pages
    ISBN:9781450310635
    DOI:10.1145/2157689

    In-Cooperation

    • IEEE-RAS: Robotics and Automation

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. dialogue acts
    2. eye gazing
    3. head motion
    4. motion generation

    Qualifiers

    • Research-article

    Conference

HRI '12: International Conference on Human-Robot Interaction
    March 5 - 8, 2012
    Boston, Massachusetts, USA

    Acceptance Rates

    Overall Acceptance Rate 268 of 1,124 submissions, 24%
