DOI: 10.1007/978-3-031-48060-7_15
Article

Training Comic Dialog in VR to Improve Presentation Skills

Published: 19 November 2023

Abstract

Manzai is a traditional style of comedy in Japanese culture in which two comedians, a straight man (tsukkomi) and a funny man (boke), engage in humorous conversation to make the audience laugh. We developed a first-person VR application that simulates Manzai to help users improve their public speaking skills and gain confidence when exchanging opinions. The system is built on body movements and conversations recorded in advance with motion capture, and the user's performance is evaluated in terms of multiple factors. The user dons the avatar of the human tsukkomi of a famous Japanese comic duo and performs Manzai with the VR image of the boke partner. After the performance is completed, the system evaluates it along several axes and presents the results to the user. The evaluation axes include measurements of pauses, movements, and smoothness, and the index is the degree of deviation from the movements of a professional comic actor, which serves as the ground truth. These factors are also important when exchanging opinions and speaking in front of an audience. We aim to enhance the user's experience and abilities by enabling them to objectively control these factors.
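As a rough illustration of the kind of scoring the abstract describes, the following Python sketch compares a user's recording against a professional reference along the three stated axes (pauses, movements, smoothness). It is a minimal, hypothetical example only: the function names, the jerk-based smoothness proxy, the voice-activity arrays, and the synthetic data are assumptions for illustration and are not taken from the paper's implementation.

import numpy as np

def pause_score(user_vad, ref_vad, frame_rate=30):
    # Deviation in total pause time (seconds), where 0 marks a silent frame.
    user_pause = np.sum(user_vad == 0) / frame_rate
    ref_pause = np.sum(ref_vad == 0) / frame_rate
    return abs(user_pause - ref_pause)

def movement_score(user_pose, ref_pose):
    # Mean per-frame joint-position deviation from the professional reference.
    # Both arrays have shape (frames, joints, 3).
    n = min(len(user_pose), len(ref_pose))
    return float(np.mean(np.linalg.norm(user_pose[:n] - ref_pose[:n], axis=-1)))

def smoothness_score(user_pose, ref_pose, frame_rate=30):
    # Compare mean jerk (third time derivative) magnitudes as a smoothness proxy.
    def mean_jerk(pose):
        jerk = np.diff(pose, n=3, axis=0) * frame_rate ** 3
        return float(np.mean(np.linalg.norm(jerk, axis=-1)))
    return abs(mean_jerk(user_pose) - mean_jerk(ref_pose))

# Example with synthetic data standing in for real motion-capture recordings.
frames, joints = 300, 21
rng = np.random.default_rng(0)
ref = np.cumsum(rng.normal(0, 0.01, (frames, joints, 3)), axis=0)
user = ref + rng.normal(0, 0.02, ref.shape)
ref_vad = rng.integers(0, 2, frames)
user_vad = rng.integers(0, 2, frames)

print("pause deviation (s):", pause_score(user_vad, ref_vad))
print("movement deviation (m):", movement_score(user, ref))
print("smoothness deviation:", smoothness_score(user, ref))

Lower deviation scores would indicate a performance closer to the professional ground truth on each axis; how the actual system weights or thresholds these axes is not specified here.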

Published In

HCI International 2023 – Late Breaking Papers: 25th International Conference on Human-Computer Interaction, HCII 2023, Copenhagen, Denmark, July 23–28, 2023, Proceedings, Part VII
Jul 2023
596 pages
ISBN:978-3-031-48059-1
DOI:10.1007/978-3-031-48060-7
Editors: Panayiotis Zaphiris, Andri Ioannou, Robert A. Sottilare, Jessica Schwarz, Fiona Fui-Hoon Nah, Keng Siau, June Wei, Gavriel Salvendy

Publisher

Springer-Verlag

Berlin, Heidelberg

Publication History

Published: 19 November 2023

Author Tags

  1. Communication
  2. Avatar
  3. First-person perspective
  4. Metaverse
