DOI: 10.1145/3610978.3640685
Short paper
Open access

Multi-modal Language Learning: Explorations on learning Japanese Vocabulary

Published: 11 March 2024

Abstract

We explore robot-assisted language learning with a social robot that teaches Japanese vocabulary. Specifically, we study whether the mode in which the referents of nouns are presented influences learning outcomes, and we hypothesise that multimodal presentation of referents leads to improved learning. Three conditions are tested: referents are presented through Japanese audio only, presented visually, or presented as physical objects that learners could pick up and manipulate. Learners were taught four words per condition and were distracted between conditions with general questions about the robot. We found a significant difference in the number of words learned between the audio-only and visual conditions, as well as between the audio-only and tactile conditions; no significant difference was found between the visual and tactile conditions. Our results therefore indicate that both the visual and tactile conditions are preferable to learning through audio alone.

Supplemental Material

Supplemental video (MP4 file)


      Published In

      HRI '24: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
      March 2024
      1408 pages
      ISBN: 9798400703232
      DOI: 10.1145/3610978
      This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. human-robot interaction
      2. multi-modal interaction
      3. robot-assisted language learning

      Conference

      HRI '24

      Acceptance Rates

      Overall Acceptance Rate 268 of 1,124 submissions, 24%
