- Research article
- Open access
Beyond learning with cold machine: interpersonal communication skills as anthropomorphic cue of AI instructor
International Journal of Educational Technology in Higher Education volume 21, Article number: 27 (2024)
Abstract
Prior research has explored the impact of diverse anthropomorphic interventions on the effectiveness of AI (artificial intelligence) instructors. However, the exploration of interpersonal communication skills (e.g., self-disclosure) as anthropomorphic conversational cues for AI instructors is rare. Considering the positive impact of the self-disclosure of human instructors and guided by the social penetration theory (Altman & Taylor, 1973) and computers are social actors (CASA) paradigm (Nass & Moon, 2000), this study explores the role of self-disclosure by AI instructors and the mediating role of emotional attachment between AI instructors’ self-disclosure and students’ learning experiences (learning interest and knowledge gain). Additionally, it examines the differences in students’ emotional attachment, learning interest, and knowledge gain between AI and human instructors. Through a 2 (AI instructor vs. human instructor) × 2 (self-disclosure: yes or no) experiment, this study concluded that 1) consistent with human instructors, self-disclosure by AI instructors led to higher emotional attachment, learning interest, and knowledge gain; 2) emotional attachment played an important mediating role in AI instructor self-disclosure and students’ learning interest and knowledge gain; and 3) in the context of self-disclosure, students exhibited similar levels of emotional attachment to both AI and human instructors, with no significant differences observed. Regarding learning outcomes, while students demonstrated a greater interest in learning during courses taught by AI instructors, the difference in knowledge gained from AI and human instructors was not significant. The results of this study contribute to the understanding of the anthropomorphic cues of AI instructors and provide recommendations and insights for the future use of AI instructors in educational settings.
Introduction
AI-based education is continuing to drive reform and innovation in educational settings. One novel approach involves the introduction of AI and robot instructors to provide high-quality, large-scale instruction to students (Chen et al., 2023; Edwards et al., 2016; Kim et al., 2020) in the classroom. AI instructors are being considered as AI affordances that can gradually replace human instructors (Kim, Merrill Jr, et al., 2022) and improve educational accessibility (Li et al., 2016). As stated by Go and Sundar (2019), if robotic agents are to assume the roles that have been played by humans in the past, it is necessary to make their interactions as human-like as possible.
In education-related research, much effort has been devoted to the anthropomorphism of AI instructors to improve the effectiveness of AI instructor–student interactions. Specifically, voice cues (Edwards et al., 2019; Kim, Merrill Jr, et al., 2022) and identity cues (human identities and relationships) (Kim, Merrill Jr, et al., 2022; Kim et al., 2021) are interventions that have proven to be useful in infusing human qualities into AI instructors. Drawing on the principles of social agency theory (Moreno et al., 2001), AI instructors can enhance their effectiveness by displaying a greater range of interactive social cues. However, from the perspective of human–machine communication, the dialogue cues (i.e., mimicking human language and interpersonal communication skills) of AI instructors have rarely been studied. According to the relational teaching approach (Graham et al., 1992), teaching is a process that draws on effective interpersonal communication skills. Perhaps the most promising line of research related to pedagogical agents is improving their communication skills (Johnson & Lester, 2018). Therefore, we consider interpersonal communication skills to improve the effectiveness of AI instructors.
Generally, research on instructional communication has focused primarily on the interpersonal communication practices of instructors, such as self-disclosure (Cayanus, 2004; Saylag, 2013). Instructor self-disclosure has been documented to positively influence student learning outcomes (Cayanus et al., 2009; Mazer et al., 2009; Song et al., 2016). As stated by Conaway et al. (2005), students need to perceive their instructor as a real instructor with feelings (e.g., establishing an emotional attachment to the instructor) rather than a cold computer that does nothing more than process and score their assignments. In the context of AI instructors, driven by the computers are social actors (CASA) paradigm (Nass & Moon, 2000), we suspect that self-disclosure by an AI instructor may be one technique that enables students to perceive the instructor's personhood, even though it is a computer. However, owing to the uncanny valley effect (Mori, 1970), we are also concerned about whether self-disclosure by AI instructors might instead trigger disgust or a sense of creepiness among students.
Despite the importance of instructor self-disclosure, minimal attention has so far been paid to self-disclosure by AI instructors, particularly its effects on students' perceptions (emotional attachment) and learning experiences (learning interest and knowledge gain). In this study, we argue that the self-disclosure of AI instructors does not affect student learning outcomes directly, but rather indirectly through students' emotional attachment to the AI instructor. Moreover, prior comparisons of AI and human instructors have observed significant differences in instructor credibility, learning outcomes, and social presence (Edwards et al., 2016; Kim et al., 2022a, 2022b, 2022c). Therefore, we predict that self-disclosure by AI and human instructors will lead to different levels of emotional attachment, learning interest, and knowledge gain.
This study has three objectives in this regard. First, we explored the role of self-disclosure by AI instructors in the emotional attachment and learning experiences of students. Second, the mediating role of emotional attachment between AI instructors’ self-disclosure and students’ learning experiences was analyzed. Finally, the investigation compared student perceptions and learning experiences of AI and human instructors. By exploring these questions, this study aimed to expand the anthropomorphic cues of AI instructors to better improve their teaching effectiveness. Moreover, we aim to contribute to the exploration of how interpersonal communication skills can facilitate AI instructor–student emotional attachment and interactions.
The remainder of this article is organized as follows. The Literature review section reviews the relevant definitions and literature and establishes the foundation for our research. The Methodology section details the 2 × 2 experimental design and describes the methods used in our study. The Results section presents the findings of the data analysis. The Discussion section interprets the results and explores their implications. Finally, we summarize the research findings and propose recommendations and strategies for the application of AI in education.
Literature review
AI instructors and anthropomorphism
AI instructors are machine instructors powered by AI technology (Kim et al., 2020). AI instructors come in several forms, both embodied and disembodied. Embodied AI instructors are presented in a tangible, physical form, for example, Little Sophia and DragonBot (Kim et al., 2021). In contrast, disembodied AI instructors lack a physical presence and can only be displayed on a screen. Presently, disembodied machine instructors are being employed in educational environments, such as Cognitive Tutor, Duolingo, and the AI-driven SnatchBot (Kim et al., 2021). In this study, the AI instructor was a virtual instructor on the screen without a physical presence.
Anthropomorphism has been widely conceptualized as the tendency to attribute human or human-like characteristics to non-human agents (Li & Suh, 2022). Previous studies have extensively explored the diverse anthropomorphic cues of AI instructors. For example, Edwards et al. (2019) and Kim, Merrill Jr, et al. (2022) explored the effects of various vocal characteristics on students’ learning outcomes. Kim, Merrill Jr, et al. (2022) compared the difference between machine voices and human voices. The results showed that human voices enhanced the perceived credibility of the students towards the AI instructors. In addition to making AI instructors sound more human-like, Edwards et al. (2019) provided an age-specific breakdown of their voice characteristics. From the perspective of social identity theory (Tajfel, 1978), the results reveal the importance of age identity. The identification of age and voice positively influenced the learning experience of students. Moreover, Kim et al. (2021) focused on the communication styles of AI instructors. More positive learning outcomes were observed when the AI instructors were relational rather than functional.
Instructor self-disclosure
Self-disclosure refers to the revealing of personal information to others (Cozby, 1973). According to the social penetration theory (Altman & Taylor, 1973), self-disclosure is regarded as a crucial first step in creating interpersonal relationships (Song et al., 2016). Research has repeatedly supported the premise that sharing private information with others promotes intimacy (Berg & Archer, 1983; Collins & Miller, 1994; Mou et al., 2023). In the field of human–computer interaction (HCI), the self-disclosure of computers (chatbots and robots) has always been a topic of interest for researchers. For example, chatbots and robots offering self-disclosure or empathy have been widely shown to alleviate psychological problems, such as stress, and provide people with emotional support (Lee et al., 2020; Liu & Sundar, 2018; Meng & Dai, 2021).
In educational research, instructor self-disclosure is defined as the conscious and deliberate disclosure of personal information by instructors. The content of self-disclosure includes, but is not limited to, personal history and experiences, interests, opinions about the world or individuals, and evaluations and perceptions of ongoing classroom events (Cayanus & Martin, 2008). Because textbooks often fall short of providing current and genuine examples, instructors can effectively illustrate and explain the content of their classes by sharing their backgrounds, experiences, and perspectives (Cayanus & Martin, 2008; Goldstein & Benassi, 1994). Instructor self-disclosure is a rich personal source of communication between students and instructors in the classroom (Fusani, 1994). Recently, considerable research effort has been devoted to exploring the role of self-disclosure by human instructors. Some studies have observed that instructor self-disclosure helps to reduce the psychological distance between instructors and students, thereby increasing the closeness of class participants (Downs et al., 1988; Song et al., 2019). Moreover, some studies have revealed that students' learning experiences, for example, engagement (Jebbour & Mouaid, 2019) and motivation (Goldstein & Benassi, 1994), are influenced by instructors' level of self-disclosure.
In the current study, AI instructor self-disclosure was conceptualized as the proactive conveyance of information by the AI instructor to learners regarding its own characteristics and functionalities. This self-disclosure includes detailed information on the AI's technical background, its developmental history, teaching methodologies and strategies, as well as comprehensive descriptions of how it interacts with learners and provides feedback. As AI instructors are a new technology for most students, we believe that self-disclosure by AI instructors may be an important initial relational behavior. Despite the possible uncanny valley effect (Mori, 1970), previous meta-analyses (Blut et al., 2021; Roesler et al., 2021; Schneider et al., 2019) have consistently shown a positive effect of anthropomorphism. Therefore, based on prior studies and the CASA paradigm (Nass & Moon, 2000), this study hypothesizes that self-disclosure by AI instructors leads to more positive student perceptions and learning outcomes. Accordingly, we formulated the following research hypotheses:
- H1: Compared to non-self-disclosure, self-disclosure by AI instructors increases students' emotional attachment.
- H2: Compared to non-self-disclosure, self-disclosure by AI instructors increases students' learning interest.
- H3: Compared to non-self-disclosure, self-disclosure by AI instructors increases students' knowledge gain.
Mediating role of emotional attachment
Emotional attachment (emotional bond) is based on emotional empathy, understanding, and emotional responsiveness and can facilitate emotional closeness and connection between individuals (Loureiro et al., 2012). Scholars commonly perceive emotional attachment as an inherent human requirement that develops instinctively and without conscious effort (You & Robert, 2018). Emotional bonds can form between humans and various entities, encompassing brands, products, and even work endeavors (Ahmadi & Ataei, 2024; Mamun et al., 2023; Na et al., 2023). In the field of HCI, when a machine, such as a computer, is the initiator of an interaction, individuals instinctively employ preconceived notions associated with computers, such as being mechanical, impartial, lacking emotions, and distant. This subsequently affects the results of the interaction. Therefore, the importance of emotional attachment has been widely emphasized in both humans and machines (Laestadius et al., 2022; Zhang & Rau, 2023).
Prior research has consistently reported on the importance of self-disclosure for affective bonding (Davidson, 2011; Ladany et al., 2001; Myers, 1998) because self-disclosure is closely related to trust, liking, and intimacy (Reis, 2007). In accordance with attachment theory (Bretherton, 1985), emotional attachment is recognized as a key element in creating deep connections and empathy between teachers and students (Frymier & Houser, 2000). Extensive pedagogical research has demonstrated that emotional attachment between teachers and students, as a predictor or inducer, has a positive impact on learning outcomes (Hagenauer & Volet, 2014; Song et al., 2016; Zhang, Che, Nan, & Kim, 2023). However, emotional attachment may play a more dynamic role beyond being a predictor or outcome variable. As mentioned earlier, the current study predicted that self-disclosure by AI instructors would increase students' emotional attachment, interest in learning, and knowledge acquisition (H1–H3). This section further proposes that emotional attachment mediates this association. Moreover, according to emotional response theory (Richmond et al., 2015), the way instructors communicate verbally and nonverbally can affect how learners behave by influencing their emotional reactions (Wang et al., 2023).
Although limited, the mediating role of emotional attachment has been tested in HCI studies (Kim et al., 2022a, 2022b, 2022c; Na et al., 2023). Therefore, in the context of AI instructors, based on evidence from empirical studies, we predict that emotional attachment plays the role of a mediating variable.
- H4: The relationship between AI instructor self-disclosure (vs. non-self-disclosure) and students' learning interest is mediated by instructor–student emotional attachment.
- H5: The relationship between AI instructor self-disclosure (vs. non-self-disclosure) and students' knowledge gain is mediated by instructor–student emotional attachment.
AI instructors versus human instructors
Based on a review of the existing literature on the positive impact of human instructor self-disclosure, we initially explored the potential impact of AI instructor self-disclosure on students' emotional attachment, interest in learning, and knowledge gain through a series of hypotheses, as well as the possible mediating role of emotional attachment. However, while the CASA paradigm operates under the assumption that individuals react to computers in a manner akin to interacting with humans (Nass & Moon, 2000), it does not suggest a universal acceptance of computers as genuine individuals. Previous studies have indicated that individuals exhibit varying attitudes or reactions towards computers and humans within the education domain (Edwards et al., 2016; Meng & Dai, 2021).
Therefore, to further enhance the depth and breadth of our study, we introduced supplementary hypotheses aimed at exploring the comparative effects of self-disclosure between AI and human instructors. Specifically, the following hypotheses are proposed:
- H6: In the context of instructor self-disclosure, students taught by AI instructors will exhibit a different level of emotional attachment towards their instructors than students taught by human instructors.
- H7: In the context of instructor self-disclosure, there will be differences in learning interest and knowledge gain between students taught by AI instructors and those taught by human instructors.
Theoretical model
The research model used in this study is shown in Fig. 1.
Methodology
Overall method
This study featured a 2 (AI instructor vs. human instructor) × 2 (instructor self-disclosure: yes vs. no) between-subjects factorial design. Participants were randomly assigned to one of the four experimental conditions. The project was approved by the university's Institutional Review Board.
Participants
The participants in this study were undergraduate students from the same department at a northern university in China. A total of 211 students participated in this study. Participants who completed the experiment were rewarded with a supermarket voucher worth 30 RMB (US$4). Participant information is listed in Table 1.
Instruments
We pre-recorded four sets of instructional videos related to machine learning taught by self-disclosing AI instructors, non-self-disclosing AI instructors, self-disclosing human instructors, and non-self-disclosing human instructors. The class taught by the AI instructor was developed using an existing virtual human development tool (https://zenvideo.qq.com/).
All teaching content was provided by a university professor to ensure that the lesson plans and teaching videos were scientifically sound and feasible. The contents of these four sets of instructional videos were identical, with only two differences: the subject of instruction (AI vs. human) and the language used (self-disclosure vs. non-self-disclosure).
Procedure
This experiment was conducted online via Tencent Meeting. Participants were asked to stay in a quiet room for the duration of the experiment. First, after informed consent was obtained, the participants were randomly assigned to one of four experimental groups: Group 1 (N = 52): AI instructor with self-disclosure; Group 2 (N = 53): AI instructor without self-disclosure; Group 3 (N = 53): human instructor with self-disclosure; and Group 4 (N = 53): human instructor without self-disclosure. Subsequently, they were asked to join separate rooms and keep their cameras on throughout the meeting. Before the experiment began, the participants completed a pre-test of their basic knowledge of machine learning. After completing the pre-test, the students participated in an online class taught by either a human instructor or an AI instructor. At the end of the class, participants took a brief exam to test their learning outcomes and then completed a questionnaire. The participants were allowed to turn off their cameras and exit the room only after all experimental procedures were completed. Each session lasted one and a half hours. The flow of the experiment is depicted in Fig. 2.
Measurements
In this study, objective and subjective methods (i.e., quizzes and questionnaires) were used to assess the students' learning outcomes, perceptions, and learning experiences.
Questionnaire
A questionnaire was used to measure the students' emotional attachment and learning interest. All questions in the survey were adapted from existing studies (Nan et al., 2022) and measured on a 5-point Likert scale (Joshi et al., 2015).
To measure the students' emotional attachment to the instructors, six items (e.g., "I have emotional resonance with the instructor") modified from Jiménez and Voss (2014) were employed. To measure student learning interest, five items modified from Fryer et al. (2019) were used (e.g., "I really enjoyed the instructor's course" and "I am very interested in the content shared by the instructor").
Quiz
To assess the subjects' prior knowledge and knowledge acquisition after learning, our research team and two university professors carefully designed the pre-tests and post-tests based on the course content. In designing the quizzes, we used isomorphic items to ensure the validity of our assessments and to avoid measurement reactivity. The questions in the pre- and post-tests were equivalent in terms of content, difficulty, and cognitive demands, but differed in their presentation. Specifically, the questions were designed such that each pre-test question had an equivalent post-test question.
For example, one question in the pre-test was "What is one of the goals of machine learning?" The options for this question were diverse, ranging from the automatic extraction of patterns from data to make predictions, the manufacturing of robots capable of sensing emotions, and the facilitation of machine-created artworks, to the full mimicry of the human brain's functioning. In the post-test, the corresponding question asked "Which of the following is a goal of machine learning?" The options were revised to include improving computer program performance through algorithms, envisioning artificial intelligence replacing all human jobs, creating systems that operate without input, and developing autonomous weapon systems. In total, the pre-test and post-test each contained ten single-choice questions.
Data analysis
In this study, SPSS version 29 was used for data analysis. After the descriptive analyses, we performed t-tests and an ANOVA across the four groups. Finally, we used Model 4 of the PROCESS macro (Hayes, 2012) to explore the indirect association between instructor self-disclosure and learning outcomes through emotional attachment. Further, 95% bias-corrected bootstrap confidence intervals (CIs) based on 5000 bootstrap resamples were used to examine the significance of the indirect effects.
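The study ran this analysis in SPSS's PROCESS macro; the underlying resampling logic of a simple-mediation bootstrap (Model 4) can be sketched in Python as follows. The data and variable names here are synthetic and purely illustrative, not the study's data:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b from two OLS fits: path a from m ~ x, and path b from
    y ~ x + m (the structure of a simple mediation model)."""
    n = len(x)
    a = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones(n), x, m]), y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the indirect effect, using 5000
    resamples as in the paper's analysis settings."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        est[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Synthetic illustration: group coding 1/2 (as in the paper's mediation
# setup), a mediator ("emotional attachment"), an outcome ("learning interest")
rng = np.random.default_rng(0)
x = rng.integers(1, 3, 105).astype(float)
m = 2.0 - 0.8 * x + rng.normal(0, 0.5, 105)
y = 1.0 + 0.6 * m + 0.1 * x + rng.normal(0, 0.5, 105)
lo, hi = bootstrap_ci(x, m, y)
print(lo, hi)  # a CI excluding zero indicates a significant indirect effect
```

Note that PROCESS uses bias-corrected rather than plain percentile intervals; the percentile version above is the simpler variant of the same idea.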
Results
Reliability and validity examinations
We calculated Cronbach’s α coefficient, factor loadings, average variance extracted (AVE), and composite reliability (CR). The recommended thresholds for Cronbach’s alpha, factor loadings, AVE, and CR are 0.70, 0.60, 0.50, and 0.70, respectively (Fornell & Larcker, 1981). As shown in Table 2, all values met the recommended levels.
Furthermore, following the suggestions of previous research (Fornell & Larcker, 1981), we confirmed that the square root of each AVE value was greater than the correlation coefficients (as shown in Table 3). Therefore, the questionnaire passed the validity test.
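The three reliability indices reported above follow standard formulas; a minimal sketch of how they are computed (the six loadings below are hypothetical, not the values from Table 2):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_var / total_var)

def ave_and_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from standardized factor loadings (Fornell & Larcker, 1981)."""
    lam = np.asarray(loadings, dtype=float)
    err = 1 - lam ** 2                           # item error variances
    ave = np.mean(lam ** 2)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + err.sum())
    return ave, cr

# Hypothetical loadings for a six-item scale
ave, cr = ave_and_cr([0.78, 0.81, 0.74, 0.80, 0.76, 0.79])
print(round(ave, 2), round(cr, 2))  # both above the 0.50 / 0.70 thresholds
```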
Pre-test and post-test of knowledge gain
Before testing the hypotheses, we analyzed the results of the pre- and post-tests to ensure that the four groups were comparable. First, we compared the mean pre-test scores of the four groups. As listed in Table 4, the mean scores were 2.58, 2.57, 2.60, and 2.56. The differences between the groups were not statistically significant (F = 0.02, p = 0.996).
Subsequently, we compared the pre- and post-test scores within each group. As listed in Table 5, the pre- to post-test gains were significant in all four groups (p < 0.001); therefore, we concluded that students in all groups gained knowledge during the experiment.
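The two checks above (between-group comparability, within-group gain) correspond to a one-way ANOVA and a paired t-test. A sketch with simulated scores (the numbers are illustrative, loosely centred near the reported pre-test means, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated pre-test scores (out of 10) for four groups of 53 students,
# drawn from the same distribution so the groups should be comparable
groups_pre = [np.clip(rng.normal(2.6, 1.0, 53), 0, 10).round() for _ in range(4)]

# One-way ANOVA: are the four groups comparable before instruction?
f_stat, p_between = stats.f_oneway(*groups_pre)

# Paired t-test within one group: did scores rise from pre- to post-test?
post = groups_pre[0] + rng.normal(4.5, 1.0, 53)  # simulated learning gain
t_stat, p_gain = stats.ttest_rel(groups_pre[0], post)
print(p_between, p_gain)
```

A non-significant `p_between` supports comparability; a significant `p_gain` indicates knowledge gain, mirroring the logic of Tables 4 and 5.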
Differences between AI instructor self-disclosure and non-self-disclosure
To test H1 to H3, which examined whether students have a better learning experience when AI instructors self-disclose, we compared Group 1 and Group 2. As shown in Fig. 3, compared to non-self-disclosure, self-disclosure by the AI instructor resulted in higher emotional attachment (Mean = 3.61, SD = 0.37, p < 0.001), learning interest (Mean = 4.15, SD = 0.61, p < 0.001), and knowledge acquisition (Mean = 7.08, SD = 0.71, p = 0.03). Hence, H1 to H3 are supported.
Mediating role of emotional attachment
We analyzed the mediating effects of emotional attachment. For the mediation analyses, the subgroups (AI instructor self-disclosure vs. AI instructor non-self-disclosure) were set as the independent variables. We used codes 1 (AI instructor self-disclosure) and 2 (AI instructor non-self-disclosure) for the groups as the defined values for the independent variables. The results are shown in Fig. 4 and Table 6.
The AI instructor's self-disclosure (vs. non-self-disclosure) directly affected students' learning interest (b = -0.38, SE = 0.15, 95% CI [-0.67, -0.08]). Moreover, the indirect relationship between AI instructor self-disclosure and learning interest through emotional attachment was also significant (b = -0.45, SE = 0.10, 95% CI [-0.64, -0.23]). Therefore, we conclude that emotional attachment mediates the relationship between an AI instructor's self-disclosure and students' learning interest.
Regarding knowledge gain, a suppression effect was observed (MacKinnon et al., 2000). The mediating role of emotional attachment was so influential that the direct effect between grouping and knowledge acquisition became non-significant. Self-disclosure (vs. non-disclosure) by the AI instructor did not directly affect students' knowledge acquisition (b = -0.28, SE = 0.13, 95% CI [-0.54, 0.03]) but affected it indirectly through emotional attachment (b = -0.58, SE = 0.16, 95% CI [-0.19, -0.29]). Given the suppression effect, we further calculated the effect size, that is, the ratio of the indirect effect to the direct effect: |ab/c'| = |-1.55 × 0.56/0.44| = 1.97. Therefore, we conclude that emotional attachment mediates the relationship between an AI instructor's self-disclosure and students' knowledge acquisition.
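The reported effect-size ratio can be reproduced directly from the quoted path coefficients:

```python
# Effect-size ratio for the suppression effect reported in the text:
# |ab / c'| with a = -1.55, b = 0.56, c' = 0.44 (values quoted above).
a, b, c_prime = -1.55, 0.56, 0.44
ratio = abs(a * b / c_prime)
print(round(ratio, 2))  # 1.97: the indirect effect is ~2x the direct effect
```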
Our findings suggest that the mediating role of emotional attachment is significant for both learning interest (affective learning) and knowledge acquisition (behavioral learning). AI instructor self-disclosure triggers a higher level of emotional attachment than non-self-disclosure, which improves students’ learning outcomes. Therefore, H4 and H5 are supported.
Comparison of differences between AI and human instructors
To test H6 and H7, we compared the values of the human instructor and AI instructor groups, and the results are shown in Fig. 5. All four groups differed significantly in terms of emotional attachment (F = 49.932 and p < 0.001), learning interest (F = 65.583 and p < 0.001), and knowledge acquisition (F = 2.965 and p = 0.033).
Further analysis explored the specific differences in emotional attachment, learning interest, and knowledge gain between the groups. The results are shown in Figs. 6–8. Figure 6 compares students' emotional attachment to the instructor across groups. Students reported higher emotional attachment to the self-disclosing human instructor (Mean = 3.67, SD = 0.62) than to the self-disclosing AI instructor (Mean = 3.61, SD = 0.37). However, this difference was not statistically significant (p = 0.456).
Figure 7 compares the learning interest of the different groups. Compared to the self-disclosing human instructor, students showed a higher level of learning interest with the self-disclosing AI instructor (Mean = 4.15, SD = 0.61). The difference was statistically significant (p < 0.001).
Figure 8 compares the students’ knowledge gain in the different groups. The highest value of knowledge gain was for the self-disclosing AI instructor (Mean = 7.08 and SD = 0.71), followed closely by the self-disclosing human instructor (Mean = 7.00 and SD = 0.65). No significant differences were observed between the AI and human instructors (p = 0.536).
Our findings suggest that, in the case of self-disclosure, although students' emotional attachment differed between AI and human instructors, this difference was not significant. Furthermore, regarding differences in teaching effectiveness, a statistically significant difference was found only for learning interest.
Discussion
This study explored the role of self-disclosure by AI instructors. Additionally, the mediating role of emotional attachment between AI instructors' self-disclosure and students' learning experiences was explored. Finally, the differences in emotional attachment, learning interest, and knowledge gain between AI and human instructors were examined. This study is one of the first to explore and confirm the important role of self-disclosure by AI instructors. The findings revealed that, similar to human instructors, AI instructor self-disclosure had a positive impact on students' emotional attachment, learning interest, and knowledge gain. Moreover, emotional attachment played a mediating role in the relationship between AI self-disclosure (yes/no) and student learning outcomes (learning interest and knowledge gain). Furthermore, an additional analysis suggested that students do not always respond and behave in the same way towards humans and AI in all settings (a significant difference was found in learning interest), which is in line with previous studies (Edwards et al., 2016; Meng & Dai, 2021; Sundar & Kim, 2019).
Theoretical implications
This study explores the role of self-disclosure in the context of AI instructors, an innovative research direction, and makes important theoretical contributions to the field of educational technology.
First, by innovatively introducing self-disclosure from human instructors to AI instructors, this study not only expands the application of the CASA paradigm (Nass & Moon, 2000) in the field of educational technology, but also enriches the anthropomorphic cues of AI instructors. Rather than simply being a medium for information delivery, AI instructors become social actors capable of establishing emotional connections, facilitating learners’ emotional engagement, and improving their learning efficiency.
Second, this study significantly expands the understanding and practice of the relational teaching approach (Graham et al., 1992) and social penetration theory (Altman & Taylor, 1973) in educational-related human-AI interactions. By examining how the self-disclosure by AI instructors affects learners’ emotional attachment, interest in learning, and knowledge acquisition, this study revealed ways to construct effective interpersonal communication in technology-mediated learning environments. Additionally, this study contributes to a better understanding of how interpersonal communication skills facilitate AI instructor–student emotional attachment and learning outcomes in an AI-driven machine education setting.
Third, this study suggests the importance of establishing an emotional attachment between students and AI educators. According to social identity theory (Ashforth & Mael, 1989), students seek empathy and connection with others in a learning environment. Self-disclosure by an AI instructor can be viewed as a method of establishing an emotional attachment (Davidson, 2011; Ledbetter et al., 2011), which may motivate students to identify with and develop a closer bond with the AI instructor. This emotional attachment further influences student motivation and engagement in knowledge acquisition as students may be more inclined to interact deeply with entities to which they feel close (Song et al., 2016; Zhang, Che, Nan, Li, et al., 2023). The findings of this study suggest that even non-human lecturers can successfully simulate this process of emotional communication through self-disclosure, a humane communication strategy that promotes learners’ emotional attachment and enhances their interest in the content and efficiency of knowledge absorption.
Finally, this study demonstrates that people do not behave identically towards humans and computers in all settings (e.g., learning interest), in line with previous studies (Edwards et al., 2016; Meng & Dai, 2021; Sundar & Kim, 2019). Regarding learning interest, students consistently showed more interest in learning from the AI instructors. This finding is consistent with previous studies revealing that technology stimulates greater student interest in learning (Fryer et al., 2019; Liu et al., 2022). What surprised us, however, was that although algorithm-based affective cues may appear routine and insincere (Mou & Xu, 2017), self-disclosure by AI instructors elicited emotional attachment similar to that elicited by self-disclosure from human instructors.
Practical implications
The results provide valuable insights for educational technology developers and educators. Incorporating human elements, such as self-disclosure, into AI can significantly enhance the educational value of AI technology. Therefore, edtech developers should design interactive experiences that are more anthropomorphic and emotional, for example through storytelling, sharing experiences, or expressing emotional responses, to strengthen learners' emotional attachment and improve their overall learning experiences. For educational practitioners, it is crucial to select AI teaching tools that provide highly humanized and emotionally interactive experiences in order to build more interactive and engaging learning environments. Finally, close partnership and interdisciplinary collaboration between edtech developers and educators are needed to deeply explore the emotional dynamics of teaching and learning processes and the specific needs of learners. The use of AI technology in education should focus not only on technological advancement but also on the user experience, ensuring that AI solutions provide an engaging and effective learning experience and adhere to the principles of educational psychology.
Limitations and suggestions for future studies
Despite its contributions, this study has limitations that future research should address. First, this study used an online experiment in which all video lessons were recorded in advance. Although this is a common format in online learning, it limits interaction between students and instructors. Future research should therefore consider offline real-time lessons while exploring the role of reciprocal self-disclosure between students and AI instructors. Furthermore, previous studies have claimed that instructor self-disclosure contributes to a more satisfying teacher–student relationship (Song et al., 2016); therefore, the establishment of an AI instructor–student relationship is anticipated. However, relationship building is a long-term process, and our experimental setup did not fulfill this condition; we encourage researchers to conduct longitudinal studies exploring the effect of AI instructor self-disclosure on the establishment of AI instructor–student relationships. Moreover, this study was conducted with students from a single university in China, and the findings may not generalize to other populations. Future research could examine the role of AI instructor self-disclosure with samples of university students from different countries.
Finally, in our study, we designed two different sets of questions for the pre-test and post-test to minimize the impact of measurement reactivity on the assessment of students' knowledge acquisition. However, there remains a risk that our measurements are not entirely accurate. As suggested by Chevalier et al. (2022), this risk can be mitigated by randomly but evenly distributing the two sets of quizzes among students in the pre-test; comparable average scores across students who took the different quizzes would then validate that the two sets of questions are of equivalent difficulty. In the post-test phase, each participant would complete an additional questionnaire. Given the rigor of this method, we recommend that future research adopt this approach to ensure that test designs evaluate students' knowledge acquisition more accurately and effectively.
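The equivalence check described above can be sketched in a few lines of analysis code. The following is a minimal, purely illustrative sketch with simulated scores (not data from this study or from Chevalier et al., 2022): quiz versions are assigned randomly but evenly at pre-test, and an independent-samples t-test compares the mean scores of the two versions.

```python
# Illustrative sketch: checking that two quiz versions are of comparable
# difficulty. Scores below are simulated placeholders, not study data.
import random
from statistics import mean
from scipy import stats

random.seed(42)

students = [f"s{i}" for i in range(60)]
random.shuffle(students)
half = len(students) // 2
group_a, group_b = students[:half], students[half:]  # even random split

# Placeholder pre-test scores; in practice these come from the pre-test.
scores_a = [random.gauss(70, 10) for _ in group_a]  # quiz version A
scores_b = [random.gauss(70, 10) for _ in group_b]  # quiz version B

# If the versions are comparably difficult, mean scores should not
# differ significantly (independent-samples t-test).
t, p = stats.ttest_ind(scores_a, scores_b)
print(f"M_A={mean(scores_a):.1f}, M_B={mean(scores_b):.1f}, "
      f"t={t:.2f}, p={p:.3f}")
```

A non-significant difference (p above the chosen alpha) would support treating the two quiz versions as equivalent in difficulty.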
Conclusion
This study explored the impact of AI instructor self-disclosure on students' emotional attachment, learning interest, and knowledge gain; it then examined the mediating role of emotional attachment between AI instructors' self-disclosure and students' learning experience. Finally, differences in student perceptions and learning experiences between AI and human instructors were compared. The findings suggest that AI instructor self-disclosure promotes greater emotional attachment, learning interest, and knowledge gain. Emotional attachment plays an important mediating role between AI instructors' self-disclosure and students' learning experience (learning interest and knowledge acquisition). Moreover, in the context of self-disclosure, students exhibited similar levels of emotional attachment to AI and human instructors, with no significant differences observed. Regarding learning outcomes, while students demonstrated greater interest in learning during courses taught by AI instructors, the difference in knowledge gained from AI and human instructors was not significant.
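The mediation result summarized above (self-disclosure → emotional attachment → learning outcomes) is conventionally tested with bootstrapped confidence intervals for the indirect effect, as in Hayes's PROCESS macro (Hayes, 2012). The following is a hypothetical sketch of that logic on simulated data, not a reproduction of this study's analysis: the indirect effect is the product of the a-path (mediator on predictor) and b-path (outcome on mediator, controlling for the predictor), resampled to form a percentile confidence interval.

```python
# Hypothetical sketch of a percentile-bootstrap test of an indirect effect
# (X -> M -> Y), mirroring the logic of PROCESS Model 4. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.integers(0, 2, n).astype(float)      # self-disclosure: 0 = no, 1 = yes
m = 0.5 * x + rng.normal(0, 1, n)            # emotional attachment (mediator)
y = 0.6 * m + 0.1 * x + rng.normal(0, 1, n)  # learning interest (outcome)

def indirect(x, m, y):
    # a-path: regress M on X; b-path: regress Y on M, controlling for X.
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for indirect effect: [{lo:.3f}, {hi:.3f}]")
# A CI that excludes zero indicates a significant indirect (mediated) effect.
```

This percentile-bootstrap approach avoids assuming normality of the indirect effect's sampling distribution, which is why it is the default in PROCESS-style mediation analysis.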
Our study opens several avenues for future research. Given the promising applications of AI technology in education, investigating how advances in anthropomorphic cues can enhance learning experiences is becoming increasingly important. Future research should continue to enrich the representation of anthropomorphic cues and explore their impact on learning outcomes. Another promising, potentially groundbreaking direction is to explore the capacity of anthropomorphic and personalized AI instructors to adapt to individual learning styles and needs. Finally, understanding the ethical implications of AI in educational settings and ensuring its responsible use remain important areas for future research. These directions not only promise to expand our understanding of AI in education but also aim to realize its full potential to contribute to higher-quality education.
Availability of data and materials
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Ahmadi, A., & Ataei, A. (2024). Emotional attachment: A bridge between brand reputation and brand advocacy. Asia-Pacific Journal of Business Administration, 16(1), 1–20. https://doi.org/10.1108/APJBA-11-2021-0579
Altman, I., & Taylor, D. A. (1973). Social penetration: The development of interpersonal relationships. Holt, Rinehart & Winston. https://psycnet.apa.org/record/1973-28661-000
Ashforth, B. E., & Mael, F. (1989). Social identity theory and the organization. Academy of Management Review, 14(1), 20–39. https://doi.org/10.5465/amr.1989.4278999
Berg, J. H., & Archer, R. L. (1983). The disclosure-liking relationship: Effects of self-perception, order of disclosure, and topical similarity. Human Communication Research, 10(2), 269–281. https://doi.org/10.1111/j.1468-2958.1983.tb00016.x
Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49, 632–658. https://doi.org/10.1007/s11747-020-00762-y
Bretherton, I. (1985). Attachment theory: Retrospect and prospect. Monographs of the Society for Research in Child Development, 50(1/2), 3–35. https://doi.org/10.2307/3333824
Cayanus, J. L. (2004). Effective instructional practice: Using teacher self-disclosure as an instructional tool. Communication Teacher, 18(1), 6–9. https://doi.org/10.1080/1740462032000142095
Cayanus, J. L., & Martin, M. M. (2008). Teacher self-disclosure: Amount, relevance, and negativity. Communication Quarterly, 56(3), 325–341. https://doi.org/10.1080/01463370802241492
Cayanus, J. L., Martin, M. M., & Goodboy, A. K. (2009). The relation between teacher self-disclosure and student motives to communicate. Communication Research Reports, 26(2), 105–113. https://doi.org/10.1080/08824090902861523
Chen, S., Qiu, S., Li, H., Zhang, J., Wu, X., Zeng, W., & Huang, F. (2023). An integrated model for predicting pupils’ acceptance of artificially intelligent robots as teachers. Education and Information Technologies, 28(9), 11631–11654. https://doi.org/10.1007/s10639-023-11601-2
Chevalier, M., Giang, C., El-Hamamsy, L., Bonnet, E., Papaspyros, V., Pellet, J.-P., Audrin, C., Romero, M., Baumberger, B., & Mondada, F. (2022). The role of feedback and guidance as intervention methods to foster computational thinking in educational robotics learning activities for primary school. Computers & Education, 180, 104431. https://doi.org/10.1016/j.compedu.2022.104431
Collins, N. L., & Miller, L. C. (1994). Self-disclosure and liking: A meta-analytic review. Psychological Bulletin, 116(3), 457. https://doi.org/10.1037/0033-2909.116.3.457
Conaway, R. N., Easton, S. S., & Schmidt, W. V. (2005). Strategies for enhancing student interaction and immediacy in online courses. Business Communication Quarterly, 68(1), 23–35. https://doi.org/10.1177/1080569904273300
Cozby, P. C. (1973). Self-disclosure: A literature review. Psychological Bulletin, 79(2), 73. https://doi.org/10.1037/h0033950
Davidson, C. (2011). The relation between supervisor self-disclosure and the working alliance among social work students in field placement. Journal of Teaching in Social Work, 31(3), 265–277. https://doi.org/10.1080/08841233.2011.580248
Downs, V. C., Javidi, M. M., & Nussbaum, J. F. (1988). An analysis of teachers’ verbal communication within the college classroom: Use of humor, self-disclosure, and narratives. Communication Education, 37(2), 127–141. https://doi.org/10.1080/03634528809378710
Edwards, A., Edwards, C., Spence, P. R., Harris, C., & Gambino, A. (2016). Robots in the classroom: Differences in students’ perceptions of credibility and learning between “teacher as robot” and “robot as teacher.” Computers in Human Behavior, 65, 627–634. https://doi.org/10.1016/j.chb.2016.06.005
Edwards, C., Edwards, A., Stoll, B., Lin, X., & Massey, N. (2019). Evaluations of an artificial intelligence instructor’s voice: Social Identity Theory in human-robot interactions. Computers in Human Behavior, 90, 357–362. https://doi.org/10.1016/j.chb.2018.08.027
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177/002224378101800104
Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot learning partners: Connecting learning experiences, interest and competence. Computers in Human Behavior, 93, 279–289. https://doi.org/10.1016/j.chb.2018.12.023
Frymier, A. B., & Houser, M. L. (2000). The teacher-student relationship as an interpersonal relationship. Communication Education, 49(3), 207–219. https://doi.org/10.1080/03634520009379209
Fusani, D. S. (1994). “Extra‐class” communication: Frequency, immediacy, self‐disclosure, and satisfaction in student‐faculty interaction outside the classroom. Journal of Applied Communication Research, 22(3), 232–255. https://doi.org/10.1080/00909889409365400
Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316. https://doi.org/10.1016/j.chb.2019.01.020
Goldstein, G. S., & Benassi, V. A. (1994). The relation between teacher self-disclosure and student classroom participation. Teaching of Psychology, 21(4), 212–217. https://doi.org/10.1207/s15328023top2104_2
Graham, E. E., West, R., & Schaller, K. A. (1992). The association between the relational teaching approach and teacher job satisfaction. Communication Reports, 5(1), 11–22. https://doi.org/10.1080/08934219209367539
Hagenauer, G., & Volet, S. E. (2014). Teacher–student relationship at university: An important yet under-researched field. Oxford Review of Education, 40(3), 370–388. https://doi.org/10.1080/03054985.2014.921613
Hayes, A. F. (2012). PROCESS: A versatile computational tool for observed variable mediation, moderation, and conditional process modeling [White paper]. http://www.claudiaflowers.net/rsch8140/Hayesprocess.pdf
Jebbour, M., & Mouaid, F. (2019). The impact of teacher self-disclosure on student participation in the university English language classroom. International Journal of Teaching and Learning in Higher Education, 31(3), 424–436. https://files.eric.ed.gov/fulltext/EJ1244991.pdf
Jiménez, F. R., & Voss, K. E. (2014). An alternative approach to the measurement of emotional attachment. Psychology & Marketing, 31(5), 360–370. https://doi.org/10.1002/mar.20700
Johnson, W. L., & Lester, J. C. J. A. M. (2018). Pedagogical agents: Back to the future. AI Magazine, 39(2), 33–44. https://doi.org/10.1609/aimag.v39i2.2793
Joshi, A., Kale, S., Chandel, S., & Pal, D. K. (2015). Likert scale: Explored and explained. British Journal of Applied Science & Technology, 7(4), 396. https://urlzs.com/Ptjef
Kim, J., Merrill, K., Kun, X., & Sellnow, D. D. (2022). Embracing AI-based education: Perceived social presence of human teachers and expectations about machine teachers in online education. Human-Machine Communication, 4, 169–184. https://doi.org/10.3316/informit.461477131588157
Kim, J., Kang, S., & Bae, J. (2022a). Human likeness and attachment effect on the perceived interactivity of AI speakers. Journal of Business Research, 144, 797–804. https://doi.org/10.1016/j.jbusres.2022.02.047
Kim, J., Merrill, K., Jr., Xu, K., & Kelly, S. (2022b). Perceived credibility of an AI instructor in online education: The role of social presence and voice features. Computers in Human Behavior, 136, 107383. https://doi.org/10.1016/j.chb.2022.107383
Kim, J., Merrill, K., Xu, K., & Sellnow, D. D. (2020). My teacher is a machine: Understanding students’ perceptions of AI teaching assistants in online education. International Journal of Human-Computer Interaction, 36(20), 1902–1911. https://doi.org/10.1080/10447318.2020.1801227
Kim, J., Merrill, K., Jr., Xu, K., & Sellnow, D. D. (2021). I like my relational machine teacher: An AI instructor’s communication styles and social presence in online education. International Journal of Human-Computer Interaction, 37(18), 1760–1770. https://doi.org/10.1080/10447318.2021.1908671
Ladany, N., Walker, J. A., & Melincoff, D. S. (2001). Supervisory style: Its relation to the supervisory working alliance and supervisor self-disclosure. Counselor Education and Supervision, 40(4), 263–275. https://doi.org/10.1002/j.1556-6978.2001.tb01259.x
Laestadius, L., Bishop, A., Gonzalez, M., Illenčík, D., & Campos-Castillo, C. (2022). Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media & Society. https://doi.org/10.1177/14614448221142007
Ledbetter, A. M., Mazer, J. P., DeGroot, J. M., Meyer, K. R., Mao, Y., & Swafford, B. (2011). Attitudes toward online social connection and self-disclosure as predictors of Facebook communication and relational closeness. Communication Research, 38(1), 27–53. https://doi.org/10.1177/0093650210365537
Lee, Y.-C., Yamashita, N., Huang, Y., & Fu, W. (2020). “I hear you, I feel you”: Encouraging deep self-disclosure through a chatbot. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–12). https://doi.org/10.1145/3313831.3376175
Li, J., Kizilcec, R., Bailenson, J., & Ju, W. (2016). Social robots and virtual agents as lecturers for video instruction. Computers in Human Behavior, 55, 1222–1230. https://doi.org/10.1016/j.chb.2015.04.005
Li, M., & Suh, A. (2022). Anthropomorphism in AI-enabled technology: A literature review. Electronic Markets, 32(4), 2245–2275. https://doi.org/10.1007/s12525-022-00591-7
Liu, B., & Sundar, S. S. (2018). Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychology, Behavior, and Social Networking, 21(10), 625–636. https://doi.org/10.1089/cyber.2018.0110
Liu, C.-C., Liao, M.-G., Chang, C.-H., & Lin, H.-M. (2022). An analysis of children’s interaction with an AI chatbot and its impact on their interest in reading. Computers & Education, 189, 104576. https://doi.org/10.1016/j.compedu.2022.104576
Loureiro, S. M. C., Ruediger, K. H., & Demetris, V. (2012). Brand emotional attachment and loyalty. Journal of Brand Management, 20, 13–27. https://doi.org/10.1057/bm.2012.3
MacKinnon, D. P., Krull, J. L., & Lockwood, C. M. (2000). Equivalence of the mediation, confounding and suppression effect. Prevention Science, 1, 173–181. https://doi.org/10.1023/A:1026595011371
Mamun, M. R. A., Prybutok, V. R., Peak, D. A., Torres, R., & Pavur, R. J. (2023). The role of emotional attachment in IPA continuance intention: An emotional attachment model. Information Technology & People, 36(2), 867–894. https://doi.org/10.1108/ITP-09-2020-0643
Mazer, J. P., Murphy, R. E., & Simonds, C. J. (2009). The effects of teacher self-disclosure via Facebook on teacher credibility. Learning, Media and Technology, 34(2), 175–183. https://doi.org/10.1080/17439880902923655
Meng, J., & Dai, Y. (2021). Emotional support from AI chatbots: Should a supportive partner self-disclose or not? Journal of Computer-Mediated Communication, 26(4), 207–222. https://doi.org/10.1093/jcmc/zmab005
Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19(2), 177–213. https://doi.org/10.1207/S1532690XCI1902_02
Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33. https://cir.nii.ac.jp/crid/1370013168736887425
Mou, Y., Zhang, L., Wu, Y., Pan, S., & Ye, X. (2023). Does self-disclosing to a robot induce liking for the robot? Testing the disclosure and liking hypotheses in human–robot interaction. International Journal of Human–Computer Interaction, 1–12. https://doi.org/10.1080/10447318.2022.2163350
Mou, Y., & Xu, K. (2017). The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior, 72, 432–440. https://doi.org/10.1016/j.chb.2017.02.067
Myers, S. A. (1998). Sibling communication satisfaction as a function of interpersonal solidarity, individualized trust, and self-disclosure. Communication Research Reports, 15(3), 309–317. https://doi.org/10.1080/08824099809362127
Na, Y., Kim, Y., & Lee, D. (2023). Investigating the effect of self-congruity on attitudes toward virtual influencers: mediating the effect of emotional attachment. International Journal of Human–Computer Interaction, 1–14. https://doi.org/10.1080/10447318.2023.2238365
Nan, D., Shin, E., Barnett, G. A., Cheah, S., & Kim, J. H. (2022). Will coolness factors predict user satisfaction and loyalty? Evidence from an artificial neural network–structural equation model approach. Information Processing & Management, 59(6), 103108. https://doi.org/10.1016/j.ipm.2022.103108
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Reis, H. T. (2007). Steps toward the ripening of relationship science. Personal Relationships, 14(1), 1–23. https://doi.org/10.1111/j.1475-6811.2006.00139.x
Richmond, V. P., Mccroskey, J. C., & Mottet, T. (2015). Handbook of instructional communication: Rhetorical and Relational Perspectives. Routledge. https://urlzs.com/Mxv3p
Roesler, E., Manzey, D., & Onnasch, L. (2021). A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Science Robotics, 6(58), eabj5425. https://doi.org/10.1126/scirobotics.abj5425
Saylag, R. (2013). Facebook as a tool in fostering EFL teachers’ establishment of interpersonal relations with students through self-disclosure. Procedia-Social and Behavioral Sciences, 82, 680–685. https://doi.org/10.1016/j.sbspro.2013.06.329
Schneider, S., Häßler, A., Habermeyer, T., Beege, M., & Rey, G. D. (2019). The more human, the higher the performance? Examining the effects of anthropomorphism on learning with media. Journal of Educational Psychology, 111(1), 57. https://doi.org/10.1037/edu0000273
Song, H., Kim, J., & Luo, W. (2016). Teacher–student relationship in online classes: A role of teacher self-disclosure. Computers in Human Behavior, 54, 436–443. https://doi.org/10.1016/j.chb.2015.07.037
Song, H., Kim, J., & Park, N. (2019). I know my professor: Teacher self-disclosure in online education and a mediating role of social presence. International Journal of Human-Computer Interaction, 35(6), 448–455. https://doi.org/10.1080/10447318.2018.1455126
Sundar, S. S., & Kim, J. (2019). Machine heuristic: When we trust computers more than humans with our personal information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–9). https://doi.org/10.1145/3290605.3300768
Tajfel, H. (1978). Social categorization, social identity and social comparison. Differentiation Between Social Group, 61–76. https://cir.nii.ac.jp/crid/1571980075816748032
Wang, Y., Gong, S., Cao, Y., & Fan, W. (2023). The power of affective pedagogical agent and self-explanation in computer-based learning. Computers & Education, 195, 104723. https://doi.org/10.1016/j.compedu.2022.104723
You, S., & Robert, L. P. (2018). Emotional attachment, performance, and viability in teams collaborating with embodied physical action (EPA) robots. Journal of the Association for Information Systems, 19(5), 377–407. https://doi.org/10.17705/1jais.00496
Zhang, S., Che, S., Nan, D., & Kim, J. H. (2023). How does online social interaction promote students’ continuous learning intentions? Frontiers in Psychology, 14, 1098110. https://doi.org/10.3389/fpsyg.2023.1098110
Zhang, S., Che, S., Nan, D., Li, Y., & Kim, J. H. (2023). I know my teammates: The role of group member familiarity in computer-supported and face-to-face collaborative learning. Education and Information Technologies, 28(10), 12615–12631. https://doi.org/10.1007/s10639-023-11704-w
Zhang, A., & Rau, P.-L.P. (2023). Tools or peers? Impacts of anthropomorphism level and social role on emotional attachment and disclosure tendency towards intelligent agents. Computers in Human Behavior, 138, 107415. https://doi.org/10.1016/j.chb.2022.107415
Acknowledgements
Not applicable.
Funding
This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (RS-2023-00208278).
Author information
Contributions
Shunan Zhang: conceptualization, methodology and writing- original draft preparation; Xiangying Zhao: conceptualization and investigation; Dongyan Nan: writing- reviewing and editing; Jang Hyun Kim: conceptualization, supervision, funding acquisition and writing- reviewing and editing.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Zhang, S., Zhao, X., Nan, D. et al. Beyond learning with cold machine: interpersonal communication skills as anthropomorphic cue of AI instructor. Int J Educ Technol High Educ 21, 27 (2024). https://doi.org/10.1186/s41239-024-00465-2