Abstract
Integrating Artificial Intelligence (AI) into learning activities is an important opportunity to develop students' varied thinking skills. Design-based learning (DBL), in turn, can foster creative design processes with AI technologies to overcome real-world challenges. In this context, AI-supported DBL activities have significant potential for teaching and developing thinking skills. However, the literature lacks experimental interventions examining the effects of integrating AI into learner-centered methods on active engagement and thinking skills. The current study explores the effectiveness of AI integration as a guidance and collaboration tool in a DBL process. Specifically, it examines the effect of the experimental treatment on participants' design thinking mindset, creative self-efficacy (CSE), and reflective thinking (RT) self-efficacy levels, and the relationships between them. As part of the experimental treatment, participants used ChatGPT and Midjourney in the digital story development process; the only difference between the control and experimental groups in the digital storytelling process was these AI applications. In this quasi-experimental study, participants were randomly assigned to the treatment, an AI integration intervention, at the departmental level: 87 undergraduate students in the experimental group and 99 in the control group. The implementation lasted five weeks. Partial Least Squares Structural Equation Modeling (PLS-SEM) and Multi-Group Analysis (MGA) were applied to measurements taken at point T0 before the experiment and point T1 after it. According to the results, the intervention contributed to the participants' creative self-efficacy, critical reflection, and reflection development in both groups.
On the other hand, the design thinking mindset levels of both groups did not show a significant difference between the T0 and T1 points.
1 Introduction
Developments such as artificial intelligence are followed by theoretical and applied studies on integrating these new technologies into learning processes (Aksu Dünya & Yıldız Durak, 2023; Durak & Onan, 2023). Technological developments change how businesses operate (Kandlhofer et al., 2016) as well as the ways of learning and teaching. Chatbot platforms have components that will profoundly affect learning-teaching processes, posing various threats and opportunities (Yildiz Durak, 2023a). Although students outsourcing their homework to such environments is a threat, these environments offer essential advantages, such as easing access to information in the learning-teaching process and integrating well with methods that support various student activities and creativity. There is a lack of experimental interventions in the literature examining the effects of integrating artificial intelligence into learner-centered methods on learners' active participation and creativity (Lund & Wang, 2023). In this context, this study includes an experimental intervention to address this gap in the literature.
Design thinking is a skill teachers should have for the effective use of technology in education (Beckwith, 1988; Tsai & Chai, 2012). Teachers’ lack of design thinking skills is defined as one obstacle in technology integration. These barriers are classified in the literature as primary, secondary, and tertiary (Ertmer, 1999; Ertmer et al., 2012; Tsai & Chai, 2012). Primary barriers are related to a lack of infrastructure, training, and support (Snoeyink & Ertmer, 2001). Secondary barriers generally include teachers’ affective perceptions (e.g., belief, openness to change, self-confidence, and attitude) toward technology integration (Ertmer et al., 2012; Keengwe et al., 2008). Removing primary and secondary barriers does not guarantee that technology integration will provide meaningful learning (Saritepeci, 2021; Yildiz Durak, 2021). Tsai and Chai (2012) explained this situation with tertiary barriers. The learning process is not static; it is dynamic and constantly changing. Therefore, teachers need to have design thinking skills to transform this variable nature of the learning process (Tsai & Chai, 2012; Yildiz Durak et al., 2023). Overcoming tertiary barriers significantly facilitates the effective use of technology in education. Beckwith’s (1988) Educational Technology III perspective, which expresses the most effective form of technology use in education, is a flexible structure to provide learners with more meaningful experiences instead of following a systematic process strictly dependent on instructional design, methods, and techniques in educational environments. The Educational Technology III perspective refers to design-based learning practices.
The dizzying pace of technological innovation in today's business, social, and economic life makes it difficult to predict what kind of job a K-12 student will do in the future (Darling-Hammond, 2000; Saritepeci, 2021). In this case, adopting the Educational Technology III perspective and removing the tertiary barriers to technology integration is essential. Teachers and pre-service teachers should have the skills to succeed in the coming years, which are uncertain in many ways, and to create opportunities to support their learners. The design-based learning approach has remarkable importance in developing pre-service teachers' design-oriented thinking skills. In this context, a structure in which artificial intelligence applications are integrated into the digital storytelling method, one of the most effective applications of the design-based learning approach in learning processes, will support the design-oriented thinking skills of pre-service teachers.
2 Related works
Studies on the use of artificial intelligence in education focus on various areas such as intelligent tutoring systems (ITS) (Chen, 2008; Rastegarmoghadam & Ziarati, 2017), personalized learning (Chen & Hsu, 2008; Narciss et al., 2014; Zhou et al., 2018), assessment and feedback (Cope et al., 2021; Muñoz-Merino et al., 2018; Ramnarain-Seetohul et al., 2022; Ramesh & Sanampudi, 2022; Samarakou et al., 2016; Wang et al., 2018), educational data mining (Chen & Chen, 2009; Munir et al., 2022), and adaptive learning (Arroyo et al., 2014; Wauters et al., 2010; Kardan et al., 2015). These studies aim to improve the quality of the learning-teaching process by providing individualized learning experiences and increasing the effectiveness of teaching methods.
The intelligent tutoring system is the most prominent study subject in studies on the use of AI in education (Tang et al., 2021). ITS focuses on using AI to provide learners with personalized and automated feedback and guide them through the learning process. Indeed, there is evidence in the literature that using ITS in various teaching areas can improve learning outcomes. Huang et al. (2016) reported that using ITS in mathematics teaching reduces the gaps between advantaged and disadvantaged learners.
Personalized learning environments, another prominent use of AI in education, aim to provide an experience where the learning process is shaped within the framework of learner characteristics. In addition, supporting the learning of individuals who are disadvantaged in subjects such as learning disabilities is a promising field of study. Indeed, Walkington (2013) noted that personalized learning experience provides more positive and robust learning outcomes. Similarly, Ku et al. (2007) investigated the effect of a personalized learning environment on solving math problems. The study results show that the experimental group learners, especially those with lower-level mathematics knowledge, performed better than the control group.
Assessment and feedback, another form of AI use in education, is an area where the number of studies has increased since the COVID-19 pandemic (Ahmad et al., 2022; Hooda et al., 2022). Ahmad et al. (2022) compared artificial intelligence and machine learning techniques for assessment, grading, and feedback and found that accuracy rates ranged from 71 to 84%. Shermis and Burstein (2016) stated that an automatic essay evaluation system gave scores similar to those of human evaluators, but the system had difficulties with student work that differed in creativity and structural organization. Accordingly, more development and research is needed for AI systems to produce more effective results in assessment and grading. In another study, AI-supported constructive and personalized feedback on texts created by learners effectively improved reflective thinking skills (Liu et al., 2023). In the same study, this intervention reduced the cognitive load of the learners in the experimental group and improved their self-efficacy and self-regulated learning levels.
The use of AI in educational data mining and machine learning has been increasing in recent years to discover patterns in students' data, such as navigation and interaction in online learning environments, to predict their future performance or to provide a personalized learning experience (Baker et al., 2016; Munir et al., 2022; Rienties et al., 2020). Sandra et al. (2021) conducted a literature review of machine learning algorithms used to predict learner performance, examining 285 studies published in the IEEE Access and ScienceDirect databases between 2019 and 2021. The results show that classification is the most frequently used machine learning approach to predict learner performance, followed by neural network (NN), Naïve Bayes, Logistic Regression, SVM, and Decision Tree algorithms.
The main purpose of AI studies in education is to create an independent learning environment by reducing the supervision and control of any pedagogical entity, providing learners with a personalized learning process framed by learner and subject-area characteristics (Cui, 2022; Zhe, 2021). To achieve this, system designs for predicting learner behaviors with intelligent systems, providing automatic assessment, feedback, and personalized learning experiences, and intervention studies examining their effectiveness come first. This study takes a different perspective and examines the learner's create-to-learn process in collaboration with AI. Various studies predict that AI and collaborative learning processes can support the creativity of learners (Kafai & Burke, 2014; Kandlhofer et al., 2016; Lim & Leinonen, 2021; Marrone et al., 2022). In this regard, Lund and Wang (2023) emphasized that the focus should be on developing creativity and critical thinking skills by enabling learners to use AI applications in any learning task (Fig. 1).
3 Focus of study
This study investigates the effectiveness of artificial intelligence integration (ChatGPT and the Midjourney application) as a guidance and collaboration tool in a design-based learning process integrated into educational environments. In this context, whether the experimental treatment affected the participants' design thinking mindset levels, and its relationship with creative and reflective thinking self-efficacy, was examined.
Participants were tasked with developing a digital story in a design-based process. As the experimental treatment, participants were systematically encouraged to use ChatGPT and Midjourney as guidance tools in the digital story development process. Apart from this treatment, the design-based learning process of the control group was very similar to that of the experimental group.
All participants were exposed to the same environment at the university where the application took place, and none enrolled in any additional technology education courses. This pretest–posttest study with a control group continued for four weeks, during which the students actively produced a product through design-based learning. In the current research context, the following research questions were addressed:
- RQ1: Is the integration of artificial intelligence in a design-based learning process effective on the levels of design thinking mindset and creative and reflective thinking self-efficacy?
- RQ2: Do the relationships between design thinking mindset and creative and reflective thinking self-efficacy levels differ in the context of the experimental process?
In line with these research questions, the following hypotheses were tested:
- H1a. The gain in creative self-efficacy over the 5 weeks is greater for the experimental group.
- H1b. The influence of creative self-efficacy on the design thinking mindset is similar for the two groups.
- H1c. The influence of creative self-efficacy on the design thinking mindset after 5 weeks is similar for the two groups.
- H1d. The influence of creative self-efficacy on the design thinking mindset after 5 weeks is greater for the experimental group.
- H2a. The influence of critical reflection on the design thinking mindset is similar for the two groups.
- H2b. The influence of critical reflection on the design thinking mindset after 5 weeks is greater for the experimental group.
- H2c. The gain in critical reflection over the 5 weeks is greater for the experimental group.
- H2d. The influence of critical reflection after 5 weeks on the design thinking mindset is greater for the experimental group.
- H3a. The influence of reflection on the design thinking mindset is similar for the two groups.
- H3b. The influence of reflection on the design thinking mindset after 5 weeks is greater for the experimental group.
- H3c. The gain in reflection over the 5 weeks is greater for the experimental group.
- H3d. The influence of reflection after 5 weeks on the design thinking mindset is greater for the experimental group.
- H4. The gain in design thinking mindset over the 5 weeks is greater for the experimental group.
4 Methods
4.1 Research design
This study is a quasi-experimental study with a pretest–posttest control group design (Fig. 2). Participants were randomly assigned to the treatment, an AI integration intervention, at the departmental level. There were 87 (46.8%) participants in the experimental group and 99 (53.2%) in the control group. The participants were pre-service teachers studying in the undergraduate programs of a faculty of education.
The treatment in this study also served the purposes of the educational technology course as the application of design-based learning activity as an important tool in educational technology that participants (pre-service teachers) might consider using in their future teaching careers.
In addition, all participants had been exposed to the same opportunities regarding the use of digital technologies in education, and none of them attended an additional course; therefore, the prior knowledge of both groups was similar. Participation in the surveys was completely voluntary. Although 232 and 260 participants completed the pretest and posttest, respectively, only the 186 students who filled in both questionnaires and took part in the application were included in the study. Both groups were given the same input on design-based learning activities and tasks, so there was no learning loss for the control group.
4.2 Participants
The participants were 186 pre-service teachers studying at a state university in Turkey. All participants were enrolled in an undergraduate instructional technology course and studied in five different departments. Their ages ranged from 17 to 28 years, with an average of 19.12. Of the participants, 74.2% were female and 25.8% were male; the high proportion of women reflects the typical demographic structure of education faculties in Turkey. The majority of the participants were first-year and second-year students.
The participants' daily use of technology for social purposes (social media, etc.) is 3.89 h. Technology usage time for entertainment (watching movies and series, listening to music, etc.) is 2.7 h. Daily technology use for gaming (mobile, computer, console games, etc.) is 0.81 h, while use for educational purposes is 1.74 h. The participants thus use technology primarily for social and entertainment purposes.
4.3 Procedure
4.3.1 Experimental group
In this group, students performed the DST task as a DBL activity using the ChatGPT and Midjourney artificial intelligence applications. The task included selecting a topic, collaborative story writing with ChatGPT, scripting, creating the scripted scenes with Midjourney, voice acting, and integrating these elements. Examples of multimedia items prepared by the students in this group are shown in Fig. 3.
The artificial intelligence applications to be used in this task were introduced one week before the application, and students carried out various free-form activities with them. In the first week of the application, students were asked to choose a topic within a specific context. The students researched their chosen topic and chatted with ChatGPT to deepen their knowledge. They then created their stories in collaboration with ChatGPT, following the instructor's guidelines: (1) ChatGPT should be asked three questions while creating the story setup, each contributing to the formation of the story. (2) A story should be created by organizing ChatGPT's answers. (3) At least 20% and at most 50% of the story must belong to the student. So that the instructor could assess whether these three steps were executed accurately and offer feedback when needed, students shared the link to the page containing their ChatGPT conversations with the course instructor. The instructor compared the text accessed from this page with the final text of the student's story and scanned the final versions of the stories on Turnitin to check that the student's contribution did not exceed 50%.
In the next stage (weeks 2 and 3), students created each scene with the Midjourney artificial intelligence bot, in line with the storyboards they had produced by scripting their stories. The most important challenge for the students was ensuring continuity across interrelated, successive scenes generated with Midjourney. They also created the audio files by voicing the texts for each scene. In the fourth week, students combined elements such as scenarios, scenes, and sound recordings using digital story development tools (Canva, Animaker, etc.). The final versions of the digital stories were shared on the Google Classroom platform.
Learners submitted the product they created at each application step, along with information about the process, via the activity links on the Google Classroom course page. The course instructor reviewed these submissions and provided corrective feedback to the students.
4.3.2 Control group
In this group, students were tasked with preparing a digital story on a topic as a DBL activity. This task included choosing a subject, writing a story, scripting, preparing multimedia elements, and integrating them. Products such as storyboards and videos produced by the students in this group are shown in Fig. 4.
In the first week of the application, the participants were asked to choose a topic within a context, as in the experimental group. The students researched the chosen topic, created a story about it, then scripted the story and prepared the storyboards. In the second and third weeks, the students created the audio files by voicing the texts for each scene (according to the scenario) in line with the storyboard. Furthermore, pictures, backgrounds, and characters were prepared in line with the scenario (usually compiled from ready-made pictures and characters). In the fourth week, the students combined the scenarios, pictures, backgrounds, sound recordings, and characters using digital story development tools. The final versions of the stories were shared on the Google Classroom platform.
4.4 Data collection and analysis
Data were collected at two time points via an online form. A personal information form and three different data collection tools were used in this study.
4.4.1 Instrumentation
Self-description Form
There are 8 questions in the personal information form. These were created to collect information about gender, age, department, class, and total hours spent using digital technologies for different purposes.
Design Thinking Mindset Scale
The scale was developed by Ladachart et al. (2021) and consists of six sub-dimensions: being comfortable with problems, user empathy, mindfulness of the process, collaborative working with diversity, orientation to learning, and creative confidence. The rating is in a 5-point Likert type. The validity and reliability values of the scale are presented in Sect. 5.
Reflective Thinking Scale
Kember et al. (2000) developed this scale to measure students' levels of reflective thinking; the Turkish adaptation was created by Başol and Evin Gencel (2013). Although the scale consists of four sub-dimensions, two were included because they were suitable for this study. The rating is in a 5-point Likert type. The validity and reliability values of the scale are presented in Sect. 5.
Creative Self-Efficacy Scale
The original scale, developed by Tierney and Farmer (2011) to measure individuals' belief in their ability to be creative, was adapted into Turkish by Atabek (2020). The scale consists of three items rated on a 7-point Likert type. In the context of this study, the data were converted into a 5-point Likert structure before the analysis. The validity and reliability values of the scale are presented in Sect. 5.
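The paper does not specify how the 7-point responses were mapped onto a 5-point structure. A common approach is a simple linear rescaling, sketched below; the helper name and formula are illustrative assumptions, not the authors' documented procedure:

```python
def rescale_likert(x, old_min=1, old_max=7, new_min=1, new_max=5):
    """Linearly map a response from the old scale range to the new one."""
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

print(rescale_likert(1))  # 1.0 (scale minimum maps to the new minimum)
print(rescale_likert(7))  # 5.0 (scale maximum maps to the new maximum)
print(rescale_likert(4))  # 3.0 (midpoint maps to the new midpoint)
```

Note that a linear map preserves rank order and relative distances, which is what matters for correlation-based methods such as PLS-SEM.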
4.4.2 Analysis
The effect of design-based learning activities integrated with artificial intelligence as a teaching intervention was measured with repeated measurements. Data collection tools were applied in the first week (T0) and the fifth week (T1) in the experimental and control groups. Only the responses (survey data) of students who fully participated in the application and answered the data collection tools at both T0 and T1 were included in the analysis. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the data and test the hypotheses, with SmartPLS 4 (Ringle et al., 2022). The PLS-SEM method allows the parameters of complex models to be estimated without making distributional assumptions about the data. In addition, the differences between the experimental and control groups were examined using the Multi-Group Analysis (MGA) features in PLS-SEM, testing whether group-specific outer loadings and path coefficients differed significantly.
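SmartPLS performs the bootstrapping and multi-group comparison internally. As a rough, self-contained illustration of the underlying logic only, the sketch below bootstraps a standardized slope on synthetic two-group data (a stand-in for a PLS path coefficient; real PLS-SEM estimates latent variables iteratively) and forms a parametric t statistic for the group difference. All data, names, and effect sizes here are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

def path_coefficient(x, y):
    # Standardized regression slope, used here as a stand-in for a PLS path.
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def bootstrap_paths(x, y, n_boot=1000):
    # Resample cases with replacement and re-estimate the path each time.
    n = len(x)
    coeffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        coeffs[b] = path_coefficient(x[idx], y[idx])
    return coeffs

# Synthetic data: creative self-efficacy -> design thinking mindset,
# with group sizes matching the study (87 experimental, 99 control).
n_exp, n_ctrl = 87, 99
cse_exp = rng.normal(size=n_exp)
dtm_exp = 0.5 * cse_exp + rng.normal(size=n_exp)
cse_ctrl = rng.normal(size=n_ctrl)
dtm_ctrl = 0.45 * cse_ctrl + rng.normal(size=n_ctrl)

boot_exp = bootstrap_paths(cse_exp, dtm_exp)
boot_ctrl = bootstrap_paths(cse_ctrl, dtm_ctrl)

# Parametric MGA-style test: difference of path coefficients divided by
# a standard error formed from the bootstrap variances of each group.
diff = path_coefficient(cse_exp, dtm_exp) - path_coefficient(cse_ctrl, dtm_ctrl)
se_diff = np.sqrt(boot_exp.var(ddof=1) + boot_ctrl.var(ddof=1))
t_stat = diff / se_diff
print(f"path difference = {diff:.3f}, t = {t_stat:.2f}")
```

With real data, the resulting t statistic would be referred to a t distribution (degrees of freedom based on the two group sizes), as in the parametric PLS-MGA test.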
5 Results
In the first stage, the measurement model was tested. In the second stage, the structural model was evaluated in the context of MGA.
5.1 Measurement model
When the measurement and structural models were evaluated, the indicator loadings were higher than the recommended value of 0.7 (see Appendix Table 7).
Internal consistency reliability is represented by Cronbach's alpha, composite reliability (CR), and rho_a (see Table 1). All values are above the threshold of 0.70. For convergent validity, the average variance extracted (AVE) is used, and this value is expected to be above 0.5. The values in the model were higher than this threshold.
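For readers unfamiliar with these indices, the sketch below shows how they are conventionally computed: Cronbach's alpha from raw item scores, and CR and AVE from standardized indicator loadings. The formulas are the standard ones; the numeric inputs are hypothetical, not values from this study:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) matrix of item scores.
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    # loadings: standardized indicator loadings of one construct.
    loadings = np.asarray(loadings)
    num = loadings.sum() ** 2
    return float(num / (num + (1 - loadings ** 2).sum()))

def ave(loadings):
    # Average variance extracted: mean of the squared loadings.
    loadings = np.asarray(loadings)
    return float((loadings ** 2).mean())

loads = [0.78, 0.82, 0.75]  # hypothetical indicator loadings
print(round(composite_reliability(loads), 3))  # above the 0.70 threshold
print(round(ave(loads), 3))                    # above the 0.50 threshold

scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]], dtype=float)
print(round(cronbach_alpha(scores), 3))
```

The thresholds cited in the text (0.70 for reliability, 0.50 for AVE) correspond to these quantities.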
Heterotrait-monotrait ratio (HTMT) and the Fornell-Larcker criterion were used for discriminant validity. The values found indicate that discriminant validity has been achieved, as seen in Tables 2 and 3.
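The Fornell-Larcker criterion can be stated compactly: the square root of each construct's AVE should exceed that construct's correlations with every other construct. A minimal sketch with hypothetical values (not the actual figures from Tables 2 and 3):

```python
import math

# Hypothetical AVEs for three constructs and their inter-construct correlations.
ave_vals = {"CSE": 0.62, "CR": 0.58, "DTM": 0.55}
corr = {("CSE", "CR"): 0.41, ("CSE", "DTM"): 0.48, ("CR", "DTM"): 0.39}

def fornell_larcker_ok(ave_vals, corr):
    # Each correlation must be below the square root of both constructs' AVEs.
    for (a, b), r in corr.items():
        if r >= min(math.sqrt(ave_vals[a]), math.sqrt(ave_vals[b])):
            return False
    return True

print(fornell_larcker_ok(ave_vals, corr))  # True -> discriminant validity holds
```

HTMT complements this check by comparing heterotrait to monotrait correlations, with values below about 0.85-0.90 conventionally taken as evidence of discriminant validity.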
Considering all the data obtained, the measurement model of the proposed model is suitable for testing hypotheses.
5.2 SEM
As the measurement model assumptions were met, the structural model of the PLS-SEM was examined. PLS-SEM was run with 1000 bootstrap samples. The significance of differences in the path coefficients of the hypothesized relationships between design thinking mindset and creative and reflective thinking self-efficacy across the experimental and control groups was examined, and the findings are presented in Table 4.
According to Table 4, when the structural model was examined for significant differences in the path coefficients of the hypothesized relationships, the creative self-efficacy and reflective thinking dimensions differed for the students in the experimental and control groups after the treatment process.
R² values indicate the explanatory power of the structural model, and these values show moderate to substantial power (see Table 5).
To examine whether there is a significant difference between the path coefficients of the experimental and control groups, the PLS-MGA parametric test values were examined; the results are presented in Table 6.
According to Table 6, there is no significant difference between the two groups in the effect of creative self-efficacy and reflective thinking on design thinking mindset. After the treatment process, there is likewise no significant difference in the relationships between creative self-efficacy, reflective thinking, and design thinking mindset. The significance levels of the path coefficients showed that these hypotheses were not supported.
6 Discussion and conclusion
This study examined the effect of AI integration into the digital storytelling process, a design-based learning method, on design thinking mindset, and whether it affected the relations of design thinking mindset with creative and reflective thinking self-efficacy. The participants used the ChatGPT and Midjourney applications in the digital story development process as part of the experimental treatment; the only difference between the control and experimental groups in the digital storytelling process was these AI applications. The experimental intervention covered four weeks. Data were collected from the participants before (T0) and after (T1) the application. Both groups' creative self-efficacy, critical reflection, and reflection levels differed significantly at T1 compared to T0. Accordingly, the intervention contributed to the participants' creative self-efficacy, critical reflection, and reflection development in both groups. On the other hand, the design thinking mindset levels of both groups did not show a significant difference between the T0 and T1 points.
According to the multigroup comparison of creative self-efficacy at the T0 and T1 points, there was no significant difference between the groups, while both groups improved at T1 compared to T0. This is valuable as it shows that intensive use of AI support in a design-based learning environment contributes to creative self-efficacy to a similar degree. Indeed, creativity, recognized as one of the core competencies in education, is part of CSE, which includes the belief that an individual is capable of producing creative results (Yildiz Durak, 2023b). Various studies predict that AI and collaborative learning processes can support the creativity of learners (Kafai & Burke, 2014; Kandlhofer et al., 2016; Lim & Leinonen, 2021; Marrone et al., 2022). Marrone et al. (2022) provided eight-week training sessions on creativity and AI to middle school students; in subsequent interviews, the dominant opinion was that AI support played a crucial role in supporting their creativity. In support of this, the experimental treatment in our study required various creative moves from the students: (1) Students asked at least three questions to ChatGPT while creating a story. (2) Each question involved abstracting from the previous AI answer and directing how the story should continue. (3) Students created their own constructs by writing connecting sentences and paragraphs to bring together the answers given by ChatGPT. (4) Creativity also came into play in creating the story's scenes in the Midjourney environment: students had to plan scenes by abstracting the story they had created in collaboration with AI, create those scenes, and provide detailed parameters to the Midjourney bot to ensure continuity between the scenes.
One might have expected that, in the control group, carrying out this entire process through the students' own creative practices would further support creativity and self-efficacy. Regarding this situation, Riedl and O'Neill (2009) highlighted that although such tools (Canva, Animaker, etc.) make it possible to develop creative content, the user may not obtain significant results; in this context, they pose an essential question: "Can an intelligent system augment non-expert creative ability?" Lim and Leinonen (2021) argued that AI-powered structures can effectively support creativity and that humans and machines can learn from each other to produce original works. Taking this one step further, AI can contribute to students' creativity in learning and teaching processes (Kafai & Burke, 2014). Indeed, Wang et al. (2023) found a significant relationship between students' AI capability levels and their creativity, explaining 28.4% of the variance in creativity.
According to the research findings, all paths between the reflective thinking sub-dimensions (critical reflection and reflection) and design thinking mindset are non-significant (H2a, H2b, H2d, H3a, H3b, H3d). In addition, there is no significant difference between the groups in the multi-group comparison at the T0 and T1 points for reflection and critical reflection. On the other hand, there is a significant improvement in both groups' critical reflection and reflection levels at T1 compared to T0. Accordingly, in the design-based learning process, AI collaboration has an effect on learners' reflective thinking levels similar to that of the control group's process. In support of this, there is evidence that incorporating AI in various forms in educational processes has essential outcomes for reflective thinking. Indeed, Liu et al. (2023) reported that incorporating AI into the learning process as a feedback tool to support reflective thinking in foreign language teaching resulted in remarkable improvements in learning outcomes and student self-efficacy.
DBL involves learners assimilating new learning content to overcome authentic problems and creating innovative products and designs to showcase this learning in the simplest way possible. In this study, DST processes, which allow DBL to be applied to different learning areas, were included in both interventions. In the literature, DST is described as a method with critical elements that help learners reflect on what they have learned (Ivala et al., 2014; Jenkins & Lonsdale, 2007; Nam, 2017; Robin, 2016; Sandars & Murray, 2011) and develop reflective thinking skills (Durak, 2018; Durak, 2020; Malita & Martin, 2010; Sadik, 2008; Sarıtepeci, 2017). The critical implication here is that AI collaboration has an effect on reflection and critical reflection similar to that of a DST process planned entirely by the learners. This similarity allowed learners to understand the benefits of AI in the DST process and to deepen their learning by combining their thought processes with AI and finding creative ways to reflect on their learning. Indeed, Shum and Lucas (2020) claim that AI can help individuals think more deeply about challenging experiences. The DST process includes stages (story writing, scenario creation, planning scenes, etc.) that allow learners to embody their reflections on their learning (Ohler, 2006; Sarıtepeci, 2017).
The multi-group analysis results for the path between the design thinking mindset T0 and T1 points are insignificant (H4). In addition, neither group showed a significant improvement in design thinking mindset scores at T1 compared to T0. Accordingly, the effect of the design-based learning process carried out in the experimental and control groups on learners' design thinking mindset scores was limited. The study's expectation was that learners' design thinking would develop and, as a result, that design thinking mindset levels would improve meaningfully. This result may be because the implementation period was not long enough to develop a versatile skill such as design thinking. Razzouk and Shute (2012) emphasized that design thinking is challenging to acquire in a limited context. However, they argue that students can acquire design thinking skills given scaffolding, feedback, and sufficient practice opportunities. The DST process included scaffolding and feedback in both groups. Although the implementation process included different stages for acquiring and developing design thinking skills, the similar design thinking mindset levels across groups may indicate the need for more extended practice. Moreover, the fact that the design thinking mindset measure is a self-report instrument limits our inferences about individuals' acquisition and development of design thinking skills during the process.
7 Conclusion
In conclusion, intensive use of AI support in a design-based learning environment had an impact on the development of participants' creative self-efficacy, reflective thinking, and design thinking mindset levels similar to that of the control condition. The AI collaboration process showed an effect similar to the planned design-based learning process by allowing learners to understand the benefits of AI in the design process and to develop in-depth learning by combining their thought processes with AI. However, it is essential to note that the study's expectation of meaningful improvements in design thinking mindset levels was unmet. This suggests that longer practice periods and additional support and feedback processes may be necessary to effectively develop versatile skills such as design thinking.
The research contributes to our understanding of the impact of AI collaboration on learners' levels of creative self-efficacy, reflective thinking, and design thinking mindset. Further studies with extended practice periods and additional scaffolding and feedback processes could provide valuable insights into the effective development of design thinking skills in AI-supported design-based learning environments.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Ahmad, S. F., Alam, M. M., Rahmat, M., Mubarik, M. S., & Hyder, S. I. (2022). Academic and administrative role of artificial intelligence in education. Sustainability, 14(3), 1101.
Aksu Dünya, B., & Yıldız Durak, H. (2023). Hi! Tell me how to do it: Examination of undergraduate students’ chatbot-integrated course experiences. Quality & Quantity, 1–16. https://doi.org/10.1007/s11135-023-01800-x
Atabek, O. (2020). Adaptation of creative self-efficacy scale into Turkish language. World Journal on Educational Technology: Current Issues, 12(2), 084–097.
Arroyo, I., Woolf, B. P., Burelson, W., Muldner, K., Rai, D., & Tai, M. (2014). A multimedia adaptive tutoring system for mathematics that addresses cognition, metacognition and affect. International Journal of Artificial Intelligence in Education, 24, 387–426. https://doi.org/10.1007/s40593-014-0023-y
Baker, R. S., Martin, T., & Rossi, L. M. (2016). Educational data mining and learning analytics. The Wiley handbook of cognition and assessment: Frameworks, methodologies, and applications, 379–396. https://doi.org/10.1002/9781118956588.ch16
Başol, G., & Evin Gencel, İ. (2013). Yansıtıcı düşünme düzeyini belirleme ölçeği: Geçerlik ve güvenirlik çalışması. Kuram Ve Uygulamada Eğitim Bilimleri, 13(2), 929–946.
Beckwith, D. (1988). The future of educational technology. Canadian Journal of Educational Communication, 17(1), 3–20.
Chen, C. M. (2008). Intelligent web-based learning system with personalized learning path guidance. Computers & Education, 51(2), 787–814. https://doi.org/10.1016/j.compedu.2007.08.004
Chen, C. M., & Hsu, S. H. (2008). Personalized intelligent mobile learning system for supportive effective English learning. Educational Technology and Society Journal, 11(3), 153–180.
Chen, C. M., & Chen, M. C. (2009). Mobile formative assessment tool based on data mining techniques for supporting web-based learning. Computers & Education, 52(1), 256–273. https://doi.org/10.1016/j.compedu.2008.08.005
Cope, B., Kalantzis, M., & Searsmith, D. (2021). Artificial intelligence for education: Knowledge and its assessment in AI-enabled learning ecologies. Educational Philosophy and Theory, 53(12), 1229–1245. https://doi.org/10.1080/00131857.2020.1728732
Cui, K. (2022). Artificial intelligence and creativity: Piano teaching with augmented reality applications. Interactive Learning Environments, 31(10), 7017–7028. https://doi.org/10.1080/10494820.2022.2059520
Darling-Hammond, L. (2000). Teacher quality and student achievement. Education Policy Analysis Archives, 8, 1.
Durak, H. Y. (2018). Digital story design activities used for teaching programming effect on learning of programming concepts, programming self-efficacy, and participation and analysis of student experiences. Journal of Computer Assisted Learning, 34(6), 740–752.
Durak, H. Y. (2020). The effects of using different tools in programming teaching of secondary school students on engagement, computational thinking and reflective thinking skills for problem solving. Technology, Knowledge and Learning, 25(1), 179–195.
Durak, H. Y., & Onan, A. (2023). Adaptation of behavioral intention to use and learn chatbot in education scale into Turkish. Ahmet Keleşoğlu Eğitim Fakültesi Dergisi (AKEF) Dergisi, 5(2), 1162–1172.
Ertmer, P. A. (1999). Addressing first-and second-order barriers to change: Strategies for technology integration. Educational Technology Research and Development, 47(4), 47–61.
Ertmer, P. A., Ottenbreit-Leftwich, A. T., Sadik, O., Sendurur, E., & Sendurur, P. (2012). Teacher beliefs and technology integration practices: A critical relationship. Computers & Education, 59(2), 423–435.
Hooda, M., Rana, C., Dahiya, O., Rizwan, A., & Hossain, M. S. (2022). Artificial intelligence for assessment and feedback to enhance student success in higher education. Mathematical Problems in Engineering, 1–19. https://doi.org/10.1155/2022/5215722
Huang, X., Craig, S. D., Xie, J., Graesser, A., & Hu, X. (2016). Intelligent tutoring systems work as a math gap reducer in 6th grade after-school program. Learning and Individual Differences, 47, 258–265. https://doi.org/10.1016/j.lindif.2016.01.012
Ivala, E., Gachago, D., Condy, J., & Chigona, A. (2014). Digital storytelling and reflection in higher education: A case of pre-service student teachers and their lecturers at a university of technology. Journal of Education and Training Studies, 2(1), 217–227. https://doi.org/10.11114/jets.v2i1.286
Jenkins, M., & Lonsdale, J. (2007). Evaluating the effectiveness of digital storytelling for student reflection. ASCILITE conference (pp. 440–444). Singapore.
Kardan, A. A., Aziz, M., & Shahpasand, M. (2015). Adaptive systems: A content analysis on technical side for e-learning environments. Artificial Intelligence Review, 44(3), 365–391. https://doi.org/10.1007/s10462-015-9430-1
Kafai, Y. B., & Burke, Q. (2014). Connected Code: Why Children Need to Learn Programming. MIT Press.
Kandlhofer, M., Steinbauer, G., Hirschmugl-Gaisch, S., & Huber, P. (2016). Artificial intelligence and computer science in education: From kinder-garten to university. In IEEE Frontiers in Education Conference (pp. 1–9). https://doi.org/10.1109/FIE.2016.7757570
Keengwe, J., Onchwari, G., & Wachira, P. (2008). Computer technology integration and student learning: Barriers and promise. Journal of Science Education and Technology, 17(6), 560–565.
Kember, D., Leung, D. Y., Jones, A., Loke, A. Y., McKay, J., Sinclair, K., ... & Yeung, E. (2000). Development of a questionnaire to measure the level of reflective thinking. Assessment & Evaluation in Higher Education, 25(4), 381–395.
Ku, H. Y., Harter, C. A., Liu, P. L., Thompson, L., & Cheng, Y. C. (2007). The effects of individually personalized computer-based instructional program on solving mathematics problems. Computers in Human Behavior, 23(3), 1195–1210.
Ladachart, L., Ladachart, L., Phothong, W., & Suaklay, N. (2021, March). Validation of a design thinking mindset questionnaire with Thai elementary teachers. In Journal of physics: conference series (vol. 1835, No. 1, p. 012088). IOP Publishing. https://doi.org/10.1088/1742-6596/1835/1/012088
Lim, J., & Leinonen, T. (2021). Creative peer system an experimental design for fostering creativity with artificial intelligence in multimodal and sociocultural learning environments. In CEUR workshop proceedings (vol. 2902, pp. 41–48). RWTH Aachen University. https://research.aalto.fi/en/publications/creative-peer-system-an-experimental-design-for-fostering-creativ
Liu, C., Hou, J., Tu, Y. F., Wang, Y., & Hwang, G. J. (2023). Incorporating a reflective thinking promoting mechanism into artificial intelligence-supported English writing environments. Interactive Learning Environments, 31(9), 5614–5632. https://doi.org/10.1080/10494820.2021.2012812
Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News, 40(3), 26–29.
Malita, L., & Martin, C. (2010). Digital storytelling as web passport to success in the 21st century. Procedia-Social and Behavioral Sciences, 2(2), 3060–3064.
Marrone, R., Taddeo, V., & Hill, G. (2022). Creativity and Artificial Intelligence—A Student Perspective. Journal of Intelligence, 10(3), 65. https://doi.org/10.3390/jintelligence10030065
Munir, H., Vogel, B., & Jacobsson, A. (2022). Artificial intelligence and machine learning approaches in digital education: A systematic revision. Information, 13(4), 203. https://doi.org/10.3390/info13040203
Muñoz-Merino, P. J., Novillo, R. G., & Kloos, C. D. (2018). Assessment of skills and adaptive learning for parametric exercises combining knowledge spaces and item response theory. Applied Soft Computing, 68, 110–124. https://doi.org/10.1016/j.asoc.2018.03.045
Nam, C. W. (2017). The effects of digital storytelling on student achievement, social presence, and attitude in online collaborative learning environments. Interactive Learning Environments, 25(3), 412–427. https://doi.org/10.1080/10494820.2015.1135173
Narciss, S., Sosnovsky, S., Schnaubert, L., Andrès, E., Eichelmann, A., Goguadze, G., & Melis, E. (2014). Exploring feedback and student characteristics relevant for personalizing feedback strategies. Computers & Education, 71, 56–76. https://doi.org/10.1016/j.compedu.2013.09.011
Ohler, J. (2006). The world of digital storytelling. Educational Leadership, 63(4), 44–47.
Ramesh, D., & Sanampudi, S. K. (2022). An automated essay scoring systems: A systematic literature review. Artificial Intelligence Review, 55(3), 2495–2527. https://doi.org/10.1007/s10462-021-10068-2
Ramnarain-Seetohul, V., Bassoo, V., & Rosunally, Y. (2022). Similarity measures in automated essay scoring systems: A ten-year review. Education and Information Technologies, 27(4), 5573–5604. https://doi.org/10.1007/s10639-021-10838-z
Rastegarmoghadam, M., & Ziarati, K. (2017). Improved modeling of intelligent tutoring systems using ant colony optimization. Education and Information Technologies, 22(10), 1067–1087. https://doi.org/10.1007/s10639-016-9472-2
Razzouk, R., & Shute, V. (2012). What is design thinking and why is it important? Review of Educational Research, 82(3), 330–348.
Riedl, M. O., & O’Neill, B. (2009). Computer as audience: A strategy for artificial intelligence support of human creativity. In Proc. CHI workshop of computational creativity support. https://www.academia.edu/download/35796332/riedl.pdf
Rienties, B., Køhler Simonsen, H., & Herodotou, C. (2020, July). Defining the boundaries between artificial intelligence in education, computer-supported collaborative learning, educational data mining, and learning analytics: A need for coherence. Frontiers in Education, 5, 1–5. https://doi.org/10.3389/feduc.2020.00128
Ringle, C. M., Wende, S., & Becker, J.-M. (2022). SmartPLS 4. Oststeinbek: SmartPLS GmbH, http://www.smartpls.com.
Robin, B. R. (2016). The power of digital storytelling to support teaching and learning. Digital Education Review, 30, 17–29.
Sadik, A. (2008). Digital storytelling: A meaningful technology-integrated approach for engaged student learning. Educational Technology Research and Development, 56, 487–506. https://doi.org/10.1007/s11423-008-9091-8
Samarakou, M., Fylladitakis, E. D., Karolidis, D., Früh, W. G., Hatziapostolou, A., Athinaios, S. S., & Grigoriadou, M. (2016). Evaluation of an intelligent open learning system for engineering education. Knowledge Management & E-Learning, 8(3), 496.
Sandars, J., & Murray, C. (2011). Digital storytelling to facilitate reflective learning in medical students. Medical Education, 45(6), 649–649. https://doi.org/10.1111/j.1365-2923.2011.03991.x
Sandra, L., Lumbangaol, F., & Matsuo, T. (2021). Machine learning algorithm to predict student’s performance: a systematic literature review. TEM Journal, 10(4), 1919–1927. https://doi.org/10.18421/TEM104-56
Sarıtepeci, M. (2017). An experimental study on the investigation of the effect of digital storytelling on reflective thinking ability at middle school level. Bartın University Journal of Faculty of Education, 6(3), 1367–1384. https://doi.org/10.14686/buefad.337772
Saritepeci, M. (2021). Students’ and parents’ opinions on the use of digital storytelling in science education. Technology, Knowledge and Learning, 26(1), 193–213. https://doi.org/10.1007/s10758-020-09440-y
Shermis, M. D., & Burstein, J. (2016). Handbook of Automated Essay Evaluation: Current applications and new directions. Routledge.
Shum, S. B., & Lucas, C. (2020). Learning to reflect on challenging experiences: An AI mirroring approach. In Proceedings of the CHI 2020 workshop on detection and design for cognitive biases in people and computing systems.
Snoeyink, R., & Ertmer, P. A. (2001). Thrust into technology: How veteran teachers respond. Journal of Educational Technology Systems, 30(1), 85–111. https://doi.org/10.2190/YDL7-XH09-RLJ6-MTP1
Tang, K. Y., Chang, C. Y., & Hwang, G. J. (2021). Trends in artificial intelligence-supported e-learning: A systematic review and co-citation network analysis (1998–2019). Interactive Learning Environments, 1–19. https://doi.org/10.1080/10494820.2021.1875001.
Tierney, P., & Farmer, S. M. (2011). Creative self-efficacy development and creative performance over time. Journal of Applied Psychology, 96(2), 277–293. https://doi.org/10.1037/a0020952
Tsai, C.-C., & Chai, C. S. (2012). The "third"-order barrier for technology-integration instruction: Implications for teacher education. Australasian Journal of Educational Technology, 28(6). https://doi.org/10.14742/ajet.810
Walkington, C. A. (2013). Using adaptive learning technologies to personalize instruction to student interests: The impact of relevant contexts on performance and learning outcomes. Journal of Educational Psychology, 105(4), 932–945. https://doi.org/10.1037/a0031882
Wang, Z., Liu, J., & Dong, R. (2018). Intelligent auto-grading system. In 2018 5th IEEE international conference on cloud computing and intelligence systems (CCIS) (pp. 430–435). IEEE. https://doi.org/10.1109/CCIS.2018.8691244
Wang, S., Sun, Z., & Chen, Y. (2023). Effects of higher education institutes' artificial intelligence capability on students' self-efficacy, creativity and learning performance. Education and Information Technologies, 28(5), 4919–4939. https://doi.org/10.1007/s10639-022-11338-4
Wauters, K., Desmet, P., & Van Den Noortgate, W. (2010). Adaptive item-based learning environments based on the item response theory: Possibilities and challenges. Journal of Computer Assisted Learning, 26(6), 549–562. https://doi.org/10.1111/j.1365-2729.2010.00368.x
Yildiz Durak, H. (2021). Preparing pre-service teachers to integrate teaching technologies into their classrooms: Examining the effects of teaching environments based on open-ended, hands-on and authentic tasks. Education and Information Technologies, 26(5), 5365–5387.
Yildiz Durak, H. (2023a). Conversational agent-based guidance: Examining the effect of chatbot usage frequency and satisfaction on visual design self-efficacy, engagement, satisfaction, and learner autonomy. Education and Information Technologies, 28, 471–488. https://doi.org/10.1007/s10639-022-11149-7
Yildiz Durak, H. (2023b). Examining various variables related to authentic learning self-efficacy of university students in educational online social networks: Creative self-efficacy, rational experiential thinking, and cognitive flexibility. Current Psychology, 42(25), 22093–22102.
Yildiz Durak, H., Atman Uslu, N., Canbazoğlu Bilici, S., & Güler, B. (2023). Examining the predictors of TPACK for integrated STEM: Science teaching self-efficacy, computational thinking, and design thinking. Education and Information Technologies, 28(7), 7927–7954.
Zhe, T. (2021). Research on the model of music sight-singing guidance system based on artificial intelligence. Complexity, 2021, 1–11. https://doi.org/10.1155/2021/3332216
Zhou, Y., Huang, C., Hu, Q., Zhu, J., & Tang, Y. (2018). Personalized learning full-path recommendation model based on LSTM neural networks. Information Sciences, 444, 135–152. https://doi.org/10.1016/j.ins.2018.02.053
Funding
Open access funding provided by the Scientific and Technological Research Council of Türkiye (TÜBİTAK).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Access of data
Our data are not yet available online in any institutional database. However, we will send the whole data package by request. The request should be sent to Assoc. Professor [email protected].
Ethical statement
The research was conducted in a school in Turkey and approved by the school administration. Participation was voluntary and anonymous. Informed consent was obtained from all participants.
Conflict of interests
We have not received any funding or other support to present the views expressed in this paper. The authors declare no conflicts of interest with respect to the authorship or the publication of this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Appendix
Table 7
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Saritepeci, M., Yildiz Durak, H. Effectiveness of artificial intelligence integration in design-based learning on design thinking mindset, creative and reflective thinking skills: An experimental study. Educ Inf Technol 29, 25175–25209 (2024). https://doi.org/10.1007/s10639-024-12829-2