1 Introduction
Complex dynamic systems, such as supply chains, comprise a network of interdependent actors that constantly interact with each other [71]. The behavior of these actors results in emergent dynamics and influences how they interact with the system over time [7, 62]. Some actors show distinct decision patterns [67], and some even exhibit irrational behavior, such as panic-buying and hoarding, in the context of supply chains [84]. Such behaviors and varying decisions have been shown to be the outcome of different mental models of the underlying system [11, 38]. It is therefore vital to investigate how decision-makers form mental models of such complex dynamic systems over time and, undisturbed, within the decision context. Such investigation, however, calls for methods for eliciting mental models and evaluating mental model development over time [91].
In this paper, we present such an elicitation method, which we refer to as thought bubbles. This method evokes thought processes in situ and diegetically through multiple open-ended prompts over time as part of an interactive virtual environment, and thus seeks to elicit mental model development in the context of complex dynamic systems. We use the term mental model development to refer to a temporal process that unfolds during interaction with complex dynamic systems. The need for a process view stems from mental models not being static constructs [19, 60]: they change dynamically depending on new information or perceptions of the system. Previous studies provide evidence that interaction shapes behavior [67] and that behavior is the outcome of cognitive constructs [30, 58]. Hence, it is reasonable to expect that interaction affects mental model development. In addition, we leverage interactive virtual environments because they have been demonstrated to be useful for studying and learning about complex dynamic systems [8, 62, 68] and allow for eliciting mental models diegetically, which mitigates out-of-context effects such as pre-/post-interaction information or experimenter bias [23].
Eliciting mental models is only the first step in gaining insight into the cognitive aspect of human actions [41]. We also need reliable methods for analyzing the elicited mental model concepts. As our elicited mental models result from textual verbalization of thought processes, we can leverage qualitative analysis to make sense of our elicitation [40, 41, 56]. In grounding our qualitative analysis, we must align our outcomes with existing theories to gain a theoretical understanding of mental model development. We rely on the Situation Awareness (SA) model, introduced by Endsley [28], as a guiding framework for our qualitative analysis. The SA framework is a well-established model that describes people’s perception of the elements of the environment, their comprehension of their meaning, and their projection of future states [26], hence providing a window into their mental model development [29].
We test thought bubbles and our qualitative analysis framework using an interactive virtual environment called gamettes [68] and conduct two experimental studies (Study 1: n=115; Study 2: n=135) in a supply chain context. These studies are exploratory in nature and serve two purposes. First, we test whether elicitation via thought bubbles can offer a meaningful outlook on the cognitive aspects of decision-making. In both studies, we examine how experimental manipulations, such as disruption location in the supply chain or the level of information sharing, affect mental model development. Study 2 also investigates mental model development to help explain cognitive aspects of decisions made by players with different behavioral profiles. Second, we use the results of our qualitative coding in Study 1 as a codebook for the qualitative analysis of Study 2, aiming to test the reliability of SA for analyzing elicited mental model concepts over time. The following are the contributions of this work:
• We present thought bubbles as a method for collecting qualitative data on human thought processes diegetically and temporally to elicit mental model development;
• We show a mixed-method approach for analyzing elicited mental models to gain a theoretical understanding of cognitive aspects of human decisions and their mental model formation; and
• We provide evidence on the effect of disruption location and information sharing on mental model development (Study 1), as well as the effect of information sharing on the mental model development of players with distinct behavioral profiles (Study 2).
3 Gamettes
We introduced gamettes in our previous study [68] as a serious game approach for collecting data on behavioral aspects of human decision-making in supply chain experiments. A gamette is a short game-based scenario that immerses human decision-makers in a specific situation, requiring them to make decisions by responding to a dialog or taking actions. The term gamette is a contraction of “game” and “vignette”; like a vignette, a gamette aims to provide a brief description of a situation, as well as to portray someone. Here, we utilize gamettes to understand an individual’s mental model supporting their decision-making in a drug delivery supply chain. For this, we use the integrated simulation framework proposed by Doroudi et al. [21]. This framework comprises a Flow Simulator for simulating the supply chain dynamics and a gamette environment for engaging human decision-makers with the simulation by immersing them in a specific role and a particular state of the supply chain (see Figure 1).
Within this framework, the Flow Simulator is a multi-agent simulation and the central hub that controls the dynamics of a drug delivery supply chain, including the flow of information and physical products over time. These information and physical flows are driven by the decisions and actions taken by the agents of the system (i.e., manufacturers, wholesalers, and health centers). The Flow Simulator can run in a standalone mode without any human agents, in which case it simulates the evolution of the supply chain system and also controls the decisions and actions of the agents through predefined policies. Alternatively, the Flow Simulator can simulate the evolution of the supply chain by fetching information from a gamette client that captures the decisions of human players. Following the approach in our prior work [68], we created a gamette with StudyCrafter, where players take the role of a wholesaler in a drug delivery supply chain. Details of the gamette design are described in Section 5. The same gamette is used for Study 1 and Study 2; the difference is in their experimental design (i.e., Study 1 considers the disruption location and various forms of information sharing; Study 2 considers different behavioral profiles for decision-makers and how they respond to information sharing).
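The core idea of mixing predefined agent policies with human input fetched from a gamette client can be illustrated with a minimal sketch. This is not the actual Flow Simulator implementation; all names (`Agent`, `policy_order`, `step`) and the base-stock policy are hypothetical stand-ins for illustration:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """One supply chain actor (manufacturer, wholesaler, or health center)."""
    name: str
    inventory: int = 20
    is_human: bool = False  # True when a gamette client supplies this agent's decisions

    def policy_order(self, incoming_demand: int) -> int:
        # Hypothetical predefined ordering policy used in standalone mode:
        # order enough to cover demand and restore a target inventory level.
        target = 20
        return max(0, incoming_demand + target - self.inventory)

def step(agents, demand, human_orders=None):
    """Advance the simulation one week, mixing policy and human decisions."""
    human_orders = human_orders or {}
    orders = {}
    for agent in agents:
        if agent.is_human and agent.name in human_orders:
            # Human-in-the-loop mode: decision fetched from the gamette client.
            orders[agent.name] = human_orders[agent.name]
        else:
            # Standalone mode: decision made by the predefined policy.
            orders[agent.name] = agent.policy_order(demand)
    return orders
```

In standalone mode, `step` is called without `human_orders` and every agent follows its policy; with a human player, only that agent's decision is replaced by the fetched value.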
4 Thought Bubbles
We designed thought bubbles with three main characteristics: (1) diegetic, (2) verbal and open-ended input, and (3) over time. Figure 2 demonstrates the design of thought bubbles and its characteristics. To ensure diegetic data collection, we included thought bubbles (see Figure 2) as part of a recurrent meeting scene in the gamette where players would first review their performance (see Figure 2). This feedback is provided to players through interaction with other Non-Player Characters (NPCs) during the gameplay phase and includes factual information (i.e., historical graphs and the current state of supply chain parameters), without introducing any form of bias into the players’ performance. After the performance review, the Boss NPC would deliver the thought bubble prompt (see Figure 2). In terms of aesthetics, we did not design a thought bubble in literal terms. Instead, we chose to request players’ thoughts via NPC dialogue, as it felt natural to make this request as part of the meeting scenario, where results are discussed, helping keep players in the context of the game.
To evoke players’ reflection on their experience, we prompted them by asking, “How do you think we are doing, Kate?” (see Figure 2) and allowed them to provide open-ended input (see Figure 2). We chose an open-ended question and response format to mitigate framing bias [90] and to help players articulate their thought process by engaging associative memory [66]. The analyses we present in this paper center around players’ responses to this question. Finally, players experienced the meeting scene once every four weeks (8 times in total) throughout the entire game (see Figure 2), allowing us to collect data on their thought process over time. We chose a four-week interval because it let us frame the experience as a monthly meeting scene, which felt more natural and caused minimal distraction of players from their decision-making task. Moreover, our supply chain experiment includes a lead time of two weeks for ordering decisions (one week for orders to be processed by the manufacturer and one week for players to receive shipments). With a four-week interval, players gain sufficient experience and can observe the short-term outcomes of their decisions before each thought bubble. A video preview of the gamette and the thought bubbles is available in the OSF repository (https://osf.io/btfzx/?view_only=8211d2334d5440a0b75ae947811cb845).
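The prompt cadence described above (a meeting scene every four weeks, eight prompts over the full game) can be sketched as a simple scheduling check. This is an illustrative sketch, not the actual gamette code; the function name and the assumption that play starts at week 1 are ours:

```python
PROMPT_INTERVAL = 4   # weeks between meeting scenes (monthly cadence)
TOTAL_PROMPTS = 8     # thought bubbles per playthrough

def is_thought_bubble_week(week: int) -> bool:
    """Return True on weeks when the meeting scene (and its prompt) fires.

    Assumes play starts at week 1, so the first meeting closes week 4,
    after players have seen the two-week ordering lead time play out.
    """
    return week % PROMPT_INTERVAL == 0 and week // PROMPT_INTERVAL <= TOTAL_PROMPTS

prompt_weeks = [w for w in range(1, 33) if is_thought_bubble_week(w)]
# → [4, 8, 12, 16, 20, 24, 28, 32]
```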
7 Discussion and Conclusion
The results from our studies provided empirical support for the use of thought bubbles. A striking finding was the completely opposite patterns in the mental model development of Hoarders and Reactors. Our results not only complemented the behavioral patterns from our prior research [67], but also provided a deeper theoretical understanding of the cognitive aspects of human decisions in their interaction with dynamic systems. Understanding the cognitive constructs of mental models is instrumental in making sense of human behavior and has been of interest to the HCI field for decades [46]. This is an elusive task that requires eliciting mental models and analyzing the elicited concepts. We used thought bubbles for mental model elicitation by evoking players’ thinking and capturing verbalizations of their thought processes. Of course, many other researchers have also studied the elicitation of mental models [41, 55]. Our work is distinguished from these studies in that we attempted to elicit mental models (1) in a diegetic setting, that is, in the context and during the interaction, (2) using an open-ended prompt to collect verbalizations of thought processes, and (3) over time.
The concept of elicitation is fundamental to studying mental models. In the HCI context, numerous studies on eliciting mental models exist [6, 40]. However, most of them approach the elicitation process in a non-diegetic form, for example, using interviews before or after the interaction. While a comparison with non-diegetic elicitation techniques was out of the scope of our work, we showed how diegetic elicitation allows for dynamic assessment of mental model development over time, with minimal interruption to the user. The idea of diegetic elicitation relates to prior research studying how elicitation procedures affect mental models. Jones et al. [55] provided evidence on how the interview process affects mental model representations and found that out-of-context elicitation taps mental models stored in long-term memory. Doyle et al. [24] argued that interviewers or additional information after the task can impact subjects’ mental models, and thus, subjects should be isolated from any out-of-context influences. Therefore, non-diegetic elicitation may measure mental models that differ from those players relied on during the interaction. In addition, considering the context of our study (i.e., dynamic decision-making), it made sense to elicit mental models in situ and diegetically, as in this context people continuously interact with the decision environment, which affects their mental model development. We designed thought bubbles as part of a meeting scene (Figure 2) to keep players in context.
We chose to elicit mental models through a verbal process, asking players to articulate their thinking. We found inconsistencies in prior research regarding the value of verbal elicitation. Some researchers advocated for the use of textual data and argued that language is key to understanding mental models [12]. Others have pointed to the complex nature of mental models as cognitive constructs that are difficult for individuals to articulate [66]. While we acknowledge that not all aspects of mental models can be verbalized, our results demonstrated that the parts that are verbalized provide considerable insight, especially about the mental model development of people with different behavioral profiles. We accomplished this by designing thought bubbles with an open-ended prompt to mitigate contextual and framing biases [66] and to avoid taking players out of context. Although one might be skeptical of players’ engagement with thought bubbles in this format, despite a somewhat negative trend, we found evidence for continuous engagement across the eight prompts (see Appendix C). An interesting finding was the difference in the average number of words per comment depending on behavioral profile and manipulation. For example, Hoarders in the Info group and Reactors in the No-info group wrote more words per comment (see Figure 12).
Using thought bubbles, we collected data on players’ thought processes over time and in intervals. Mental model elicitation over time is particularly important, considering that mental models are dynamic constructs [19]: they form dynamically based on new information and as a result of interaction with the environment. Elicitation over time therefore allowed us to account for this dynamic nature and opened up an opportunity to study mental model development as a process. Of course, how mental model measurement should be conceptualized (i.e., as a process or an outcome) depends on the research question and the underlying task. However, prior studies also seem to advocate for the process view on mental models, pointing to their dynamic nature [83] or arguing that, because of measurement errors, measuring the change in mental models through repeated elicitation is preferable [24]. Our results demonstrated the benefit of elicitation over time by showing how experimental manipulations and behavioral profiles influence players’ mental model development, as represented by aspects of their situation models (see Figures 6-8).
From a methodological perspective, our approach for testing thought bubbles involved three main elements: (1) a game environment, (2) a dynamic decision-making task, and (3) interaction with a complex dynamic system. Future research can leverage thought bubbles by adapting any of these elements, with some considerations. First, while games are a particularly useful setting for knowledge elicitation [91], we can envision implementing thought bubbles in a non-game environment, for example, as part of a simulation interface where subjects provide textual input (in intervals) on their thought process. However, we argue that the characteristics of simulation games (here, gamettes), including their realism in representing the system, their ability to foster communication, and active involvement [62], can lead players to engage with thought bubbles differently in a game setting compared to a non-game one. Future research can study how well a non-game interface equipped with thought bubbles can capture thought processes in relation to the task.
Second, we studied the use of thought bubbles in a dynamic decision-making task. When studying a different type of task, one must ask whether the focus is on measuring mental models as a process or as an outcome [83]. If the focus is on mental model representations and their content or structure in relation to the task, then, while we still advocate leveraging thought bubbles for diegetic elicitation, querying subjects in intervals might be less relevant. That said, considering the studies that cast doubt on the externalization of mental models as a static construct [24] and prefer the process view on mental models [83], future research must scrutinize to what extent eliciting mental model development as a process can be justified for other types of tasks. Another key consideration is whether the goal is to understand mental models or to improve mental models of the task. While improving mental models was not the purpose of our study, we showed how experimental manipulations such as information sharing affect mental model development. Therefore, future research should look into how manipulations can improve mental models and enhance performance, perhaps by incorporating smart nudging interventions [65]. Another direction is to leverage AI for different framings of the prompt, especially as prior research argued that the wording and framing of questions can affect mental models [24].
The type of task also affects the choice of framework for analyzing elicitation results. We used the Situation Awareness model introduced by Endsley [28] because it maps well onto mental models theory and the dynamic decision-making context, and hence suited our study, in which participants interacted with a dynamic decision task. However, when the task is not dynamic decision-making (e.g., gestural interaction with a display [82]), SA might be less relevant. While we advocate for grounding the qualitative analysis process in theory, the choice of underlying theoretical framework in non-dynamic contexts requires more scrutiny. We demonstrated the reliability of our approach by applying the codebook generated in Study 1 to a new dataset (Study 2) and providing insight into the mental model development of players with distinct behavioral profiles.
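To make the idea of an SA-grounded codebook concrete, the three SA levels (perception, comprehension, projection) can be thought of as top-level codes applied to each response. The toy keyword matcher below is purely illustrative: the cue words are hypothetical stand-ins, and our actual coding was performed by human analysts, not by keyword search:

```python
# Illustrative only: a toy, keyword-based pass over textual responses using
# the three SA levels as top-level codes. Cue words are hypothetical.
SA_CODEBOOK = {
    "perception": ["inventory", "backlog", "order", "shipment"],     # elements of the environment
    "comprehension": ["because", "due to", "caused", "means"],       # meaning of those elements
    "projection": ["will", "going to", "expect", "next week"],       # anticipated future states
}

def code_response(text: str) -> set:
    """Return the set of SA levels whose cue words appear in a response."""
    lowered = text.lower()
    return {level for level, cues in SA_CODEBOOK.items()
            if any(cue in lowered for cue in cues)}

codes = code_response("Backlog is growing because demand spiked; I expect it will settle.")
# → {"perception", "comprehension", "projection"}
```

Applying the same codebook across the eight prompts yields, per player, a sequence of SA-level sets whose changes over time can be read as a trace of mental model development.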
Finally, we studied players’ mental model development in their interaction with an interactive environment simulating a complex dynamic system (i.e., a supply chain simulation that evolved both spontaneously and as a result of players’ actions). However, HCI encompasses many forms of interaction [44]. Future research can study other types of interaction in which thought bubbles can be leveraged, perhaps with some adaptations. For example, researchers can investigate the use of thought bubbles for diegetic elicitation through non-verbal queries such as card sorting [63] or diagrammatic representations [77]. In addition, recent advances in AI have inspired many researchers to study human-AI interaction [1, 86] and investigate mental models of AI [40, 92]. Thought bubbles can be utilized to advance our understanding of users’ mental model development of AI over time. Thought bubbles can also be viewed as a form of reflection, and reflection can be used to improve learning [45]. Therefore, future research can look into opportunities to utilize thought bubbles for reflection in game-based learning technologies [43]. For all this to work, we need to think about how to scale up the use of thought bubbles with respect to qualitative analysis, which is a time-consuming process. Future studies can look into leveraging natural language processing (NLP) for qualitative analysis [20, 42, 61]. Last but not least, thought bubbles can potentially elicit more than mental models of the task at hand by uncovering cognitive aspects such as attitudes and motivation, which is another interesting avenue for future work, as reflected in the following quote:
“This is bad. I guess I was scared about having enough supply. I over ordered and now we are paying for it in surplus inventory. I should have gone to veterinarian school like my family wanted! But no! I was all like ‘I’m going to be a great supply chain manager one day.’”