Article

ARPocketLab—A Mobile Augmented Reality System for Pedagogic Applications

1 CCG/ZGDV Institute, Campus de Azurém, Building 14, 4800-058 Guimarães, Portugal
2 Department of Engineering, School of Sciences and Technology, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
3 ALGORITMI Research Centre/LASI, University of Minho, 4800-058 Guimarães, Portugal
4 Department of Information Systems, University of Minho, Azurém Campus, 4800-058 Guimarães, Portugal
5 Magikbee, Lda., São Victor St., n.º 18A 1º Front, 4710-439 Braga, Portugal
6 Centre for the Research and Technology of Agro-Environmental and Biological Sciences, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
7 Institute for Innovation, Capacity Building and Sustainability of Agri-Food Production, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
* Author to whom correspondence should be addressed.
Submission received: 6 May 2024 / Revised: 27 May 2024 / Accepted: 6 June 2024 / Published: 8 June 2024
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)

Abstract: The widespread adoption of digital technologies in educational systems has globally reflected a shift in pedagogic content delivery that suits modern generations of students while tackling relevant challenges faced by today's schools, e.g., progress traceability, fair access to pedagogic content, intuitive visual representation, mitigation of mobility issues, and sustainability in crisis situations. Among these technologies, augmented reality (AR) emerges as a particularly promising approach, allowing the visualization of computer-generated interactive data on top of real-world elements, thus enhancing comprehension and intuition regarding educational content, often in mobile settings. While the application of AR to education has been widely studied, the focus commonly falls on interaction and cognitive performance, with less attention paid to the limitations associated with setup complexity, mostly related to experience configuration tools, or with contextual range, i.e., the versatility to target different technical/scientific domains. Therefore, this paper introduces ARPocketLab, a digital, mobile, flexible, and scalable solution designed for the dynamic needs of modern tutorship. With a dual-interface system, it allows both educators and students to interactively design and engage with AR content directly tied to educational outcomes. Moreover, ARPocketLab's design, aimed at handheld operation using a minimal set of physical resources, is particularly relevant in environments where educational materials are scarce or in situations where remote learning becomes necessary. Its versatility stems from the fact that it only requires a marker or a surface (e.g., a table) to function at full capacity. To evaluate the solution, tests were conducted with 8th-grade Portuguese students within the context of the Physics and Chemistry subject. Results demonstrate the application's effectiveness in providing didactic assistance, with positive feedback not only in terms of usability but also regarding learning performance. The participants also reported openness to the adoption of AR in pedagogic contexts.

1. Introduction

The United Nations' sustainable development agenda prioritizes quality education for all, aiming to make education inclusive and equitable. Digital technologies aim to reduce or eliminate pollution and waste while enhancing production and efficiency, and they have demonstrated a significant impact on the education system. Traditional classroom teaching methods lack immediacy in learning, quick assessments, and high engagement. In contrast, digital learning tools and technology bridge this gap by offering efficiencies that are unparalleled compared to traditional teaching methods [1]. Additionally, recent generations are developing in an increasingly digital environment, where contact with computers, smartphones, game consoles, and other similar equipment occurs daily. Therefore, profound adaptations have been witnessed in recent decades, reflected in the way contents can be delivered to students who are technologically proficient, while responding to the challenges posed by the current educational system and aiming at increasing learning/teaching efficiency (e.g., Massive Open Online Courses, or MOOCs [2], and blended learning and video-based instructional education [3]).
It has become relatively common to find courses and degree programs resorting to blended learning or fully remote strategies, which provide effective support for home study activities [4,5]. Moreover, the experience gained during the COVID-19 pandemic made clear how important it is to develop technological tools capable of effectively extending learning to online platforms [5]. As for the operationalization of such a transition from traditional to digital learning environments, using computers and other devices in conjunction with digital tools allows students to play a more proactive role and to be at the center of the process [6,7,8]. On the one hand, the instructor becomes a process guide, with responsibilities for increasing learning efficiency. On the other hand, using the myriad digital resources available, learners may download the required information or upload their own content. Web 2.0 technologies (wikis, podcasts, blogs, etc.) enable learners to generate content, collaborate with others, assess each other's work, and move toward co-learning. Digital technologies make it easy to use classroom tactics like gamification or approaches like flipped classrooms that optimize learning. Learning landscapes have evolved as a didactic tool that mixes several techniques and enables distinct itineraries to be presented to each student. Technology makes instruction more inspiring and meaningful [9,10]. Among digital-era learning approaches are Virtual Reality (VR) and Augmented Reality (AR) techniques, which have been widely investigated and have demonstrated positive effects among teacher and student communities, including in contexts that extend beyond face-to-face regimes [11]. However, VR poses some issues when compared to AR, for example: (i) the rupture with the real world, which may impact learners' intuition, and (ii) the need for more powerful and, therefore, usually more expensive devices to run compelling and effective experiences [12]. In turn, Mixed Reality (MR) can be seen as an extension of AR that makes use of specialized equipment integrating specific sensing devices for gaining physical environment awareness to articulate the placement, tracking, and visualization of digital content, as well as the interaction with it [13].
Hence, this paper introduces the ARPocketLab platform, a solution that leverages AR technology to enhance the teaching-learning process and address individuals' needs, a critical factor for paving the way for mass personalization [14], expediting technological dissemination, and smoothing adoption by the general public or across specific communities (e.g., academic circles). This solution not only provides students with mobile and interactive markerless and marker-based AR experiences tailored to the learning of a specific subject within the context of a given course unit but also includes tools to assist teachers in configuring these experiences in an interactive and intuitive manner. Additionally, ARPocketLab is designed to function with minimal physical resources, making it especially valuable in environments where educational materials are scarce or in situations where remote learning becomes necessary. Within the context of this work, the following topics are on the agenda, encompassing 8th-grade students in the Portuguese educational context:
  • Checking whether previous works' findings regarding AR's effectiveness in learning are corroborated (RQ1);
  • Assessing the pedagogic impact of employing AR-based learning/teaching technologies (RQ2);
  • Evaluating digital-era students' aptness for learning-oriented AR-based technologies through usability, confidence, intuition, and autonomy analysis (RQ3).
In terms of structure, besides this introductory section, this paper revisits works in the literature that employ AR for pedagogic purposes (Section 2). Afterwards, ARPocketLab is described in detail: system requirements and the respective specification, including the general architecture, are outlined in Section 3. Then, Section 4 summarizes the activities related to the development of ARPocketLab's functional prototype, including the modern tools and frameworks that supported the implementation of pedagogic-oriented 3D virtual environments, interaction, and AR-related processes. Case-studies focused on the curricular unit of Physics and Chemistry and tests carried out on the application prototype by students of the 3rd cycle of education are detailed, and the results are shown and discussed in Section 5. Finally, the main conclusions of this work are addressed along with possible future directions (Section 6).

2. Related Work

For many years, VR stood out as an immersive technology that plunges users into completely synthesized virtual environments (VE) oriented towards several purposes (e.g., education) [15]. AR evolved from VR/VE, aiming to complement, rather than replace, the physical world by adding digital overlays. This definition aligns with Azuma [16], who characterizes AR as a visualization technique that allows one to see virtual objects superimposed upon the real world. Its ability to integrate real and virtual elements in the same environment is further detailed by Santos et al. [17] based on the virtuality continuum concept, which can be seen as an abstract scale that describes the mixture of virtual or digital elements with the physical world's tangible entities. Another relevant emerging approach that has been explored by both scientific and professional communities is Mixed Reality (MR) [18], which enables the seamless blending of physical environments and computer-generated entities, wherein the manipulation of both realities may occur with a bidirectional influence over each other, in principle. Still on their way to technological democratization, the most promising MR headsets can cost over USD 3000 [19], which is a prohibitive price for consumer-grade commercialization. Notwithstanding, works supporting either AR or MR can be found in the literature.
Minjuan Wang [20] provides an insightful overview of AR technologies, highlighting three main categories: wearable, handheld, and fixed. Building on this foundation, Hojoong Kim [21] delves into the specific realm of wearable sensors for VR/AR applications, showcasing a variety of technologies designed to augment virtual and augmented reality experiences. Among these are vision detection devices, motion interactive devices, haptic and force feedback interfaces, and physiological signal monitors, which collectively contribute to creating more engaging, interactive, and personalized AR experiences. The technological landscape of AR is continually evolving, as highlighted by Wang's discussion [22] of nanowire-based wearable skin sensory input interfaces, characterized by their ultrathin, flexible, and stretchable properties, opening alternative dimensions for interaction. Focusing on a different perspective, Jonas Blattgerste et al. [23] explored interaction concepts for handheld AR in training situations, examining tangible, hand recognition, and on-screen methods. Despite the engagement afforded by tangible approaches, this family of devices implies scalability issues and learning impacts. With more emphasis on mobile technologies and accessibility, Žilak carried out a review [24] aiming to shed light on how AR solutions can be tailored to meet the diverse needs of users with disabilities and thereby promote social and digital inclusion.
At the core of AR’s integration into, but not exclusively to, education are diverse devices that serve as the portals through which digital enhancements merge seamlessly with the physical world, creating immersive and interactive learning environments. These devices, ranging from smartphones and tablets to specialized wearable technologies like head-mounted displays and smart glasses, contribute uniquely to the delivery of AR experiences in educational settings. Particularly, smartphones and tablets, ubiquitous in today’s society, stand as the most accessible gateways to AR, enabling students and educators to engage with augmented content through apps and mobile websites, with remarkable opportunities for pedagogic practices. In corroboration with this perspective is the exploratory study carried out by Wyss et al. [25], in which the benefits of AR are transversally highlighted for various subjects. Moreover, this study also points out the lack of AR-related technological literacy among teachers, especially when concerning wearables such as HoloLens. However, increased motivation and interest are denoted among prospective teachers, who consider AR as an effective supportive tool for learning. Despite technical challenges, the use of AR through devices such as HoloLens in teacher education is viewed positively, suggesting its potential to provide benefits for modern teaching methodologies. Dhar et al. [26] were also able to demonstrate the potential of AR in medical education through the use of devices like headsets, smartphones, and tablets. Still in the scope of the technological progress that has been unlocking devices oriented for AR, near-eye displays gained special attention in the work of Koulieris et al. [27]. This comprehensive review highlights the journey from initial hurdles to modern advancements, which are not yet sufficient to turn near-eye displays into widely adopted devices. Therefore, a multidisciplinary approach combining insights from vision science, perceptual engineering, and technical development is suggested.
Concerning fields of intervention, AR has expanded to areas that include marketing, entertainment, gaming, and healthcare, attesting to its versatility and broad application range [28]. This versatility is significant in education, transforming teaching methods through interactive, digital content that enhances learning across various subjects from elementary to higher education, as illustrated by Lee [29]. According to the author, AR has been applied in classroom-based learning in subjects like Chemistry, Mathematics, Biology, Physics, and Astronomy, and has been employed as a digital guide in augmented books. By seamlessly blending digital overlays into real-life scenarios, AR significantly enhances the learning experience, making educational content more interactive and engaging. The benefits of AR in primary school are emphasized in Marín's research [30], which defines the technology as a didactic resource that brings together various elements of its own to develop children's abilities and skills, thus enhancing creativity, facilitating factual content learning, and promoting collaborative work among students. Barreira et al. [31] were also able to demonstrate some of these benefits with an AR system to support the teaching of English words in the Portuguese elementary school context. Building on the notion of AR as a powerful educational tool, Afnan [32] exemplifies the broad applicability and effectiveness of AR in primary education across various subjects. The development and implementation of AR applications targeting fundamental educational pillars, such as the English alphabet, decimal numbers, animals, and global geography, not only demonstrates AR's versatility but also its significant impact on boosting student engagement and motivation. The same standpoint seems to be shared by Kerawalla et al. [33], who found that teachers were more likely to ask the children to watch an AR animation and describe it compared to the role play sessions in which children were encouraged to create and control the roles of the actors. For primary-level mathematics, AR shows potential to improve student engagement, motivation, and academic achievements [34].
AR-based technologies have also led to enhanced learning gains, outcomes, and performance in comparison to traditional educational approaches, for example, in middle school, as demonstrated by Chiu et al. [35], who addressed a science subject involving a case-study related to gas behavior. For the visual arts curriculum, AR's transformative potential was highlighted by Di Serio [36]. By enhancing images of Italian Renaissance masterpieces with interactive, multimodal content, this study not only fostered an engaging learning environment but also catered to different learning styles and preferences. Gains in motivation and engagement with less cognitive effort were reported. Sahin et al. [37] adopted a quasi-experimental approach in which the learning outcomes and attitudes of middle school students engaged in AR-supported lessons were compared against a control group using traditional learning methods. The AR group, which explored the “Solar System and Beyond” unit, demonstrated notably higher academic achievements and more favorable attitudes towards science. Moreno-Guerrero's research [38] on high school physical education programs also led to positive conclusions that link AR to increased motivational levels, deeper comprehension, and greater proneness to explore important skills such as autonomy and problem-solving. Therefore, AR seems to positively influence the bridging of theoretical knowledge and activities more related to physical exercise within educational contexts. Another work that combined AR technology and education was the “Mad City Mystery” project [39], which consisted of a place-based game. With this work, the effectiveness of gamified AR in fostering scientific argumentation skills among students was highlighted. By promoting critical thinking and collaborative problem-solving skills, students are thus empowered to develop mental tools for tackling the intricacies of 21st century challenges. The integration of MR environments has also led to comprehensive technology-based advancements across K-12 education levels, as elaborated in a systematic literature review by Pellas et al. [40]. This review unveils the profound impact of both VR and AR technologies in facilitating interactive and immersive educational experiences across a range of subjects, with benefits for school performance, underscoring the need to modernize the traditional education system.
Shifting the scope to higher education, Jorge Martín-Gutiérrez [41] explored the benefits of AR in electrical engineering programs, addressing relevant classroom issues, such as overcrowded labs and lack of engagement due to the complexity of the pedagogic content involved. The study revealed good usability assessment results: three AR applications were tested without errors in terms of both effectiveness and efficacy. Moreover, the students provided high feedback scores for comfort while also recognizing the adequacy of AR-based tools for learning both practical and theoretical content.
In agreement with previous studies, Cabero-Almenara [42] underscores AR's significant benefits across various educational levels, especially in university settings, indicating interactivity, engagement, and the possibility of customizing learning experiences as key factors for success. Moreover, challenges and limitations associated with AR's educational integration, such as the need for adequate technological infrastructure and teacher training, were addressed. Assessment instruments, including the Technology Acceptance Model (TAM) and the Instructional Materials Motivation Survey (IMMS), allowed the authors to confirm high acceptance and motivation levels among students, correlating positively with improved academic performance. Among the recommendations of the study was the adoption of innovative teaching methods like flipped classrooms and collaborative learning.
Wasko's study [43] illustrates how AR can help students envision themselves in careers they might not have considered, thereby broadening participation and addressing equity concerns in education. As such, AR can be integrated into education as an agent for professional vocation assessment with an emphasis on the jobs of the future. Interdisciplinary collaboration among teachers and flexible scheduling are key elements for AR's successful implementation in education, as highlighted by Tzima et al. [44]. On the other hand, educators' navigation of AR integration is still a great challenge. Garzón et al. [45] scrutinize the educational tactics paired with AR, considering a vast compendium of studies and aiming to craft actionable guidelines for future AR-enhanced learning experiences. AR proves to be a significant catalyst in education, propelling students' learning gains from average to noteworthy. When merged with collaborative learning, problem-based learning, and situated learning strategies, AR helped to increase students' grades, respectively, from C to A−, C to B+, and C to B. AR benefits are also observable in other learning settings, both informal (e.g., museums, outdoor explorations) and formal (e.g., classrooms and labs). In terms of exposure time, while a one-day session was enough for the achievement of positive outcomes, the best results were seen in programs spanning from one week to one month. In [46], the impact of AR on engineering graphics education is assessed. To that end, 50 first-year engineering students were considered, and traditional teaching methods were compared against AR-enhanced learning. Students using AR performed 18.52% better in engineering graphics questions, while, in mental rotation tests, improvements of 28.97% were observed.
Garzón et al. [47] examined the role of AR in education through a systematic review and meta-analysis of 61 articles, focusing on the impact of AR on various educational levels and fields. The meta-analysis included 27 studies using a pretest-posttest control design to measure student learning gains, with Cohen's d effect size as the metric. Several education levels and fields were analyzed. In the former context, the most relevant impacts were observed in upper secondary education (d = 0.70, considering 5 studies with 723 participants). Regarding the latter context, the fields benefiting the most from AR incorporation were Arts and Humanities (d = 0.96 from 3 studies with 323 participants) and Health and Welfare (d = 0.81 from 3 studies with 174 participants). Julio Cabero-Almenara's study [48] assessed the impact of AR on university students' engagement and performance, as well as the influence of gender on knowledge acquisition. The study employed an experimental design with 396 participants from the University of Seville, of whom 34.85% were male and 65.15% were female. The study validates the usefulness and enjoyment of AR in educational settings, regardless of gender. Structural equation modeling (SEM) revealed that 72.69% of the variance in the “attitude towards use” parameter was explained by ease of use, perceived usefulness, and enjoyment, while 68.74% of the variance in the “intention to use” parameter was explained by usefulness, attitude, and enjoyment, demonstrating the interconnectedness of these factors in adopting AR for learning.
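For reference, the Cohen's d values reported above denote the standardized mean difference between experimental and control groups, conventionally computed as

$$ d = \frac{\bar{x}_{e} - \bar{x}_{c}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_{e}-1)s_{e}^{2} + (n_{c}-1)s_{c}^{2}}{n_{e}+n_{c}-2}}, $$

where values around 0.2, 0.5, and 0.8 are commonly interpreted as small, medium, and large effects, respectively.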
Considering all the evidence connecting AR to higher learning performance, the remainder of this paper presents the ARPocketLab solution as a pedagogic tool proposal for classroom and autodidactic support. While conventional AR applications often require extensive setup or are limited to specific contexts, ARPocketLab stands out as a flexible and scalable platform designed for the dynamic needs of modern tutorship. Among its noteworthy features is the dual-interface system, which enables both educators and students to interactively design and engage with AR content directly related to educational objectives. This extends authoring tool functionalities, an underexplored aspect in the current literature [49]. Additionally, ARPocketLab is designed to function adequately with minimal physical resources, making it especially valuable in environments where educational materials are scarce or in situations where remote learning becomes necessary. ARPocketLab's specifications and implementation details, as well as the tests carried out with students and the respective results, are documented in the following sections.

3. AR-Based System Specification Proposal for Enhancing the Teaching-Learning Process

To effectively design a system that enhances the teaching-learning process, powered by AR to foster pedagogic activities and engage the target audience, a requirements survey was carried out with the participation of a Portuguese teacher of the 3rd cycle (equivalent to middle school in the USA or lower secondary education in the UK).

3.1. Requirements

The proposed system was outlined keeping in mind the goal of implementing and providing a flexible tool capable not only of delivering AR-based pedagogic experiences to support students' learning but also of enabling the configuration of such setups by teachers or mentors. Therefore, both functional and non-functional requirements were defined. While functional requirements outline the specific needs that the system must encompass for the operationalization of the learning activities supported by AR, non-functional requirements focus on constraints and operational capabilities, delineating aspects such as performance, reliability, security, and usability, essential for effective utilization. Both are summed up in Table 1 (functional requirements) and Table 2 (non-functional requirements).
To properly address both functional and non-functional requirements with emphasis on the former group, a system architecture is proposed along with unified modelling language (UML)-style use-cases for the delineation of main functionalities. Additionally, an entity-relationship (E-R) model depicts the diversity of logical data classes composing the system to be developed and how they articulate with each other. Finally, a few flowchart diagrams specify the most relevant processes to implement.

3.2. Main Architecture

To adequately start addressing the design of the proposed teaching-learning process enhancement system, a general architecture was delineated, inspired by the wealth of consumer-grade personal digital assistants currently available, such as smartphones or laptops, and considering modern AR software development kits (SDK) and integrative cross-platform solutions (e.g., Unity3D-Unity Technologies, Copenhagen, Denmark). It consists of a structure of several layers that work together seamlessly. At its core is the hardware layer, encompassing physical devices like cameras, inertial motion units (IMU), global navigation satellite system (GNSS) receivers, storage devices, and displays. Above this lies the abstraction layer, acting as a bridge between the hardware and the operating system (OS), which constitutes the foundation for the software layer, managing resource utilization and providing a platform for applications. The application layer, where the proposed teaching-learning process enhancement system conceptually lives, comprises two key components: the AR SDK and a scene manager. The former empowers developers with tools for creating immersive experiences, including essential features like tracking systems, pose estimation algorithms, and reality mixers that blend virtual and real-world elements seamlessly. The latter orchestrates the virtual environment, handling user interactions, coordinating spatial references for virtual objects, and rendering them seamlessly mixed with reality. Within the application layer, two modules coexist: (i) the Didactic Experience Configuration Module, which allows educators to design and organize digital educational content, and (ii) the Didactic Experience Delivery Module, which presents AR-based educational experiences to students, guiding them through learning seamlessly and engagingly while integrating virtual content with the real world, according to the configurations made by the educators. Figure 1 depicts the described overall architecture.
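Purely as an illustrative sketch of this layered design (module boundaries are conceptual and all identifiers below are hypothetical), the two application-layer modules could be framed as minimal C# interfaces:

```csharp
// Hypothetical framing of the application layer's two modules.
public class ExperienceConfig { /* serialized experience description */ }
public class ComponentSpec { /* a virtual component and its parameters */ }
public enum TrackingMode { MarkerBased, Markerless }

public interface IDidacticExperienceConfiguration
{
    // Educators assemble components and cause-effect events into an experience.
    ExperienceConfig CreateExperience(string title);
    void AddComponent(ExperienceConfig config, ComponentSpec component);
    bool Validate(ExperienceConfig config); // e.g., flags missing cause-effect events
}

public interface IDidacticExperienceDelivery
{
    // Students load and run a previously configured AR experience.
    void LoadExperience(ExperienceConfig config);
    void StartTracking(TrackingMode mode); // image marker or surface detection
}
```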

3.3. Actors and Functionalities

Regarding the functionalities planned for the proposed system, they are made explicit in Figure 2, encompassing the range of actions that can be performed by its main actors, i.e., teacher and student.
For each AR-based pedagogic experience to be incorporated, a configuration stage needs to take place. Therefore, the teacher functions as a highly relevant actor in the system, with responsibilities for configuring the AR-based pedagogic experience that will later be delivered to the students. A few major actions profile the activities of this type of actor in the system after loading an existing experience or creating a new one: the selection of virtual components available in a gallery and the respective parameterization. In turn, component parameterization includes position, rotation, and scaling definitions, assignment of cause-effect events, and prioritization, i.e., the establishment of the sequence of steps or precedence rule sets needed to adequately carry out the experience. When the teacher is satisfied and the system validates the configuration (i.e., there are no conflicts to address, such as missing components or the absence of cause-effect events), the experience in production can be stored to be accessed later for editing or execution. Meanwhile, a student can select AR-based pedagogic experiences made available by a teacher and display and interact with them. More specifically, during live execution of an experience, a user (student) can use and manipulate available components, confront interactions between them, and observe the results.

3.4. Main Entities and Relationships

In Figure 3, an entity-relationship (E-R) model is intentionally presented in a non-normalized format to offer a comprehensive overview of the supporting entities within the proposed system and their interactions. Such structures define guidelines for dynamic handling and real-time manipulation of data, serving, simultaneously, as the bedrock for consistent data storage and management.
The list of entities that support the proposed AR-based system's operationalization can be detailed as follows:
  • Component: represents the interactive elements of the solution that provide visual insights during the pedagogic experiences. It may specialize into three specific types, Main_Component, Secondary_Component, and Docking_Zone, which will be detailed below;
  • Main_Component: embodies the primary interactive elements within the AR environment, allowing direct user interactions and, often, acting as triggers for result visualization, when positioned in designated docking areas (Docking_Zone managed objects);
  • Secondary_Component: is linked to a given Main_Component instance and represents the visualizable result itself (according to specifications managed by a Result object), derived from the components' interaction;
  • Docking_Zone: specific areas of the AR environment wherein Main_Component instances can be placed to trigger a specific result. These zones are characterized by attributes such as name, color, status, and presence of components, serving to regulate docking zones within the environment;
  • Creation_Zone: refers to areas of the AR environment designated for component creation, functioning akin to docking zones and ensuring controlled arrangement for components that, during the configuration stage, are selected to be part of an AR experience;
  • Model: categorized by types such as 2D/3D models or UI elements, this entity associates a visualizable aspect to the different components and zones within the AR environment;
  • Action: enables the assignment of action types to models upon result achievement;
  • Condition: stipulates the requisite conditions for result achievement within a given AR-based environment, encompassing the objects involved in a collision and the targeted result application. For a condition to be valid, any established precedences must be satisfied;
  • Precedence: establishes a sequential interrelation between Condition and Docking_Zone instances, defining a regulated story-telling-like line of events that must be iterated by the user;
  • Result: specifies the type of result to be applied to a specific instance of Main_Component or Secondary_Component;
  • Properties: consists of a set of configurations associated with the models involved in the AR experience, encompassing attributes such as text, description, mass, volume, draggable status, and trigger utilization;
  • Tag: associates specific labels with particular models, streamlining organization and identification, and regulating associability between elements. While some tags exist in the system by default, additional ones can be generated and assigned during experience configuration;
  • Coordinate_System: deals with data pertaining to position, rotation, and scale for diverse Component and Docking_Zone instances. Whenever needed, associated Model instances make use of the same values to fit in the AR environment.
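Purely as an illustration of how such entities might map onto serializable classes (field names are hypothetical simplifications of the E-R model in Figure 3):

```csharp
using System;
using System.Collections.Generic;

// Simplified, non-normalized mapping of the E-R model (hypothetical field names).
[Serializable]
public class CoordinateSystemData
{
    public float[] position = new float[3]; // shared by components and their models
    public float[] rotation = new float[3];
    public float[] scale = new float[3];
}

[Serializable]
public class ComponentData
{
    public string name;
    public string tag;                        // regulates associability (e.g., docking)
    public CoordinateSystemData transform;
    public List<string> secondaryComponents;  // visualizable results linked to this one
}

[Serializable]
public class ConditionData
{
    public List<string> collidingObjects;     // components involved in the collision
    public string resultId;                   // result applied when the condition holds
    public List<string> precedences;          // docking zones that must be filled first
}
```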

3.5. Proposed System’s Backoffice–AR-Based Pedagogic Experience Configuration

The backoffice component provides essential services vital for the seamless operation of the frontoffice, i.e., the module used by students to carry out AR-based pedagogic experiences, encompassing various configurations such as didactic experiments, 3D models, and conditions, among others. More specifically, it incorporates a dedicated functionality set for configuring teaching experiences, as well as a process entailing the creation and customization of AR-based environments (Figure 4).
This configuration is facilitated through a user-friendly configurator accessible via a web browser, dependent on Internet connectivity. This tool empowers users to parameterize and define the myriad elements involved in an experience, including components and their disposition in the environment, as well as action functionalities, both organized in a gallery system. The components gallery serves as a repository housing an array of 2D and 3D models indispensable for constructing diverse experiential environments. Conversely, the action gallery enables the association of specific actions or behaviors with particular components within the environment under configuration. These behaviors subsequently allow the induction of changes in the experiential state, viewable through the transformations reflected in the components. Before making a given experience available to students, teachers are prompted to meticulously select all desired components and actions earmarked for inclusion within a configuration for pedagogic purposes.

3.6. Proposed System’s Frontoffice–AR-Based Pedagogic Experience Contact

The module for displaying AR-based pedagogic experiences constitutes the frontoffice of the system, providing a set of interactions and functionalities for engaging with the environment. Conceived as a virtual laboratory for students, this interface intends to infuse formative teaching practices, providing means for interactively verifying the results of actions and manipulations made upon the virtual components that populate AR experiences, as an educationally convincing digital twin.
In the proposed solution, both marker-based and markerless tracking methods were considered, more specifically those based on image registration and surface detection, which are currently supported by many available AR software development kits. As such, running AR-based pedagogic experiences is possible not only in classroom settings but also in other everyday environments, like a bedroom or an outdoor recreational space. Additionally, it is imperative to recognize that both tracking methods possess distinct characteristics that necessitate careful consideration based on the underlying application context. Some of these key attributes are summarized in Table 3.
The proposed AR-based system’s frontoffice is designed to operate in two modes regarding the AR environment setup: configuration-driven filling and system-driven filling. The first builds the experiences with the components defined in the configurations proposed by a teacher. It aims to familiarize the user with the experience and the associated pedagogic content. At the same time, it acts as a guide that aims to aid the user through the interaction with the environments, as well as the accomplishment of the encompassed pedagogic activities. Conversely, in the system-driven filling mode, only the basic components are made available to allow an essential level of operability within the experience.
The interaction with the components is performed through actions over touchscreen-based mobile device displays encompassing, for example, instantiating main and secondary components in the environment, dragging and snapping them into docking zones, changing values using slider buttons, and tapping virtual 2D buttons, among others, eventually resorting to specialized user interface tools for development support.
To provide the visualization of results upon component docking and combination, a cause-effect strategy was followed, known in this work as the condition-result method. It can be applied transversally to most pedagogic fields involving experimentation and outcome analysis. In terms of rationale, the method relies on associating visualizable results by linking main/secondary components to docking zones and considering a set of predefined or preconfigured conditions. Finally, the achievement of a certain result associated with a condition requires the system to validate a set of established precedences, which consists of identifying whether all the components in the experience are well-placed and follow the expected sequence.
Figure 5 sums up the planned workflow of the main operation steps taking place at the AR-based pedagogic experience side.
The next section focuses on the details of developing the proposed system specification into a usable software platform. Due to the intersection of concepts such as handheld virtual laboratory and AR, the proposed system was named ARPocketLab and will be referred to by this designation from now on.

4. Prototype Development: ARPocketLab

This section is dedicated to the implementation of a prototype that materialized from the proposed system design.

4.1. Development Tools

To implement the functional prototype of the AR-based system described in this paper, several key technologies were employed, namely the Unity engine for cross-platform development, Blender for 3D asset modelling, the Vuforia SDK for AR tracking, and supporting plugins detailed in the following subsections.

4.2. 3D Asset Modelling

While some of the 3D models used in the AR experiences can be directly instantiated from Unity's 3D object library, more complex virtual assets consisting of representations of real-world or even abstract entities (e.g., goblets, molecules, equipment, numbers, etc.) must be modeled in third-party tools. As mentioned, the Blender software was used to model such assets, which also allows exporting them to Unity. The textures present in some models were sourced from the Internet and applied to the models. Figure 6 shows a set of exemplifying virtual assets from the Physics and Chemistry field, exported to Unity in Filmbox (.fbx) format, which supports a set of properties such as mesh, color, vertices, animations, and lights, among others.

4.3. Data Management

The data model (see Section 3) is implemented through a master C# class with serialization/deserialization capabilities. The data management involved in these conversion processes relies on JavaScript Object Notation (JSON), a format widely used in platforms requiring server-client communications. Moreover, JSON is a compact and lightweight plaintext-based file format that eases not only interoperability but also readability, useful for spotting issues that hamper the importing and exporting of such structures into applicational layers, among other benefits.
In the case of Unity, conversions between class instances and JSON (and vice versa) can be achieved through the Newtonsoft.Json plugin. More specifically, in the serialization process, a given class instance can be locally stored in a JSON file. When reversed, the deserialization process loads a class instance, including its fields, with the data previously stored in a JSON file. Serialization and deserialization are both depicted in Figure 7.
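A minimal sketch of this round trip, assuming the master class is named ExperienceData and aggregates the data classes sketched in Section 3.4 (the actual class name is not disclosed in the paper):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;
using UnityEngine;

[Serializable]
public class ExperienceData // hypothetical master class aggregating the data model
{
    public List<ComponentData> components;
    public List<ConditionData> conditions;
}

public static class ExperienceSerializer
{
    // Serialization: store a configured experience as a local JSON file.
    public static void Save(ExperienceData data, string fileName)
    {
        string json = JsonConvert.SerializeObject(data, Formatting.Indented);
        File.WriteAllText(Path.Combine(Application.persistentDataPath, fileName), json);
    }

    // Deserialization: load a class instance, including its fields, from JSON.
    public static ExperienceData Load(string fileName)
    {
        string json = File.ReadAllText(Path.Combine(Application.persistentDataPath, fileName));
        return JsonConvert.DeserializeObject<ExperienceData>(json);
    }
}
```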

4.4. Implementation Strategy and Functional Rules

The visual and interactive menus for navigating the application were created using the Doozy UI plugin, a UI management system for Unity. It enables the design and linking of graphical wizards through a node-based control panel that defines flow and manages scene transitions.
For tracking methods, the Vuforia SDK was used, incorporating fiducial marker tracking and plane detection. Fiducial marker tracking detects real-world patterns or images that match internal references, associating each marker with digital content like 3D models or videos. Plane detection, specifically Vuforia Ground Plane, uses the Plane Finder component to recognize real-world features and display virtual grids identifying planes. This allows mapping and positioning of virtual elements on these planes. The user must confirm the position for the augmented experience, concluding the plane detection process. Figure 8 shows a smartphone operating in either tracking mode.
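As a rough sketch of the plane detection flow (component names follow Vuforia's Ground Plane sample, e.g., PlaneFinderBehaviour and ContentPositioningBehaviour, but exact signatures vary across SDK versions, so this is an assumption-laden outline rather than the authors' exact code):

```csharp
using UnityEngine;
using Vuforia;

public class GroundPlanePlacement : MonoBehaviour
{
    public PlaneFinderBehaviour planeFinder;        // drives the virtual grid on detected planes
    public ContentPositioningBehaviour positioner;  // anchors content at a plane hit
    public GameObject experienceRoot;               // root of the AR pedagogic environment

    void Start()
    {
        // When the user taps a detected plane, anchor the experience there.
        planeFinder.OnInteractiveHitTest.AddListener(HandleInteractiveHitTest);
    }

    void HandleInteractiveHitTest(HitTestResult result)
    {
        positioner.PositionContentAtPlaneAnchor(result);
        experienceRoot.SetActive(true); // the user's confirmation concludes plane detection
    }
}
```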
Leveraging Vuforia tracking functionalities, the proposed solution provides two modes for setting up virtual environments: configuration-driven or system-driven filling. While the former requires the definition of a set of properties in the virtual components, articulated for a given purpose proposed by the teacher, the latter relates to default general settings programmed into the system. These components can assume the following types:
  • Main: corresponding to independent entities in their intrinsic state;
  • Secondary: associated with assets (virtual models, visual effects, buttons, labels, etc.) that trigger a visualizable result when certain main components interact with each other;
  • Docking zones: boxes wherein the above-mentioned components can be fitted.
Figure 9 illustrates the relationship between the three component types, highlighting their cause-effect process for pedagogic purposes. The user selects an environment-filling mode, either configuration-driven or system-driven, before starting an AR-based pedagogic experience, triggering the loading of a configuration file. This file is deserialized into a local data class instance containing all environment information. Based on the chosen filling mode, the system then instantiates and arranges docking zones as predefined in the configuration file. Display warnings may prompt the user to position specific components to enable result displays. Main component models are loaded from the Resources/MainComponents folder and placed in the virtual environment. If configuration-driven, components are automatically positioned in matching tagged docking zones; otherwise, they are placed at default coordinates within a creation zone.
After positioning, the system assigns properties to set up the environment and checks for secondary components associated with the main ones. These secondary components are specified in the configuration files and are loaded from the Resources/SecondaryComponents folder, including 3D models, text, buttons, or particle systems, and positioned accordingly. Considering configuration file rules, they are instantiated in the virtual environment, and their properties are assigned, including their location, according to one of the following positioning strategies:
  • Absolute positioning, matching the location of the main component;
  • Relative positioning, at a predefined location relative to the position of the main component.
If the association is relative to a main component, it is placed under the respective hierarchic coordinate system, enabling group dragging.
Being compatible with the Windows (Microsoft, WA, USA) and Android (Google, CA, USA) operating systems, the solution loads virtual assets from Unity's Resources folder, where they are stored in prefab format, which represents a 2D/3D template of a given component. A prefab can create instances of itself that conserve the original properties while allowing parameter changes in the copied objects, adjusting them for different use-cases. Each of the components constituting the virtual environment has a set of Unity's native properties, such as mesh, material (often with an associated texture), colliders (impact-sensitive wrappers), and rigid body (to manage physics), that are handy for visualization and interaction purposes. By default, the collider property has the trigger feature enabled, and the rigid body has the gravity check inactive, aiming at custom management of the assets during collision detection, suitable, for example, for aiding tasks related to the dragging and docking of components through physics-based restrictions.
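A condensed sketch of this instantiation flow, under the Resources/MainComponents and Resources/SecondaryComponents folder convention described above (helper names are hypothetical):

```csharp
using UnityEngine;

public static class ComponentFactory
{
    // Instantiate a main component prefab from the Resources folder.
    public static GameObject SpawnMain(string prefabName, Vector3 position)
    {
        GameObject prefab = Resources.Load<GameObject>("MainComponents/" + prefabName);
        GameObject instance = Object.Instantiate(prefab, position, Quaternion.identity);

        // Defaults described above: trigger colliders on, gravity off.
        instance.GetComponent<Collider>().isTrigger = true;
        instance.GetComponent<Rigidbody>().useGravity = false;
        return instance;
    }

    // Instantiate a secondary component using absolute or relative positioning.
    public static GameObject SpawnSecondary(string prefabName, Transform main,
                                            bool relative, Vector3 offset)
    {
        GameObject prefab = Resources.Load<GameObject>("SecondaryComponents/" + prefabName);
        Vector3 position = relative ? main.position + offset : main.position;
        GameObject instance = Object.Instantiate(prefab, position, Quaternion.identity);

        // Parenting under the main component's coordinate system enables group dragging.
        if (relative)
            instance.transform.SetParent(main, worldPositionStays: true);
        return instance;
    }
}
```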
The main components have a property that allows their position to be manipulated through a dragging interaction, making it possible to switch them between docking zones. The underlying implementation relies on a plugin known as LeanTouch, which offers a set of touch interactions for different platforms. To drag the components, a few plugin scripts must be dynamically added, namely:
  • LeanSelectable, which allows selecting a component by touch;
  • LeanDragTranslate, for dragging a component along Unity's three-axis Cartesian system.
To solve issues related to Y-axis drifting, essentially characterized by a sudden loss of a given object during drag interaction, a script was implemented to block movement along that Cartesian component.
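This drift fix can be sketched as a small MonoBehaviour that dynamically adds the LeanTouch scripts and pins the Y coordinate during dragging (LeanSelectable and LeanDragTranslate are LeanTouch component names; the locking logic itself is a hypothetical reconstruction):

```csharp
using Lean.Touch;
using UnityEngine;

public class DraggableMainComponent : MonoBehaviour
{
    private float lockedY;

    void Awake()
    {
        // Plugin scripts added dynamically, as described above.
        gameObject.AddComponent<LeanSelectable>();    // enables selection by touch
        gameObject.AddComponent<LeanDragTranslate>(); // enables drag translation
        lockedY = transform.position.y;
    }

    void LateUpdate()
    {
        // Block Y-axis movement to prevent suddenly losing the object while dragging.
        Vector3 p = transform.position;
        transform.position = new Vector3(p.x, lockedY, p.z);
    }
}
```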
Physics (e.g., Unity's RigidBody and Collider) and scripting functionalities integrated into the main components facilitate snapping operations into docking zones sharing equal tags. This is where the condition-result feature enters, leading to a set of verifications. First, the configuration file is consulted to check the conditions involving the components in association. If such a condition exists, the system proceeds to the precedence verification step, which consists of checking the fulfillment of the other docking zones necessary to trigger the result exhibition. If the system detects missing precedences, the condition under assessment halts until all the docking zones meet the predefined requirements, and a set of warnings is shown to the user. Otherwise, the result associated with the verified condition is exhibited, which includes, for example, the activation of secondary components such as particle systems, UI components, text, and virtual buttons, allowing the user to interact with these while visualizing the changes occurring in the virtual environment. Through Unity's native OnTriggerExit event, component dissociation can be detected and the results in exhibition can be turned off. As such, many experimental interactions can be supported.
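A simplified sketch of this snap-and-verify cycle (ExperienceController, ValidateCondition, CheckPrecedences, and the other controller methods are hypothetical stand-ins for the configuration-file lookups described above):

```csharp
using UnityEngine;

// Hypothetical controller exposing the configuration-file checks.
public class ExperienceController : MonoBehaviour
{
    public bool ValidateCondition(string componentName, string zoneTag) { return true; }
    public bool CheckPrecedences(string zoneTag) { return true; }
    public void ShowResult(string componentName, string zoneTag) { }
    public void ShowPrecedenceWarning(string zoneTag) { }
    public void HideResult(string componentName, string zoneTag) { }
}

public class DockingZoneBehaviour : MonoBehaviour
{
    public string zoneTag;                  // associability is regulated by tag equality
    public ExperienceController controller;

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag(zoneTag)) return; // only matching-tag components snap

        // Consult the configuration for a condition involving this pairing, then
        // verify that all precedent docking zones are already fulfilled.
        if (controller.ValidateCondition(other.name, zoneTag) &&
            controller.CheckPrecedences(zoneTag))
        {
            controller.ShowResult(other.name, zoneTag); // e.g., particles, UI, text
        }
        else
        {
            controller.ShowPrecedenceWarning(zoneTag);  // warn until requirements are met
        }
    }

    void OnTriggerExit(Collider other)
    {
        // Component dissociation turns off any result in exhibition.
        controller.HideResult(other.name, zoneTag);
    }
}
```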

4.5. Didactic Experience Configuration

The starting point of a didactic experience configuration activity by a teacher is a general menu, offering options for setting up an AR-based environment with pedagogic purposes (Figure 10a). All components and conditions are specified in this area of the solution. There are also options to change the environment's configuration by adding and deleting components, conditions, and results. Along with the general menu, there is a gallery of available components (Figure 10b) that the user can add to the environment, such as docking areas, containers, experience assets (e.g., molecules and materials), texts, and buttons, among others. By clicking on any type of component, a user can create an instance and manipulate its position, rotation, and scale in the virtual environment. The component properties menu (Figure 10c) offers another set of options that allow one to define various parameters, such as the name of the component and its position, rotation, and scaling (with manual input of coordinates using text boxes or, more interactively, through draggable gizmos), as well as physical properties, associations with other main or secondary components, and the assignment of actions. All these properties will serve the purpose of triggering a certain behavior when the students carry out the AR experience. A component identification strategy was also implemented (Figure 10d) through a tag menu, which enables one to organize items into different categories that can later be highlighted in the environment. Additionally, condition-result features are also triggered by this tagging approach. After selecting objects from the gallery, the respective virtual representations are displayed in an interactive configuration viewport that allows the user to perform positioning, rotation, and scaling operations upon the added assets. Such interaction is provided through a manipulatable virtual gizmo similar to the ones found in 3D modelling tools, leveraging a Unity plugin known as Runtime Transform Gizmos, in which several features can be found and explored (e.g., object pivot alteration, free-form spatial movement, 2D sliding rules, undo and redo of actions, etc.). A floating context menu, available when the object is selected, facilitates the user's access to the available operations. An example of the addressed functionalities is illustrated in Figure 11.
It is worth highlighting that the configuration viewport consists of a grid with a predefined size for moving components with a homogeneous spacing adjustment unit, respecting the limits/adjacencies of neighboring elements. Moreover, the configurator also checks for wrong or incomplete configurations before releasing the AR experience to the students. Such validation encompasses a set of necessary fields that require parameterization (e.g., indication of the components' names, characteristics and properties selection, transform values for the environment's arrangement, etc.) and that must comply with the data model proposed in Section 3. Finally, there is the possibility of loading and editing previously configured environments.
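This validation step might be sketched as follows (the rule set is illustrative, mirroring the data model of Section 3; an empty issue list would allow the release of the experience):

```csharp
using System.Collections.Generic;

public static class ConfigValidator
{
    // Returns a list of problems; an empty list means the experience can be released.
    public static List<string> Validate(ExperienceData data)
    {
        var issues = new List<string>();

        foreach (ComponentData component in data.components)
        {
            if (string.IsNullOrEmpty(component.name))
                issues.Add("A component is missing a name.");
            if (component.transform == null)
                issues.Add($"Component '{component.name}' lacks transform values.");
        }

        // The configurator flags experiences with no cause-effect events to address.
        if (data.conditions == null || data.conditions.Count == 0)
            issues.Add("No condition-result events are defined.");

        return issues;
    }
}
```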

4.6. Didactic Experience Delivery

The counterpart of the experience configuration is the experience delivery, a module wherein students can access and interact with previously prepared AR-based environments to carry out the pedagogic activities defined by a teacher. The development of this module focused on Android devices and requires no Internet connection, at least after the experience configuration has been downloaded. The following stages are involved: (a) data reading and interpretation of previously designed experiences and (b) setting up of the designed experience-related environment, as well as the layout of all associated components (main and secondary ones, besides docking zones).
As for the menus that allow users to navigate through the part of the application that delivers the AR-based experience, a few are worth highlighting, as depicted in Figure 12.
The initial menu presents the experience sets, built around the subject's themes (e.g., molecular properties in the scope of Physics and Chemistry), which, in turn, branch into categories or subtopics (e.g., physical state changes, material density, etc.). Afterwards, a menu drives the user to either the tutorial mode or the experience mode. While the former consists of an onboarding tutorial that aims to build the student's skills in operating the applicational level of the AR-based pedagogic tool, the latter refers to the display of the didactic experience itself.

4.6.1. Exploratory Tutorial

The system provides an introduction submodule within the experience delivery frontend, in the form of a tutorial, which comprises a set of step-by-step procedures, introducing the user to some of the available features and allowing him/her to perform some actions.
The introduction highlights the tracking method and steps for overlaying the virtual environment onto the real-world stage. It then guides users through filling the AR-based environment. In tutorial mode, only the system-driven filling approach is active, requiring more manual operations. Components are instantiated in their respective positions, and users receive instructions on how to drag components between positions. Dragging triggers a visual indication of associations between components and docking areas based on tag similarity, along with a post-snapping result overview. Users can freely move components and switch between docking zones to see different results. Finally, the user is redirected to the main menu. Figure 13 shows a set of screenshots associated with the distinct steps of an ongoing tutorial.

4.6.2. AR for Pedagogic Activities

The Vuforia SDK processes the camera feed to deliver AR experiences using marker-based and markerless tracking techniques. Marker-based tracking matches known template images with real-world regions of interest. Markerless tracking, specifically horizontal plane detection, uses Vuforia’s plane finder component to identify real-world features and estimate surfaces like the ground or tables. Successful detection triggers a virtual grid on the screen, indicating recognized physical planes for arranging virtual components. The augmentation environment can be constantly repositioned until tracking is lost. These techniques make the application flexible for AR-based pedagogic experiences, utilizing both image-based registration and natural feature detection for robust tracking. Figure 14 depicts both scenarios.

5. Tests and Results

Tests involving students within a classroom environment were carried out to assess the ARPocketLab application regarding usability, user satisfaction, and its impact on the learning process.
The experiments took place in the presence of and under the constant surveillance of the teacher in charge of the underaged students, and all the participating subjects voluntarily agreed to be involved in the study without the opposition of their legal guardians. Though these experiments were harmless, measures were taken to ensure the safety of the students, namely obstacle clearance, confinement of the navigation area extension, and screening of students based on relevant health conditions (e.g., photosensitivity). Moreover, the surveying instruments used to collect participants' data were designed to ensure complete anonymity.

5.1. Target Audience

The targets of these tests were two 8th-grade class groups of the Eiriz School Cluster (Paços de Ferreira, Portugal), composed of 40 students in total, of which 21 were male and 19 were female (Figure 15a). All students were aged between 13 and 14 years old. To deploy the planned AR-based pedagogic experiences, explained in more detail later in this section, and to collect opinions, considering the number of available devices with the preinstalled application, the participants were split into subgroups of three students. Prior to these tests, 55% of them (22 subjects) declared previous contact with AR (Figure 15b).

5.2. Case-Studies

To carry out the experiments for the ARPocketLab evaluation, two distinct AR environments centered around the 8th-grade Physics and Chemistry subject were configured, more specifically focusing on the concepts of physical state changes and material density. These environments stemmed from the challenges observed in previous academic years involving the topics embraced.
The first environment, titled “Physical State Changes” (Figure 16a), encompasses molecules to which the user can assign varying temperatures in order to induce physical state changes, represented through visualizable transformations such as water turning into ice or water vapor. Temperature values can be manipulated using a slider control, with changes reflected immediately at each transition. The primary objective of this environment is to elucidate the relationship between different components and their respective physical states, as well as to illustrate the molecular organization within each state.
The second environment, “Material Density” (Figure 16b), comprises a water tank and a set of elements for sink/float tests, specifically solid water in the form of an ice cube, as well as cork, aluminum, and gold. In this laboratory-like setting, participants are encouraged to bring these materials into contact with the tank and observe their interactions with water, with the goal of gaining an empirical understanding of the different densities.
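Both environments follow simple condition-result rules that can be made explicit with a short sketch: water's physical state follows its temperature relative to the melting and boiling points, and an object floats when its density is below that of water. The snippet below encodes these rules in Python with textbook density values (in g/cm³); it is a didactic illustration, not the application's code.

WATER_DENSITY = 1.00  # g/cm3, liquid water

# Approximate densities of the test materials used in the case study.
MATERIALS = {"ice": 0.92, "cork": 0.24, "aluminum": 2.70, "gold": 19.30}

def water_state(temp_c: float) -> str:
    """Physical state of water at standard pressure (boundary cases simplified)."""
    if temp_c <= 0:
        return "solid"
    if temp_c < 100:
        return "liquid"
    return "gas"

def floats(material: str) -> bool:
    """An object floats when it is less dense than water."""
    return MATERIALS[material] < WATER_DENSITY

print(water_state(-5), water_state(25), water_state(120))  # solid liquid gas
for m in MATERIALS:
    print(m, "floats" if floats(m) else "sinks")

Consistently with KAT Question 3 below, this rule marks aluminum and gold as denser than water (they sink), while cork and ice float.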

5.3. Strategy for Data Gathering

A system usability scale (SUS)-like questionnaire and a knowledge assessment test (KAT) were the main instruments for collecting data from the participants. The former comprises a series of interactivity/usability-related questions, each answered with a single option on a 5-point Likert-style scale. The questions encompassed by the SUS-like questionnaire address the operational ease/intuitiveness associated with a specific set of factors, namely: (i) manipulation of virtual components' positions; (ii) virtual components' placement targets (e.g., docking zones); (iii) predictability of virtual components' responses to actions (dragging objects, changing temperature, changing density values, etc.); and (iv) the application's overall utilization. The KAT serves as a utility assessment tool, posing three Physics and Chemistry-related questions to participants both before and after engaging in the AR-based pedagogical activities. The primary purpose of the KAT is to evaluate participants' knowledge prior to the experience and to assess the impact of ARPocketLab on their learning process. These questions are disclosed in Table 4.
Both usability and utility are identified by Nielsen [50] as important concepts for determining whether a system can be used in a way that achieves the proposed goal. Usability refers to how easily users can operate specific functions within a system to achieve their goals. Utility, in turn, outlines whether the functionality effectively meets the user's needs.
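To illustrate how the SUS-like responses can be aggregated into the per-item statistics reported later (Tables 5 and 6), the sketch below computes a mean (x̄) and a dispersion value for each 5-point Likert item. The response data shown are invented for illustration, and treating the reported α as a per-item dispersion statistic is an assumption, not a claim about the authors' exact computation.

import statistics

# Hypothetical 5-point Likert responses, one list per SUS-like item.
responses = {
    "virtual component manipulation": [5, 4, 4, 3, 5, 4, 5, 3],
    "overall utilization satisfaction": [5, 5, 4, 4, 5, 5, 4, 5],
}

for item, scores in responses.items():
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)  # sample standard deviation
    print(f"{item}: x̄ = {mean:.2f}, dispersion = {spread:.2f}")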

5.4. Procedures

The ARPocketLab tests comprised five distinct stages: (i) general presentation of the application prototype; (ii) registration of participants' knowledge through the KAT-based questionnaire prior to the AR-based experience; (iii) engagement of participants with the pedagogical experience supported by ARPocketLab; (iv) post-experience registration of participants' knowledge using the same KAT questionnaire; and (v) completion of the SUS-like questionnaire.
Initially, a general presentation of the ARPocketLab prototype took place, covering its main functionalities and the interaction dynamics with the AR-based tracking features. Then, without using the application, participants were asked to complete the KAT individually. This step was followed by a demonstration of the two environments to briefly train the participants before they effectively used the ARPocketLab application. Groups were then formed, and each received a device with the application preinstalled. During the AR-based experience, the groups were monitored to ensure compliance with the minimum requirements necessary for the best possible use of the application. At the end, participants were asked to complete the KAT once again, providing data for the later analysis of how the experience impacts students' learning. Finally, the SUS-like questionnaire was also voluntarily completed by the participants to collect their opinions regarding the application's usability.

5.5. Results and Discussion

For the questions associated with virtual components' perception, manipulation, placement, actions' predictability, and overall utilization satisfaction, the results of the tests are presented in Table 5. On a scale ranging from 1 (most difficult interaction) to 5 (easiest interaction), this set of usability-oriented questions achieved a global x̄ of around 4.38 points, which corresponds to the level of excellence on the Bangor scale [51]. According to the participants, manipulation of virtual components was the most challenging interaction feature; still, an x̄ = 4.20 points was observed (α = 0.93). The participants also highlighted the excellent overall utilization of the system, with x̄ = 4.55 points (α = 0.55).
As for intuition, confidence, and autonomy, quite acceptable rates were also reported (Table 6), reaching a global x̄ of 3.75 points. Despite the good values achieved for intuition and confidence, 4.48 (α = 0.67) and 4.23 (α = 0.72) points, respectively, autonomy gathered the least unanimity (x̄ = 2.55, α = 1.12), with many participants considering that the AR-based pedagogic experiences require qualified supervision (e.g., a teacher).
Throughout the experiments, several comments were collected from the students, both during and after the sessions with ARPocketLab. As positive feedback, participants highlighted the ease of use, the potential to raise interest while remaining adequately interactive, the simplicity of accessing and learning from digital didactic content, and the balance between the educational and recreational sides. Furthermore, participants seem to share the vision of an increasingly modern school, with AR-based tools integrated into pedagogic processes. Supporting this evidence is Figure 17a, which illustrates a clear inclination among students to use ARPocketLab or a similar application for pedagogical purposes in the future. Conversely, only a few issues were reported, including some difficulties in dragging virtual components to a given position, a few visual errors, and the scarcity, in both quantity and variety, of alternative pedagogic-oriented virtual environments.
The results derived from the KAT, a set of questions related to the Physics and Chemistry curricular unit presented to the students before and after using the AR application that portrays experiences aligned with these contents, evidence the positive contribution of ARPocketLab to the participants' learning process.
Regarding the first KAT question (“Does the water molecules aggregation vary with temperature?”), there was no change in the answers before and after the experience. Effectively, 39 out of 40 students answered the question correctly, affirming that the aggregation of water molecules indeed varies with temperature. The plot associated with the results of this question is shown in Figure 17b.
Regarding the question “Does this aggregation increase or decrease as the temperature value grows?”, Figure 17c shows that before the AR-based pedagogic experience only 15 participants answered correctly, while after contact with the application this number grew to 20. In this case, the ARPocketLab solution effectively strengthened learning, increasing by 12.5 percentage points the share of the sample able to answer in accordance with the scientific truth.
Finally, for the question “Identify materials that are denser than water”, four options were given (ice, cork, aluminum, and gold), and each participant could select more than one answer. To account for overall performance, the numbers of correct and wrong answers were considered both before and after the AR-based experience. These values are disclosed in Figure 17d, from which one can infer that contact with the ARPocketLab solution not only marginally increased the number of correct answers but also considerably reduced the erroneous responses overall.
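The learning-gain figures above follow directly from the answer counts; for Question 2, for instance, correct answers rose from 15 to 20 out of 40 participants, i.e., 5/40 = 12.5 percentage points. The bookkeeping can be sketched as follows, with the counts taken from the reported results:

SAMPLE_SIZE = 40

# Correct-answer counts before and after the AR experience (reported results).
kat_correct = {
    "Q1 (aggregation varies with temperature)": (39, 39),
    "Q2 (aggregation decreases as temperature grows)": (15, 20),
}

for question, (before, after) in kat_correct.items():
    gain_pp = 100 * (after - before) / SAMPLE_SIZE  # percentage points of sample
    print(f"{question}: {before} -> {after} correct ({gain_pp:+.1f} pp)")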
Considering the proposed AR solution and the obtained results, the research questions posed at the beginning of this work can now be addressed.
Starting with RQ1, which asks whether the learning effectiveness of AR observed in this work corroborates the related literature, it is possible to affirm that the conclusions drawn from the tests conducted with students agree with the scientific literature surveyed in this paper, which highlights AR as an effective technology for supporting pedagogic activities. For example, in [45], AR proved to be a significant addition to pedagogic tactics involving collaborative learning, problem-based learning, and situated learning strategies; in these mixed strategies, students' performances increased, respectively, from C to A−, from C to B+, and from C to B, and AR also demonstrated benefits in both formal and informal contexts. In [46], the employment of AR likewise resulted in increased performance compared to the control group, with gains of 18.52% in engineering graphics questions and 28.97% in mental rotation tests. Garzón et al. [47], who examined the role of AR in education through a systematic review and meta-analysis of 27 studies relying on pretest-posttest control designs to measure student learning gains, observed a positive effect measured through Cohen's d, mostly in Primary and Upper Secondary education (d = 0.65 and d = 0.70, respectively). The smallest performance gains were observed for students attending Lower Secondary education, the level targeted by the work around ARPocketLab, yet the effect remained significant (d = 0.60). Moreover, the same review showed that Arts and Humanities and Health and Welfare were the fields of education that benefited most from the inclusion of AR technologies, while the measured impact in Natural Sciences was also relevant (d = 0.69). In [48], in which the influence of AR on university students' engagement, performance, and knowledge acquisition was investigated around the topic “ways of using videos in teaching”, test grades significantly increased from 4.56 to 9.95 after contact with the encompassed technology.
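For context, the Cohen's d values cited from Garzón et al. [47] are standardized mean differences. In a pretest-posttest control design, the effect size is typically computed as

d = (x̄₁ − x̄₂) / s_pooled,  with  s_pooled = √[((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2)],

where x̄₁ and x̄₂ are the group means (e.g., experimental vs. control learning gains), s₁ and s₂ their standard deviations, and n₁ and n₂ the group sizes. By the usual convention, d ≈ 0.5 denotes a medium effect and d ≈ 0.8 a large one, placing the reported values (0.60 to 0.70) between medium and large.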
In the feedback collected from the participants testing ARPocketLab, the technology acceptance rate was high (4.97/6), and 97.5% of them declared willingness to use the proposed application (or a similar one) for pedagogic purposes in the future. These opinions, along with the KAT results, which indicate an increase in learning performance aligned with the related literature (e.g., [45,46,47,48]), denote a positive influence of AR-based technologies on pedagogic practices within the Portuguese educational context, particularly at middle school stages, therefore answering the RQ2 posed earlier in this paper.
Last but not least, this study's RQ3, regarding digital-era students' proficiency with learning-oriented AR-based technologies, can be addressed with the results obtained from the usability, confidence, intuition, and autonomy questionnaire. After interacting with ARPocketLab, participants reported high satisfaction levels (average score: 4.38/5), indicating confidence and intuitive understanding. This positive outcome may be influenced by the implemented condition-result method, which makes cause-effect relationships directly visualizable. However, the need for the assistance of a supervisor was also expressed, notably in terms of autonomy. Despite students' enthusiasm for AR technologies in learning, the perceived necessity for supervisor guidance emphasizes the significance of resolving autonomy-related challenges for the successful integration of AR for pedagogic purposes.

6. Conclusions and Future Work

This paper presented ARPocketLab, a flexible and scalable AR-oriented solution designed to meet the dynamic needs of modern education through handheld digital learning environments. Unlike most existing works in the literature, ARPocketLab prioritizes authoring tools, offering a dual-interface system that enables both educators and students to interactively design and engage with AR content directly aligned with educational objectives. Requiring minimal physical resources to operate, ARPocketLab can be seen as a versatile mobile solution applicable across a multitude of subjects, themes, and topics. With the ability to seamlessly integrate pedagogic-oriented digital experiences into the real world while providing the interactivity and visual stimuli necessary to reinforce the association between virtual actions and reactions, ARPocketLab aims to improve students' learning according to the requirements outlined by a teacher through configuration.
In the tests conducted using ARPocketLab, focusing on two Physics and Chemistry case studies and involving 8th grade students, distinctly positive feedback was reported, namely regarding the system's usability (average score: 4.38/5). Furthermore, participants also reported a good experience in terms of confidence and intuitiveness after contact with the application prototype. However, autonomy-related challenges require attention for the successful integration of ARPocketLab in pedagogic settings. Even so, 97.5% of participants reported willingness to use ARPocketLab (or a similar application) in the future, underscoring the potential of the proposed solution for enhancing learning experiences. Moreover, the results align with the existing scientific literature, which highlights AR as a promising tool to enhance students' learning. This was evident in the KATs, which revealed improved student performance following interaction with the proposed solution.
Looking ahead, future research should focus on bringing human-centric development [52] to the educational context, advancing mass personalization through the prioritization of scalable and universal concepts that benefit all generations and emphasize equity. Additionally, incorporating publicly available generative artificial intelligence services, such as ChatGPT, could open important research lines for innovative features. Leveraging natural language processing and rule-based systems, strategies for quickly setting up AR-based pedagogical environments could be developed, reducing the technical knowledge required. Furthermore, AI's context-aware text generation could aid in implementing digital tutors, complementing educators' activities. Integrating gamification strategies [53] could also be a valuable research avenue to enhance student concentration, motivation, engagement, and flow experience.

Author Contributions

Conceptualization, M.N., T.A. and P.B.; methodology, M.N., T.A. and P.B.; software, M.N.; validation, T.A. and P.B.; formal analysis, T.A., E.P. and R.M.; investigation, M.N., S.S. and A.C.; resources, M.N. and T.A.; data curation, T.A. and D.C.; writing—original draft preparation, M.N., T.A., S.S., A.C. and D.C.; writing—review and editing, T.A., P.B., L.M., E.P. and R.M.; visualization, M.N., T.A. and S.S.; supervision, T.A., P.B. and L.M.; project administration, E.P. and R.M.; funding acquisition, E.P. and R.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to acknowledge the Vine and Wine Portugal Project, co-financed by the RRP—Recovery and Resilience Plan and the European Next Generation EU Funds, within the scope of the Mobilizing Agendas for Reindustrialization, under the reference C644866286-00000011. This research activity was also co-supported by national funds from the FCT-Portuguese Foundation for Science and Technology under the projects UIDB/04033/2020 and LA/P/0126/2020.

Data Availability Statement

All the relevant data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Haleem, A.; Javaid, M.; Qadri, M.A.; Suman, R. Understanding the Role of Digital Technologies in Education: A Review. Sustain. Oper. Comput. 2022, 3, 275–285.
2. Riel, J.; Lawless, K.A. Developments in MOOC Technologies and Participation Since 2012. In Encyclopedia of Information Science and Technology, 4th ed.; IGI Global: Hershey, PA, USA, 2018; pp. 7888–7897. ISBN 978-1-5225-2255-3.
3. Giovannella, C. Effect Induced by the COVID-19 Pandemic on Students’ Perception about Technologies and Distance Learning. In Ludic, Co-Design and Tools Supporting Smart Learning Ecosystems and Smart Education; Springer: Singapore, 2020.
4. Fresen, J.W. Embracing Distance Education in a Blended Learning Model: Challenges and Prospects. Distance Educ. 2018, 39, 224–240.
5. Ober, J.; Kochmańska, A. Remote Learning in Higher Education: Evidence from Poland. Int. J. Environ. Res. Public Health 2022, 19, 14479.
6. Halverson, R.; Shapiro, B. Technologies for Education and Technologies for Learners: How Information Technologies Are (and Should Be) Changing Schools. Wis. Cent. Educ. Res. (WCER) Work. Pap. 2012, 6, 33.
7. Kovács, P.T.; Murray, N.; Rozinaj, G.; Sulema, Y.; Rybárová, R. Application of Immersive Technologies for Education: State of the Art. In Proceedings of the 2015 International Conference on Interactive Mobile Communication Technologies and Learning (IMCL), Thessaloniki, Greece, 19–20 November 2015; pp. 283–288.
8. Osadchyi, V.V.; Valko, N.V.; Kuzmich, L.V. Using Augmented Reality Technologies for STEM Education Organization. J. Phys. Conf. Ser. 2021, 1840, 012027.
9. Borthwick, A.C.; Anderson, C.L.; Finsness, E.S.; Foulger, T.S. Personal Wearable Technologies in Education: Value or Villain? J. Digit. Learn. Teach. Educ. 2015, 31, 85–92.
10. Desai, M.S.; Vidyapeeth, B. Role of Information Communication Technologies in Education. In Proceedings of the 4th National Conference, Ottawa, ON, Canada, 11–12 May 1964.
11. Beck, D. Special Issue: Augmented and Virtual Reality in Education: Immersive Learning Research. J. Educ. Comput. Res. 2019, 57, 1619–1625.
12. Fernandez, M. Augmented Virtual Reality: How to Improve Education Systems. High. Learn. Res. Commun. 2017, 7, 1–15.
13. Rokhsaritalemi, S.; Sadeghi-Niaraki, A.; Choi, S.-M. A Review on Mixed Reality: Current Trends, Challenges and Prospects. Appl. Sci. 2020, 10, 636.
14. Aheleroff, S.; Mostashiri, N.; Xu, X.; Zhong, R.Y. Mass Personalisation as a Service in Industry 4.0: A Resilient Response Case Study. Adv. Eng. Inform. 2021, 50, 101438.
15. Rojas-Sánchez, M.A.; Palos-Sánchez, P.R.; Folgado-Fernández, J.A. Systematic Literature Review and Bibliometric Analysis on Virtual Reality and Education. Educ. Inf. Technol. 2023, 28, 155–192.
16. Azuma, R.T. A Survey of Augmented Reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385.
17. Santos, M.E.C.; Chen, A.; Taketomi, T.; Yamamoto, G.; Miyazaki, J.; Kato, H. Augmented Reality Learning Experiences: Survey of Prototype Design and Evaluation. IEEE Trans. Learn. Technol. 2014, 7, 38–56.
18. Costanza, E.; Kunz, A.; Fjeld, M. Mixed Reality: A Survey. In Human Machine Interaction: Research Results of the MMI Program; Lalanne, D., Kohlas, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 47–68. ISBN 978-3-642-00437-7.
19. Athow, D. The Apple Vision Pro Costs the Same as Microsoft’s Hololens-There’s a Reason Why. Available online: https://www.techradar.com/news/the-apple-vision-pro-costs-the-same-as-microsofts-hololens-theres-a-reason-why (accessed on 25 May 2024).
20. Wang, M.; Callaghan, V.; Bernhardt, J.; White, K.; Peña-Rios, A. Augmented Reality in Education and Training: Pedagogical Approaches and Illustrative Case Studies. J. Ambient. Intell. Humaniz. Comput. 2018, 9, 1391–1402.
21. Kim, H.; Kwon, Y.-T.; Lim, H.-R.; Kim, J.-H.; Kim, Y.-S.; Yeo, W.-H. Recent Advances in Wearable Sensors and Integrated Functional Devices for Virtual and Augmented Reality Applications. Adv. Funct. Mater. 2021, 31, 2005692.
22. Wang, K.; Yap, L.W.; Gong, S.; Wang, R.; Wang, S.J.; Cheng, W. Nanowire-Based Soft Wearable Human–Machine Interfaces for Future Virtual and Augmented Reality Applications. Adv. Funct. Mater. 2021, 31, 2008347.
23. Blattgerste, J.; Luksch, K.; Lewa, C.; Pfeiffer, T. TrainAR: A Scalable Interaction Concept and Didactic Framework for Procedural Trainings Using Handheld Augmented Reality. Multimodal Technol. Interact. 2021, 5, 30.
24. Žilak, M.; Car, Ž.; Čuljak, I. A Systematic Literature Review of Handheld Augmented Reality Solutions for People with Disabilities. Sensors 2022, 22, 7719.
25. Wyss, C.; Bührer, W.; Furrer, F.; Degonda, A.; Hiss, J.A. Innovative Teacher Education with the Augmented Reality Device Microsoft HoloLens—Results of an Exploratory Study and Pedagogical Considerations. Multimodal Technol. Interact. 2021, 5, 45.
26. Dhar, P.; Rocks, T.; Samarasinghe, R.M.; Stephenson, G.; Smith, C. Augmented Reality in Medical Education: Students’ Experiences and Learning Outcomes. Med. Educ. Online 2021, 26, 1953953.
27. Koulieris, G.A.; Akşit, K.; Stengel, M.; Mantiuk, R.K.; Mania, K.; Richardt, C. Near-Eye Display and Tracking Technologies for Virtual and Augmented Reality. Comput. Graph. Forum 2019, 38, 493–519.
28. Berryman, D.R. Augmented Reality: A Review. Med. Ref. Serv. Q. 2012, 31, 212–218.
29. Lee, K. Augmented Reality in Education and Training. TechTrends 2012, 56, 13–21.
30. Marín, V.; Sampedro, B.E.; Muñoz González, J.M.; Vega, E.M. Primary Education and Augmented Reality. Other Form to Learn. Cogent Educ. 2022, 9, 2082082.
31. Barreira, J.; Bessa, M.; Pereira, L.C.; Adão, T.; Peres, E.; Magalhães, L. MOW: Augmented Reality Game to Learn Words in Different Languages: Case Study: Learning English Names of Animals in Elementary School. In Proceedings of the 7th Iberian Conference on Information Systems and Technologies (CISTI 2012), Madrid, Spain, 20–23 June 2012; pp. 1–6.
32. Afnan; Muhammad, K.; Khan, N.; Lee, M.-Y.; Imran, A.S.; Sajjad, M. School of the Future: A Comprehensive Study on the Effectiveness of Augmented Reality as a Tool for Primary School Children’s Education. Appl. Sci. 2021, 11, 5277.
33. Kerawalla, L.; Luckin, R.; Seljeflot, S.; Woolard, A. “Making It Real”: Exploring the Potential of Augmented Reality for Teaching Primary School Science. Virtual Real. 2006, 10, 163–174.
34. Demitriadou, E.; Stavroulia, K.-E.; Lanitis, A. Comparative Evaluation of Virtual and Augmented Reality for Teaching Mathematics in Primary Education. Educ. Inf. Technol. 2020, 25, 381–401.
35. Chiu, J.L.; DeJaegher, C.J.; Chao, J. The Effects of Augmented Virtual Science Laboratories on Middle School Students’ Understanding of Gas Properties. Comput. Educ. 2015, 85, 59–73.
36. Di Serio, Á.; Ibáñez, M.B.; Kloos, C.D. Impact of an Augmented Reality System on Students’ Motivation for a Visual Art Course. Comput. Educ. 2013, 68, 586–596.
37. Sahin, D.; Yilmaz, R.M. The Effect of Augmented Reality Technology on Middle School Students’ Achievements and Attitudes towards Science Education. Comput. Educ. 2020, 144, 103710.
38. Moreno-Guerrero, A.-J.; Alonso García, S.; Ramos Navas-Parejo, M.; Campos-Soto, M.N.; Gómez García, G. Augmented Reality as a Resource for Improving Learning in the Physical Education Classroom. Int. J. Environ. Res. Public Health 2020, 17, 3637.
39. Squire, K.D.; Jan, M. Mad City Mystery: Developing Scientific Argumentation Skills with a Place-Based Augmented Reality Game on Handheld Computers. J. Sci. Educ. Technol. 2007, 16, 5–29.
40. Pellas, N.; Kazanidis, I.; Palaigeorgiou, G. A Systematic Literature Review of Mixed Reality Environments in K-12 Education. Educ. Inf. Technol. 2020, 25, 2481–2520.
41. Martín-Gutiérrez, J.; Fabiani, P.; Benesova, W.; Meneses, M.D.; Mora, C.E. Augmented Reality to Promote Collaborative and Autonomous Learning in Higher Education. Comput. Hum. Behav. 2015, 51, 752–761.
42. Cabero-Almenara, J.; Barroso-Osuna, J.; Llorente-Cejudo, C.; Fernández Martínez, M.d.M. Educational Uses of Augmented Reality (AR): Experiences in Educational Science. Sustainability 2019, 11, 4990.
43. Wasko, C. What Teachers Need to Know About Augmented Reality Enhanced Learning Environments. TechTrends 2013, 57, 17–21.
44. Tzima, S.; Styliaras, G.; Bassounas, A. Augmented Reality Applications in Education: Teachers Point of View. Educ. Sci. 2019, 9, 99.
45. Garzón, J.; Kinshuk; Baldiris, S.; Gutiérrez, J.; Pavón, J. How Do Pedagogical Approaches Affect the Impact of Augmented Reality on Education? A Meta-Analysis and Research Synthesis. Educ. Res. Rev. 2020, 31, 100334.
46. Kudale, P.; Buktar, R. Investigation of the Impact of Augmented Reality Technology on Interactive Teaching Learning Process. Int. J. Virtual Pers. Learn. Environ. (IJVPLE) 2022, 12, 1–16.
47. Garzón, J.; Pavón, J.; Baldiris, S. Systematic Review and Meta-Analysis of Augmented Reality in Educational Settings. Virtual Real. 2019, 23, 447–459.
48. Cabero-Almenara, J.; Fernández-Batanero, J.M.; Barroso-Osuna, J. Adoption of Augmented Reality Technology by University Students. Heliyon 2019, 5, e01597.
49. Avila-Garzon, C.; Bacca-Acosta, J.; Kinshuk; Duarte, J.; Betancourt, J. Augmented Reality in Education: An Overview of Twenty-Five Years of Research. Contemp. Educ. Technol. 2021, 13, ep302.
50. Nielsen, J. Usability Engineering; Morgan Kaufmann: Burlington, MA, USA, 1994; ISBN 978-0-12-518406-9.
51. Bangor, A.; Kortum, P.; Miller, J. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Stud. 2009, 4, 114–123.
52. Aheleroff, S.; Huang, H.; Xu, X.; Zhong, R.Y. Toward Sustainability and Resilience with Industry 4.0 and Industry 5.0. Front. Manuf. Technol. 2022, 2, 951643.
53. Oliveira, W.; Hamari, J.; Shi, L.; Toda, A.M.; Rodrigues, L.; Palomino, P.T.; Isotani, S. Tailored Gamification in Education: A Literature Review and Future Agenda. Educ. Inf. Technol. 2023, 28, 373–406.
Figure 1. Conceptual architecture fitting the proposed teaching-learning process enhancement system.
Figure 2. General use-case diagram, congregating the main actions that can be performed by: (a) a teacher; and (b) a student.
Figure 3. Non-normalized E-R model for the proposed AR-based system.
Figure 4. AR-based pedagogic experience configuration process. This process involves defining docking zones and components, setting their properties and condition-result definitions, aligning tags, and confirming system validation, before storing the AR experience in a database for future access by teachers or students.
Figure 5. AR-based pedagogic experience main control flow. After the selection of the environment (set up by the teacher), the preferred tracking method must be selected (marker-based or plane recognition), as well as the filling mode (configuration-driven or system-driven). Afterwards, components are instantiated and can be manipulated and attached to the docking zones by the user, who is presented with results after validations against condition and precedence satisfaction criteria.
Figure 6. Examples of 3D models imported to Unity in Filmbox (.fbx) format, representing entities of the physicochemical context (molecules, goblets, water tank, weighing scale device, abstract objects for density tests, etc.).
Figure 7. Serialization and deserialization process.
Figure 8. ARPocketLab tracking methods, through Vuforia support: (a) surface/plane detection; and (b) fiducial marker tracking.
Figure 9. Relationship between the configuration file, main component, secondary component, and docking component/zone. The main component represents the intrinsic state of an element that, after being snapped to the docking zone, produces a visualizable result through an associated secondary component.
Figure 10. Menus for configuring AR-based pedagogic experiences: (a) general configuration menu; (b) component gallery menu; (c) component properties menu; and (d) tags menu.
Figure 11. Disposition of gizmos on the selected component. From left to right, position, rotation, and scale operations are represented.
Figure 12. Application mockup of the ARPocketLab menu sequence, in didactic experience delivery mode.
Figure 13. Tutorial steps: (a) guides for dragging components and placing them in docking zones; (b) shows how to replace components of the same category to visualize simulations under different combinations (e.g., varying molecules); (c) indicates that the tutorial has ended and shows an option to return to the main menu.
Figure 14. 8th-grade Portuguese students experimenting with the AR-based application in a classroom of the Eiriz School Cluster (Paços de Ferreira, Portugal).
Figure 15. Test group characterization: (a) composition of the participants with regard to gender distribution; (b) distribution regarding previous contact with AR technology.
Figure 16. Case-study scenarios: (a) physical state changes experience; (b) material density experience.
Figure 17. Results collected from participants regarding their interest in adopting AR for learning purposes, along with the KAT results. (a) presents the statistics regarding the willingness to repeat the experience of using ARPocketLab or a similar application for pedagogical purposes in the future. Plots (b,c) depict the cumulative number of correct answers for KAT Question 1 (“Does the water molecules aggregation vary with temperature?”) and KAT Question 2 (“Does this aggregation increase or decrease as the temperature value grows?”), respectively, after and before the AR-based pedagogic experience. Finally, the answers to KAT Question 3 (“Identify materials that are denser than water”) are presented in (d), also after and before contact with the proposed AR-based app.
Table 1. Functional requirements, listing the features and functions of interest, with a strong focus on what will be possible to do with the application to be implemented.

Functional Requirement | Target Actor
A backoffice for the configuration of AR-based pedagogic experiences must be provided. | Teacher
Within the configuration area, users must be able to define areas for placing virtual assets. | Teacher
Configuration actors must be able to select virtual assets from a gallery, as well as to place them in the environment, predefining their positions. | Teacher
Each virtual asset should be tolerant to the definition of a visualizable cause-effect event. | Teacher
The system must allow the definition of precedency rules to orient the order of experience events. | Teacher
The system must be able to synchronize configurations with a database, ensuring coherency in the use of virtual assets and eventual precedencies. | Teacher
The system must provide optional usage tutorials for onboarding students. | Student
The system must be able to launch AR-based experiences in a frontoffice, loaded from the configurations made in the backoffice. | Student
During AR experiences, the system must be able to display cause-effect occurrences, confining the use of virtual assets and precedencies in agreement with the loaded configurations. | Student
Navigation menus should ensure reachability to all of the applicational resources. | Student/Teacher
Table 2. Non-functional requirements, addressing the general aspects that the application to be implemented needs to consider in terms of performance, usability, platform availability, and involved technologies, alongside verification guidelines consisting of attainable confirmation measures.

Non-Functional Requirement | Verification Guidelines
The system must be cross-platform. | Deployment for, at least, Android and Windows.
The system should have a simple and user-friendly user interface (UI). | Application's controls must be self-explanatory and provide visual feedback upon interaction; interaction with menus should be intuitive, minimal, and pragmatically tailored; interaction in the 3D environments should be smooth and flexible to allow virtual object manipulation, but well-regulated to avoid unexpected events, such as object loss during dragging.
The system should provide intuitive navigability. | Utilization of well-established UI toolsets, preferably grounded in UI heuristics; the application's control descriptions should be clear and self-explanatory; navigation menus must be chained adequately, allowing an easy walkthrough of the diverse sections of the application with minimal cognitive load.
The system should have adequate usability. | Utilization of well-established UI toolsets, preferably grounded in usability heuristics.
The system should provide expeditious operationalization. | The most demanding features, such as tracking, must be stuttering-free, and augmentations upon environment registration must take less than 1 s; targeted devices' hardware should be of medium/high grade, especially for smartphones, allowing efficient support of modern AR features.
Table 3. Marker-based vs. plane detection tracking.

Marker-Based | Plane-Based
Requires a known marker/image. | Augments over a marker-free surface.
More robust, at least with markers composed of proper trackable features. | May glitch on uniform surfaces.
Typically loses tracking shortly after the camera is misaligned with the marker. | Augmented virtual models tend to drift when the camera's capture area is extended.
Provides more expeditious augmentations. | Needs to survey the environment to recognize surfaces for augmentation.
Fixed initial augmentation anchor. | Augmentation anchor with flexible positioning within a recognized surface portion.
Table 4. KAT questions.

Topic | Question | Possible Answers
Physical State Changes | Question 1: Does the water molecules aggregation vary with temperature? | Yes (correct); No (incorrect).
Physical State Changes | Question 2: Does this aggregation increase or decrease as the temperature value grows? | Decreases (correct); Increases (incorrect).
Materials Density | Question 3: Identify materials that are denser than water. | Aluminum (correct); Cork (incorrect); Gold (correct); Ice (incorrect).
Table 5. SUS-based assessment, involving the following aspects: virtual components perception, manipulation, placement, actions' predictability, and overall utilization satisfaction.

Parameter | x̄ | α
Virtual component perception | 4.28 | 0.84
Virtual component manipulation | 4.20 | 0.93
Virtual component placement | 4.38 | 0.66
Actions' predictability | 4.50 | 0.63
Overall utilization satisfaction | 4.55 | 0.55
Table 6. Average values reported by the participants for the intuition, confidence, and autonomy aspects related to the ARPocketLab application experience.

Parameter | x̄ | α
Intuition | 4.48 | 0.67
Confidence | 4.23 | 0.72
Autonomy | 2.55 | 1.12
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
