Search Results (148)

Search Parameters:
Keywords = Microsoft Kinect

20 pages, 1534 KiB  
Article
Machine-Learning-Based Validation of Microsoft Azure Kinect in Measuring Gait Profiles
by Claudia Ferraris, Gianluca Amprimo, Serena Cerfoglio, Giulia Masi, Luca Vismara and Veronica Cimolin
Electronics 2024, 13(23), 4739; https://doi.org/10.3390/electronics13234739 - 29 Nov 2024
Abstract
Gait is one of the most extensively studied motor tasks using motion capture systems, the gold standard for instrumental gait analysis. Various sensor-based solutions have been recently proposed to evaluate gait parameters, typically providing lower accuracy but greater flexibility. Validation procedures are crucial to assess the measurement accuracy of these solutions since residual errors may arise from environmental, methodological, or processing factors. This study aims to enhance validation by employing machine learning techniques to investigate the impact of such errors on the overall assessment of gait profiles. Two datasets of gait trials, collected from healthy and post-stroke subjects using a motion capture system and a 3D camera-based system, were considered. The estimated gait profiles include spatiotemporal, asymmetry, and body center of mass parameters to capture various normal and pathologic gait peculiarities. Machine learning models show the equivalence and the high level of agreement and concordance between the measurement systems in assessing gait profiles (accuracy: 98.7%). In addition, they demonstrate data interchangeability and integrability despite residual errors identified by traditional statistical metrics. These findings suggest that validation procedures can extend beyond strict measurement differences to comprehensively assess gait performance.
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)
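The equivalence idea above can be illustrated with a minimal sketch: a classifier fitted on motion-capture gait profiles should classify the 3D-camera profiles about equally well if the two systems are interchangeable. Everything below is invented for illustration (synthetic feature values, a nearest-centroid stand-in for the paper's machine learning models).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gait profiles (illustrative, not the paper's data):
# class 0 = "healthy", class 1 = "post-stroke"; three parameters each
# (e.g. step length [m], cadence [steps/min], asymmetry index).
n = 100
healthy = rng.normal([0.65, 110.0, 0.03], [0.05, 6.0, 0.01], size=(n, 3))
stroke = rng.normal([0.45, 85.0, 0.15], [0.06, 7.0, 0.04], size=(n, 3))
X_mocap = np.vstack([healthy, stroke])
y = np.array([0] * n + [1] * n)

# Second system = mocap measurement plus a small residual error.
X_kinect = X_mocap + rng.normal(0.0, [0.02, 2.0, 0.01], size=X_mocap.shape)

def nearest_centroid_fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X, scale):
    classes = sorted(model)
    d = np.stack([np.linalg.norm((X - model[c]) / scale, axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

scale = X_mocap.std(axis=0)  # normalize feature ranges
model = nearest_centroid_fit(X_mocap, y)
acc_mocap = (nearest_centroid_predict(model, X_mocap, scale) == y).mean()
acc_kinect = (nearest_centroid_predict(model, X_kinect, scale) == y).mean()
print(acc_mocap, acc_kinect)  # near-identical accuracies suggest interchangeability
```

If the residual measurement error mattered for classification, the accuracy on the second system would drop noticeably; near-identical accuracies are the "interchangeability" signal the abstract describes.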

16 pages, 650 KiB  
Article
The Impact of Interactive Video Games Training on the Quality of Life of Children Treated for Leukemia
by Aleksandra Kowaluk, Iwona Malicka, Krzysztof Kałwak and Marek Woźniewski
Cancers 2024, 16(21), 3599; https://doi.org/10.3390/cancers16213599 - 25 Oct 2024
Abstract
Objectives: The study aimed to assess the impact of interactive video games (IVGs) as a form of physical activity (PA) on the quality of life. Methods: The study used a quality-of-life questionnaire (KIDSCREEN-10) and the HBSC questionnaire. In order to determine individual IVG training parameters, an initial assessment of cardiorespiratory fitness was performed using the Cardio Pulmonary Exercise Test (Godfrey's progressive protocol). Children in the intervention group participated in 12 interval training sessions using IVGs (Microsoft Xbox 360 S console with Kinect). Results: The study included 21 patients (7–13 years old; 12 boys and 9 girls) treated for acute lymphoblastic leukemia (n = 13) and acute myeloid leukemia (n = 8). Before the IVG intervention, all children had insufficient PA levels (90% of children in the intervention group and 90.91% of children in the control group did not engage in any PA during the last 7 days). After the intervention, 80% of the children in the IVG group undertook PA lasting at least 60 min a day, three times a week. They exhibited better well-being, a subjective feeling of improved physical fitness (p < 0.0001), a greater subjective sense of strength and energy (p < 0.0001), and less feeling of sadness (p = 0.0016) than the children from the control group (p = 0.0205). Conclusions: The results of our study confirmed that an attractive form of virtual game or sport is willingly undertaken by children undergoing cancer treatment and has significant benefits in improving quality-of-life parameters. There is a clear need to create specific recommendations and rehabilitation models for children with cancer.
(This article belongs to the Section Cancer Survivorship and Quality of Life)

23 pages, 3934 KiB  
Article
A Multi-Scale Covariance Matrix Descriptor and an Accurate Transformation Estimation for Robust Point Cloud Registration
by Fengguang Xiong, Yu Kong, Xinhe Kuang, Mingyue Hu, Zhiqiang Zhang, Chaofan Shen and Xie Han
Appl. Sci. 2024, 14(20), 9375; https://doi.org/10.3390/app14209375 - 14 Oct 2024
Abstract
This paper presents a robust point cloud registration method based on a multi-scale covariance matrix descriptor and an accurate transformation estimation. Compared with state-of-the-art feature descriptors such as FPFH, 3DSC, and spin images, our proposed multi-scale covariance matrix descriptor is better suited to registration problems in noisier environments, since the mean operation used in generating the covariance matrix filters out most noise-damaged samples and outliers, making the descriptor itself robust to noise. Compared with transformation estimation methods such as feature matching, clustering, ICP, and RANSAC, our transformation estimation finds a better optimal transformation between a pair of point clouds because it is a multi-level estimator comprising feature matching, coarse transformation estimation based on clustering, and fine transformation estimation based on ICP. Experimental findings reveal that our proposed feature descriptor and transformation estimation outperform their state-of-the-art counterparts, and registration based on our framework is highly effective on the Stanford 3D Scanning Repository, the SpaceTime dataset, and the Kinect dataset, where the Stanford 3D Scanning Repository is known for its comprehensive collection of high-quality 3D scans, and the SpaceTime and Kinect datasets were captured by a SpaceTime Stereo scanner and a low-cost Microsoft Kinect scanner, respectively.
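The core of the covariance-descriptor idea can be sketched in a few lines of NumPy: the covariance of a point's neighbourhood changes little when the cloud is perturbed by noise, which is the robustness property the abstract relies on. The radii, cloud, and noise level below are arbitrary illustrative choices, not the paper's parameters.

```python
import numpy as np

def covariance_descriptor(points, center, radius):
    """Covariance matrix of the points within `radius` of `center`.

    The averaging inside the covariance suppresses isolated
    noise-damaged samples, which is what makes this family of
    descriptors comparatively robust to noise.
    """
    nbrs = points[np.linalg.norm(points - center, axis=1) <= radius]
    d = nbrs - nbrs.mean(axis=0)
    return d.T @ d / max(len(nbrs) - 1, 1)

def multi_scale_descriptor(points, center, radii):
    # One covariance matrix per radius (one "scale" each), stacked.
    return np.stack([covariance_descriptor(points, center, r) for r in radii])

rng = np.random.default_rng(1)
cloud = rng.normal(size=(2000, 3))
noisy = cloud + rng.normal(scale=0.01, size=cloud.shape)

center = np.zeros(3)
d_clean = multi_scale_descriptor(cloud, center, radii=(0.3, 0.6, 1.0))
d_noisy = multi_scale_descriptor(noisy, center, radii=(0.3, 0.6, 1.0))
print(np.linalg.norm(d_clean - d_noisy))  # small: descriptor barely moves under noise
```

In a full pipeline, descriptors like these would be compared (e.g. by matrix distance) to establish correspondences before the coarse and fine transformation stages.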

23 pages, 11804 KiB  
Article
Therapeutic Exercise Recognition Using a Single UWB Radar with AI-Driven Feature Fusion and ML Techniques in a Real Environment
by Shahzad Hussain, Hafeez Ur Rehman Siddiqui, Adil Ali Saleem, Muhammad Amjad Raza, Josep Alemany Iturriaga, Alvaro Velarde-Sotres and Isabel De la Torre Díez
Sensors 2024, 24(17), 5533; https://doi.org/10.3390/s24175533 - 27 Aug 2024
Abstract
Physiotherapy plays a crucial role in the rehabilitation of damaged or defective organs due to injuries or illnesses, often requiring long-term supervision by a physiotherapist in clinical settings or at home. AI-based support systems have been developed to enhance the precision and effectiveness of physiotherapy, particularly during the COVID-19 pandemic. These systems, which include game-based or tele-rehabilitation monitoring using camera-based optical systems like Vicon and Microsoft Kinect, face challenges such as privacy concerns, occlusion, and sensitivity to environmental light. Non-optical sensor alternatives, such as Inertial Movement Units (IMUs), Wi-Fi, ultrasound sensors, and ultra-wideband (UWB) radar, have emerged to address these issues. Although IMUs are portable and cost-effective, they suffer from disadvantages like drift over time, limited range, and susceptibility to magnetic interference. In this study, a single UWB radar was utilized to recognize five therapeutic exercises related to the upper limb, performed by 34 male volunteers in a real environment. A novel feature fusion approach was developed to extract distinguishing features for these exercises. Various machine learning methods were applied, with the EnsembleRRGraBoost ensemble method achieving the highest recognition accuracy of 99.45%. The performance of the EnsembleRRGraBoost model was further validated using five-fold cross-validation, maintaining its high accuracy.
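The five-fold cross-validation protocol mentioned above can be sketched as follows. EnsembleRRGraBoost is the paper's own ensemble and is not a public library component, so a nearest-centroid classifier on synthetic "radar features" stands in for it here; only the evaluation protocol is the point of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fused radar feature vectors: 5 exercise
# classes, 40 samples each, 8 features per sample (illustrative only).
n_cls, n_per, n_feat = 5, 40, 8
centers = rng.normal(scale=3.0, size=(n_cls, n_feat))
X = np.vstack([rng.normal(c, 1.0, size=(n_per, n_feat)) for c in centers])
y = np.repeat(np.arange(n_cls), n_per)

def kfold_accuracy(X, y, k=5, seed=0):
    """Average accuracy over k folds; each fold is held out once."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Nearest-centroid stand-in for the paper's ensemble classifier.
        cents = {c: X[train][y[train] == c].mean(axis=0) for c in np.unique(y)}
        classes = np.array(sorted(cents))
        d = np.stack([np.linalg.norm(X[test] - cents[c], axis=1) for c in classes])
        accs.append((classes[d.argmin(axis=0)] == y[test]).mean())
    return float(np.mean(accs))

print(f"5-fold accuracy: {kfold_accuracy(X, y):.3f}")
```

The value of the protocol is that every sample is tested exactly once on a model that never saw it, which guards against the single lucky train/test split.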

29 pages, 18651 KiB  
Article
Realization of Impression Evidence with Reverse Engineering and Additive Manufacturing
by Osama Abdelaal and Saleh Ahmed Aldahash
Appl. Sci. 2024, 14(13), 5444; https://doi.org/10.3390/app14135444 - 23 Jun 2024
Abstract
Significant advances in reverse engineering and additive manufacturing have the potential to provide a faster, more accurate, and more cost-effective process chain for preserving, analyzing, and presenting forensic impression evidence in both 3D digital and physical forms. The objective of the present research was to evaluate the capabilities and limitations of five 3D scanning technologies, including laser scanning (LS), structured-light (SL) scanning, smartphone (SP) photogrammetry, the Microsoft Kinect v2 RGB-D camera, and the iPhone's LiDAR (iLiDAR) sensor, for 3D reconstruction of impression evidence. Furthermore, methodologies for 3D reconstruction of latent impressions and visible 2D impressions based on a single 2D photo were proposed. Additionally, the FDM additive manufacturing process was employed to build impression evidence models created by each procedure. The results showed that the SL scanning system generated the highest reconstruction accuracy. Consequently, the SL system was employed as a benchmark to assess the reconstruction quality of the other systems. In comparison to the SL data, LS showed the smallest absolute geometrical deviations (0.37 mm), followed by SP photogrammetry (0.78 mm). In contrast, the iLiDAR exhibited the largest absolute deviations (2.481 mm), followed by the Kinect v2 (2.382 mm). Additionally, 3D printed impression replicas demonstrated superior detail compared to Plaster of Paris (POP) casts. The feasibility of reconstructing 2D impressions into 3D models is progressively increasing. Finally, this article explores potential future research directions in this field.
(This article belongs to the Special Issue Advances in 3D Sensing Techniques and Its Applications)
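The benchmark comparison above boils down to a nearest-neighbour deviation between each scan and the reference (SL) cloud. A brute-force sketch with synthetic clouds; the two noise levels loosely echo the reported ~0.37 mm and ~2.5 mm deviations, purely for illustration:

```python
import numpy as np

def mean_absolute_deviation(scan, reference):
    """Mean nearest-neighbour distance from each scan point to the
    reference cloud (brute force; fine for small clouds)."""
    d = np.linalg.norm(scan[:, None, :] - reference[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

rng = np.random.default_rng(0)
reference = rng.uniform(0, 50, size=(800, 3))  # "SL benchmark" cloud (mm)

# Two hypothetical scanners: one with sub-mm noise, one much noisier.
laser = reference + rng.normal(scale=0.37, size=reference.shape)
lidar = reference + rng.normal(scale=2.5, size=reference.shape)

print(mean_absolute_deviation(laser, reference))  # small, sub-millimetre scale
print(mean_absolute_deviation(lidar, reference))  # roughly an order of magnitude larger
```

Real cloud-to-cloud comparison tools use the same metric with a k-d tree instead of the O(n²) distance matrix, and typically after aligning the clouds first.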

11 pages, 1726 KiB  
Article
Comparing a Portable Motion Analysis System against the Gold Standard for Potential Anterior Cruciate Ligament Injury Prevention and Screening
by Nicolaos Karatzas, Patrik Abdelnour, Jason Philip Aaron Hiro Corban, Kevin Y. Zhao, Louis-Nicolas Veilleux, Stephane G. Bergeron, Thomas Fevens, Hassan Rivaz, Athanasios Babouras and Paul A. Martineau
Sensors 2024, 24(6), 1970; https://doi.org/10.3390/s24061970 - 20 Mar 2024
Abstract
Knee kinematics during a drop vertical jump, measured by the Kinect V2 (Microsoft, Redmond, WA, USA), have been shown to be associated with an increased risk of non-contact anterior cruciate ligament injury. The accuracy and reliability of the Microsoft Kinect V2 have yet to be assessed specifically for tracking the coronal and sagittal knee angles during the drop vertical jump. Eleven participants performed three drop vertical jumps that were recorded using both the Kinect V2 and a gold standard motion analysis system (Vicon, Los Angeles, CA, USA). The initial coronal, peak coronal, and peak sagittal angles of the left and right knees were measured by both systems simultaneously. Analysis of the data obtained by the Kinect V2 was performed by our software. The differences in the mean knee angles measured by the Kinect V2 and the Vicon system were non-significant for all parameters except for the peak sagittal angle of the right leg, with a difference of 7.74 degrees and a p-value of 0.008. There was excellent agreement between the Kinect V2 and the Vicon system, with intraclass correlation coefficients consistently over 0.75 for all knee angles measured. Visual analysis revealed a moderate frame-to-frame variability for coronal angles measured by the Kinect V2. The Kinect V2 can be used to capture knee coronal and sagittal angles with sufficient accuracy during a drop vertical jump, suggesting that a Kinect-based portable motion analysis system is suitable to screen individuals for the risk of non-contact anterior cruciate ligament injury.
(This article belongs to the Section Biomedical Sensors)
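Agreement between two measurement systems is typically quantified with an intraclass correlation coefficient. Below is a sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement) on synthetic knee angles; this is one common ICC form, chosen as an assumption since the abstract does not state which variant was used.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement. `ratings` is (n_subjects, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater (per-system) means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
true_angle = rng.uniform(5, 25, size=11)          # e.g. peak angles, 11 subjects
vicon = true_angle + rng.normal(0, 0.5, size=11)  # invented noise levels
kinect = true_angle + rng.normal(0, 1.0, size=11)
print(icc_2_1(np.column_stack([vicon, kinect])))  # should exceed the 0.75 cut-off
```

With identical ratings from both systems the formula returns exactly 1; values above 0.75 are conventionally read as "excellent" agreement, which is the criterion the abstract cites.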

22 pages, 2241 KiB  
Article
Motion Capture in Mixed-Reality Applications: A Deep Denoising Approach
by André Correia Gonçalves, Rui Jesus and Pedro Mendes Jorge
Virtual Worlds 2024, 3(1), 135-156; https://doi.org/10.3390/virtualworlds3010007 - 11 Mar 2024
Abstract
Motion capture is a fundamental technique in the development of video games and in film production to animate a virtual character based on the movements of an actor, creating more realistic animations in a short amount of time. One of the ways to obtain this movement is to capture the motion of the player through an optical sensor to interact with the virtual world. However, during movement some parts of the human body can be occluded by others and there can be noise caused by difficulties in sensor capture, reducing the user experience. This work presents a solution to correct the motion capture errors from the Microsoft Kinect sensor or similar through a deep neural network (DNN) trained on a pre-processed dataset of poses provided by the Carnegie Mellon University (CMU) Graphics Lab. A temporal filter is implemented to smooth the movement, given by the set of poses returned by the deep neural network. The system is implemented in Python with the TensorFlow application programming interface (API), which supports the machine learning techniques, and in the Unity game engine, used to visualize and interact with the obtained skeletons. The results are evaluated using the mean absolute error (MAE) metric where ground truth is available and with the feedback of 12 participants through a questionnaire for the Kinect data.
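The temporal-filter step can be sketched independently of the network: any smoothing filter over the per-frame pose sequence will do. Below, a centered moving average (an assumed filter form; the abstract does not specify one) reduces the MAE of a noisy synthetic joint trajectory:

```python
import numpy as np

def smooth_poses(poses, window=5):
    """Centered moving average along the time axis.

    poses: (n_frames, n_joints, 3) joint positions, e.g. the per-frame
    output of a denoising network. Edges use a shrinking window so the
    output keeps the same length as the input.
    """
    n = len(poses)
    half = window // 2
    out = np.empty_like(poses)
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        out[t] = poses[lo:hi].mean(axis=0)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 120)
# 20 identical synthetic "joints" following a smooth 3D trajectory.
clean = np.stack([np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)] * 20, axis=1)
noisy = clean + rng.normal(scale=0.05, size=clean.shape)

mae_before = float(np.abs(noisy - clean).mean())
mae_after = float(np.abs(smooth_poses(noisy) - clean).mean())
print(mae_before, mae_after)  # smoothing should lower the MAE
```

A centered window avoids the lag that a causal filter would introduce, at the cost of needing a few frames of look-ahead; real-time pipelines usually accept a small lag instead.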

21 pages, 4113 KiB  
Article
Simulation of Human Movement in Zero Gravity
by Adelina Bärligea, Kazunori Hase and Makoto Yoshida
Sensors 2024, 24(6), 1770; https://doi.org/10.3390/s24061770 - 9 Mar 2024
Abstract
In the era of expanding manned space missions, understanding the biomechanical impacts of zero gravity on human movement is pivotal. This study introduces a novel and cost-effective framework that demonstrates the application of Microsoft's Azure Kinect body tracking technology as a motion input generator for subsequent OpenSim simulations in weightlessness. Testing rotations, locomotion, coordination, and martial arts movements, we validate the results' realism under the constraints of angular and linear momentum conservation. While complex, full-body coordination tasks face limitations in a zero gravity environment, our findings suggest possible approaches to device-free exercise routines for astronauts and reveal insights into the feasibility of hand-to-hand combat in space. However, some challenges remain in distinguishing zero gravity effects in the simulations from discrepancies in the captured motion input or forward dynamics calculations, making a comprehensive validation difficult. The paper concludes by highlighting the framework's practical potential for the future of space mission planning and related research endeavors, while also providing recommendations for further refinement.
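The momentum-conservation constraint used for validation can be sketched directly: with no external forces, the whole-body centre of mass must move at constant velocity, whatever the internal motion of the segments. A toy two-segment example with invented masses and trajectories:

```python
import numpy as np

def com_velocity(segment_coms, masses, dt):
    """Whole-body centre-of-mass velocity per frame.

    segment_coms: (n_frames, n_segments, 3) segment COM positions;
    masses: (n_segments,) segment masses. In free floating (no external
    forces) this velocity must be constant across frames.
    """
    total_com = (segment_coms * masses[None, :, None]).sum(axis=1) / masses.sum()
    return np.diff(total_com, axis=0) / dt

# Toy two-segment "body" drifting at constant velocity while the two
# segments oscillate in opposite phase (purely internal motion).
dt, frames = 0.01, 200
t = np.arange(frames)[:, None] * dt
drift = np.array([0.1, 0.0, 0.0])
wobble = np.column_stack([np.sin(5 * t[:, 0]), np.zeros(frames), np.zeros(frames)])
masses = np.array([2.0, 2.0])
seg = np.stack([t * drift + wobble, t * drift - wobble], axis=1)

v = com_velocity(seg, masses, dt)
print(np.allclose(v, drift))  # True: internal motion cancels, momentum conserved
```

A simulated zero-gravity motion that violates this check (a drifting COM velocity with no external force) points to an inconsistency in the captured input or the dynamics, which is exactly the validation difficulty the abstract raises.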

26 pages, 1847 KiB  
Systematic Review
Economic Cost of Rehabilitation with Robotic and Virtual Reality Systems in People with Neurological Disorders: A Systematic Review
by Roberto Cano-de-la-Cuerda, Aitor Blázquez-Fernández, Selena Marcos-Antón, Patricia Sánchez-Herrera-Baeza, Pilar Fernández-González, Susana Collado-Vázquez, Carmen Jiménez-Antona and Sofía Laguarta-Val
J. Clin. Med. 2024, 13(6), 1531; https://doi.org/10.3390/jcm13061531 - 7 Mar 2024
Abstract
Background: The prevalence of neurological disorders is increasing worldwide. In recent decades, conventional rehabilitation for people with neurological disorders has often been reinforced with the use of technological devices (robots and virtual reality). The aim of this systematic review was to identify the evidence on the economic cost of rehabilitation with robotic and virtual reality devices for people with neurological disorders through a review of the scientific publications of the last 15 years. Methods: A systematic review was conducted on partial economic evaluations (cost description, cost analysis, description of costs and results) and complete economic evaluations (cost minimization, cost-effectiveness, cost utility and cost benefit). The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. The main data sources used were PubMed, Scopus and Web of Science (WOS). Studies published in English over the last 15 years were considered for inclusion in this review, regardless of the type of neurological disorder. The critical appraisal instrument from the Joanna Briggs Institute for economic evaluation and the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) were used to analyse the methodological quality of all the included papers. Results: A total of 15 studies were included in this review. Ten papers focused on robotics and five on virtual reality. Most of the studies focused on people who had experienced a stroke. The robotic device most frequently used in the included papers was InMotion® (Bionik Co., Watertown, MA, USA), and all the papers focused on virtual reality used semi-immersive systems, with commercial video game consoles (Nintendo Wii® (Nintendo Co., Ltd., Kyoto, Japan) and Kinect® (Microsoft Inc., Redmond, WA, USA)) being used the most. The included studies mainly presented cost minimization outcomes and a general description of costs per intervention, and there were disparities in terms of population, setting, device, protocol and the economic cost outcomes evaluated. Overall, the methodological quality of the included studies was moderate. Conclusions: There is controversy about using robotics for people with neurological disorders in a rehabilitation context in terms of cost minimization, cost-effectiveness, cost utility and cost benefits. Semi-immersive virtual reality devices could involve savings (mainly derived from the low prices of the systems analysed and from transportation services when applied through telerehabilitation programmes) compared to in-clinic interventions.
(This article belongs to the Section Clinical Rehabilitation)

27 pages, 5246 KiB  
Article
On the Evaluation of Diverse Vision Systems towards Detecting Human Pose in Collaborative Robot Applications
by Aswin K. Ramasubramanian, Marios Kazasidis, Barry Fay and Nikolaos Papakostas
Sensors 2024, 24(2), 578; https://doi.org/10.3390/s24020578 - 17 Jan 2024
Abstract
Tracking human operators working in the vicinity of collaborative robots can improve the design of safety architecture, ergonomics, and the execution of assembly tasks in a human–robot collaboration scenario. Three commercial spatial computation kits were used along with their Software Development Kits that provide various real-time functionalities to track human poses. The paper explored the possibility of combining the capabilities of different hardware systems and software frameworks that may lead to better performance and accuracy in detecting the human pose in collaborative robotic applications. This study assessed their performance in two different human poses at six depth levels, comparing the raw data and noise-reducing filtered data. In addition, a laser measurement device was employed as a ground truth indicator, together with the average Root Mean Square Error as an error metric. The obtained results were analysed and compared in terms of positional accuracy and repeatability, indicating the dependence of the sensors' performance on the tracking distance. A Kalman-based filter was applied to fuse the human skeleton data and then to reconstruct the operator's poses considering their performance in different distance zones. The results indicated that at a distance less than 3 m, Microsoft Azure Kinect demonstrated better tracking performance, followed by Intel RealSense D455 and Stereolabs ZED2, while at ranges higher than 3 m, ZED2 had superior tracking performance.
(This article belongs to the Special Issue Multi-sensor for Human Activity Recognition: 2nd Edition)
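In its simplest static form, Kalman-based fusion of independent sensor estimates reduces to variance-weighted averaging: the sensor that performs better in a given distance zone gets the larger weight. A sketch with two synthetic sensors and invented noise levels (not measured values from the paper):

```python
import numpy as np

def fuse(estimates, variances):
    """Variance-weighted fusion of independent position estimates
    (the steady-state form of a Kalman measurement update)."""
    w = 1.0 / np.asarray(variances, float)
    w = w.reshape((-1,) + (1,) * (np.asarray(estimates).ndim - 1))
    return (w * estimates).sum(axis=0) / w.sum()

def rmse(a, b):
    return float(np.sqrt(((a - b) ** 2).mean()))

rng = np.random.default_rng(0)
truth = rng.uniform(-1, 1, size=(25, 3))  # ground-truth joint positions (m)

# Hypothetical per-sensor noise variances for one distance zone.
var_a, var_b = 0.01 ** 2, 0.03 ** 2
sensor_a = truth + rng.normal(0, np.sqrt(var_a), truth.shape)
sensor_b = truth + rng.normal(0, np.sqrt(var_b), truth.shape)

fused = fuse(np.stack([sensor_a, sensor_b]), [var_a, var_b])
print(rmse(sensor_a, truth), rmse(sensor_b, truth), rmse(fused, truth))
```

A full Kalman filter adds a motion model and updates these weights recursively per frame; the zone-dependent weighting described in the abstract corresponds to swapping in each sensor's measured variance for the current tracking distance.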

17 pages, 3388 KiB  
Article
Assessment of ADHD Subtypes Using Motion Tracking Recognition Based on Stroop Color–Word Tests
by Chao Li, David Delgado-Gómez, Aaron Sujar, Ping Wang, Marina Martin-Moratinos, Marcos Bella-Fernández, Antonio Eduardo Masó-Besga, Inmaculada Peñuelas-Calvo, Juan Ardoy-Cuadros, Paula Hernández-Liebo and Hilario Blasco-Fontecilla
Sensors 2024, 24(2), 323; https://doi.org/10.3390/s24020323 - 5 Jan 2024
Abstract
Attention-Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder known for its significant heterogeneity and varied symptom presentation. Describing the different subtypes as predominantly inattentive (ADHD–I), combined (ADHD–C), and hyperactive–impulsive (ADHD–H) relies primarily on clinical observations, which can be subjective. To address the need for more objective diagnostic methods, this pilot study implemented a Microsoft Kinect-based Stroop Color–Word Test (KSWCT) with the objective of investigating potential differences in executive function and motor control between subtypes in a group of children and adolescents with ADHD. A series of linear mixed models was used to analyse performance accuracy, reaction times, and extraneous movements during the tests. Our findings suggested that age plays a critical role, with older subjects showing improvements in KSWCT performance; however, no significant divergence in activity level between the subtypes (ADHD–I and ADHD–H/C) was established. Patients with ADHD–H/C showed tendencies toward deficits in motor planning and executive control, exhibited by shorter reaction times for incorrect responses and more difficulty suppressing erroneous responses. This study provides preliminary evidence of unique executive characteristics among ADHD subtypes, advances our understanding of the heterogeneity of the disorder, and lays the foundation for the development of refined and objective diagnostic tools for ADHD.
(This article belongs to the Special Issue Kinect Sensor and Its Application)

14 pages, 4823 KiB  
Article
Determining the Presence and Size of Shoulder Lesions in Sows Using Computer Vision
by Shubham Bery, Tami M. Brown-Brandl, Bradley T. Jones, Gary A. Rohrer and Sudhendu Raj Sharma
Animals 2024, 14(1), 131; https://doi.org/10.3390/ani14010131 - 29 Dec 2023
Abstract
Shoulder sores predominantly arise in breeding sows and often result in untimely culling. Reported prevalence rates vary significantly, spanning between 5% and 50% depending upon the type of crate flooring inside a farm, the animal's body condition, or an existing injury that causes lameness. These lesions represent not only a welfare concern but also have an economic impact due to the labor needed for treatment and medication. The objective of this study was to evaluate the use of computer vision techniques in detecting and determining the size of shoulder lesions. A Microsoft Kinect V2 camera captured the top-down depth and RGB images of sows in farrowing crates. The RGB images were collected at a resolution of 1920 × 1080. To ensure the best view of the lesions, images were selected with sows lying on their right and left sides with all legs extended. A total of 824 RGB images from 70 sows with lesions at various stages of development were identified and annotated. Three deep learning-based object detection models, YOLOv5, YOLOv8, and Faster-RCNN, pre-trained on the COCO and ImageNet datasets, were implemented to localize the lesion area. YOLOv5 was the best predictor, detecting lesions with an mAP@0.5 of 0.92. To estimate the lesion area, lesion pixel segmentation was carried out on the localized region using traditional image processing techniques like Otsu's binarization and adaptive thresholding alongside DL-based segmentation models based on the U-Net architecture. In conclusion, this study demonstrates the potential of computer vision techniques in effectively detecting and assessing the size of shoulder lesions in breeding sows, providing a promising avenue for improving sow welfare and reducing economic losses.
(This article belongs to the Special Issue 2nd U.S. Precision Livestock Farming Conference)
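Otsu's binarization, one of the traditional techniques named above, can be sketched in a few lines: pick the grayscale threshold that maximizes the between-class variance of the histogram, then count the pixels on the lesion side of the threshold to estimate area. The image below is synthetic (a dark patch on a brighter background), not sow data:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0 (<= t)
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

# Synthetic example: dark "lesion" patch on a brighter "skin" background.
rng = np.random.default_rng(0)
img = rng.normal(180, 10, size=(120, 160)).clip(0, 255).astype(np.uint8)
img[40:80, 60:110] = rng.normal(60, 10, size=(40, 50)).clip(0, 255).astype(np.uint8)

t = otsu_threshold(img)
lesion_px = int((img < t).sum())
print(t, lesion_px)  # threshold lands between the two modes; area ≈ 40*50 pixels
```

Converting the pixel count to physical area then only requires the mm-per-pixel scale, which a calibrated depth camera such as the Kinect V2 can provide.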

16 pages, 2557 KiB  
Article
Ergonomic Design of an Adaptive Automation Assembly System
by Marco Bortolini, Lucia Botti, Francesco Gabriele Galizia and Cristina Mora
Abstract
Ergonomics is a key factor in the improvement of health and productivity in workplaces. Its use in improving the performance of a manufacturing process, and its positive effects on productivity and human performance, is drawing the attention of researchers and practitioners in the field of industrial engineering. This paper proposes an ergonomic design approach applied to an innovative prototype of an adaptive automation assembly system (A3S) equipped with Microsoft Kinect™ for real-time adjustment. The system acquires the anthropometric measurements of the operator by means of the 3-D sensing device and changes its layout, arranging the mobile elements accordingly. The aim of this study was to adapt the assembly workstation to the operator's dimensions, improving the ergonomics of the workstation and reducing the risks of negative effects on workers' health and safety. The case study of an assembly operation of a centrifugal electric pump is described to validate the proposed approach. The assembly operation was simulated at a traditional fixed workstation and at the A3S. The shoulder flexion angle during the assembly tasks at the A3S was reduced by 18% to 47%. The ergonomic risk assessment confirmed the improvement of the ergonomic conditions and the ergonomic benefits of the A3S.
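The shoulder flexion angle reported above can be computed directly from two skeleton keypoints. A minimal sketch, assuming a y-up coordinate frame (the paper does not specify its conventions):

```python
import numpy as np

def shoulder_flexion_deg(shoulder, elbow):
    """Angle between the upper-arm vector and the downward vertical,
    from 3D joint positions (e.g. Kinect skeleton keypoints).
    0 deg = arm hanging down, 90 deg = arm raised to horizontal."""
    arm = np.asarray(elbow, float) - np.asarray(shoulder, float)
    down = np.array([0.0, -1.0, 0.0])  # assumes y-up coordinates
    cos = arm @ down / (np.linalg.norm(arm) * np.linalg.norm(down))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

shoulder = [0.0, 1.4, 0.0]
print(shoulder_flexion_deg(shoulder, [0.0, 1.1, 0.0]))  # arm down  -> ≈ 0.0
print(shoulder_flexion_deg(shoulder, [0.3, 1.4, 0.0]))  # horizontal -> ≈ 90.0
```

Averaging this angle over the frames of an assembly task gives the kind of per-task flexion metric on which the 18–47% reduction could be computed.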

23 pages, 1231 KiB  
Review
Effects of Virtual Reality in the Rehabilitation of Parkinson’s Disease: A Systematic Review
by Juan Rodríguez-Mansilla, Celia Bedmar-Vargas, Elisa María Garrido-Ardila, Silvia Teresa Torres-Piles, Blanca González-Sánchez, María Trinidad Rodríguez-Domínguez, María Valle Ramírez-Durán and María Jiménez-Palomares
J. Clin. Med. 2023, 12(15), 4896; https://doi.org/10.3390/jcm12154896 - 26 Jul 2023
Abstract
Background: Parkinson's disease is characterised by the loss of balance and the presence of walking difficulties. The inclusion of rehabilitation therapies to complement pharmacological therapy allows for comprehensive management of the disease. In recent years, virtual reality has been gaining importance in the treatment of neurological diseases and their associated symptoms. Therefore, the objective of this systematic review was to analyse the effectiveness of virtual reality on balance and gait in patients with Parkinson's disease. Methods: This study is a systematic review conducted following the PRISMA statement. An electronic search of the literature was carried out in the following databases: PubMed, Cochrane, Dialnet, Scopus, Web of Science, PsycINFO, Science Direct and PEDro. The inclusion criteria were controlled and non-controlled clinical trials published in the last 12 years in English or Spanish, in which virtual reality was applied to treat balance and gait impairments in patients with Parkinson's disease. Results: 20 studies were finally included in this review. A total of 480 patients participated in the included studies. All patients were diagnosed with Parkinson's disease. Most of the investigations used the Nintendo Wii + Balance Board or the Microsoft Kinect™ combined with the Kinect Adventures games as a virtual reality device. Conclusions: According to the results of this literature review, virtual reality-based interventions achieve good adherence to treatment, bring innovation and motivation to rehabilitation, and provide feedback as well as cognitive and sensory stimulation in patients with Parkinson's disease. Therefore, virtual reality can be considered an alternative for personalised rehabilitation and for home treatment.
(This article belongs to the Special Issue Clinical Applications of Immersive and Nonimmersive Virtual Reality)
24 pages, 12290 KiB  
Article
METRIC—Multi-Eye to Robot Indoor Calibration Dataset
by Davide Allegro, Matteo Terreran and Stefano Ghidoni
Information 2023, 14(6), 314; https://doi.org/10.3390/info14060314 - 29 May 2023
Cited by 1 | Viewed by 2143
Abstract
Multi-camera systems are an effective solution for perceiving large areas or complex scenarios with many occlusions. In such a setup, accurate camera network calibration is crucial in order to localize scene elements with respect to a single reference frame shared by all the viewpoints of the network. This is particularly important in applications such as object detection and people tracking. Multi-camera calibration is also a critical requirement in several robotics scenarios, particularly those involving a robotic workcell equipped with a manipulator surrounded by multiple sensors. Within this scenario, robot-world hand-eye calibration is an additional crucial element for determining the exact position of each camera with respect to the robot, in order to provide information about the surrounding workspace directly to the manipulator. Despite the importance of the calibration process in the two scenarios outlined above, namely (i) a camera network and (ii) a camera network with a robot, there is a lack of standard datasets available in the literature to evaluate and compare calibration methods. Moreover, the two problems are usually treated separately and tested on dedicated setups. In this paper, we propose a general standard dataset acquired in a robotic workcell where calibration methods can be evaluated in two use cases: camera network calibration and robot-world hand-eye calibration. The Multi-Eye To Robot Indoor Calibration (METRIC) dataset consists of over 10,000 synthetic and real images of ChArUco and checkerboard patterns, each rigidly attached to the robot end-effector, which was moved to different poses in front of the four cameras surrounding the manipulator during image acquisition.
The real images in the dataset include several multi-view image sets captured by three different types of sensor network: Microsoft Kinect V2, Intel RealSense Depth D455 and Intel RealSense LiDAR L515, allowing their advantages and disadvantages for calibration to be evaluated. Furthermore, in order to accurately analyze the effect of camera-robot distance on calibration, we acquired a comprehensive synthetic dataset, with related ground truth, with three different camera network setups corresponding to three levels of calibration difficulty depending on the cell size. An additional contribution of this work is a comprehensive evaluation of state-of-the-art calibration methods using our dataset, highlighting their strengths and weaknesses, in order to outline two benchmarks for the two aforementioned use cases. Full article
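The robot-world hand-eye setup described in this abstract reduces, once the pattern pose has been estimated in each image, to composing rigid transforms between the base, end-effector, pattern, and camera frames. Below is a minimal NumPy sketch of that single-observation composition; the function names and the assumption of a known, fixed pattern offset on the end-effector are illustrative, and the methods benchmarked on METRIC instead solve the coupled problem jointly over many robot poses (e.g., OpenCV's `calibrateRobotWorldHandEye`).

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def camera_to_base(T_base_ee, T_ee_pattern, T_cam_pattern):
    """Camera pose in the robot base frame from one observation.

    T_base_ee:     end-effector pose from robot forward kinematics
    T_ee_pattern:  fixed offset of the calibration pattern on the end-effector
    T_cam_pattern: pattern pose estimated from the image (e.g. ChArUco detection)
    """
    return T_base_ee @ T_ee_pattern @ np.linalg.inv(T_cam_pattern)

# Synthetic check: fabricate a ground-truth camera pose and verify it is recovered.
T_gt = translate(1.0, 0.5, 2.0) @ rot_z(0.3)       # camera in base frame
T_base_ee = translate(0.4, 0.0, 0.6) @ rot_z(1.1)  # robot kinematics
T_ee_pattern = translate(0.0, 0.0, 0.1)            # pattern offset on end-effector
T_cam_pattern = np.linalg.inv(T_gt) @ T_base_ee @ T_ee_pattern
assert np.allclose(camera_to_base(T_base_ee, T_ee_pattern, T_cam_pattern), T_gt)
```

In practice a single observation is noisy, which is why real calibration pipelines estimate the unknown camera poses and pattern offset jointly from many robot poses; evaluating how well such methods do this is exactly the purpose of the METRIC benchmark.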
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)