Search Results (363)

Search Parameters:
Keywords = RANSAC

18 pages, 5467 KiB  
Article
Stem and Leaf Segmentation and Phenotypic Parameter Extraction of Tomato Seedlings Based on 3D Point Cloud
by Xuemei Liang, Wenbo Yu, Li Qin, Jianfeng Wang, Peng Jia, Qi Liu, Xiaoyu Lei and Minglai Yang
Agronomy 2025, 15(1), 120; https://doi.org/10.3390/agronomy15010120 - 5 Jan 2025
Abstract
High-throughput measurements of phenotypic parameters in plants generate substantial data, significantly improving agricultural production optimization and breeding efficiency. However, these measurements face several challenges, including environmental variability, sample heterogeneity, and complex data processing. This study presents a method for stem and leaf segmentation and parameter extraction during the tomato seedling stage, utilizing three-dimensional point clouds. Data for tomato seedlings were captured with a depth camera to create point cloud models. The RANSAC, region-growing, and greedy projection triangulation algorithms were employed to extract phenotypic parameters such as plant height, stem thickness, leaf area, and leaf inclination angle. The results showed strong correlations, with coefficients of determination for manually measured parameters versus extracted 3D point cloud parameters being 0.920, 0.725, 0.905, and 0.917, respectively. The root-mean-square errors were 0.643, 0.168, 1.921, and 4.513, with absolute percentage errors of 3.804%, 5.052%, 5.509%, and 7.332%. These findings highlight a robust relationship between manual measurements and the extracted parameters, establishing a technical foundation for high-throughput automated phenotypic parameter extraction in tomato seedlings.
(This article belongs to the Section Precision and Digital Agriculture)
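
As a rough illustration of the kind of pipeline summarized above (not the authors' implementation), the sketch below uses Open3D to separate the soil/pot plane from a seedling with RANSAC and then clusters the remaining organ points; DBSCAN stands in for the paper's region-growing step, and the file name and thresholds are placeholders.

```python
import numpy as np
import open3d as o3d  # assumed available; file name and thresholds are illustrative

pcd = o3d.io.read_point_cloud("tomato_seedling.ply")    # depth-camera point cloud

# RANSAC plane fit: the dominant plane is assumed to be the soil/pot surface
plane, inliers = pcd.segment_plane(distance_threshold=0.004,
                                   ransac_n=3, num_iterations=1000)
plant = pcd.select_by_index(inliers, invert=True)        # points above the soil

# DBSCAN clustering as a simple stand-in for region growing into stem/leaf parts
labels = np.array(plant.cluster_dbscan(eps=0.01, min_points=20))

# plant height: vertical extent of the plant points (z assumed to point up)
z = np.asarray(plant.points)[:, 2]
print(f"plant height ~ {(z.max() - z.min()) * 100:.1f} cm")
```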

28 pages, 16917 KiB  
Article
A Framework of State Estimation on Laminar Grinding Based on the CT Image–Force Model
by Jihao Liu, Guoyan Zheng and Weixin Yan
Sensors 2025, 25(1), 238; https://doi.org/10.3390/s25010238 - 3 Jan 2025
Viewed by 298
Abstract
Localizing the cutting tip during laminar grinding is a major challenge for safe surgery. To address this problem, we develop a framework of state estimation based on the CT image–force model. The proposed framework takes the pre-operative CT image and the intra-operative milling force signal as source inputs. Within the framework, a bone milling force prediction model is built, and the surgically planned paths are transformed into prediction sequences of milling force. The intra-operative milling force signal is segmented by a tumbling-window algorithm. The similarity between the prediction sequences and the segmented milling signal is then derived by the dynamic time warping (DTW) algorithm, and this similarity indicates the position of the cutting tip. Finally, to suppress the influence of outliers, the random sample consensus (RANSAC) algorithm is applied. The code for the functional simulations has been made openly available.
(This article belongs to the Special Issue Deep Learning for Perception and Recognition: Method and Applications)
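
A minimal numpy sketch of the DTW-based position lookup described above, with synthetic force sequences standing in for the CT-derived predictions and for one tumbling-window segment of the measured signal; it is not the authors' released code.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

# predicted milling-force sequences along the planned path (one per candidate position)
rng = np.random.default_rng(0)
predicted = [np.sin(np.linspace(0, 3, 50) + s) + 0.5 * s for s in np.linspace(0, 2, 20)]

# one tumbling window cut from the intra-operative force signal (synthetic here)
measured = predicted[7] + rng.normal(0, 0.05, 50)

# the most similar prediction indicates the current cutting-tip position
scores = [dtw_distance(measured, seq) for seq in predicted]
estimated_position = int(np.argmin(scores))
```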

13 pages, 543 KiB  
Article
Fitting Geometric Shapes to Fuzzy Point Cloud Data
by Vincent B. Verhoeven, Pasi Raumonen and Markku Åkerblom
Viewed by 216
Abstract
This article describes procedures and thoughts regarding the reconstruction of geometry given data and its uncertainty. The data are treated as a continuous fuzzy point cloud rather than a discrete point cloud. Shape fitting is commonly performed by minimizing the discrete Euclidean distance; however, we propose the novel approach of using the expected Mahalanobis distance. The primary benefit is that it takes into account both the differing magnitude and the orientation of uncertainty at each data point. We illustrate the approach with laser scanning data of a cylinder and compare its performance with that of the conventional least squares method with and without random sample consensus (RANSAC). Our proposed method fits the geometry more accurately, albeit generally with greater uncertainty, and shows promise for geometry reconstruction from laser-scanned data.
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))
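
The sketch below illustrates the general idea of fitting a shape under per-point Mahalanobis distances (here a 2D circle, i.e. a cylinder cross-section, fitted with SciPy); the paper's expected Mahalanobis distance over a continuous fuzzy point cloud is more involved, so treat this as a simplified, assumed formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def circle_residuals(params, pts, cov_chols):
    """Whitened point-to-circle residuals; their squared norms are the
    squared Mahalanobis distances of the points to the fitted circle."""
    cx, cy, r = params
    c = np.array([cx, cy])
    res = []
    for p, L in zip(pts, cov_chols):
        d = p - c
        closest = c + r * d / np.linalg.norm(d)   # nearest point on the circle
        res.extend(np.linalg.solve(L, p - closest))
    return np.asarray(res)

# synthetic noisy circle with a different covariance for every point
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 150)
pts = np.column_stack([np.cos(t), np.sin(t)]) * 0.25 + np.array([1.0, 2.0])
covs = [np.diag(rng.uniform([1e-4, 1e-4], [4e-3, 4e-3])) for _ in t]
pts = pts + np.array([rng.multivariate_normal([0, 0], S) for S in covs])
chols = [np.linalg.cholesky(S) for S in covs]

fit = least_squares(circle_residuals, x0=[0.8, 1.8, 0.2], args=(pts, chols))
cx, cy, r = fit.x
```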

13 pages, 5760 KiB  
Article
Alignment Detection Technology of Chang’e-6 Primary Package Container
by Guanyu Wang, Shenyi Jin, Xiangjin Deng and Yufu Qu
Viewed by 270
Abstract
The Chang’e-6 mission achieved the first successful sample collection and return from the Moon’s far side. Accurate alignment detection of the primary packaging container is critical for the success of this mission, as it ensures proper retrieval of lunar soil. To address challenges such as complex backgrounds, uneven lighting, and reflective surfaces, this paper introduces an alignment detection method that integrates YOLO object recognition, Devernay subpixel edge detection, and the RANSAC fitting algorithm. By employing both linear and elliptical fitting techniques, the method accurately determines the median line of the primary packaging container, ensuring precise alignment detection. The effectiveness of this approach is demonstrated by an average alignment distance of 0.28 mm with a standard deviation of 0.03 mm in lunar surface images, underscoring its accuracy and reliability.
(This article belongs to the Section Astronautics & Space Science)
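
A hedged sketch of the RANSAC ellipse-fitting stage using scikit-image's ransac and EllipseModel; the edge points are synthetic stand-ins for Devernay subpixel edges, and the thresholds are illustrative rather than the mission values.

```python
import numpy as np
from skimage.measure import EllipseModel, ransac

# synthetic subpixel edge points of an elliptical container rim, plus clutter
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 400)
edge_points = np.column_stack([120 + 80 * np.cos(t), 90 + 50 * np.sin(t)])
edge_points += rng.normal(0, 0.5, edge_points.shape)
edge_points[:40] += rng.uniform(-30, 30, (40, 2))   # reflections / background edges

# RANSAC keeps rim points and rejects clutter while estimating the ellipse
model, inliers = ransac(edge_points, EllipseModel,
                        min_samples=5, residual_threshold=1.0, max_trials=2000)
xc, yc, a, b, theta = model.params   # centre, semi-axes, orientation
```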

25 pages, 9017 KiB  
Article
Predictive Analytics in Construction: Multi-Output Machine Learning Models for Abrasion Resistance
by Shaheen Mohammed Saleh Ahmed, Hakan Güneyli and Süleyman Karahan
Viewed by 379
Abstract
This study aims to accurately predict abrasion resistance, measured through the Los Angeles (LA) abrasion test, and modulus of elasticity, assessed using the Micro-Deval Abrasion (MDA) test, to support structural integrity and efficient material use in construction projects. We applied multi-output machine learning models—specifically Linear Regression (LR), Huber, RANSAC, and Support Vector Regression (SVR)—to predict LA and MDA values from primary input parameters, including Uniaxial Compression Strength (UCS), Point Load Index (PLI), Schmidt Hammer Rebound (Sh_h), and Ultrasonic Pulse Velocity (UPV). Model performance was assessed using Mean Absolute Error (MAE), R-squared (R2), and Mean Squared Error (MSE). Linear Regression demonstrated superior predictive accuracy, achieving an R2 of 94% with an MAE of 0.21 and an MSE of 0.09 for LA predictions, and an R2 of 92% with an MAE of 0.24 and an MSE of 0.11 for MDA predictions. These results underscore the potential of machine learning techniques for accurately predicting critical material properties, offering engineers reliable tools for optimizing material selection and structural design. This research contributes to the advancement of construction practices, promoting the development of durable and efficient infrastructure.
(This article belongs to the Section Building Materials, and Repair & Renovation)
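
A minimal scikit-learn sketch of multi-output regression with the model families named above (LR, Huber, RANSAC); the feature and target values are synthetic placeholders, not the study's dataset or tuned settings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor, RANSACRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

# X columns: UCS, PLI, Schmidt hammer rebound, UPV; y columns: LA, MDA (synthetic)
rng = np.random.default_rng(1)
X = rng.uniform([20, 1, 20, 2], [150, 10, 70, 7], size=(120, 4))
y = np.column_stack([
    55 - 0.2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 1, 120),
    40 - 0.1 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(0, 1, 120),
])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": MultiOutputRegressor(LinearRegression()),
    "Huber": MultiOutputRegressor(HuberRegressor()),
    "RANSAC": MultiOutputRegressor(RANSACRegressor(random_state=0)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, r2_score(y_te, pred), mean_absolute_error(y_te, pred))
```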

15 pages, 16510 KiB  
Article
Mosaicking and Correction Method of Gaofen-3 ScanSAR Images in Coastal Areas with Subswath Overlap Range Constraints
by Jiajun Wang, Guowang Jin, Xin Xiong, Jiahao Li, Hao Ye and He Yang
J. Mar. Sci. Eng. 2024, 12(12), 2277; https://doi.org/10.3390/jmse12122277 - 11 Dec 2024
Viewed by 389
Abstract
The ScanSAR mode image obtained by the Gaofen-3 (GF-3) satellite has an imaging width of 130–500 km, which is of great significance for monitoring oceanography, meteorology, water conservancy, and transportation. To address subswath misalignment and the inability to correct GF-3 ScanSAR images of coastal areas in software such as PIE, ENVI, and SNAP, a method for mosaicking and correcting GF-3 ScanSAR images with subswath overlap range constraints is proposed. The method first computes correlation coefficients between subswath thumbnail images to determine the extent of the overlap range. With the matching points constrained to the overlap between subswaths, the normalized cross-correlation (NCC) matching algorithm is used to calculate matching points between subswaths, and the random sample consensus (RANSAC) algorithm is then employed to eliminate mismatching points. The subswaths are mosaicked by applying a stitching translation derived from the coordinates of the matching points. An image brightness correction coefficient is calculated from the average grayscale value of pixels in the overlapping region to correct the grayscale values of adjacent subswaths, thereby reducing the brightness difference at the junction of subswaths, and the entire ScanSAR slant range image is produced. By employing the Range–Doppler model for indirect orthorectification, corrected images with geographic information are generated. In the experiment, three coastal GF-3 ScanSAR images were mosaicked and corrected, and the results were compared with those obtained with PIE software V7.0 to verify the effectiveness and accuracy of the proposed method.
(This article belongs to the Special Issue Ocean Observations)
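
A small numpy sketch of the mismatch-rejection step: given putative tie points from NCC matching in the overlap region, a RANSAC loop estimates the pure stitching translation between adjacent subswaths while discarding mismatches. The data are synthetic and the routine is an illustrative simplification, not the paper's implementation.

```python
import numpy as np

def ransac_translation(src, dst, thresh=2.0, iters=500, seed=0):
    """Estimate a 2D translation from putative matches, rejecting mismatches."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                 # one match fully determines a translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refine on consensus
    return best_t, best_inliers

# synthetic NCC matches between two subswaths: a true offset plus some mismatches
rng = np.random.default_rng(1)
src = rng.uniform(0, 500, (60, 2))
dst = src + np.array([123.4, -7.8])
dst[:12] += rng.uniform(-40, 40, (12, 2))   # simulated mismatching points
t_est, inliers = ransac_translation(src, dst)
```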

21 pages, 17557 KiB  
Article
Lidar Simultaneous Localization and Mapping Algorithm for Dynamic Scenes
by Peng Ji, Qingsong Xu and Yifan Zhao
World Electr. Veh. J. 2024, 15(12), 567; https://doi.org/10.3390/wevj15120567 - 7 Dec 2024
Viewed by 855
Abstract
The presence of numerous dynamic obstacles causes significant point cloud ghosting when low-speed intelligent mobile vehicles construct high-precision point cloud maps, degrading mapping accuracy. To address this issue, this paper proposes a LiDAR-based Simultaneous Localization and Mapping (SLAM) algorithm tailored for dynamic scenes. The algorithm employs a tightly coupled SLAM framework integrating LiDAR and an inertial measurement unit (IMU). For dynamic obstacle removal, the point cloud data are first gridded. To represent the point cloud information more comprehensively, the point cloud within the perception area is linearly discretized by height to obtain the distribution of points across height layers, which is then encoded to construct a linearly discretized height descriptor for dynamic region extraction. To preserve more static feature points without altering the original point cloud, the Random Sample Consensus (RANSAC) ground fitting algorithm is employed to fit and segment the ground point cloud within the dynamic regions, after which the dynamic obstacles are removed. Finally, accurate point cloud poses are obtained through static feature matching. The proposed algorithm has been validated on open-source datasets and self-collected campus datasets. The results show that it improves dynamic point cloud removal accuracy by 12.3% compared with the ERASOR algorithm and overall mapping and localization accuracy by 8.3% compared with the LIO-SAM algorithm, thereby providing a reliable environmental description for intelligent mobile vehicles.
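
A plain-numpy sketch of RANSAC ground-plane fitting inside a flagged dynamic region, assuming z-up coordinates: ground inliers are kept as static structure while the remaining points are treated as dynamic candidates. This is an illustrative stand-in, not the paper's code.

```python
import numpy as np

def ransac_ground(points, thresh=0.05, iters=300, seed=0):
    """Return a boolean mask of points lying on the best-fit ground plane."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), bool)
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue
        n /= norm
        dist = np.abs((points - p[0]) @ n)
        mask = dist < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# synthetic dynamic region: flat ground plus a pedestrian-like cluster
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 1500),
                          rng.uniform(-5, 5, 1500),
                          rng.normal(0.0, 0.02, 1500)])
pedestrian = rng.normal([1.0, 2.0, 0.9], [0.2, 0.2, 0.5], (300, 3))
region = np.vstack([ground, pedestrian])

ground_mask = ransac_ground(region)
static_ground = region[ground_mask]          # preserved in the map
dynamic_candidates = region[~ground_mask]    # candidates for dynamic removal
```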

19 pages, 7427 KiB  
Article
Determination of Chimney Non-Verticality from TLS Data Using RANSAC Method
by Žan Pleterski, Gašper Rak and Klemen Kregar
Remote Sens. 2024, 16(23), 4541; https://doi.org/10.3390/rs16234541 - 4 Dec 2024
Viewed by 461
Abstract
The continuous monitoring of tall industrial buildings is necessary to ensure safe operation. With technological advances in terrestrial laser scanning and other non-contact measurement methods, the techniques for assessing the stability of tall industrial chimneys are evolving. This paper presents a method for determining the non-verticality and straightness of chimneys that offers significant advantages over existing methods. Narrow bands of scanned point clouds are processed at selected height intervals. Using the RANSAC method, points that do not belong to the chimney shell are filtered out, and the centre of the circle or ellipse is adjusted using the least squares method. The proposed method enables the efficient filtering of point clouds affected by frequent obstructions on the chimney shell, the assessment of the regularity of the chimney shell shape, a mathematical analysis of the chimney axis curvature, and an intuitive graphical representation of chimney non-verticality. A comparison of the results with other studies confirms the efficiency of the method.
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))
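
An illustrative numpy sketch of the per-band processing: a RANSAC loop rejects points that do not belong to the chimney shell (obstructions, ladders, platforms), and a least-squares circle fit gives the centre for that height band; comparing the fitted centres across bands then yields the non-verticality. Thresholds and data below are placeholders.

```python
import numpy as np

def fit_circle_lsq(xy):
    """Algebraic (Kasa) least-squares circle fit: returns (centre, radius)."""
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy]), np.sqrt(max(c + cx ** 2 + cy ** 2, 0.0))

def ransac_circle(xy, thresh=0.02, iters=1000, seed=0):
    """RANSAC circle fit: filters out points that do not lie on the shell."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(xy), bool)
    for _ in range(iters):
        centre, radius = fit_circle_lsq(xy[rng.choice(len(xy), 3, replace=False)])
        err = np.abs(np.linalg.norm(xy - centre, axis=1) - radius)
        mask = err < thresh
        if mask.sum() > best.sum():
            best = mask
    centre, radius = fit_circle_lsq(xy[best])
    return centre, radius, best

# one horizontal band of the scanned shell, with simulated obstruction points
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 400)
xy = np.column_stack([2.1 * np.cos(t) + 10.0, 2.1 * np.sin(t) + 5.0])
xy[:60] += rng.uniform(0.1, 0.6, (60, 2))
centre, radius, inliers = ransac_circle(xy)
```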

22 pages, 51155 KiB  
Article
Development and Experiment of Adaptive Oolong Tea Harvesting Robot Based on Visual Localization
by Ruidong Yu, Yinhui Xie, Qiming Li, Zhiqin Guo, Yuanquan Dai, Zhou Fang and Jun Li
Agriculture 2024, 14(12), 2213; https://doi.org/10.3390/agriculture14122213 - 3 Dec 2024
Viewed by 556
Abstract
Aiming to improve the quality of picked tea leaves and the efficiency of tea harvesting, an adaptive oolong tea harvesting robot with a cutting-tool adjustment module and a harvesting line localization algorithm is proposed. The robot includes a vision measurement module and a cutting-tool adjustment mechanism, enabling it to assess the shape of tea bushes and adaptively adjust the cutter configuration. To address the challenges of complex tea bush structures and environmental noise, a Prior–Tukey RANSAC algorithm is proposed for accurate harvesting model fitting. Our algorithm leverages prior knowledge of tea bush stem characteristics, uses the Tukey loss function to enhance robustness to outliers, and incorporates workspace constraints to ensure that the cutting tool remains within feasible operational limits. To evaluate the performance of the robot, experiments were conducted in a tea garden in Wuyi Mountain, China. Under ideal conditions, our algorithm achieved an inlier ratio of 43.10% and an R2 value of 0.9787, significantly outperforming traditional RANSAC and other variants. Under challenging field conditions, it remained robust, maintaining an inlier ratio of 47.50% and an R2 value of 0.9598, and its processing time met the real-time requirements of tea-picking operations. The field experiments also showed an improvement in the intact tea rate from 79.34% in the first harvest to 81.57% in the second harvest, with a consistent usable tea rate of around 85%. Additionally, the robot achieved a harvesting efficiency of 260.14 kg/h, superior to existing handheld and riding-type tea pickers. These results indicate that the robot effectively balances efficiency, accuracy, and robustness, providing a promising solution for high-quality tea harvesting in complex environments.
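
The sketch below shows only the Tukey-loss ingredient of robust harvesting-line fitting, using statsmodels' robust linear model on synthetic cut-point data; the paper's Prior–Tukey RANSAC additionally injects stem priors and workspace constraints, so treat this as a simplified stand-in under stated assumptions.

```python
import numpy as np
import statsmodels.api as sm

# synthetic harvesting line: surface heights along the cutter, contaminated by
# outliers from protruding leaves and background points
rng = np.random.default_rng(0)
x = np.linspace(0.0, 0.6, 200)                        # position along the cutter (m)
z = 0.45 + 0.05 * x + rng.normal(0, 0.005, x.size)    # true harvesting surface
z[::7] += rng.uniform(0.05, 0.2, z[::7].size)         # outliers

# robust line fit with the Tukey biweight loss (IRLS under the hood)
X = sm.add_constant(x)
fit = sm.RLM(z, X, M=sm.robust.norms.TukeyBiweight()).fit()
intercept, slope = fit.params
```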

26 pages, 13748 KiB  
Article
An Automatic Solution for Registration Between Single-Image and Point Cloud in Manhattan World Using Line Primitives
by Yifeng He, Jingui Zou, Ruoming Zhai, Liyuan Meng, Yinzhi Zhao, Dingliang Yang and Na Wang
Remote Sens. 2024, 16(23), 4382; https://doi.org/10.3390/rs16234382 - 23 Nov 2024
Viewed by 571
Abstract
2D-3D registration is increasingly applied in various scientific and engineering scenarios. However, due to appearance differences and cross-modal discrepancies, it is difficult for image and point cloud registration methods to establish correspondences, which makes 2D-3D registration highly challenging. To handle these problems, we propose a novel and automatic solution for 2D-3D registration in a Manhattan world based on line primitives, which we denote VPPnL. Firstly, we derive rotation matrix candidates by establishing the vanishing point coordinate system as the link between the point cloud principal directions and the camera coordinate system. Subsequently, the RANSAC algorithm, which accounts for the clustering of parallel lines, is employed in conjunction with the least-squares method for translation vector estimation and optimization. Finally, a nonlinear least-squares graph optimization method is carried out to optimize the camera pose and realize the 2D-3D registration and point colorization. Experiments on synthetic and real-world data illustrate that the proposed algorithm can address the problem of direct 2D-3D registration in Manhattan scenes where images are limited and sparse.
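
A short numpy sketch of the standard Manhattan-world relation behind the rotation candidates: each orthogonal vanishing point v maps to a camera-frame direction K^-1 v, and the normalized directions form the columns of a rotation (up to per-axis sign ambiguities, which is why several candidates arise). The intrinsics and vanishing points below are assumed values, not taken from the paper.

```python
import numpy as np

def rotation_from_vanishing_points(vps, K):
    """Rotation whose columns are the camera-frame directions of three
    mutually orthogonal vanishing points given in homogeneous pixel coordinates."""
    K_inv = np.linalg.inv(K)
    cols = []
    for v in vps:
        d = K_inv @ v
        cols.append(d / np.linalg.norm(d))
    R = np.column_stack(cols)
    U, _, Vt = np.linalg.svd(R)          # project onto SO(3) to absorb noise
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R[:, -1] *= -1
    return R

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
vps = [np.array([900.0, 250.0, 1.0]),
       np.array([320.0, -4000.0, 1.0]),
       np.array([-600.0, 260.0, 1.0])]
R_candidate = rotation_from_vanishing_points(vps, K)
```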

22 pages, 4119 KiB  
Article
Fast Detection of Idler Supports Using Density Histograms in Belt Conveyor Inspection with a Mobile Robot
by Janusz Jakubiak and Jakub Delicat
Appl. Sci. 2024, 14(23), 10774; https://doi.org/10.3390/app142310774 - 21 Nov 2024
Viewed by 492
Abstract
The automatic inspection of belt conveyors is attracting increasing attention in the mining industry. Using mobile robots to perform the inspection allows the frequency and precision of inspection data collection to be increased. One of the issues that must be solved is locating the inspected objects, such as conveyor idlers, in the vicinity of the robot. This paper presents a novel approach to analyzing 3D LIDAR data to detect idler frames in real time with high accuracy. Our method processes a point cloud image to determine the positions of the frames relative to the robot. The detection algorithm utilizes density histograms, Euclidean clustering, and a dimension-based classifier. The proposed data flow processes each scan independently to minimize the computational load necessary for real-time performance. The algorithm is verified with data recorded in a raw material processing plant by comparing the results with human-labeled objects. The proposed process is capable of detecting idler frames in a single 3D scan with accuracy above 83%. The average processing time of a single scan is under 22 ms, with a maximum of 75 ms, ensuring that idler frames are detected within the scan acquisition period and allowing continuous operation without delays. These results demonstrate that the algorithm enables fast and accurate detection and localization of idler frames in real-world scenarios.
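
A rough Python sketch of the detection chain named above (a density histogram along the conveyor axis, Euclidean clustering via DBSCAN, and a dimension-based check); the scan is synthetic, and the axis layout, bin counts, and size gates are assumptions rather than the paper's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# synthetic scan: x along the conveyor, z up; three idler-frame-like clusters
rng = np.random.default_rng(0)
points = rng.uniform([0, -1, 0], [30, 1, 0.2], (20000, 3))
for x0 in (5.0, 12.5, 20.0):
    frame = rng.normal([x0, 0.0, 0.5], [0.05, 0.15, 0.12], (800, 3))
    points = np.vstack([points, frame])

# 1) density histogram along the conveyor axis: frames show up as peaks
counts, edges = np.histogram(points[:, 0], bins=300)
peak_bins = np.where(counts > counts.mean() + 3 * counts.std())[0]

# 2) Euclidean clustering of the elevated points inside the peaked bins
mask = np.zeros(len(points), bool)
for b in peak_bins:
    mask |= (points[:, 0] >= edges[b]) & (points[:, 0] < edges[b + 1])
candidates = points[mask & (points[:, 2] > 0.25)]
labels = DBSCAN(eps=0.15, min_samples=20).fit_predict(candidates)

# 3) dimension-based classifier: keep clusters whose extent matches a frame
for lbl in set(labels) - {-1}:
    cluster = candidates[labels == lbl]
    size = cluster.max(axis=0) - cluster.min(axis=0)
    if 0.2 < size[1] < 1.2 and 0.3 < size[2] < 1.0:
        print(f"idler frame candidate at x = {cluster[:, 0].mean():.2f} m")
```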

18 pages, 7247 KiB  
Article
Intelligent Inspection Method for Rebar Installation Quality of Reinforced Concrete Slab Based on Point Cloud Processing and Semantic Segmentation
by Ruishi Wang, Jianxiong Zhang, Hongxing Qiu and Jian Sun
Buildings 2024, 14(11), 3693; https://doi.org/10.3390/buildings14113693 - 20 Nov 2024
Viewed by 664
Abstract
The rebar installation quality significantly impacts the safety and durability of reinforced concrete (RC) structures. Traditional manual inspection is time-consuming, inefficient, and highly subjective. To address this problem, this study uses a depth camera and aims to develop an intelligent inspection method for the rebar installation quality of an RC slab. The Random Sample Consensus (RANSAC) method is used to extract point cloud data for the bottom formwork, the upper and lower rebar lattices, and individual rebars. These data are used to measure the concrete cover thickness, the distance between the upper and lower rebar lattices, and the spacing between rebars in the RC slab. This paper introduces the concept of the "diameter calculation region" and combines point cloud semantic information with rebar segmentation mask information, through the relationship between pixel coordinates and camera coordinates, to measure the nominal diameter of the rebar. The verification results indicate that the maximum deviations for the concrete cover thickness, the distance between the upper and lower rebar lattices, and the spacing of the double-layer bidirectional rebar in the RC slab are 0.41 mm, 1.32 mm, and 5 mm, respectively. The accuracy of the nominal rebar diameter measurement reaches 98.4%, demonstrating high precision and applicability for quality inspection during the actual construction stage. Overall, this study integrates computer vision into traditional civil engineering research, utilizing depth cameras to acquire point cloud data and color images. It replaces inefficient manual inspection methods with an intelligent and efficient approach, addressing the challenge of detecting double-layer reinforcement. This has significant implications for practical engineering applications and the development of intelligent engineering monitoring systems.
(This article belongs to the Topic Resilient Civil Infrastructure, 2nd Edition)
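
A hedged Open3D sketch of one measurement from the pipeline above: two successive RANSAC plane extractions (bottom formwork, then the lower rebar lattice) and the concrete cover taken as the mean distance between them. The file name, thresholds, and the assumption that the lattice forms the second dominant plane are placeholders.

```python
import numpy as np
import open3d as o3d   # assumed available

pcd = o3d.io.read_point_cloud("rc_slab.ply")   # depth-camera point cloud of the slab

# 1) dominant plane assumed to be the bottom formwork
(a, b, c, d), form_idx = pcd.segment_plane(distance_threshold=0.003,
                                           ransac_n=3, num_iterations=2000)
rest = pcd.select_by_index(form_idx, invert=True)

# 2) next dominant plane in the remaining points taken as the lower rebar lattice
_, low_idx = rest.segment_plane(distance_threshold=0.005,
                                ransac_n=3, num_iterations=2000)
lower = np.asarray(rest.select_by_index(low_idx).points)

# concrete cover ~ mean distance of lower-lattice points to the formwork plane
n = np.array([a, b, c])
cover = np.abs(lower @ n + d).mean() / np.linalg.norm(n)
print(f"estimated concrete cover: {cover * 1000:.1f} mm")
```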

27 pages, 28012 KiB  
Article
A Model Development Approach Based on Point Cloud Reconstruction and Mapping Texture Enhancement
by Boyang You and Barmak Honarvar Shakibaei Asli
Big Data Cogn. Comput. 2024, 8(11), 164; https://doi.org/10.3390/bdcc8110164 - 20 Nov 2024
Viewed by 667
Abstract
To address the challenge of rapid geometric model development in the digital twin industry, this paper presents a comprehensive pipeline for constructing 3D models from images using monocular vision imaging principles. Firstly, a structure-from-motion (SFM) algorithm generates a 3D point cloud from photographs. The feature detection methods scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and KAZE are compared across six datasets, with SIFT proving the most effective (matching rate higher than 0.12). Using K-nearest-neighbor matching and random sample consensus (RANSAC), refined feature point matching and 3D spatial representation are achieved via epipolar geometry. The Poisson surface reconstruction algorithm then converts the point cloud into a mesh model. Additionally, texture images are enhanced using a visual geometry group (VGG) network-based deep learning approach: content images from a dataset provide geometric contours via higher-level VGG layers, while textures from style images are extracted using the lower-level layers, and the two are fused to create texture-transferred images, whose quality is evaluated with the image quality assessment (IQA) metrics SSIM and PSNR. Finally, texture mapping integrates the enhanced textures with the mesh model, improving the scene representation. The method presented in this paper surpassed a LiDAR-based reconstruction approach by 20% in terms of point cloud density and number of model facets, while the hardware cost was only 1% of that associated with LiDAR.
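
An OpenCV sketch of the feature-matching front end described above (SIFT detection, K-nearest-neighbour matching with a ratio test, and RANSAC-based epipolar geometry estimation); the image paths and intrinsics are assumed placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# K-nearest-neighbour matching with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]
pts1 = np.float32([k1[m.queryIdx].pt for m in good])
pts2 = np.float32([k2[m.trainIdx].pt for m in good])

# RANSAC removes remaining mismatches while estimating the epipolar geometry
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
inl = mask.ravel() == 1
_, R, t, _ = cv2.recoverPose(E, pts1[inl], pts2[inl], K)
```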

18 pages, 2990 KiB  
Article
A GGCM-E Based Semantic Filter and Its Application in VSLAM Systems
by Yuanjie Li, Chunyan Shao and Jiaming Wang
Electronics 2024, 13(22), 4487; https://doi.org/10.3390/electronics13224487 - 15 Nov 2024
Viewed by 424
Abstract
Image matching-based visual simultaneous localization and mapping (vSLAM) extracts low-level pixel features to reconstruct camera trajectories and maps through the epipolar geometry method. However, it fails to recover correct trajectories and maps when low-quality feature correspondences arise in challenging environments. Although a RANSAC-based framework can produce better results, it is computationally inefficient and unstable in the presence of a large number of outliers. In our previous work, a Faster R-CNN learning-based semantic filter was proposed to exploit the semantic information of inliers and remove low-quality correspondences, helping vSLAM localize accurately. However, that semantic filter learning method generalizes poorly to low-level and densely textured scenes, leaving the semantic filter-based vSLAM unstable and its geometry estimation poor. In this paper, a GGCM-E-based semantic filter using YOLOv8 is proposed to address these problems. Firstly, semantic patches of images are collected from the KITTI dataset, the TUM dataset provided by the Technical University of Munich, and real outdoor scenes. Secondly, the semantic patches are classified by our proposed GGCM-E descriptors to obtain the YOLOv8 neural network training dataset. Finally, several semantic filters for filtering low-level and densely textured scenes are generated and integrated into the ORB-SLAM3 system. Extensive experiments show that the semantic filter can detect and classify the semantic levels of different scenes effectively, filtering low-level semantic scenes to improve the quality of correspondences and thus achieving accurate and robust trajectory reconstruction and mapping. On the challenging autonomous driving benchmark and in real environments, the vSLAM system with the GGCM-E-based semantic filter reduces the 3D position error, with the absolute trajectory error reduced by up to approximately 17.44%, showing its promise and good generalization.
(This article belongs to the Special Issue Application of Artificial Intelligence in Robotics)
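
A very rough sketch of the generic semantic-filter idea, under the assumption that the filter supplies bounding boxes of low-level (unreliable-texture) regions: correspondences whose keypoints fall inside such regions are dropped before pose estimation. The GGCM-E descriptor and the YOLOv8 training pipeline themselves are not reproduced here.

```python
import numpy as np

def filter_matches_by_semantics(pts1, pts2, low_level_boxes):
    """Drop correspondences whose keypoint in image 1 lies inside a region that
    the semantic filter flagged as low-level texture.
    low_level_boxes: iterable of (x1, y1, x2, y2) pixel boxes (e.g. detector output)."""
    keep = np.ones(len(pts1), bool)
    for (x1, y1, x2, y2) in low_level_boxes:
        inside = ((pts1[:, 0] >= x1) & (pts1[:, 0] <= x2) &
                  (pts1[:, 1] >= y1) & (pts1[:, 1] <= y2))
        keep &= ~inside
    return pts1[keep], pts2[keep]

# hypothetical usage: the boxes would come from the trained semantic filter
pts1 = np.array([[10.0, 20.0], [300.0, 200.0], [640.0, 400.0]])
pts2 = pts1 + 2.0
boxes = [(250, 150, 400, 300)]
pts1_f, pts2_f = filter_matches_by_semantics(pts1, pts2, boxes)
```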

14 pages, 6543 KiB  
Article
Cleaning of Abnormal Wind Speed Power Data Based on Quartile RANSAC Regression
by Fengjuan Zhang, Xiaohui Zhang, Zhilei Xu, Keliang Dong, Zhiwei Li and Yubo Liu
Energies 2024, 17(22), 5697; https://doi.org/10.3390/en17225697 - 14 Nov 2024
Viewed by 587
Abstract
The combined complexity of wind turbine systems and harsh operating conditions pose significant challenges to the accuracy of operational data in Supervisory Control and Data Acquisition (SCADA) systems. Improving the precision of data cleaning for high proportions of stacked abnormalities remains an urgent problem. This paper analyzes the distribution characteristics of abnormal data in depth and proposes a novel method for abnormal data cleaning based on a classification processing framework. Firstly, the first type of abnormal data is cleaned based on operational criteria; secondly, the quartile method is used to eliminate sparse abnormal data and obtain a clearer boundary line; on this basis, the Random Sample Consensus (RANSAC) algorithm is employed to eliminate stacked abnormal data; finally, the effectiveness of the proposed algorithm in cleaning abnormal data with a high proportion of stacked abnormalities is verified through case studies, and evaluation indicators are introduced through comparative experiments to quantitatively assess the cleaning effect. The results indicate that the algorithm excels in cleaning effectiveness, efficiency, accuracy, and the rationality of data deletion. The improvement in cleaning accuracy is particularly significant when dealing with a high proportion of stacked anomaly data, bringing significant value to wind power applications such as wind power prediction, condition assessment, and fault detection.
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)
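
A scikit-learn/numpy sketch of the two cleaning stages named above (a per-bin quartile rule for sparse outliers, then a RANSAC power-curve fit for stacked anomalies), run on synthetic SCADA-like data with assumed thresholds.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# synthetic SCADA records: wind speed (m/s) vs. power (kW), with a stacked anomaly band
rng = np.random.default_rng(0)
v = rng.uniform(3, 15, 3000)
p = np.clip(2000 * (v / 12) ** 3, 0, 2000) + rng.normal(0, 40, v.size)
p[:500] = rng.uniform(0, 600, 500)

# 1) quartile (IQR) rule per wind-speed bin removes sparse outliers
bins = np.digitize(v, np.arange(3, 16, 0.5))
keep = np.zeros(v.size, bool)
for b in np.unique(bins):
    idx = np.where(bins == b)[0]
    q1, q3 = np.percentile(p[idx], [25, 75])
    iqr = q3 - q1
    keep[idx] = (p[idx] > q1 - 1.5 * iqr) & (p[idx] < q3 + 1.5 * iqr)

# 2) RANSAC fit of the power curve on the remaining points rejects stacked anomalies
model = RANSACRegressor(make_pipeline(PolynomialFeatures(3), LinearRegression()),
                        residual_threshold=100.0, random_state=0)
model.fit(v[keep, None], p[keep])
clean = np.zeros(v.size, bool)
clean[np.where(keep)[0][model.inlier_mask_]] = True   # final cleaned-data mask
```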