Search Results (277)

Search Parameters:
Keywords = UAV cloud detection

19 pages, 7754 KiB  
Article
Fruit Detection and Yield Mass Estimation from a UAV Based RGB Dense Cloud for an Apple Orchard
by Marius Hobart, Michael Pflanz, Nikos Tsoulias, Cornelia Weltzien, Mia Kopetzky and Michael Schirrmann
Viewed by 250
Abstract
Precise photogrammetric mapping of preharvest conditions in an apple orchard can help determine the exact position and volume of single apple fruits. This can help estimate upcoming yields and prevent losses through spatially precise cultivation measures. These parameters are also the basis for effective post-harvest storage management decisions. These spatial orchard characteristics can be determined by low-cost drone technology with a consumer-grade red-green-blue (RGB) sensor. Flights were conducted in a specified setting to enhance the signal-to-noise ratio of the orchard imagery. Two different altitudes of 7.5 m and 10 m were tested to estimate the optimum performance. A multi-seasonal field campaign was conducted on an apple orchard in Brandenburg, Germany. The test site consisted of an area of 0.5 ha with 1334 trees, including the varieties ‘Gala’ and ‘Jonaprince’. Four rows of trees were tested each season, consisting of 14 blocks with eight trees each. Ripe apples were detected by their color and structure from a photogrammetrically created three-dimensional point cloud with an automatic algorithm. The detection included the position, number, volume and mass of apples for all blocks over the orchard. Results show that the identification of ripe apple fruit is possible in RGB point clouds. Model coefficients of determination ranged from 0.41 for data captured at an altitude of 7.5 m in 2018 to 0.40 and 0.53 for data from a 10 m altitude in 2018 and 2020, respectively. Model performance was weaker for the last captured tree rows because data coverage was lower. The model underestimated the number of apples per block, which is reasonable, as leaves cover some of the fruits. However, a good relationship to the yield mass per block was found when the estimated apple volume per block was combined with a mean apple density per variety. Overall, coefficients of determination of 0.56 (for the 7.5 m altitude flight) and 0.76 (for the 10 m flights) were achieved.
Therefore, we conclude that mapping at an altitude of 10 m performs better than 7.5 m, in the context of low-altitude UAV flights for the estimation of ripe apple parameters directly from 3D RGB dense point clouds. Full article
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture)
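The abstract's yield estimate combines the photogrammetrically detected apple volume per block with a mean apple density per variety. A minimal sketch of that combination (not the authors' code; the volume and density values below are hypothetical):

```python
# Illustrative sketch: per-block yield mass from detected apple volumes
# combined with an assumed mean density for the variety.

def block_yield_mass_kg(apple_volumes_m3, mean_density_kg_m3):
    """Sum the detected apple volumes in a block and convert to mass
    using a mean fruit density for the variety."""
    total_volume = sum(apple_volumes_m3)
    return total_volume * mean_density_kg_m3

# Ten detected apples of ~0.0003 m^3 each; assumed density 800 kg/m^3.
volumes = [0.0003] * 10
mass = block_yield_mass_kg(volumes, 800.0)  # ~2.4 kg for the block
```

Summing per block rather than per tree mirrors the study's block-level evaluation, where leaf occlusion makes per-fruit counts unreliable but aggregated volume still tracks yield mass.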

34 pages, 1229 KiB  
Review
A Review of CNN Applications in Smart Agriculture Using Multimodal Data
by Mohammad El Sakka, Mihai Ivanovici, Lotfi Chaari and Josiane Mothe
Sensors 2025, 25(2), 472; https://doi.org/10.3390/s25020472 - 15 Jan 2025
Viewed by 352
Abstract
This review explores the applications of Convolutional Neural Networks (CNNs) in smart agriculture, highlighting recent advancements across various applications including weed detection, disease detection, crop classification, water management, and yield prediction. Based on a comprehensive analysis of more than 115 recent studies, coupled with a bibliometric study of the broader literature, this paper contextualizes the use of CNNs within Agriculture 5.0, where technological integration optimizes agricultural efficiency. Key approaches analyzed involve image classification, image segmentation, regression, and object detection methods that use diverse data types ranging from RGB and multispectral images to radar and thermal data. By processing UAV and satellite data with CNNs, real-time and large-scale crop monitoring can be achieved, supporting advanced farm management. A comparative analysis shows how CNNs perform with respect to other techniques that involve traditional machine learning and recent deep learning models in image processing, particularly when applied to high-dimensional or temporal data. Future directions point toward integrating IoT and cloud platforms for real-time data processing and leveraging large language models for regulatory insights. Potential research advancements emphasize improved data accessibility and hybrid modeling to meet the agricultural demands of climate variability and food security, positioning CNNs as pivotal tools in sustainable agricultural practices. A related repository that contains the reviewed articles along with their publication links is made available. Full article
(This article belongs to the Special Issue Feature Review Papers in Intelligent Sensors)

27 pages, 30735 KiB  
Article
A Cloud Detection System for UAV Sense and Avoid: Analysis of a Monocular Approach in Simulation and Flight Tests
by Adrian Dudek and Peter Stütz
Viewed by 354
Abstract
In order to contribute to the operation of unmanned aerial vehicles (UAVs) according to visual flight rules (VFR), this article proposes a monocular approach for cloud detection using an electro-optical sensor. Cloud avoidance is motivated by several factors, including improving visibility for collision prevention and reducing the risks of icing and turbulence. The described workflow is based on parallelized detection, tracking and triangulation of features with prior segmentation of clouds in the image. As output, the system generates a cloud occupancy grid of the aircraft’s vicinity, which can be used for cloud avoidance calculations afterwards. The proposed methodology was tested in simulation and flight experiments. With the aim of developing cloud segmentation methods, datasets were created, one of which was made publicly available and features 5488 labeled, augmented cloud images from a real flight experiment. The trained segmentation models based on the YOLOv8 framework are able to separate clouds from the background even under challenging environmental conditions. For a performance analysis of the subsequent cloud position estimation stage, calculated and actual cloud positions are compared and feature evaluation metrics are applied. The investigations demonstrate the functionality of the approach, even if challenges become apparent under real flight conditions. Full article
(This article belongs to the Special Issue Flight Control and Collision Avoidance of UAVs)
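The cloud-detection abstract describes producing a cloud occupancy grid of the aircraft's vicinity from triangulated features. A hedged, simplified sketch of the gridding step (the cell size, extent, and 2D projection are assumptions, not the paper's parameters):

```python
# Simplified sketch: bin triangulated cloud positions (projected to the
# horizontal plane, relative to the aircraft) into occupancy-grid cells
# that an avoidance planner could later query.

def build_occupancy_grid(points_xy, cell_size, grid_extent):
    """Return the set of (ix, iy) cells containing at least one cloud point.
    points_xy: (x, y) positions relative to the aircraft, in metres."""
    half = grid_extent / 2.0
    occupied = set()
    for x, y in points_xy:
        if -half <= x < half and -half <= y < half:
            ix = int((x + half) // cell_size)
            iy = int((y + half) // cell_size)
            occupied.add((ix, iy))
    return occupied

# Two nearby cloud points fall in one cell; a third lands in another.
grid = build_occupancy_grid([(10.0, 5.0), (10.4, 5.2), (-80.0, 0.0)],
                            cell_size=50.0, grid_extent=200.0)
```

A set of occupied cells is a deliberately lossy representation: it discards per-point uncertainty but is cheap to update from the parallel detection/tracking/triangulation stages the abstract describes.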

30 pages, 12451 KiB  
Article
A Method Coupling NDT and VGICP for Registering UAV-LiDAR and LiDAR-SLAM Point Clouds in Plantation Forest Plots
by Fan Wang, Jiawei Wang, Yun Wu, Zhijie Xue, Xin Tan, Yueyuan Yang and Simei Lin
Forests 2024, 15(12), 2186; https://doi.org/10.3390/f15122186 - 12 Dec 2024
Viewed by 541
Abstract
The combination of UAV-LiDAR and LiDAR-SLAM (Simultaneous Localization and Mapping) technology can overcome the scanning limitations of different platforms and obtain comprehensive 3D structural information of forest stands. To address the challenges of the traditional registration algorithms, such as high initial value requirements and susceptibility to local optima, in this paper, we propose a high-precision, robust, NDT-VGICP registration method that integrates voxel features to register UAV-LiDAR and LiDAR-SLAM point clouds at the forest stand scale. First, the point clouds are voxelized, and their normal vectors and normal distribution models are computed, then the initial transformation matrix is quickly estimated based on the point pair distribution characteristics to achieve preliminary alignment. Second, high-dimensional feature weighting is introduced, and the iterative closest point (ICP) algorithm is used to optimize the distance between the matching point pairs, adjusting the transformation matrix to reduce the registration errors iteratively. Finally, the algorithm converges when the iterative conditions are met, yielding an optimal transformation matrix and achieving precise point cloud registration. The results show that the algorithm performs well in Chinese fir forest stands of different age groups (average RMSE—horizontal: 4.27 cm; vertical: 3.86 cm) and achieves high accuracy in single-tree crown vertex detection and tree height estimation (average F-score: 0.90; R2 for tree height estimation: 0.88). This study demonstrates that the NDT-VGICP algorithm can effectively fuse and collaboratively apply multi-platform LiDAR data, providing a methodological reference for accurately quantifying individual tree parameters and efficiently monitoring 3D forest stand structures. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
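The registration abstract's refinement stage iteratively matches point pairs and minimizes their distances (ICP). A toy, translation-only 2D illustration of that loop — real NDT/VGICP pipelines estimate full rigid transforms over voxelized normal-distribution models, so this is only the core idea:

```python
# Toy ICP sketch: alternate nearest-neighbour matching with a translation
# update equal to the mean residual of the matched pairs.

def icp_translation_2d(source, target, iterations=10):
    """Estimate a 2D translation aligning source onto target."""
    tx, ty = 0.0, 0.0
    for _ in range(iterations):
        shifted = [(x + tx, y + ty) for x, y in source]
        # brute-force nearest-neighbour correspondences
        pairs = [min(target, key=lambda t: (t[0] - x) ** 2 + (t[1] - y) ** 2)
                 for x, y in shifted]
        # translation update = mean residual over matched pairs
        dx = sum(t[0] - s[0] for t, s in zip(pairs, shifted)) / len(source)
        dy = sum(t[1] - s[1] for t, s in zip(pairs, shifted)) / len(source)
        tx, ty = tx + dx, ty + dy
    return tx, ty

src = [(0, 0), (1, 0), (0, 1)]
tgt = [(2, 3), (3, 3), (2, 4)]   # src shifted by (2, 3)
tx, ty = icp_translation_2d(src, tgt)
```

The abstract's point about initial values shows up even here: with a bad start, nearest-neighbour matching pairs the wrong points, which is why the paper seeds ICP with an NDT-style coarse alignment first.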

9 pages, 1927 KiB  
Proceeding Paper
UAV-Based Multi-Sensor Data Fusion for 3D Building Detection
by Mohsen Shahraki, Ahmed El-Rabbany and Ahmed Elamin
Viewed by 337
Abstract
Three-dimensional building extraction is crucial for urban planning, environmental analysis, and autonomous navigation. One method for data collection involves using unmanned aerial vehicles (UAVs), which allow for flexible and rapid data acquisition. However, accurate 3D building extraction from these data remains challenging due to the abundance of information in high-resolution datasets. To tackle this problem, a novel UAV-based multi-sensor data fusion model is developed, which utilizes deep neural networks (DNNs) to enhance point cloud segmentation. Urban datasets, acquired by a UAV equipped with a Zenmuse L1 payload, are collected and used to train, validate, and test the DNNs. It is shown that most building extraction results have precision, accuracy, and F-score values greater than 0.96. Full article
(This article belongs to the Proceedings of The 31st International Conference on Geoinformatics)
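The precision, accuracy, and F-score values above 0.96 quoted in the abstract follow from standard confusion-matrix counts. A small sketch (the counts below are made up for illustration):

```python
# Standard segmentation metrics from true/false positive and negative counts.

def precision_accuracy_fscore(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    fscore = 2 * precision * recall / (precision + recall)
    return precision, accuracy, fscore

# Hypothetical building-point counts for one test tile.
p, a, f = precision_accuracy_fscore(tp=96, fp=2, fn=2, tn=100)
```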

19 pages, 6073 KiB  
Article
Effective UAV Photogrammetry for Forest Management: New Insights on Side Overlap and Flight Parameters
by Atman Dhruva, Robin J. L. Hartley, Todd A. N. Redpath, Honey Jane C. Estarija, David Cajes and Peter D. Massam
Forests 2024, 15(12), 2135; https://doi.org/10.3390/f15122135 - 2 Dec 2024
Viewed by 1232
Abstract
Silvicultural operations such as planting, pruning, and thinning are vital for the forest value chain, requiring efficient monitoring to prevent value loss. While effective, traditional field plots are time-consuming, costly, spatially limited, and rely on assumptions that they adequately represent a wider area. Alternatively, unmanned aerial vehicles (UAVs) can cover large areas while keeping operators safe from hazards including steep terrain. Despite their utility, optimal flight parameters to ensure flight efficiency and data quality remain under-researched. This study evaluated the impact of forward and side overlap and flight altitude on the quality of two- and three-dimensional spatial data products from UAV photogrammetry (UAV-SfM) for assessing stand density in a recently thinned Pinus radiata D. Don plantation. A contemporaneously acquired UAV laser scanner (ULS) point cloud provided reference data. The results indicate that the optimal UAV-SfM flight parameters are 90% forward and 85% side overlap at a 120 m altitude. Flights at an 80 m altitude offered marginal resolution improvement (2.2 cm compared to 3.2 cm ground sample distance/GSD) but took longer and were more error-prone. Individual tree detection (ITD) for stand density assessment was then applied to both UAV-SfM and ULS canopy height models (CHMs). Manual cleaning of the detected ULS tree peaks provided ground truth for both methods. UAV-SfM had a lower recall (0.85 vs. 0.94) but a higher precision (0.97 vs. 0.95) compared to ULS. Overall, the F-score indicated no significant difference between a prosumer-grade photogrammetric UAV and an industrial-grade ULS for stand density assessments, demonstrating the efficacy of affordable, off-the-shelf UAV technology for forest managers. 
Furthermore, in addressing the knowledge gap regarding optimal UAV flight parameters for conducting operational forestry assessments, this study provides valuable insights into the importance of side overlap for orthomosaic quality in forest environments. Full article
(This article belongs to the Special Issue Image Processing for Forest Characterization)
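The abstract's ground sample distances (2.2 cm at 80 m vs. 3.2 cm at 120 m) reflect the standard linear relation GSD = H·p/f between altitude and pixel footprint. A sketch of that relation; the pixel pitch and focal length below are hypothetical placeholders, not the study's camera:

```python
# GSD = altitude * pixel_pitch / focal_length, converted to centimetres.
# Sensor constants here are assumed values for illustration only.

def gsd_cm(altitude_m, pixel_pitch_um=2.4, focal_length_mm=8.8):
    """Ground sample distance in centimetres for a nadir-pointing camera."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100

# The ratio between altitudes is independent of the sensor constants.
ratio = gsd_cm(120.0) / gsd_cm(80.0)  # 120/80 = 1.5
```

This is why the study's 80 m flights gained only marginal resolution over 120 m while costing more flight time, supporting the recommended 120 m altitude.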

18 pages, 3847 KiB  
Article
EC-WAMI: Event Camera-Based Pose Optimization in Remote Sensing and Wide-Area Motion Imagery
by Isaac Nkrumah, Maryam Moshrefizadeh, Omar Tahri, Erik Blasch, Kannappan Palaniappan and Hadi AliAkbarpour
Sensors 2024, 24(23), 7493; https://doi.org/10.3390/s24237493 - 24 Nov 2024
Viewed by 748
Abstract
In this paper, we present EC-WAMI, the first successful application of neuromorphic event cameras (ECs) for Wide-Area Motion Imagery (WAMI) and Remote Sensing (RS), showcasing their potential for advancing Structure-from-Motion (SfM) and 3D reconstruction across diverse imaging scenarios. ECs, which detect asynchronous pixel-level brightness changes, offer key advantages over traditional frame-based sensors such as high temporal resolution, low power consumption, and resilience to dynamic lighting. These capabilities allow ECs to overcome challenges such as glare, uneven lighting, and low-light conditions that are common in aerial imaging and remote sensing, while also extending UAV flight endurance. To evaluate the effectiveness of ECs in WAMI, we simulate event data from RGB WAMI imagery and integrate them into SfM pipelines for camera pose optimization and 3D point cloud generation. Using two state-of-the-art SfM methods, namely, COLMAP and Bundle Adjustment for Sequential Imagery (BA4S), we show that although ECs do not capture scene content like traditional cameras, their spike-based events, which only measure illumination changes, allow for accurate camera pose recovery in WAMI scenarios even in low-framerate (5 fps) simulations. Our results indicate that while BA4S and COLMAP provide comparable accuracy, BA4S significantly outperforms COLMAP in terms of speed. Moreover, we evaluate different feature extraction methods, showing that the deep learning-based LIGHTGLUE descriptor consistently outperforms traditional handcrafted descriptors by providing improved reliability and accuracy of event-based SfM. These results highlight the broader potential of ECs in remote sensing, aerial imaging, and 3D reconstruction beyond conventional WAMI applications. Our dataset will be made available for public use. Full article
(This article belongs to the Section Physical Sensors)
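The abstract simulates event data from RGB frames. The usual simulation rule — which the paper may refine — emits an event at a pixel when its log-intensity change crosses a contrast threshold. A minimal sketch under that assumption (threshold and frame values are hypothetical):

```python
# Hedged sketch: frame-pair event simulation. A pixel fires an event with
# polarity +1/-1 when |log(I_next) - log(I_prev)| exceeds a threshold.
import math

def simulate_events(prev_frame, next_frame, threshold=0.2):
    """Return (row, col, polarity) events between two grayscale frames."""
    events = []
    for r, (row_a, row_b) in enumerate(zip(prev_frame, next_frame)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            delta = math.log(b + 1.0) - math.log(a + 1.0)  # +1 avoids log(0)
            if abs(delta) >= threshold:
                events.append((r, c, 1 if delta > 0 else -1))
    return events

# One brightening pixel, one darkening pixel, two unchanged.
events = simulate_events([[10, 10], [10, 10]], [[10, 40], [3, 10]])
```

Because only changing pixels emit anything, static background produces no events — consistent with the abstract's point that ECs measure illumination changes rather than scene content, yet still support pose recovery.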

17 pages, 12206 KiB  
Article
Smart Monitoring Method for Land-Based Sources of Marine Outfalls Based on an Improved YOLOv8 Model
by Shicheng Zhao, Haolan Zhou and Haiyan Yang
Water 2024, 16(22), 3285; https://doi.org/10.3390/w16223285 - 15 Nov 2024
Viewed by 694
Abstract
Land-based sources of marine outfalls are a major source of marine pollution. The monitoring of land-based sources of marine outfalls is an important means for marine environmental protection and governance. Traditional on-site manual monitoring methods are inefficient, expensive, and constrained by geographic conditions. Satellite remote sensing spectral analysis methods can only identify pollutant plumes and are affected by discharge timing and cloud/fog interference. Therefore, we propose a smart monitoring method for land-based sources of marine outfalls based on an improved YOLOv8 model, using unmanned aerial vehicles (UAVs). This method can accurately identify and classify marine outfalls, offering high practical application value. Inspired by the sparse sampling method in compressed sensing, we incorporated a multi-scale dilated attention mechanism into the model and integrated dynamic snake convolutions into the C2f module. This approach enhanced the model’s detection capability for occluded and complex-feature targets while constraining the increase in computational load. Additionally, we proposed a new loss calculation method by combining Inner-IoU (Intersection over Union) and MPDIoU (IoU with Minimum Points Distance), which further improved the model’s regression speed and its ability to predict multi-scale targets. The final experimental results show that the improved model achieved an mAP50 (mean Average Precision at 50) of 87.0%, representing a 3.4% increase from the original model, effectively enabling the smart monitoring of land-based marine discharge outlets. Full article
(This article belongs to the Section Oceans and Coastal Zones)
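The abstract's loss combines Inner-IoU and MPDIoU. As an illustration of the quantities involved (not the paper's exact loss), the sketch below computes plain IoU for axis-aligned boxes and an MPDIoU-style score that penalizes IoU by the normalized distances between matching corners:

```python
# Hedged sketch of IoU and an MPDIoU-style score for (x1, y1, x2, y2) boxes.

def iou(box_a, box_b):
    """Intersection over union; boxes satisfy x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def mpdiou(box_a, box_b, img_w, img_h):
    """IoU penalised by squared distances between matching corners,
    normalised by the squared image diagonal."""
    d_tl = (box_a[0] - box_b[0]) ** 2 + (box_a[1] - box_b[1]) ** 2
    d_br = (box_a[2] - box_b[2]) ** 2 + (box_a[3] - box_b[3]) ** 2
    diag = img_w ** 2 + img_h ** 2
    return iou(box_a, box_b) - d_tl / diag - d_br / diag

perfect = mpdiou((0, 0, 10, 10), (0, 0, 10, 10), img_w=100, img_h=100)
```

The corner-distance terms keep the gradient informative even for non-overlapping boxes, where plain IoU is flat at zero — the property that speeds up regression on multi-scale targets.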

15 pages, 4236 KiB  
Article
Automated Estimation of Building Heights with ICESat-2 and GEDI LiDAR Altimeter and Building Footprints: The Case of New York City and Los Angeles
by Yunus Kaya
Buildings 2024, 14(11), 3571; https://doi.org/10.3390/buildings14113571 - 9 Nov 2024
Viewed by 1179
Abstract
Accurate estimation of building height is crucial for urban aesthetics and urban planning as it enables an accurate calculation of the shadow period, the effective management of urban energy consumption, and thorough investigation of regional climatic patterns and human-environment interactions. Although three-dimensional (3D) cadastral data, ground measurements (total station, Global Positioning System (GPS), ground laser scanning) and air-based (such as Unmanned Aerial Vehicle—UAV) measurement methods are used to determine building heights, more comprehensive and advanced techniques need to be used in large-scale studies, such as in cities or countries. Although satellite-based altimetry data, such as Ice, Cloud and land Elevation Satellite (ICESat-2) and Global Ecosystem Dynamics Investigation (GEDI), provide important information on building heights due to their high vertical accuracy, it is often difficult to distinguish between building photons and other objects. To overcome this challenge, a self-adaptive method with minimal data is proposed. Using building photons from ICESat-2 and GEDI data and building footprints from the New York City (NYC) and Los Angeles (LA) open data platform, the heights of 50,654 buildings in NYC and 84,045 buildings in LA were estimated. As a result of the study, a root mean square error (RMSE) of 8.28 m and a mean absolute error (MAE) of 6.24 m were obtained for NYC. In addition, 46% of the buildings had an RMSE of less than 5 m and 7% less than 1 m. In the LA data, the RMSE and MAE were 6.42 m and 4.66 m, respectively; the RMSE was less than 5 m for 67% of the buildings and less than 1 m for 7%. However, ICESat-2 data had a better RMSE than GEDI data. Nevertheless, combining the two data sources provided the advantage of detecting more building heights. This study highlights the importance of using minimum data for determining urban-scale building heights.
Moreover, continuous monitoring of urban alterations using satellite altimetry data would provide more effective energy consumption assessment and management. Full article
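The RMSE and MAE figures reported above are the standard error metrics between estimated and reference heights. A short sketch (heights below are made-up example values, not the study's data):

```python
# RMSE and MAE between estimated and reference building heights.
import math

def rmse(estimated, reference):
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference))
                     / len(estimated))

def mae(estimated, reference):
    return sum(abs(e - r) for e, r in zip(estimated, reference)) / len(estimated)

est = [20.0, 35.0, 50.0]   # hypothetical estimated heights [m]
ref = [22.0, 31.0, 50.0]   # hypothetical reference heights [m]
errors = (rmse(est, ref), mae(est, ref))
```

RMSE weights large misses more heavily than MAE, which is why the two are reported together: a gap between them (8.28 m vs. 6.24 m for NYC) signals a tail of poorly estimated buildings.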

18 pages, 5160 KiB  
Article
DPFANet: Deep Point Feature Aggregation Network for Classification of Irregular Objects in LIDAR Point Clouds
by Shuming Zhang and Dali Xu
Electronics 2024, 13(22), 4355; https://doi.org/10.3390/electronics13224355 - 6 Nov 2024
Viewed by 589
Abstract
Point cloud data acquired by scanning with Light Detection and Ranging (LiDAR) devices typically contain irregular objects, such as trees, which lead to low classification accuracy in existing point cloud classification methods. Consequently, this paper proposes a deep point feature aggregation network (DPFANet) that integrates adaptive graph convolution and space-filling curve sampling modules to effectively address the feature extraction problem for irregular object point clouds. To refine the feature representation, we utilize the affinity matrix to quantify inter-channel relationships and adjust the input feature matrix accordingly, thereby improving the classification accuracy of the object point cloud. To validate the effectiveness of the proposed approach, a TreeNet dataset was created, comprising four categories of tree point clouds derived from publicly available UAV point cloud data. The experimental findings illustrate that the model attains a mean accuracy of 91.4% on the ModelNet40 dataset, comparable to prevailing state-of-the-art techniques. When applied to the more challenging TreeNet dataset, the model achieves a mean accuracy of 88.0%, surpassing existing state-of-the-art methods in all classification metrics. These results underscore the high potential of the model for point cloud classification of irregular objects. Full article
(This article belongs to the Special Issue Point Cloud Data Processing and Applications)

25 pages, 24649 KiB  
Article
Power Corridor Safety Hazard Detection Based on Airborne 3D Laser Scanning Technology
by Shuo Wang, Zhigen Zhao and Hang Liu
ISPRS Int. J. Geo-Inf. 2024, 13(11), 392; https://doi.org/10.3390/ijgi13110392 - 1 Nov 2024
Viewed by 1089
Abstract
Overhead transmission lines are widely deployed across both mountainous and plain areas and serve as a critical infrastructure for China’s electric power industry. The rapid advancement of three-dimensional (3D) laser scanning technology, with airborne LiDAR at its core, enables high-precision and rapid scanning of the detection area, offering significant value in identifying safety hazards along transmission lines in complex environments. In this paper, five transmission lines, spanning a total of 160 km in the mountainous area of Sanmenxia City, Henan Province, China, serve as the primary research objects. The location and elevation of each power tower pole are determined using an Unmanned Aerial Vehicle (UAV), which assesses the direction and elevation changes in the transmission lines. Moreover, point cloud data of the transmission line corridor are acquired and archived using a UAV equipped with LiDAR during variable-height flight. The data processing of the 3D laser point cloud of the power corridor involves denoising, line repair, thinning, and classification. By calculating the clearance, horizontal, and vertical distances between the power towers, transmission lines, and other surface features, in conjunction with safety distance requirements, information about potential hazards can be generated. Detection across these five transmission lines revealed 54 general hazards, 22 major hazards, and one emergency hazard, all of the vegetation type. Under current working conditions, the hazards are mainly vegetation-related, while crossing hazards involve power lines and buildings. The detection results are submitted to the local power department in a timely manner, and relevant measures are taken to eliminate hazards and ensure the normal supply of power resources.
The research in this paper will provide a basis and an important reference for identifying the potential safety hazards of transmission lines in Henan Province and other complex environments and solving existing problems in the manual inspection of transmission lines. Full article
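The hazard classes above come from comparing point-to-conductor distances against safety-distance requirements. An illustrative sketch of that check — the threshold values and the translation of "general/major/emergency" into distance bands are hypothetical, not the regulation's actual figures:

```python
# Hedged sketch: classify an obstacle point by its 3D distance to the
# nearest conductor point. Thresholds below are placeholder assumptions.
import math

def classify_hazard(obstacle, line_points, general_m=7.0, major_m=5.0,
                    emergency_m=3.0):
    """Return the hazard class for one obstacle point against a sampled line."""
    d = min(math.dist(obstacle, p) for p in line_points)
    if d < emergency_m:
        return "emergency"
    if d < major_m:
        return "major"
    if d < general_m:
        return "general"
    return "clear"

# Two sampled conductor points and a tree top 6 m below the first one.
line = [(0.0, 0.0, 30.0), (10.0, 0.0, 29.0)]
tree_top = (0.0, 0.0, 24.0)
level = classify_hazard(tree_top, line)
```

In practice the classified corridor cloud supplies many candidate points per span, and the worst class among them drives the reported hazard for that span.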

26 pages, 284813 KiB  
Article
Automatic Method for Detecting Deformation Cracks in Landslides Based on Multidimensional Information Fusion
by Bo Deng, Qiang Xu, Xiujun Dong, Weile Li, Mingtang Wu, Yuanzhen Ju and Qiulin He
Remote Sens. 2024, 16(21), 4075; https://doi.org/10.3390/rs16214075 - 31 Oct 2024
Viewed by 1022
Abstract
As cracks are a precursor landslide deformation feature, they can provide forecasting information that is useful for the early identification of landslides and determining motion instability characteristics. However, it is difficult to solve the size effect and noise-filtering problems associated with the currently available automatic crack detection methods under complex conditions using single remote sensing data sources. This article uses multidimensional target scene images obtained by UAV photogrammetry as the data source. Firstly, under the premise of fully considering the multidimensional image characteristics of different crack types, this article accomplishes the initial identification of landslide cracks by using six algorithm models with indicators including the roughness, slope, eigenvalue rate of the point cloud and pixel gradient, gray value, and RGB value of the images. Secondly, the initial extraction results are processed through a morphological repair task using three filtering algorithms (calculating the crack orientation, length, and frequency) to address background noise. Finally, this article proposes a multi-dimensional information fusion method, the Bayesian probability of minimum risk methods, to fuse the identification results derived from different models at the decision level. The results show that the six tested algorithm models can be used to effectively extract landslide cracks, providing Area Under the Curve (AUC) values between 0.6 and 0.85. After the repairing and filtering steps, the proposed method removes complex noise and minimizes the loss of real cracks, thus increasing the accuracy of each model by 7.5–55.3%. Multidimensional data fusion methods solve issues associated with the spatial scale effect during crack identification, and the F-score of the fusion model is 0.901. Full article
(This article belongs to the Topic Landslides and Natural Resources)
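The fusion step above is named as Bayesian minimum-risk decision-making. A toy sketch of that rule for a single pixel, choosing the label whose expected loss is smallest; the priors-free averaging of per-model posteriors and the loss values are assumptions for illustration, not the paper's exact formulation:

```python
# Toy minimum-risk fusion: a miss (calling a crack "background") costs more
# than a false alarm, so the decision boundary shifts below p = 0.5.

def minimum_risk_decision(posteriors, loss_miss=5.0, loss_false_alarm=1.0):
    """posteriors: per-model P(crack | evidence), naively averaged here.
    Risk('background') = loss_miss * P(crack);
    Risk('crack')      = loss_false_alarm * P(background)."""
    p_crack = sum(posteriors) / len(posteriors)
    risk_background = loss_miss * p_crack
    risk_crack = loss_false_alarm * (1.0 - p_crack)
    return "crack" if risk_crack < risk_background else "background"

# Three models each give a modest crack posterior for one pixel.
decision = minimum_risk_decision([0.4, 0.3, 0.2])
```

With an asymmetric loss, even three lukewarm detectors agreeing weakly can flag a crack — matching the paper's goal of minimizing the loss of real cracks while filtering noise.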

11 pages, 5384 KiB  
Article
Visualization of Aerial Droplet Distribution for Unmanned Aerial Spray Systems Based on Laser Imaging
by Zhichong Wang, Peng Qi, Yangfan Li and Xiongkui He
Cited by 1 | Viewed by 695
Abstract
Unmanned aerial spray systems (UASSs) are a commonly used spraying method for plant protection operations. However, their spraying parameters have complex effects on droplet distribution. Existing methods for measuring large-scale 3D droplet density distribution are insufficient, especially since the downwash wind is easily affected by the environment. Therefore, there is a need to develop a technique that can quickly visualize 3D droplet distribution. In this study, a laser imaging method was proposed to quickly scan moving droplets in the air, and a test method that can visualize 3D droplet distribution was constructed by moving the machine perpendicular to the scanning plane. The 3D droplet distribution of targeted and conventional UAVs was tested, and methods for signal processing, noise reduction, and point cloud rebuilding for laser imaging were developed. Compared with the simulation results, laser imaging captured the pattern of droplet distribution from the two UAV structures well. The results showed that the laser-imaging-based method for detecting 3D droplet distribution is feasible, fast, and environmentally friendly. Full article

29 pages, 13098 KiB  
Article
Benchmarking of Individual Tree Segmentation Methods in Mediterranean Forest Based on Point Clouds from Unmanned Aerial Vehicle Imagery and Low-Density Airborne Laser Scanning
by Abderrahim Nemmaoui, Fernando J. Aguilar and Manuel A. Aguilar
Remote Sens. 2024, 16(21), 3974; https://doi.org/10.3390/rs16213974 - 25 Oct 2024
Abstract
Three raster-based (RB) and one point cloud-based (PCB) algorithms were tested to segment individual Aleppo pine trees and extract their tree height (H) and crown diameter (CD) using two types of point clouds generated with two different techniques: (1) Low-Density (≈1.5 points/m²) Airborne Laser Scanning (LD-ALS) and (2) photogrammetry based on high-resolution unmanned aerial vehicle (UAV) images. Through intensive experiments, it was concluded that the tested RB algorithms performed best with UAV point clouds (F1-score > 80.57%, H Pearson's r > 0.97, and CD Pearson's r > 0.73), while the PCB algorithm yielded the best results with LD-ALS point clouds (F1-score = 89.51%, H Pearson's r = 0.94, and CD Pearson's r = 0.57). The best set of algorithm parameters was applied to all plots, i.e., it was not optimized for each plot, in order to develop an automatic pipeline for mapping large areas of Mediterranean forest. In this case, tree detection and height estimation showed good results for both UAV and LD-ALS data (F1-score > 85% and > 76%, and H Pearson's r > 0.96 and > 0.93, respectively). However, crown diameter estimation performed very poorly (CD Pearson's r ≈ 0.20 for both approaches).
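Raster-based individual tree segmentation of the kind benchmarked above typically starts by finding local maxima (tree tops) in a canopy height model (CHM) derived from the point cloud. The following minimal sketch shows that common first step; it is a generic illustration under assumed parameters (window size, minimum height), not a reimplementation of the article's four algorithms:

```python
import numpy as np
from scipy import ndimage

def detect_tree_tops(chm, window=5, min_height=2.0):
    """Find local maxima in a canopy height model (CHM) raster.

    chm: 2D array of canopy heights (m); window: odd neighborhood size
    in pixels; min_height: ignore maxima below this height (m), which
    suppresses ground and shrub returns.
    Returns (row, col) pixel indices of detected tree tops.
    """
    local_max = ndimage.maximum_filter(chm, size=window)
    tops = (chm == local_max) & (chm >= min_height)
    return np.argwhere(tops)

# Toy CHM: flat ground with two height peaks standing in for tree apexes.
chm = np.zeros((20, 20))
chm[5, 5] = 8.0    # tree 1 apex
chm[14, 12] = 6.5  # tree 2 apex
print(detect_tree_tops(chm))  # the two apex coordinates
```

In a full pipeline, crowns would then be delineated around each detected top (e.g., by watershed segmentation), from which crown diameter is measured; the benchmark's weak CD results suggest this delineation step is the fragile one.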

15 pages, 6308 KiB  
Article
Physics-Driven Image Dehazing from the Perspective of Unmanned Aerial Vehicles
by Tong Cui, Qingyue Dai, Meng Zhang, Kairu Li, Xiaofei Ji, Jiawei Hao and Jie Yang
Electronics 2024, 13(21), 4186; https://doi.org/10.3390/electronics13214186 - 25 Oct 2024
Abstract
Drone vision is widely used in change detection, disaster response, and military reconnaissance because of its wide field of view and flexibility. However, under haze and thin-cloud conditions, atmospheric scattering degrades image quality, causing color distortion, reduced contrast, and lower clarity, which impair subsequent high-level visual tasks. To improve the quality of unmanned aerial vehicle (UAV) images, we propose a dehazing method based on calibration of the atmospheric scattering model. We designed two specialized neural network structures to estimate the model's two unknown parameters: the atmospheric light intensity A and the medium transmission t. Estimation errors inevitably arise in both steps, and their accumulation causes deviations in color fidelity and brightness. We therefore designed an encoder-decoder structure for irradiance guidance, which not only eliminates error accumulation but also enhances detail in the restored image, achieving higher-quality dehazing results. Quantitative and qualitative evaluations indicate that our dehazing method outperforms existing techniques, effectively removing haze from drone images and significantly enhancing image clarity and quality under hazy conditions. Specifically, on the R100 dataset the proposed method improved the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) by 6.9 dB and 0.08 over the second-best method, respectively; on the N100 dataset, the improvements were 8.7 dB and 0.05, respectively.
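The atmospheric scattering model underlying this class of dehazing methods is I = J·t + A·(1 − t), where I is the observed hazy image, J the scene radiance, A the atmospheric light, and t the medium transmission. Once A and t are estimated (here by the article's neural networks), recovery is a direct inversion. The sketch below shows only that standard inversion step, with a transmission floor to avoid amplifying noise; it is not the article's network-based estimator or its irradiance-guided refinement:

```python
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I: hazy image, floats in [0, 1], shape (H, W, 3)
    A: estimated atmospheric light, shape (3,)
    t: estimated medium transmission, shape (H, W)
    t_min: floor on t to avoid amplifying noise in dense-haze regions.
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # broadcast over color channels
    J = (I - A) / t + A                    # recovered scene radiance
    return np.clip(J, 0.0, 1.0)

# Round-trip check: synthesize haze from a known scene, then invert it.
rng = np.random.default_rng(0)
J_true = rng.random((4, 4, 3))                 # ground-truth radiance
A = np.array([0.9, 0.9, 0.9])                  # bright atmospheric light
t_true = np.full((4, 4), 0.6)                  # uniform transmission
I_hazy = J_true * t_true[..., None] + A * (1 - t_true[..., None])
J_rec = dehaze(I_hazy, A, t_true)
print(np.allclose(J_rec, J_true, atol=1e-6))  # True
```

The round trip is exact when A and t are known; in practice the errors in their estimates are what motivate the article's irradiance-guided encoder-decoder correction.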
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
