Search Results (198)

Search Parameters:
Keywords = image dehazing

18 pages, 3921 KiB  
Article
Image Dehazing Enhancement Strategy Based on Polarization Detection of Space Targets
by Shuzhuo Miao, Zhengwei Li, Han Zhang and Hongwen Li
Appl. Sci. 2024, 14(21), 10042; https://doi.org/10.3390/app142110042 - 4 Nov 2024
Abstract
Because polarization detection performs better at identifying targets through clouds and fog, applying it can improve the recognition ability of space target detection systems under haze conditions. However, owing to the low ambient brightness and limited target radiation during space target detection, the polarization information of the space target is largely lost, so the advantages of polarization detection in penetrating clouds and fog cannot be exploited effectively under hazy conditions. To solve this problem, a dehazing enhancement strategy tailored to polarization images of space targets is proposed. First, a hybrid multi-channel interpolation method based on regional correlation analysis improves the accuracy of polarization calculations during preprocessing. Second, an image processing method based on full-polarization-information inversion yields the degree of polarization of the inverted image and the intensity of the dehazed image. Finally, an image fusion method based on the discrete cosine transform produces the dehazed polarization fusion-enhanced image. The effectiveness of the proposed strategy is verified through simulated and real space target detection experiments. Compared with other methods, the proposed strategy significantly improves the quality of polarization images of space targets acquired under haze. These results have practical implications for the wide application of polarization detection in the field of space target detection.
(This article belongs to the Section Aerospace Science and Engineering)

15 pages, 6308 KiB  
Article
Physics-Driven Image Dehazing from the Perspective of Unmanned Aerial Vehicles
by Tong Cui, Qingyue Dai, Meng Zhang, Kairu Li, Xiaofei Ji, Jiawei Hao and Jie Yang
Electronics 2024, 13(21), 4186; https://doi.org/10.3390/electronics13214186 - 25 Oct 2024
Abstract
Drone vision is widely used in change detection, disaster response, and military reconnaissance due to its wide field of view and flexibility. Under haze and thin-cloud conditions, however, atmospheric scattering degrades image quality, causing color distortion, reduced contrast, and lower clarity, which in turn hurt downstream vision tasks. To improve the quality of unmanned aerial vehicle (UAV) images, we propose a dehazing method based on calibration of the atmospheric scattering model. We designed two specialized neural network structures to estimate the model's two unknown parameters: the atmospheric light intensity A and the medium transmission t. Estimation errors in both quantities accumulate, causing deviations in color fidelity and brightness. We therefore designed an encoder-decoder structure for irradiance guidance, which eliminates this error accumulation and enhances detail in the restored image, yielding higher-quality dehazing results. Quantitative and qualitative evaluations indicate that our method outperforms existing techniques, effectively removing haze from drone images and significantly improving clarity and quality in hazy conditions. On the R100 dataset, the proposed method improved the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) by 6.9 dB and 0.08 over the second-best method, respectively; on the N100 dataset, by 8.7 dB and 0.05, respectively.
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
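The two quantities this abstract estimates plug into the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)). A minimal sketch of the inversion step under that model (the function name and the transmission floor `t_min` are illustrative, not from the paper):

```python
def dehaze_pixel(I, A, t, t_min=0.1):
    """Invert I = J*t + A*(1-t) for the scene radiance J.

    I: observed intensity in [0, 1]; A: atmospheric light;
    t: medium transmission. t is floored at t_min to avoid
    amplifying noise where the haze is dense (t -> 0).
    """
    t = max(t, t_min)
    return (I - A) / t + A

# A haze-free pixel (t = 1) is returned essentially unchanged:
assert abs(dehaze_pixel(0.6, 0.9, 1.0) - 0.6) < 1e-9
# With t = 0.5 and A = 0.9: J = (0.7 - 0.9) / 0.5 + 0.9 = 0.5
assert abs(dehaze_pixel(0.7, 0.9, 0.5) - 0.5) < 1e-9
```

The floor on t is the usual guard in scattering-model dehazers; the paper's contribution lies in how A and t are estimated, which this sketch takes as given.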

19 pages, 4551 KiB  
Article
Autonomous Single-Image Dehazing: Enhancing Local Texture with Haze Density-Aware Image Blending
by Siyeon Han, Dat Ngo, Yeonggyu Choi and Bongsoon Kang
Remote Sens. 2024, 16(19), 3641; https://doi.org/10.3390/rs16193641 - 29 Sep 2024
Abstract
Single-image dehazing is an ill-posed problem that has attracted a myriad of research efforts. However, virtually all methods proposed thus far assume that input images are already affected by haze; little effort has been spent on autonomous single-image dehazing. Even deep learning dehazing models, despite their widely claimed generalizability, do not perform satisfactorily across varied haze conditions. In this paper, we present a novel approach for autonomous single-image dehazing consisting of four major steps: sharpness enhancement, adaptive dehazing, image blending, and adaptive tone remapping. A global haze density weight drives the adaptive dehazing and tone remapping to handle images with various haze conditions, including haze-free images and mild, moderate, and dense haze. Meanwhile, the proposed approach adopts patch-based haze density weights to guide the image blending, resulting in enhanced local texture. Comparative performance analysis with state-of-the-art methods demonstrates the efficacy of our proposed approach.
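The patch-based blending described above reduces, per pixel, to a convex combination of the dehazed and original values weighted by local haze density. A minimal sketch, assuming the weight has already been estimated elsewhere:

```python
def blend(dehazed, original, weight):
    """Convex combination of a dehazed and an original pixel.
    weight = 1 keeps the fully dehazed value (dense haze),
    weight = 0 keeps the original (haze-free region), so local
    texture is preserved where dehazing would over-process."""
    return weight * dehazed + (1.0 - weight) * original

assert blend(0.2, 0.8, 1.0) == 0.2   # dense haze: use dehazed value
assert blend(0.2, 0.8, 0.0) == 0.8   # haze-free: keep original
# intermediate weights interpolate between the two
```

Computing the weight per patch rather than globally is what the abstract credits for the improved local texture; this sketch only shows the mixing step.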

16 pages, 8351 KiB  
Article
SCL-Dehaze: Toward Real-World Image Dehazing via Semi-Supervised Codebook Learning
by Tong Cui, Qingyue Dai, Meng Zhang, Kairu Li and Xiaofei Ji
Electronics 2024, 13(19), 3826; https://doi.org/10.3390/electronics13193826 - 27 Sep 2024
Abstract
Existing dehazing methods struggle with real-world hazy images, especially scenes with thick haze, largely for lack of real-world paired data and robust priors. To improve dehazing in real-world scenes, we propose a semi-supervised codebook learning dehazing method, in which the codebook serves as a strong prior guiding recovery of the hazy image. Two issues arise when a codebook is applied to image dehazing: (1) latent features encoded from degraded hazy images suffer matching errors during nearest-neighbour lookup, and (2) balancing recovery quality against fidelity is difficult for heavily degraded, dense-haze images. To reduce the nearest-neighbour matching error rate in the vector quantization stage of VQGAN, we designed the unit dual-attention residual transformer module (UDART) to correct the latent features, bringing them closer to those of the corresponding clear image. To balance the quality and fidelity of the dehazing result, we designed a haze density guided weight adaptive module (HDGWA) that adaptively adjusts the multi-scale skip-connection weights according to haze density. In addition, we use mean teacher, a semi-supervised learning strategy, to bridge the domain gap between synthetic and real-world data and improve generalization to real-world scenes. Comparative experiments show that our method improves on the second-best method by 0.003, 2.646, and 0.019 on the no-reference metrics FADE, MUSIQ, and DBCNN, respectively, on the real-world URHI dataset.
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
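Issue (1) above concerns the standard nearest-neighbour lookup in VQGAN's vector quantization stage. A generic sketch of that lookup (the codebook and feature values are made up for illustration):

```python
def quantize(feature, codebook):
    """Replace a latent feature vector with its nearest codebook
    entry under squared Euclidean distance; returns (index, code).
    Matching errors arise when degradation pushes `feature`
    closer to the wrong entry -- the failure mode that motivates
    correcting the latents before this step."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: sq_dist(feature, codebook[i]))
    return idx, codebook[idx]

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
idx, code = quantize([0.9, 0.1], codebook)
# the feature snaps to the nearest entry, [1.0, 0.0]
```

A degraded feature such as `[0.55, 0.45]` sits near the decision boundary between entries, which is exactly where small latent corrections change the matched code.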

14 pages, 6866 KiB  
Article
MSNet: A Multistage Network for Lightweight Image Dehazing with Content-Guided Attention and Adaptive Encoding
by Lingrui Dai, Hongrui Liu and Shuoshi Li
Electronics 2024, 13(19), 3812; https://doi.org/10.3390/electronics13193812 - 26 Sep 2024
Abstract
Image dehazing is a critical technique aimed at improving the visual clarity of images. The diverse nature of hazy environments poses significant challenges in developing an efficient and lightweight dehazing model. In this paper, we design a multistage network (MSNet) with content-guided attention and adaptive encoding. The multistage dehazing framework decomposes the complex task of image dehazing into three distinct stages, thereby substantially reducing model complexity. Additionally, we introduce a content-guided attention mechanism that assigns varying weights to different image content elements based on their specific characteristics, thereby improving the efficiency of nonhomogeneous dehazing. Furthermore, we present an adaptive encoder that employs a dual-branch feature extraction structure combined with a gating mechanism, enabling dynamic adjustment of the interactions between the two branches according to the input image. Extensive experimental evaluations on three popular dehazing datasets demonstrate the effectiveness of our proposed MSNet.
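The gating between the adaptive encoder's two branches can be sketched as an elementwise convex mix. This is a generic gating sketch, not the paper's mechanism (whose gate is computed from the input image rather than supplied):

```python
def gated_fusion(branch_a, branch_b, gate):
    """Mix two branch feature vectors with a gate in [0, 1]:
    gate = 1 selects branch_a, gate = 0 selects branch_b,
    intermediate values interpolate elementwise."""
    return [gate * a + (1.0 - gate) * b
            for a, b in zip(branch_a, branch_b)]

out = gated_fusion([1.0, 0.0], [0.0, 1.0], 0.75)
# -> [0.75, 0.25]: mostly branch_a, some branch_b
```

In practice the gate would itself be the output of a small learned network, letting the encoder weight the branches differently per input.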

18 pages, 8451 KiB  
Article
Remote Sensing Image Dehazing via Dual-View Knowledge Transfer
by Lei Yang, Jianzhong Cao, He Bian, Rui Qu, Huinan Guo and Hailong Ning
Appl. Sci. 2024, 14(19), 8633; https://doi.org/10.3390/app14198633 - 25 Sep 2024
Abstract
Remote-sensing image dehazing (RSID) is crucial for applications such as military surveillance and disaster assessment. However, current methods often rely on complex network architectures, compromising computational efficiency and scalability. Furthermore, the scarcity of annotated remote sensing dehazing datasets hinders model development. To address these issues, a Dual-View Knowledge Transfer (DVKT) framework is proposed to generate a lightweight and efficient student network by distilling knowledge from a teacher network pre-trained on natural image dehazing datasets. The DVKT framework includes two novel knowledge-transfer modules: the Intra-layer Knowledge Transfer (Intra-KT) module and the Inter-layer Knowledge Transfer (Inter-KT) module. The Intra-KT module corrects the learning bias of the student network by distilling and transferring knowledge from the well-trained teacher; the Inter-KT module distills and transfers knowledge about cross-layer correlations. Together they let the student learn hierarchical and cross-layer dehazing knowledge from the teacher, yielding compact and effective features. Evaluation on benchmark datasets demonstrates that the proposed DVKT framework achieves superior RSID performance; in particular, the distilled model achieves a significant speedup with less than 6% of the parameters and computational cost of the original model, while maintaining state-of-the-art dehazing performance.
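The Intra-KT idea of correcting the student by aligning its intermediate features with a well-trained teacher's is commonly implemented as a per-layer mean-squared feature-matching loss. The sketch below is that generic form, not the paper's exact objective:

```python
def feature_match_loss(student_feats, teacher_feats):
    """Mean squared error between corresponding student and
    teacher feature maps (flattened to lists of floats here).
    The student minimizes this, inheriting the teacher's
    intermediate representations layer by layer."""
    total, count = 0.0, 0
    for s_layer, t_layer in zip(student_feats, teacher_feats):
        for s, t in zip(s_layer, t_layer):
            total += (s - t) ** 2
            count += 1
    return total / count

# identical features give zero loss; a mismatch is penalized
assert feature_match_loss([[1.0, 2.0]], [[1.0, 2.0]]) == 0.0
```

A cross-layer (Inter-KT-style) variant would instead compare correlations between pairs of layers, so the student copies relationships rather than raw activations.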

20 pages, 3181 KiB  
Article
Dehazing Algorithm Integration with YOLO-v10 for Ship Fire Detection
by Farkhod Akhmedov, Rashid Nasimov and Akmalbek Abdusalomov
Abstract
Ship fire detection presents significant challenges for computer vision approaches due to factors such as the considerable distances at which ships must be detected and the unique conditions of the maritime environment. Water vapor and high humidity further complicate detection and classification for deep learning models, as these factors obscure visual clarity and introduce noise into the data. In this research, we describe the development of a custom ship fire dataset and a YOLO (You Only Look Once)-v10 model fine-tuned in combination with dehazing algorithms. Our approach integrates deep learning with sophisticated image processing to deliver a comprehensive solution for ship fire detection. The results demonstrate the efficacy of using YOLO-v10 in conjunction with a dehazing algorithm, with significant improvements in detection accuracy and reliability. Experimental results show that the YOLO-v10-based ship fire detection model outperforms several YOLO and other detection models in precision (97.7%), recall (98%), and mAP@0.5 (89.7%). However, the model scored relatively lower on F1 than YOLO-v8 and ship-fire-net. In addition, the dehazing approach significantly improves the model's detection performance in hazy environments.
(This article belongs to the Section Fire Science Models, Remote Sensing, and Data)

16 pages, 4126 KiB  
Article
An Efficient Multi-Scale Wavelet Approach for Dehazing and Denoising Ultrasound Images Using Fractional-Order Filtering
by Li Wang, Zhenling Yang, Yi-Fei Pu, Hao Yin and Xuexia Ren
Fractal Fract. 2024, 8(9), 549; https://doi.org/10.3390/fractalfract8090549 - 23 Sep 2024
Abstract
Ultrasound imaging is widely used in medical diagnostics due to its non-invasive and real-time capabilities. However, existing methods often overlook the benefits of fractional-order filters for denoising and dehazing. This work therefore introduces an efficient multi-scale wavelet method for dehazing and denoising ultrasound images using a fractional-order filter, applying a guided filter, a directional filter, a fractional-order filter, and haze removal to the different-resolution images generated by a multi-scale wavelet decomposition. In the directional-filter stage, an eigen-analysis of each pixel extracts structural features, which are classified into edges for targeted filtering. The guided filter then reduces speckle noise in homogeneous anatomical regions. The fractional-order filter denoises effectively while improving edge definition, irrespective of edge size, and haze removal eliminates the haze caused by attenuation. Our method achieved significant improvements, with PSNR reaching 31.25 and SSIM 0.905 on our ultrasound dataset, outperforming other methods. On the external McMaster and Kodak24 datasets, it achieved the highest PSNR (29.68, 28.62) and SSIM (0.858, 0.803). Clinical evaluations by four radiologists confirmed its superiority on liver and carotid artery images. Overall, our approach outperforms existing speckle reduction and structure preservation techniques, making it highly suitable for clinical ultrasound imaging.
(This article belongs to the Section Life Science, Biophysics)

18 pages, 25777 KiB  
Article
Adaptive Multi-Feature Attention Network for Image Dehazing
by Hongyuan Jing, Jiaxing Chen, Chenyang Zhang, Shuang Wei, Aidong Chen and Mengmeng Zhang
Electronics 2024, 13(18), 3706; https://doi.org/10.3390/electronics13183706 - 18 Sep 2024
Abstract
Currently, deep-learning-based methods dominate image dehazing applications. Although many complicated dehazing models achieve competitive performance, effective methods for extracting useful features remain under-researched. Thus, this paper presents an adaptive multi-feature attention network (AMFAN) consisting of a point-weighted attention (PWA) mechanism and an adaptive multi-layer feature fusion (AMLFF) module. We start by enhancing pixel-level attention for each feature map: we design a PWA block that aggregates global and local information of the feature map and lets the model adaptively focus on significant channels and regions. We then design a feature fusion block (FFB) that performs feature-level fusion by exploiting a PWA block; the FFB and PWA together constitute the AMLFF, which integrates three different levels of feature maps to effectively balance the weights of the inputs to the encoder and decoder. We also train the dehazing network with a contrastive loss so that the recovered image lies far from the negative sample and close to the positive sample. Experimental results on both synthetic and real-world images demonstrate that this dehazing approach surpasses numerous advanced techniques, both visually and quantitatively.
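A contrastive dehazing loss of the kind described (recovered image pulled toward the clear positive, pushed from the hazy negative) can be sketched as a distance ratio. This is a simplified stand-in for the paper's formulation, which operates in a learned feature space rather than directly on pixels:

```python
def contrastive_loss(anchor, positive, negative, eps=1e-8):
    """Ratio of anchor-positive to anchor-negative L1 distance.
    Minimizing it moves the restored image (anchor) toward the
    clear image (positive) and away from the hazy input
    (negative); eps guards against division by zero."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return l1(anchor, positive) / (l1(anchor, negative) + eps)

# a perfect restoration has zero loss
assert contrastive_loss([0.5], [0.5], [0.0]) == 0.0
```

The negative term is what distinguishes this from a plain reconstruction loss: a restoration that merely resembles the hazy input is penalized even if it is close to the target.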

20 pages, 14143 KiB  
Article
AEA-RDCP: An Optimized Real-Time Algorithm for Sea Fog Intensity and Visibility Estimation
by Shin-Hyuk Hwang, Ki-Won Kwon and Tae-Ho Im
Appl. Sci. 2024, 14(17), 8033; https://doi.org/10.3390/app14178033 - 8 Sep 2024
Abstract
Sea fog reduces visibility to less than 1 km and is a major cause of maritime accidents, particularly affecting the navigation of small fishing vessels; because it forms when warm, moist air moves over cold water, it is difficult to predict. Traditional visibility measurement tools are costly and limited in real-time monitoring, which has motivated the development of video-based algorithms using cameras. This study introduces the Approximating and Eliminating the Airlight–Reduced DCP (AEA-RDCP) algorithm, which addresses the problem of sunlight reflections being mistaken for fog in existing video-based sea fog intensity measurement algorithms, thereby improving performance. The experimental dataset comprises two types of images: those unaffected by sunlight and maritime images heavily influenced by it. AEA-RDCP enhances the previously researched RDCP algorithm by effectively eliminating the influence of atmospheric light, using the initial stages of the Dark Channel Prior (DCP) process to generate the Dark Channel image. While the DCP algorithm is typically used for dehazing, this study employs it only up to Dark Channel generation, reducing computational complexity. The generated image is then thresholded to estimate fog density and visibility, maintaining accuracy while reducing computational demands, thereby allowing real-time monitoring of sea conditions, enhancing maritime safety, and helping prevent accidents.
(This article belongs to the Section Marine Science and Engineering)
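The Dark Channel image referred to above is, at each pixel, the minimum intensity over the color channels within a local patch (He et al.'s dark channel prior). A minimal sketch operating on a pre-computed min-over-RGB map; the patch size and pixel values are illustrative:

```python
def dark_channel(img, patch=3):
    """img: H x W grid (list of lists) of per-pixel min-over-RGB
    values in [0, 1]. Returns the minimum over a patch x patch
    neighbourhood at each pixel -- near zero in haze-free regions,
    lifted everywhere that haze or airlight brightens all channels."""
    h, w = len(img), len(img[0])
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                img[yy][xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))
            )
    return out

img = [[0.8, 0.7, 0.9],
       [0.6, 0.1, 0.8],
       [0.9, 0.7, 0.8]]
dc = dark_channel(img)
# on this 3x3 example every patch contains the dark centre pixel,
# so the whole dark channel collapses to 0.1
```

Thresholding this map for fog density, as the study does, avoids the remaining (and more expensive) transmission-estimation and refinement stages of full DCP dehazing.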

13 pages, 27539 KiB  
Article
Enhancing Image Dehazing with a Multi-DCP Approach with Adaptive Airlight and Gamma Correction
by Jungyun Kim, Tiong-Sik Ng and Andrew Beng Jin Teoh
Appl. Sci. 2024, 14(17), 7978; https://doi.org/10.3390/app14177978 - 6 Sep 2024
Abstract
Haze imagery suffers from reduced clarity, which can be attributed to atmospheric conditions such as dust or water vapor, resulting in blurred visuals and heightened brightness due to light scattering. Conventional methods employing the dark channel prior (DCP) for transmission map estimation often excessively amplify fogged sky regions, causing image distortion. This paper presents a novel approach to improve transmission map granularity by utilizing multiple 1×1 DCPs derived from multiscale hazy, inverted, and Euclidean difference images. An adaptive airlight estimation technique is proposed to handle low-light, hazy images. Furthermore, an adaptive gamma correction method is introduced to refine the transmission map further. Evaluation of dehazed images using the Dehazing Quality Index showcases superior performance compared to existing techniques, highlighting the efficacy of the enhanced transmission map.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
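Gamma correction applied to a transmission map raises each transmission value to a power. How gamma is chosen adaptively is this paper's contribution, so the sketch below only shows the mechanics with a supplied gamma:

```python
def gamma_correct_transmission(t_map, gamma):
    """Apply t' = t ** gamma elementwise to a transmission map.
    gamma < 1 lifts low transmission values (gentler dehazing in
    dense-haze regions); gamma > 1 suppresses them (stronger
    dehazing). t = 1 is a fixed point either way."""
    return [[t ** gamma for t in row] for row in t_map]

t = [[0.25, 1.0]]
out = gamma_correct_transmission(t, 0.5)
# 0.25 ** 0.5 = 0.5, while fully clear pixels (t = 1.0) are unchanged
```

Because dehazing strength scales with 1/t, reshaping the transmission map this way is a cheap global knob on how aggressively sky and dense-haze regions are amplified.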

18 pages, 4521 KiB  
Article
Feature Fusion Image Dehazing Network Based on Hybrid Parallel Attention
by Hong Chen, Mingju Chen, Hongyang Li, Hongming Peng and Qin Su
Electronics 2024, 13(17), 3438; https://doi.org/10.3390/electronics13173438 - 30 Aug 2024
Abstract
Most existing dehazing methods ignore global and local detail information when processing images and fail to fully combine feature information at different levels, leading to contrast imbalance and residual haze in the dehazed images. To this end, this article proposes an image dehazing network based on hybrid parallel attention feature fusion, called the HPA-HFF network, an optimization of the base network FFA-Net. First, the hybrid parallel attention (HPA) module mixes different types of attention mechanisms through parallel connections, which enhances both the extraction and fusion of global spatial context and the expressiveness of features, and dehazes unevenly distributed haze more effectively. Second, the hierarchical feature fusion (HFF) module dynamically fuses feature maps from different paths, adaptively enlarging their receptive field and refining and enhancing image features. Experiments compare the proposed HPA-HFF network against eight mainstream dehazing networks on the public RESIDE dataset: HPA-HFF achieves the highest PSNR (39.41) and SSIM (0.9967) and produces good dehazing quality in subjective visual comparisons.

24 pages, 7011 KiB  
Article
A Comprehensive Review of Traditional and Deep-Learning-Based Defogging Algorithms
by Minxian Shen, Tianyi Lv, Yi Liu, Jialiang Zhang and Mingye Ju
Electronics 2024, 13(17), 3392; https://doi.org/10.3390/electronics13173392 - 26 Aug 2024
Abstract
Images captured under adverse weather conditions often suffer from blurred textures and muted colors, which impair the extraction of reliable information. Image defogging has emerged as a critical computer vision solution for enhancing the visual quality of such foggy images. However, comprehensive studies consolidating both traditional algorithm-based and deep-learning-based defogging techniques remain lacking. This paper presents a comprehensive survey of currently proposed defogging techniques. We first classify defogging methods into traditional techniques (image enhancement approaches and physical-model-based defogging) and deep learning algorithms (network-based models and training-strategy-based models). We then discuss each class in detail, introducing several representative image fog removal methods. Finally, we summarize their underlying principles, advantages, and disadvantages, and give prospects for future development.

15 pages, 3848 KiB  
Article
AODs-CLYOLO: An Object Detection Method Integrating Fog Removal and Detection in Haze Environments
by Xinyu Liang, Zhengyou Liang, Linke Li and Jiahong Chen
Appl. Sci. 2024, 14(16), 7357; https://doi.org/10.3390/app14167357 - 20 Aug 2024
Abstract
Foggy and hazy weather conditions can significantly reduce the clarity of images captured by cameras, making it difficult for object detection algorithms to accurately recognize targets. This degradation can cause failures in autonomous or assisted driving systems, posing severe safety threats to both drivers and passengers. To address the issue of decreased detection accuracy in foggy weather, we propose an object detection algorithm specifically designed for such environments, named AODs-CLYOLO. To effectively handle images affected by fog, we introduce an image dehazing model, AODs, which is more suitable for detection tasks. This model incorporates a Channel–Pixel (CP) attention mechanism and a new Contrastive Regularization (CR), enhancing the dehazing effect while preserving the integrity of image information. For the detection network component, we propose a learnable Cross-Stage Partial Connection Module (CSPCM++), which is used before the detection head. Alongside this, we integrate the LSKNet selective attention mechanism to improve the extraction of effective features from large objects. Additionally, we apply the FocalGIoU loss function to enhance the model’s performance in scenarios characterized by sample imbalance or a high proportion of difficult samples. Experimental results demonstrate that the AODs-CLYOLO detection algorithm achieves up to a 10.1% improvement in the mAP (0.5:0.95) metric compared to the baseline model YOLOv5s.
(This article belongs to the Section Ecology Science and Engineering)

15 pages, 4907 KiB  
Article
KCS-YOLO: An Improved Algorithm for Traffic Light Detection under Low Visibility Conditions
by Qinghui Zhou, Diyi Zhang, Haoshi Liu and Yuping He
Abstract
Autonomous vehicles face challenges in small-target detection, in particular in accurately identifying traffic lights under low-visibility conditions such as fog, rain, and blurred night-time lighting. To address these issues, this paper proposes an improved algorithm, KCS-YOLO (you only look once), to increase the accuracy of detecting and recognizing traffic lights in low visibility. First, different YOLO algorithms were compared; the benchmark indicates that YOLOv5n achieves the highest mean average precision (mAP) with fewer parameters. To enhance small-target detection, KCS-YOLO was built on YOLOv5n by using the K-means++ algorithm to cluster the marked multi-dimensional target frames, embedding the convolutional block attention module (CBAM) attention mechanism, and constructing a small-target detection layer. Second, an image dataset of traffic lights was generated and preprocessed with the dark channel prior dehazing algorithm to enhance the proposed algorithm's recognition capability and robustness. Finally, KCS-YOLO was evaluated through comparison and ablation experiments: its mAP reaches 98.87%, an increase of 5.03% over YOLOv5n. This indicates that KCS-YOLO achieves high accuracy in object detection and recognition, enhancing traffic light detection and recognition for autonomous vehicles in low-visibility conditions.
(This article belongs to the Special Issue Intelligent Control and Active Safety Techniques for Road Vehicles)
