Article

Improving the Perception of Objects Under Daylight Foggy Conditions in the Surrounding Environment

by Mohamad Mofeed Chaar *, Jamal Raiyn and Galia Weidl
University of Applied Sciences Aschaffenburg, 63743 Aschaffenburg, Germany
* Author to whom correspondence should be addressed.
Submission received: 3 October 2024 / Revised: 23 November 2024 / Accepted: 1 December 2024 / Published: 18 December 2024

Abstract

Autonomous driving (AD) technology has seen significant advancements in recent years; however, challenges remain, particularly in achieving reliable performance under adverse weather conditions such as heavy fog. In response, we propose a multi-class fog density classification approach to enhance the AD system performance. By categorizing fog density into multiple levels (25%, 50%, 75%, and 100%) and generating separate datasets for each class using the CARLA simulator, we improve the perception accuracy for each specific fog density level and analyze the effects of varying fog intensities. This targeted approach offers benefits such as improved object detection, specialized training for each fog class, and increased generalizability. Our results demonstrate enhanced perception of various objects, including cars, buses, trucks, vans, pedestrians, and traffic lights, across all fog densities. This multi-class fog density method is a promising advancement toward achieving reliable AD performance in challenging weather, improving both the precision and recall of object detection algorithms under diverse fog conditions.

1. Introduction

Autonomous driving (AD) is based on the principle of driving vehicles using artificial control and perception without human intervention [1]. Autonomous driving systems use a variety of sensors to perceive their surroundings [2], including cameras [3], radar [4], and LiDAR [5]. These sensors provide the system with information about the location of other vehicles, pedestrians, and objects in the environment. The system then uses this information to make decisions about how to control the vehicle. Autonomous driving systems are still under development, and they hold the potential to revolutionize transportation: AD could make transportation safer [6], more efficient, and more accessible.
However, AD still faces challenges and open problems such as perception under severe weather conditions.
Numerous studies have investigated perception under foggy conditions. However, these studies have generally treated fog as a binary classification problem (foggy vs. non-foggy) and extrapolated conclusions to various levels of fog density. This approach overlooks the need for improved perception tailored to each specific fog density category.
Our research proposes a novel approach to object detection in foggy conditions, employing a data-driven strategy and machine learning techniques. We categorize fog density into five distinct levels: 0%, 25%, 50%, 75%, and 100% (see Appendix A). Leveraging the CARLA simulator (Car Learning to Act) [7,8], we generate a comprehensive dataset encompassing a diverse range of fog densities [9]. Subsequently, we implement a bounding box-based machine learning algorithm to effectively detect objects under varying fog conditions, obtaining highly accurate results at all fog levels, from clear weather to the highest fog density (100%). The purpose of this work is to enhance object recall (alongside precision) across multiple categories of fog conditions, and we achieved high recall and precision at all fog density levels.
Given the close relationship between this research and safety-critical applications such as autonomous driving, examining the potential impact on navigation and vehicle safety is essential. Emphasizing how this model could be integrated into existing vehicle systems or improve object recognition accuracy in foggy conditions could significantly enhance the research’s relevance and practical application in real-world autonomous systems.
This paper is organized as follows. Section 2 gives an overview of the influence of the weather on the performance of autonomous vehicles. The subsequent sections describe the methodology (Section 3), discuss the results (Section 4), and conclude the discussion as well as point out directions for future research (Section 5).

2. Related Work

Weather phenomena can have various negative influences on the performance of autonomous vehicles (AVs), especially in their perception and sensing systems. Adverse conditions like heavy rain, snow, fog, and low lighting can significantly impair the sensors that AVs rely on, such as cameras, radar, LiDAR, and ultrasonic sensors. These systems are crucial for detecting obstacles, lane markings, pedestrians, and other vehicles. The diminished performance in such conditions poses a serious challenge to AV safety and reliability [10].
The authors (Diaz-Ruiz et al., 2022) [11] developed datasets specifically tailored to severe weather conditions, including cloudy, rainy, snowy, night, and sunny scenarios. These datasets were generated using multiple sensors, and the data for each weather condition were trained separately. This approach significantly enhanced perception and increased accuracy: the authors demonstrated that models trained for specific weather conditions yield more accurate object detection when applied in those same conditions. For example, the model trained on data from sunny conditions achieved a mean average precision (mAP@[0.5:0.95]) of 54.3 when tested under sunny conditions but only 38.9 when tested in rainy weather. Conversely, the model trained on rainy weather data yielded an mAP@[0.5:0.95] of 46.3 in rainy conditions, an improvement from 38.9 to 46.3. However, this approach did not include foggy conditions. In our work, we focused specifically on foggy conditions, dividing them into four distinct classes in addition to sunny conditions. We utilized the CARLA simulation environment to generate the datasets and employed our filtering techniques within the CARLA simulator to accurately label the data [12], achieving an mAP@[0.5:0.95] of 0.739 in heavy fog.
Furthermore, Valanarasu et al. (2022) [13] proposed a transformer-based model to restore images degraded by adverse weather conditions. The authors argue that transformers can be adapted to image restoration by treating images as sequences of pixels. The proposed model, called TransWeather, consists of an encoder and a decoder [14,15]. The encoder takes an image degraded by adverse weather conditions as input and produces a latent representation of the image. The decoder then takes the latent representation as input and produces a restored image. The encoder is a multilayer convolutional transformer (MCT) model, consisting of a stack of convolutional layers and encoder–decoder attention layers; the convolutional layers extract features from the image, while the attention layers allow the model to learn long-range dependencies between pixels. The decoder is a convolutional transformer decoder (CTD) model, consisting of a stack of decoder–encoder attention layers and upsampling layers; the attention layers allow the model to attend to the latent representation of the image, while the upsampling layers reconstruct the restored image. The authors evaluated TransWeather on a dataset of images degraded by rain, snow, haze, and fog. The results showed that TransWeather outperforms several state-of-the-art image restoration methods. In that work, fog was treated as a single class (fog or no fog), which posed challenges, particularly when dealing with light fog. In contrast, our approach does not use TransWeather to transform foggy images into non-foggy ones. Instead, we focus on enhancing perception directly within foggy conditions, developing separate models tailored to different fog densities, from light to heavy fog, to improve accuracy and robustness across varying fog intensities.
The authors (Bijelic et al., 2020) [16] introduced an innovative approach by integrating four sensors—an RGB camera, LiDAR, a gated camera, and radar—into a unified perception system. The outputs of these sensors were projected into the camera’s coordinate space and then processed through a convolutional neural network with four input channels to enhance perception accuracy. The authors evaluated their method using a benchmark dataset focused on object detection in adverse weather conditions. Their approach was compared against several state-of-the-art single-sensor and fusion methods. The results demonstrated that their method outperformed existing approaches, achieving an average precision of 76.69 in heavy fog. In comparison, our approach further improved performance, achieving an average precision of 89.00. We cannot compare the two results directly due to the differing data types (simulation vs. real data), although the weather conditions are the same.
The authors (Li et al., 2023) [17] propose a domain adaptation framework that leverages both labeled data from the source domain (clear weather) and unlabeled data from the target domain (foggy weather). The key components of their approach include feature alignment, which involves mechanisms to align the feature distributions between the clear and foggy weather domains, helping the model learn domain-invariant features that are robust to weather changes. They also employ domain adversarial training, using a domain discriminator to distinguish between the source and target domains; the object detector is trained adversarially to perform well in both domains by confusing the discriminator, leading to features that generalize across different weather conditions. Additionally, the paper proposes multi-level adaptation, where adaptation occurs at multiple levels of the detection pipeline, including both the image and feature levels, to enhance the model’s robustness to foggy conditions. They also incorporate a self-training mechanism in which the model iteratively generates pseudo-labels for the foggy images and refines its predictions, allowing it to learn from the target domain data without requiring explicit labels. The reported mean average precision (mAP) is 42.3 under heavy fog overall, 36.5 for walkers, and 50 for detecting walkers under heavy fog at distances up to 200 m.
The paper “A Review of the Impacts of Defogging on Deep Learning-Based Object Detectors in Self-Driving Cars” (Ogunrinde & Bernadin, 2021) [18] explores the effects of image defogging techniques on the performance of deep learning-based object detection systems used in autonomous vehicles. The authors analyze the effectiveness of these techniques in improving detection accuracy, highlighting that while defogging generally enhances image quality, its impact on detection performance varies depending on the method used. Some defogging approaches may introduce artifacts or alter important features in the images, potentially leading to reduced detection accuracy or false positives. The paper emphasizes the need for careful selection and tuning of defogging methods to balance the trade-off between improved visibility and accurate object detection. Additionally, the authors discuss the potential of integrating defogging directly into the object detection pipeline, allowing models to learn defogging and detection tasks simultaneously. Using their methodology, they improved recall under heavy fog conditions from 59.61 to 62.02 and precision from 60.98 to 62.74. In comparison, our approach resulted in a more significant increase, with recall improving from 43.4 to 63.6 and precision from 86.8 to 93.1. The differences in recall between our results and theirs can be attributed to variations in the datasets used and the algorithms implemented; we employed YOLOv8 [19], while they used YOLOv3 [20].
The overviewed papers have generally treated fog as a binary class (fog or no fog). In contrast, our research introduces a more nuanced approach by developing four distinct categories for fog density (besides clear weather) with a separate model implemented for each category. Our findings demonstrate that by categorizing fog into multiple levels, we can significantly enhance perception accuracy compared to the binary classification approach. This methodology can be adopted by the studies mentioned above to improve their perception accuracy and achieve more precise results.
In this work, our objective is to improve perception under heavy fog conditions. The novelty lies in classifying fog into four distinct categories by density, in addition to clear weather (0%, 25%, 50%, 75%, and 100%), and training a separate deep learning object detection model for each category. The method first categorizes the input by fog density and then applies the model specifically trained for that particular fog density range (refer to Figure 1). For the dataset, we employed the CARLA simulator, which allows us to precisely control fog density and gather data with automated labeling for object detection in foggy conditions. We have made the data collection project available on our GitHub (https://github.com/Mofeed-Chaar/Improving-bouning-box-in-Carla-simulator, accessed on 2 October 2024) [18]. Additionally, we implemented flexible weather control by modifying parameters within the YAML file [21] named weather.yaml in our GitHub project. The objects we focus on comprise six distinct categories: cars, buses, trucks, vans, pedestrians, and traffic lights. Furthermore, we generated distinct datasets for each fog density level and trained individual object detection models for each class of fog density. This approach yielded consistently high results across various metrics, including precision, recall, and mAP50. In particular, we achieved an accuracy of more than 90% under heavy fog (100% fog density), as shown later in this paper.
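The routing step in Figure 1 can be summarized in a few lines of code. The following is a minimal sketch, assuming a separate fog-density estimator and one set of trained YOLO weights per fog class; the weight file names and the estimator function are illustrative placeholders, not part of the released project.

```python
# Minimal sketch of the model-routing step in Figure 1 (names are illustrative).
from ultralytics import YOLO

FOG_LEVELS = [0, 25, 50, 75, 100]  # fog-density classes used in this work (percent)

# One specialized detector per fog-density class (placeholder weight files).
DETECTORS = {level: YOLO(f"yolov8m_fog{level}.pt") for level in FOG_LEVELS}

def detect(image, estimate_fog_density):
    """Estimate the fog density of `image`, then run the matching detector."""
    density = estimate_fog_density(image)                     # e.g., a value in [0, 100]
    level = min(FOG_LEVELS, key=lambda l: abs(l - density))   # snap to the nearest trained class
    return DETECTORS[level](image)                            # YOLO inference with the routed model
```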

3. Methodologies

3.1. Data Generation

The dataset plays a critical role in the development and enhancement of algorithms, as the quality and appropriateness of the data are foundational to the success of machine learning models. To ensure diverse and comprehensive data, collection can be conducted through various methods.
One approach involves capturing real-world data by driving on public roads, allowing for the recording of natural driving conditions and variability in weather, lighting, and traffic patterns [22,23,24,25]. This method is beneficial for gathering authentic data that reflect true environmental and road conditions, which is essential for training robust machine learning models.
Another approach includes data collection in controlled laboratory settings [26]. Laboratory environments allow for the precise manipulation of variables such as lighting, object placement, and sensor calibration, which helps isolate specific factors influencing model performance. Controlled settings can be particularly valuable for fine-tuning algorithms under known conditions or testing edge cases that may not frequently occur in real-world driving.
In addition to these traditional data collection methods, synthetic datasets can be generated, providing a flexible and scalable alternative for training machine learning models. Synthetic data can be created by adding fog or other environmental factors to clear-weather images [27], enabling the simulation of various weather conditions without physically capturing them. Furthermore, simulation environments, such as the CARLA simulator, offer an advanced platform for generating synthetic data. CARLA allows researchers to not only simulate fog but also precisely control its density, which aids in creating datasets that reflect a range of visibility conditions. This capability to manipulate environmental factors provides researchers with flexible, high-quality data tailored to specific needs and scenarios, supporting the development of models that perform well in diverse and challenging conditions.
Simulated data of this kind are generally less expensive to produce and allow precise adjustment of variables such as weather and visibility. However, their quality typically does not match that of real-world data, as they may lack certain complexities and nuances present in actual driving environments.

3.2. CARLA Simulator

CARLA (Car Learning to Act) [8] is a high-fidelity, open-source simulator widely used in autonomous driving research [7]. It provides a detailed, realistic urban environment complete with various road types, buildings, vehicles, and pedestrian models, making it an ideal platform for developing and testing algorithms for self-driving vehicles. CARLA’s environment includes intersections, traffic lights, roundabouts, and a range of obstacles that mirror real-world conditions, thereby enabling researchers to simulate complex driving scenarios.
One of CARLA’s key advantages is its ability to simulate and control environmental variables, including weather and lighting. This flexibility is particularly useful for generating datasets under specific weather conditions, such as fog, rain, or varying times of day. Using CARLA, researchers can create datasets with automatically labeled 3D or 2D bounding boxes within these controlled conditions [28]. This automated labeling is efficient and time saving, as it circumvents the manual annotation process typically required for training data. For our study, CARLA served as a critical tool in generating a dataset with multiple fog density classes (clear, 25%, 50%, 75%, 100%). Obtaining real-world data with these precise fog levels would be challenging, as capturing consistent fog densities on public roads is impractical, and obtaining representative images from existing sources is limited. Moreover, the task of labeling bounding boxes in dense fog (100% fog density) presents an additional difficulty, as thick fog often obscures or partially hides objects, making manual labeling highly challenging. CARLA’s controlled environment overcomes these limitations, allowing us to produce a comprehensive dataset tailored to our needs while providing accurately labeled bounding boxes across all fog density levels. This dataset forms the foundation of our work, enabling us to develop and test object detection models under varying fog conditions that simulate real-world challenges for autonomous driving systems.
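As an illustration of this controllability, the short sketch below sets one of our daylight fog presets through the CARLA Python API. It is a minimal example assuming a running CARLA server and a recent CARLA release that exposes the fog and scattering attributes listed in Table 1; the values mirror Table 1, except that cloudiness is fixed at 20 here rather than sampled from [20, 40].

```python
# Sketch: apply one of the daylight fog presets via the CARLA Python API
# (requires a running CARLA server; attributes follow carla.WeatherParameters).
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

def apply_fog(world, fog_density):
    """Set the Table 1 daylight weather with the requested fog density (0-100)."""
    weather = carla.WeatherParameters(
        sun_azimuth_angle=300.0,
        sun_altitude_angle=45.0,
        cloudiness=20.0,            # sampled from [20, 40] in the full dataset
        precipitation=0.0,
        wind_intensity=10.0,
        fog_density=fog_density,    # 0, 25, 50, 75, or 100 in our study
        fog_distance=0.75,          # visibility parameter
        fog_falloff=0.1,
        wetness=0.0,
        scattering_intensity=1.0,
        mie_scattering_scale=0.03,
    )
    world.set_weather(weather)

apply_fog(world, fog_density=75.0)
```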

3.3. YOLO (You Only Look Once)

A Convolutional Neural Network (CNN) represents a specialized category within artificial intelligence that focuses on analyzing input data with inherent spatial structure. Regarded as a pivotal component of AI, CNNs employ interconnected computational elements (neurons) to process perceptual data derived from the surrounding environment. CNNs serve as a subset of deep learning models capable of handling one-dimensional, two-dimensional, and three-dimensional data. Their primary purpose is to discern spatial hierarchies of features autonomously and adaptively, progressing from low- to high-level patterns [29]. A typical CNN comprises three types of layers: convolutional, pooling, and fully connected (classification) layers. The convolutional and pooling layers perform feature extraction, while the fully connected layer maps the extracted features to the final output, such as a class label. The convolutional layers serve as the heart of the CNN, where weights define a convolutional kernel applied to the input in small, incremental receptive fields [30]. YOLO [31] is a real-time object detection algorithm that divides an image into a grid and uses a CNN to predict bounding boxes and class probabilities for each object in the grid. It is a popular algorithm for object detection because it is fast, accurate, and easy to use. Due to its remarkable capabilities, YOLO has found widespread application in autonomous driving systems [32]. Moreover, there have been numerous versions of YOLO, each striving to enhance accuracy and reduce latency, such as YOLOv5 [33] and YOLOv8 [19]. The loss function in YOLO is defined by the following equation [31]:
$$
\begin{aligned}
&\lambda_{\mathrm{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left[ (x_i-\hat{x}_i)^2 + (y_i-\hat{y}_i)^2 \right] \\
{}+{} &\lambda_{\mathrm{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left[ \left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2 \right] \\
{}+{} &\sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left(C_i-\hat{C}_i\right)^2 + \lambda_{\mathrm{noobj}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{noobj}} \left(C_i-\hat{C}_i\right)^2 \\
{}+{} &\sum_{i=0}^{S^2} \mathbb{1}_{i}^{\mathrm{obj}} \sum_{c \in \mathrm{classes}} \left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
\tag{1}
$$

where $\mathbb{1}_{ij}^{\mathrm{obj}}$ equals 1 if an object appears in cell $i$ with box number $j$ and 0 otherwise, $S^2$ is the number of grid cells, $B$ is the number of anchor boxes per cell, and $(x_i, y_i, w_i, h_i)$ denote the box center coordinates $(x_{\mathrm{center}}, y_{\mathrm{center}})$, width, and height, respectively.

Metrics

In machine learning, precision and recall are two important metrics used to evaluate the performance of a classifier. They are commonly used in tasks such as spam filtering, fraud detection, and medical diagnosis. Precision [2] measures the accuracy of positive predictions. It represents the proportion of positive predictions that are actually correct. High precision indicates that the classifier does not make many false positives. Formally, precision is defined as the following:
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{2}$$

where $TP$ and $FP$ are the numbers of true positives and false positives, respectively.
Recall [34], also known as sensitivity, measures the completeness of positive predictions. It represents the proportion of actual positive instances that were correctly identified by the classifier. A high recall indicates that the classifier does not miss many true positives. Formally, recall is defined as
$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{3}$$

where $FN$ is the number of false negatives.
Precision and recall often have an inverse relationship. In other words, increasing one metric often comes at the expense of the other. This is because a classifier that is very strict with respect to its positive predictions may miss some true positives, resulting in a lower recall. Conversely, a classifier that is more lenient may identify more true positives, but it may also increase the number of false positives, leading to a lower precision. To address the trade-off between precision and recall [35], the F1 score [36] is often used. It is a harmonic mean of precision and recall, which gives equal weight to both metrics. A high F1 score indicates that the classifier performs well in both aspects. The equation of F1 is described as follows:
$$F_1\ \mathrm{score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{4}$$
The F1 score is a useful metric for evaluating the overall performance of a classifier. It provides a single measure that captures both precision and recall, allowing for a more balanced assessment of the classifier’s performance. Another metric is the mean average precision mAP50 [37], where 50 denotes an intersection-over-union (IoU) threshold of 0.5. The mAP50 value is calculated by averaging, over all object classes in the dataset, the area under the precision–recall curve (PRC). A PRC is a plot of precision against recall, where each point on the curve corresponds to a different confidence threshold at the given IoU threshold, and the area under the curve (AUC) measures the overall performance of the model. In general, a higher mAP50 indicates that the object detection model detects objects with a higher degree of confidence. This is important for tasks such as object tracking, image segmentation, and autonomous vehicles, where accurate object detection is crucial for reliable and safe operation.
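To make the definitions above concrete, the short helper below computes precision, recall, and the F1 score from raw detection counts; the counts in the usage example are arbitrary illustrative numbers, not results from our experiments.

```python
# Worked example of Equations (2)-(4): precision, recall, and F1 from raw counts.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if (tp + fp) else 0.0      # Equation (2)
    recall = tp / (tp + fn) if (tp + fn) else 0.0         # Equation (3)
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0 # Equation (4)
    return precision, recall, f1

# Illustrative counts only (not taken from our experiments):
p, r, f1 = precision_recall_f1(tp=900, fp=100, fn=300)
print(f"precision={p:.3f}, recall={r:.3f}, F1={f1:.3f}")  # 0.900, 0.750, 0.818
```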

3.4. Data Collection

Acquiring datasets that meet our specific requirements proved challenging due to the need for a diverse range of fog density levels. To address this obstacle, we opted to employ simulation to generate datasets encompassing five distinct fog conditions, categorized by fog density (clear, 25%, 50%, 75%, and 100%). We integrated the simulation to automatically label the bounding boxes (refer to Figure 2) for six object classes (car, bus, truck, van, walker, and traffic light) using eight maps within the CARLA simulator. Our data collection process entailed the following steps:
  • Establish an environment with fog conditions adhering to our specifications and designate the map name.
  • Remove all parked vehicles, since the simulator cannot automate the bounding box labeling process for them.
  • Mount the sensor on a random vehicle, designate it as the ego vehicle and enable autonomous driving.
  • Gather data from the sensors by capturing several RGB sensor images and preserving them accompanied by bounding box annotations.
Through our data collection process, we have amassed a comprehensive dataset of 40,000 images for each of the five fog density categories, with each map contributing 5000 images. This substantial dataset provides a rich resource for training and evaluating fog-aware autonomous driving algorithms. We categorized our dataset into four distinct distance ranges, labeling all objects up to 50 m, up to 100 m, up to 150 m, and up to 200 m, and saved the labels as text files. Each image within these ranges was thoroughly labeled with the corresponding objects present within the scene. To ensure consistency and clarity, we used a consistent image resolution (1280 × 720 pixels) for all images. Additionally, we extended our datasets by incorporating data from other sensors, such as LiDAR, radar, semantic segmentation images, and depth camera images. The weather parameters for all generated data are listed in Table 1 [38]; the only difference between categories is the fog density. This comprehensive dataset provides a valuable resource for researchers and developers working on autonomous driving algorithms, since it covers various fog conditions and sensor modalities, and it is available in our GitHub project.
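The following condensed sketch illustrates the sensor setup behind these steps using the CARLA Python API: it spawns a vehicle, designates it as the ego vehicle on autopilot, attaches a 1280 × 720 RGB camera with a 90-degree field of view (see Appendix B), and saves each frame to disk. It is a simplified stand-in for the full pipeline in our GitHub project; the automated bounding box labeling and the multi-sensor logging are omitted, and the map name and mounting transform are illustrative.

```python
# Condensed sketch of the collection loop: ego vehicle on autopilot with an RGB camera.
# The bounding-box labeling and multi-sensor logging of the full pipeline are omitted.
import random
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.load_world("Town01")                 # one of the eight maps (illustrative choice)

blueprint_library = world.get_blueprint_library()

# Spawn a random vehicle, designate it as the ego vehicle, and enable autopilot.
vehicle_bp = random.choice(blueprint_library.filter("vehicle.*"))
spawn_point = random.choice(world.get_map().get_spawn_points())
ego = world.spawn_actor(vehicle_bp, spawn_point)
ego.set_autopilot(True)

# Mount an RGB camera matching the attributes in Table A2 (1280 x 720, fov 90).
camera_bp = blueprint_library.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")
camera_bp.set_attribute("fov", "90")
camera = world.spawn_actor(
    camera_bp,
    carla.Transform(carla.Location(x=1.5, z=2.4)),  # illustrative mounting position
    attach_to=ego,
)

# Save every received frame; bounding boxes would be queried from the simulator here.
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))
```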

4. Results and Discussion

The datasets we generated were divided into five categories, each corresponding to a specific weather condition with a different fog density. For each fog density category, we labeled objects in four ranges: objects within 50 m, within 100 m, within 150 m, and within 200 m. We then trained different models using YOLOv5s and YOLOv8m for the various distance ranges using specific hyperparameters (refer to Table 2). For the latency of the YOLOv8 model, see Table 3 [42].
Our training results suggest that training our models on datasets with the corresponding fog densities preserves their performance and even enhances their accuracy in heavy fog conditions. The corresponding training results, based on YOLOv5s, are shown in Table 4. We use the YOLO loss function (refer to Equation (1)).
We have separated this dataset of objects, labeled within 50 m, into 80% for training and 20% for validation with an image size of 640 × 640 pixels.
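As a concrete illustration of this training setup, the snippet below trains one fog-specific detector with the Ultralytics API using the hyperparameters of Table 2 and Appendix C; the dataset YAML file name is a placeholder for the 80/20 split of the corresponding fog-density dataset and its six class names.

```python
# Sketch: train one fog-specific detector with the Table 2 / Appendix C hyperparameters.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")            # YOLOv5s models were trained analogously
model.train(
    data="fog_density_100.yaml",      # placeholder YAML: train/val split and the six classes
    epochs=50,
    batch=16,                         # 8 when training on 1280 x 1280 images
    imgsz=640,                        # 1280 for the high-resolution experiments (Table 7)
    optimizer="SGD",                  # optimizer used in this study [43]
    lr0=0.01,
    lrf=0.01,
    momentum=0.937,
    weight_decay=0.0005,
    warmup_momentum=0.8,
    warmup_bias_lr=0.1,
)
```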
These results represent the performance of our models across six object classes. It is important to note that the accuracy is not uniform across all classes with some classes performing better than others. This is due to a number of factors, including the shape, size, and texture of the objects, as well as the presence of other objects in the scene (refer to Table 5).
This procedure effectively preserved the precision (refer to Equation (2)) of object detection in heavy fog conditions, while the recall (refer to Equation (3)) was inversely proportional to the fog density. This trend was consistent even when the training data were expanded to include objects within longer distances, such as 100 m or more. We trained the YOLOv8m model using the same hyperparameters as the YOLOv5s model for all distances of object detection (50 m, 100 m, 150 m, 200 m) (see Table 6). This allowed us to directly compare the performance of the two models under the same conditions. We can conclude that precision remains largely unaffected when data are used beyond 50 m, but recall exhibits a decreasing trend. This can be attributed to the consistent detection of close objects, but the model’s ability to identify objects at greater distances was diminished, impacting recall. We can deduce that object detection is highly accurate for close objects, but it becomes less accurate for objects with increasing distances. This is due to the fact that the fog obscures the objects, making it harder for the model to distinguish between the objects and the background.
Table 6 shows that the model can detect objects with high precision (see Figure 3).
At greater distances, the model may miss some objects, but this is acceptable given the increased difficulty of detecting objects in fog. In the case of heavy fog, driving behavior and speed are significantly affected: aside from the speed limit imposed in heavy fog conditions, drivers adapt their driving style accordingly. The priority in heavy fog is to perceive close objects first and to extend perception gradually with distance, because visibility is considerably reduced, making it challenging to identify objects farther away. Our object detection model can accurately identify objects in foggy conditions even when visibility is reduced. We achieved this by training the model on a large dataset of images taken at various fog densities, and the model detects objects with high precision under heavy fog conditions.
In the preceding experiments, we trained our object detection models on images with a resolution of 640 × 640 pixels. However, we noticed that using a higher resolution (1280 × 1280 pixels) resulted in improved recall. The results of this experiment are summarized in Table 7. These results are essential for our work, in which we implement a dedicated model for each fog category.
Moreover, as seen in Table 7, larger objects (e.g., buses) exhibit higher accuracy than smaller objects (e.g., walkers), particularly in terms of recall. Large objects also suffer less accuracy degradation with increasing distance than smaller objects, so the recall degradation for small objects at long distances is more pronounced. Using higher resolutions, such as 1280 × 1280 pixels, can mitigate this issue, although there is a trade-off between resolution and latency. To address this, we can employ an appropriate model for each fog condition. Additionally, accuracy is more crucial than latency in heavy fog conditions because vehicle speeds are slower than in clear weather. On the other hand, we found that traffic lights are detected with high accuracy despite being small objects (see Table 5 and Table 7 and Figure 3). This is likely due to the distinct features surrounding traffic lights, such as the traffic light poles, their positioning on the roadside, and the colored states of the traffic signals. Generally, the performance of our object detection model is highly accurate for fog density levels that match the fog density levels used to train the model. However, when the model is validated at fog density levels that differ from those used for training, the accuracy decreases (refer to Table 8). As evident from Table 8, using a model trained for the same fog density significantly enhances precision and recall; notably, the highest values appear on the diagonal of the table, corresponding to the validation of models trained on the corresponding fog density categories.
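The cross-evaluation behind Table 8 can be reproduced with a simple double loop over fog categories, validating each fog-specific model against every fog-density validation set. The sketch below uses the Ultralytics validation API and assumes its mean precision/recall fields; the weight and dataset YAML names are placeholders.

```python
# Sketch of the cross-evaluation behind Table 8: every fog-specific model is validated
# against every fog-density validation set (weight/YAML names are placeholders).
from ultralytics import YOLO

FOG_LEVELS = [0, 25, 50, 75, 100]

results = {}
for trained_on in FOG_LEVELS:
    model = YOLO(f"yolov8m_fog{trained_on}.pt")
    for validated_on in FOG_LEVELS:
        metrics = model.val(data=f"fog_density_{validated_on}.yaml", imgsz=640)
        results[(trained_on, validated_on)] = (metrics.box.mp, metrics.box.mr)

for (t, v), (precision, recall) in sorted(results.items()):
    print(f"trained {t:>3}% | validated {v:>3}% | P {precision:.3f} | R {recall:.3f}")
```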
In general, it should also be noted that for autonomous vehicles, it is of crucial importance to correctly detect the state (red, yellow, green) of a traffic light. This will be the subject of further study.

5. Conclusions

Our primary objective in this study was to enhance the perception of traffic participants and traffic lights under dense fog conditions by developing models that are tailored to specific fog density levels. This approach allows our system to prioritize the relevant features of objects in fog, leading to improved detection accuracy. Furthermore, this approach enhances the flexibility of autonomous driving (AD) in severe weather conditions by enabling the use of specialized algorithms tailored to specific fog density categories. Additionally, it enables the detection of objects that are not visible to the human eye using only RGB images. This capability becomes even more efficient when combined with other sensors such as LiDAR and radar. As we observed, the core of the algorithm focuses on creating a separate model for each fog category (clear, low fog, moderate fog, etc.), which improves recall and precision compared to a model trained for general weather conditions (see Table 8).
For future research, we intend to extend our methodology to real-world data, aiming to improve object detection under actual environmental conditions. A primary challenge in utilizing real data will involve creating specialized datasets that categorize each level of fog density in addition to performing object detection.
This study demonstrates that classifying fog density enhances perceptual accuracy by increasing recall and precision. As illustrated in Table 7, classifying fog and training each model on its fog density yields improved precision. These findings underscore the importance of fog classification, particularly given the absence of existing datasets that categorize fog levels and provide labeled bounding boxes, which remains a notable challenge.

Author Contributions

Conceptualization, M.M.C.; methodology, M.M.C.; software, M.M.C.; validation, M.M.C., G.W. and J.R.; formal analysis, M.M.C.; investigation, M.M.C.; resources, M.M.C. and G.W.; data curation, M.M.C.; writing—original draft preparation, M.M.C.; writing—review and editing, M.M.C., G.W. and J.R.; visualization, M.M.C.; supervision, G.W.; project administration, G.W. and J.R.; funding acquisition, M.M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All the data generated during our project are available on GitHub (https://github.com/Mofeed-Chaar/Improving-bouning-box-in-Carla-simulator, accessed on 2 October 2024). Additionally, we are happy to share the full dataset upon request via email.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The fog density refers to how much visibility is reduced by fog and other atmospheric particles. For instance, 100% obscuration implies very dense fog where visibility is minimal, while lower percentages represent lighter fog. The following table explains the relation between the fog density and visibility [44].
Table A1. Relation between fog density and visibility.

| Visibility Distance (min) | Visibility Distance (max) | Fog Category | Fog Density |
|---|---|---|---|
| 1000 m | ∞ | No Fog | 0% |
| 300 m | 1000 m | Low Fog | 25% |
| 100 m | 300 m | Moderate Fog | 50% |
| 50 m | 100 m | Dense Fog | 75% |
| 0 m | 50 m | Very Dense Fog | 100% |
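For reference, a small helper that maps a measured visibility distance to the fog-density class of Table A1 might look as follows; the function name and the use of closed lower bounds are our own choices for illustration.

```python
# Helper mirroring Table A1: map a visibility distance (meters) to a fog-density class.
def fog_category(visibility_m: float) -> int:
    """Return the fog-density class (percent) for a given visibility distance."""
    if visibility_m >= 1000:
        return 0     # no fog
    if visibility_m >= 300:
        return 25    # low fog
    if visibility_m >= 100:
        return 50    # moderate fog
    if visibility_m >= 50:
        return 75    # dense fog
    return 100       # very dense fog

assert fog_category(750) == 25   # 750 m visibility falls in the low-fog band
```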

Appendix B

Camera parameters [45]:
Table A2. Camera attributes.

| Blueprint Attribute | Value | Description |
|---|---|---|
| bloom intensity | 0.675 | Intensity for the bloom |
| fov | 90.0 | Horizontal field of view |
| fstop | 1.4 | Opening of the camera lens; aperture is 1/fstop |
| image width | 1280 | In pixels |
| image height | 720 | In pixels |
| lens flare intensity | 0.1 | Intensity for the lens flare post-process effect |

Appendix C

Table A3. The hyperparameters used in YOLOv5 and YOLOv8.

| Parameter Name | Value |
|---|---|
| epochs | 50 |
| batch | 16 |
| IOU | 0.7 |
| lr0 | 0.01 |
| lrf | 0.01 |
| momentum | 0.937 |
| weight decay | 0.0005 |
| warmup momentum | 0.8 |
| warmup bias lr | 0.1 |

References

  1. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  2. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The kitti vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar] [CrossRef]
  3. Oeljeklaus, M. An integrated approach for traffic scene understanding from monocular cameras. In Eldorado-Repositorium; Technical University of Dortmund: Dortmund, Germany, 2020. [Google Scholar] [CrossRef]
  4. Sengupta, A.; Cheng, L.; Cao, S. Robust multiobject tracking using mmwave radar-camera sensor fusion. IEEE Sens. Lett. 2022, 6, 1–4. [Google Scholar] [CrossRef]
  5. Lefsky, M.A.; Cohen, W.B.; Parker, G.G.; Harding, D.J. Lidar remote sensing for ecosystem studies: Lidar, an emerging remote sensing technology that directly measures the three-dimensional distribution of plant canopies, can accurately estimate vegetation structural attributes and should be of particular interest to forest, landscape, and global ecologists. BioScience 2002, 52, 19–30. [Google Scholar] [CrossRef]
  6. Raiyn, J. Detection of road traffic anomalies based on computational data science. Discov. Internet Things 2022, 2, 6. [Google Scholar] [CrossRef]
  7. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; Levine, S., Vanhoucke, V., Goldberg, K., Eds.; ML Research Press: Zurich, Switzerland, 2017. Available online: http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf (accessed on 2 October 2024).
  8. Niranjan, D.; VinayKarthik, B.; Mohana. Deep learning based object detection model for autonomous driving research using carla simulator. In Proceedings of the 2021 2nd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 7–9 October 2021; pp. 1251–1258. [Google Scholar] [CrossRef]
  9. Chaar, M.M.; Weidl, G.; Raiyn, J. Analyse the Effect of Fog on the Perception; EU Science Hub: Brussels, Belgium, 2023; p. 332. [Google Scholar] [CrossRef]
  10. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
  11. Diaz-Ruiz, C.A.; Xia, Y.; You, Y.; Nino, J.; Chen, J.; Monica, J.; Chen, X.; Luo, K.; Wang, Y.; Emond, M.; et al. Ithaca365: Dataset and driving perception under repeated and challenging weather conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 21383–21392. [Google Scholar] [CrossRef]
  12. Chaar, M.; Raiyn, J.; Weidl, G. Improve Bounding Box in Carla Simulator. In Proceedings of the 10th International Conference on Vehicle Technology and Intelligent Transport Systems-VEHITS, Angers, France, 2–4 May 2024; INSTICC, SciTePress: Setúbal, Portugal, 2024; pp. 267–275. [Google Scholar] [CrossRef]
  13. Valanarasu, J.M.J.; Yasarla, R.; Patel, V.M. Transweather: Transformer-based restoration of images degraded by adverse weather conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2353–2363. [Google Scholar] [CrossRef]
  14. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  15. Truong, T.N.; Nguyen, C.T.; Zanibbi, R.; Mouchère, H.; Nakagawa, M. A survey on handwritten mathematical expression recognition: The rise of encoder-decoder and GNN models. Pattern Recognit. 2024, 153, 110531. [Google Scholar] [CrossRef]
  16. Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11682–11692. [Google Scholar] [CrossRef]
  17. Li, J.; Xu, R.; Ma, J.; Zou, Q.; Ma, J.; Yu, H. Domain adaptive object detection for autonomous driving under foggy weather. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 612–622. [Google Scholar] [CrossRef]
  18. Ogunrinde, I.; Bernadin, S. A review of the impacts of defogging on deep learning-based object detectors in self-driving cars. In Proceedings of the SoutheastCon 2021, Atlanta, GA, USA, 10–13 March 2021; pp. 1–8. [Google Scholar] [CrossRef]
  19. Reis, D.; Kupec, J.; Hong, J.; Daoudi, A. Real-Time Flying Object Detection with YOLOv8. arXiv 2023, arXiv:2305.09972. [Google Scholar]
  20. Farhadi, A.; Redmon, J. Yolov3: An incremental improvement. In Proceedings of the Computer Vision and Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2018; Volume 1804, pp. 1–6. [Google Scholar]
  21. Mallett, A.; Mallett, A. Writing YAML and Basic Playbooks. Red Hat Certified Engineer (RHCE) Study Guide: Ansible Automation for the Red Hat Enterprise Linux 8 Exam (EX294); Springer: Berlin/Heidelberg, Germany, 2021; pp. 63–77. [Google Scholar] [CrossRef]
  22. Waymo. Waymo Dataset. 2023. Available online: https://waymo.com/open/ (accessed on 6 December 2023).
  23. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. [Google Scholar] [CrossRef]
  24. Cordts, M.; Omran, M.; Ramos, S.; Scharwächter, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The cityscapes dataset. In Proceedings of the CVPR Workshop on the Future of Datasets in Vision, Boston, MA, USA, 7–12 June 2015; Volume 2. [Google Scholar]
  25. Heinzler, R.; Schindler, P.; Seekircher, J.; Ritter, W.; Stork, W. Weather influence and classification with automotive lidar sensors. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1527–1534. [Google Scholar] [CrossRef]
  26. TNO. Integrated Vehicle Safety—Autonomous Emergency Braking (AEB). 2015. Available online: https://www.youtube.com/watch?v=yNRgrOl329I&ab_channel=TNO (accessed on 6 December 2023).
  27. Sakaridis, C.; Dai, D.; Van Gool, L. Semantic foggy scene understanding with synthetic data. Int. J. Comput. Vis. 2018, 126, 973–992. [Google Scholar] [CrossRef]
  28. Muller, R. Drivetruth: Automated autonomous driving dataset generation for security applications. In Proceedings of the Workshop on Automotive and Autonomous Vehicle Security (AutoSec), San Diego, CA, USA, 24 April 2022. [Google Scholar] [CrossRef]
  29. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed]
  30. Raiyn, J.; Weidl, G. Naturalistic Driving Studies Data Analysis Based on a Convolutional Neural Network. In Proceedings of the VEHITS 2023: 9th International Conference on Vehicle Technology and Intelligent Transport Systems, Prague, Czech Republic, 26–28 April 2023. [Google Scholar] [CrossRef]
  31. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  32. Sarda, A.; Dixit, S.; Bhan, A. Object detection for autonomous driving using yolo [you only look once] algorithm. In Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 4–6 February 2021; pp. 1370–1374. [Google Scholar] [CrossRef]
  33. Zhang, Y.; Guo, Z.; Wu, J.; Tian, Y.; Tang, H.; Guo, X. Real-time vehicle detection based on improved yolo v5. Sustainability 2022, 14, 12274. [Google Scholar] [CrossRef]
  34. Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar]
  35. Juba, B.; Le, H.S. Precision-recall versus accuracy and the role of large data sets. Proc. Aaai Conf. Artif. Intell. 2019, 33, 4039–4048. [Google Scholar] [CrossRef]
  36. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6. [Google Scholar] [CrossRef]
  37. Pereira, N. PereiraASLNet: ASL letter recognition with YOLOX taking Mean Average Precision and Inference Time considerations. In Proceedings of the 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), Vijayawada, India, 12–14 February 2022; pp. 1–6. [Google Scholar] [CrossRef]
  38. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. 2017. Available online: https://carla.readthedocs.io/en/latest/python_api/#carlaweatherparameters (accessed on 24 December 2023).
  39. Fu, Q.; Luo, K.; Song, Y.; Zhang, M.; Zhang, S.; Zhan, J.; Duan, J.; Li, Y. Study of sea fog environment polarization transmission characteristics. Appl. Sci. 2022, 12, 8892. [Google Scholar] [CrossRef]
  40. Ivanov, H.; Leitgeb, E. Artificial Generation of Mie Scattering Conditions for FSO Fog Chambers. In Proceedings of the 2022 13th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Porto, Portugal, 20–22 July 2022; pp. 54–58. [Google Scholar] [CrossRef]
  41. Haider, A.; Pigniczki, M.; Koyama, S.; Köhler, M.H.; Haas, L.; Fink, M.; Schardt, M.; Nagase, K.; Zeh, T.; Eryildirim, A.; et al. A Methodology to Model the Rain and Fog Effect on the Performance of Automotive LiDAR Sensors. Sensors 2023, 23, 6891. [Google Scholar] [CrossRef] [PubMed]
  42. Ultralytics Inc. YoloV8. 2023. Available online: https://docs.ultralytics.com/models/yolov8/ (accessed on 4 November 2024).
  43. Liu, Y.; Gao, Y.; Yin, W. An improved analysis of stochastic gradient descent with momentum. Adv. Neural Inf. Process. Syst. 2020, 33, 18261–18271. [Google Scholar] [CrossRef]
  44. Negru, M.; Nedevschi, S. Assisting navigation in homogenous fog. In Proceedings of the 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5–8 January 2014; Volume 2, pp. 619–626. [Google Scholar]
  45. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. 2017. Available online: https://carla.readthedocs.io/en/latest/ref_sensors/#rgb-camera (accessed on 4 November 2024).
Figure 1. For a start, determine the fog density of the input image, then use a model specifically trained for that fog density level. In this example, the fog density is 75%.
Figure 2. An image captured by an RGB camera sensor in the CARLA simulator (horizontal field of view 90.0 degrees, dimensions 1280 × 720; see Appendix B), with bounding boxes overlaid to identify objects within the scene. The bounding boxes are generated automatically by the simulator and provide a visual representation of the objects’ positions and dimensions. The fog density in this image is 100%.
Figure 3. We tested our object detection model in heavy fog conditions (fog density 100%) using a model trained with labels for all objects up to 200 m. The model trained on data under 100% fog density outperformed the model trained on clear data in detecting objects at long distances: as shown in (a), on the left, the fog-trained model successfully detected distant objects, while the clear-data model struggled to do so, as evident in (b) on the right. This difference in performance primarily increases the recall of the model. Red boxes denote cars, orange boxes denote vans, and green boxes denote traffic lights.
Table 1. The weather parameters used when generating our data.

| Weather Parameter | Value | Range | Note |
|---|---|---|---|
| Sun Azimuth | 300.001168 | | |
| Sun Altitude | 45.00961 | | |
| Cloudiness | [20–40] ¹ | [0, 100] | |
| Precipitation | 0.00 | [0, 100] | |
| Precipitation Deposits | 0.00 | [0, 100] | |
| Wind Intensity | 10.00 | [0, 100] | |
| Fog Density | [0, 25, 50, 75, 100] ² | [0, 100] | |
| Fog Distance | 0.75 | [0, ∞) | Visibility |
| Fog Falloff | 0.10 | [0, ∞) | Describes how dense and heavy the fog is |
| Wetness | 0.00 | [0, 100] | |
| Scattering Intensity [39] | 1.00 | [0, ∞) | Light contribution to volumetric fog |
| Mie Scattering [40] | 0.03 | [0, ∞) | Interaction of light with large particles |
| Rayleigh Scattering [41] | 0.0331 | [0, ∞) | Interaction of light with small particles |
| Dust Storm | 0.00 | [0, 100] | |
1 To increase the diversity of our dataset, we randomly generated some samples with a cloudiness of 20 and others with a cloudiness of 40. 2 The fog density was a primary focus of our study, and we generated data with varying levels of fog density: 0%, 25%, 50%, 75%, and 100%. Note: All parameters are editable in our GitHub project in file weather.yaml.
Table 2. The hyperparameters used to optimize our models.

| Hyperparameter | Epochs | Batch ¹ | IOU | Learning Rate ² | Momentum ³ | Weight Decay ⁴ |
|---|---|---|---|---|---|---|
| Value | 50 | [8, 16] | 0.7 | 0.01 | 0.937 | 0.0005 |
1 8 for training of images (1280 × 1280) pixels, and 16 for (640 × 640). 2 This is the step size that the model takes toward the negative gradient of the loss function. 3 This is a parameter that helps to stabilize the training process by smoothing out the updates to the model’s weights. 4 This is a parameter that helps to prevent the model from overfitting by penalizing large weight updates. Note: For more hyperparameters information, see Appendix C.
Table 3. The latency of our YOLOv8m model when applied to our dataset.

| Image Size | Preprocess | Inference | Postprocess | Total |
|---|---|---|---|---|
| 1280 | 0.3 ms | 6.6 ms | 0.3 ms | 7.2 ms |
| 640 | 0.1 ms | 2.9 ms | 0.3 ms | 3.3 ms |
In our validation process, we utilized an NVIDIA RTX 4090 GPU, an Intel Core i9-14900K processor, and 64 GB of RAM operating at a frequency of 4000 MHz.
Table 4. Accuracy of object detection for six classes at each fog density using YOLOv5s.

| Fog Density | Precision | Recall | mAP50 | mAP50-95 |
|---|---|---|---|---|
| 0% | 0.968 | 0.926 | 0.965 | 0.841 |
| 25% | 0.976 | 0.925 | 0.965 | 0.841 |
| 50% | 0.969 | 0.925 | 0.968 | 0.858 |
| 75% | 0.958 | 0.861 | 0.924 | 0.777 |
| 100% | 0.952 | 0.834 | 0.89 | 0.739 |

Note: (1) The optimization algorithm employed in this study is stochastic gradient descent (SGD) [43]. (2) The object distances for this training were restricted to objects within 50 m.
Table 5. The accuracy of object detection for each class at each fog density. YOLOv5m was used as the object detection model, and the image size was 640 × 640 pixels. The model was trained on a dataset of images that included labels for all objects up to 50 m in distance. As demonstrated, we achieved high precision and recall for object detection in dense fog conditions by using a model specifically trained for high fog density.

| Fog Density | Class | Precision | Recall | mAP50 | mAP50-95 |
|---|---|---|---|---|---|
| Clear weather | car | 0.898 | 0.86 | 0.929 | 0.756 |
| | bus | 0.909 | 0.978 | 0.973 | 0.863 |
| | truck | 0.935 | 0.534 | 0.712 | 0.531 |
| | van | 0.892 | 0.864 | 0.905 | 0.79 |
| | walker | 0.879 | 0.624 | 0.718 | 0.459 |
| | traffic light | 0.958 | 0.92 | 0.951 | 0.809 |
| Fog density 25% | car | 0.935 | 0.848 | 0.929 | 0.75 |
| | bus | 0.981 | 0.891 | 0.903 | 0.799 |
| | truck | 0.921 | 0.821 | 0.909 | 0.785 |
| | van | 0.932 | 0.878 | 0.949 | 0.831 |
| | walker | 0.941 | 0.639 | 0.778 | 0.485 |
| | traffic light | 0.976 | 0.92 | 0.965 | 0.818 |
| Fog density 50% | car | 0.927 | 0.913 | 0.96 | 0.811 |
| | bus | 0.969 | 0.951 | 0.975 | 0.843 |
| | truck | 0.934 | 0.763 | 0.809 | 0.722 |
| | van | 0.934 | 0.893 | 0.945 | 0.807 |
| | walker | 0.933 | 0.791 | 0.867 | 0.591 |
| | traffic light | 0.968 | 0.922 | 0.957 | 0.827 |
| Fog density 75% | car | 0.94 | 0.865 | 0.936 | 0.776 |
| | bus | 0.968 | 0.867 | 0.94 | 0.831 |
| | truck | 0.96 | 0.845 | 0.899 | 0.813 |
| | van | 0.96 | 0.869 | 0.937 | 0.817 |
| | walker | 0.945 | 0.791 | 0.873 | 0.597 |
| | traffic light | 0.983 | 0.931 | 0.966 | 0.852 |
| Fog density 100% | car | 0.925 | 0.86 | 0.919 | 0.759 |
| | bus | 0.933 | 0.934 | 0.976 | 0.859 |
| | truck | 0.964 | 0.676 | 0.727 | 0.628 |
| | van | 0.968 | 0.876 | 0.94 | 0.835 |
| | walker | 0.942 | 0.73 | 0.82 | 0.558 |
| | traffic light | 0.986 | 0.923 | 0.96 | 0.81 |
Table 6. The accuracy of object detection for each labeling-distance range at each fog density. The YOLOv8m [19] model was used, and the image size was 640 × 640 pixels.

| Distance | Fog Density | Precision | Recall | F1-Score | mAP50 | mAP50-95 |
|---|---|---|---|---|---|---|
| All objects within 50 m | 0% | 0.947 | 0.805 | 0.87 | 0.882 | 0.763 |
| | 25% | 0.927 | 0.858 | 0.891 | 0.922 | 0.822 |
| | 50% | 0.953 | 0.883 | 0.916 | 0.938 | 0.839 |
| | 75% | 0.959 | 0.868 | 0.911 | 0.93 | 0.837 |
| | 100% | 0.941 | 0.719 | 0.815 | 0.824 | 0.723 |
| All objects within 100 m | 0% | 0.915 | 0.664 | 0.769 | 0.75 | 0.614 |
| | 25% | 0.924 | 0.692 | 0.791 | 0.787 | 0.645 |
| | 50% | 0.914 | 0.716 | 0.803 | 0.803 | 0.66 |
| | 75% | 0.945 | 0.668 | 0.782 | 0.758 | 0.633 |
| | 100% | 0.928 | 0.637 | 0.755 | 0.735 | 0.59 |
| All objects within 150 m | 0% | 0.888 | 0.581 | 0.702 | 0.664 | 0.526 |
| | 25% | 0.921 | 0.606 | 0.731 | 0.699 | 0.563 |
| | 50% | 0.887 | 0.579 | 0.7 | 0.668 | 0.538 |
| | 75% | 0.903 | 0.549 | 0.682 | 0.631 | 0.519 |
| | 100% | 0.909 | 0.453 | 0.604 | 0.525 | 0.425 |
| All objects within 200 m | 0% | 0.89 | 0.525 | 0.66 | 0.611 | 0.476 |
| | 25% | 0.893 | 0.552 | 0.682 | 0.461 | 0.5 |
| | 50% | 0.901 | 0.531 | 0.668 | 0.618 | 0.486 |
| | 75% | 0.917 | 0.483 | 0.632 | 0.56 | 0.452 |
| | 100% | 0.912 | 0.44 | 0.593 | 0.516 | 0.413 |
Table 7. The accuracy of object detection for each class at each fog density. YOLOv8m was used as the object detection model, and the image size was 1280 × 1280 pixels. The model was trained on a dataset of images that included labels for all objects up to 200 m in distance.

| Fog Density | Class | Precision | Recall | mAP50 | mAP50-95 |
|---|---|---|---|---|---|
| Clear weather | car | 0.91 | 0.765 | 0.84 | 0.659 |
| | bus | 0.881 | 0.949 | 0.958 | 0.869 |
| | truck | 0.891 | 0.564 | 0.669 | 0.544 |
| | van | 0.871 | 0.74 | 0.817 | 0.674 |
| | walker | 0.927 | 0.5 | 0.6 | 0.41 |
| | traffic light | 0.96 | 0.538 | 0.667 | 0.553 |
| Fog density 25% | car | 0.935 | 0.848 | 0.929 | 0.75 |
| | bus | 0.981 | 0.891 | 0.903 | 0.799 |
| | truck | 0.921 | 0.821 | 0.909 | 0.785 |
| | van | 0.932 | 0.878 | 0.949 | 0.831 |
| | walker | 0.941 | 0.639 | 0.778 | 0.485 |
| | traffic light | 0.976 | 0.92 | 0.965 | 0.818 |
| Fog density 50% | car | 0.944 | 0.687 | 0.79 | 0.637 |
| | bus | 0.853 | 0.871 | 0.9 | 0.779 |
| | truck | 0.843 | 0.612 | 0.669 | 0.555 |
| | van | 0.918 | 0.656 | 0.751 | 0.617 |
| | walker | 0.964 | 0.511 | 0.61 | 0.426 |
| | traffic light | 0.982 | 0.549 | 0.649 | 0.528 |
| Fog density 75% | car | 0.915 | 0.597 | 0.697 | 0.558 |
| | bus | 0.95 | 0.895 | 0.928 | 0.812 |
| | truck | 0.893 | 0.487 | 0.566 | 0.474 |
| | van | 0.917 | 0.577 | 0.672 | 0.558 |
| | walker | 0.94 | 0.419 | 0.506 | 0.354 |
| | traffic light | 0.974 | 0.528 | 0.616 | 0.494 |
| Fog density 100% | car | 0.905 | 0.589 | 0.677 | 0.55 |
| | bus | 0.926 | 0.629 | 0.693 | 0.57 |
| | truck | 0.907 | 0.424 | 0.497 | 0.425 |
| | van | 0.873 | 0.608 | 0.653 | 0.578 |
| | walker | 0.892 | 0.432 | 0.5 | 0.352 |
| | traffic light | 0.973 | 0.516 | 0.585 | 0.461 |
Table 8. We evaluated each model across all fog categories. The table shows that precision and recall are significantly higher when using a model trained on the same fog density as the validation data. The labels in this validation are for objects within 100 m using YOLOv8m.

| Metric | Trained Model | Validation 0% | Validation 25% | Validation 50% | Validation 75% | Validation 100% |
|---|---|---|---|---|---|---|
| Precision | 0% | 0.915 | 0.895 | 0.86 | 0.807 | 0.821 |
| | 25% | 0.904 | 0.924 | 0.897 | 0.897 | 0.853 |
| | 50% | 0.873 | 0.907 | 0.913 | 0.926 | 0.933 |
| | 75% | 0.853 | 0.874 | 0.901 | 0.945 | 0.94 |
| | 100% | 0.868 | 0.859 | 0.897 | 0.924 | 0.931 |
| Recall | 0% | 0.665 | 0.58 | 0.464 | 0.36 | 0.271 |
| | 25% | 0.631 | 0.692 | 0.652 | 0.553 | 0.468 |
| | 50% | 0.537 | 0.625 | 0.716 | 0.639 | 0.596 |
| | 75% | 0.462 | 0.589 | 0.678 | 0.668 | 0.61 |
| | 100% | 0.434 | 0.558 | 0.665 | 0.663 | 0.636 |
