Article

Blind-Spot Collision Detection System for Commercial Vehicles Using Multi Deep CNN Architecture

by Muhammad Muzammel 1,2, Mohd Zuki Yusoff 1,*, Mohamad Naufal Mohamad Saad 1, Faryal Sheikh 3 and Muhammad Ahsan Awais 1

1 Centre for Intelligent Signal & Imaging Research (CISIR), Electrical and Electronic Engineering Department, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
2 Laboratoire Images, Signaux et Systèmes Intelligents (LISSI), Université Paris-Est Créteil (UPEC), 94400 Vitry-sur-Seine, France
3 Department of Management Sciences, COMSATS University Islamabad, Lahore Campus, Lahore 54000, Pakistan
* Author to whom correspondence should be addressed.
Submission received: 29 June 2022 / Revised: 7 August 2022 / Accepted: 8 August 2022 / Published: 15 August 2022
(This article belongs to the Section Vehicular Sensing)

Abstract

Buses and heavy vehicles have more blind spots than cars and other road vehicles due to their large size. Therefore, accidents caused by these heavy vehicles are more fatal and result in severe injuries to other road users. Such potential blind-spot collisions can be identified early using vision-based object detection approaches. Yet, the existing state-of-the-art vision-based object detection models rely heavily on a single feature descriptor for making decisions. In this research, the design of two convolutional neural networks (CNNs) based on high-level feature descriptors and their integration with faster R-CNN is proposed to detect blind-spot collisions for heavy vehicles. Moreover, a fusion approach is proposed that integrates two pre-trained networks (i.e., ResNet 50 and ResNet 101) to extract high-level features for blind-spot vehicle detection. The fusion of features significantly improves the performance of faster R-CNN and outperforms the existing state-of-the-art methods. Both approaches are validated on a self-recorded blind-spot vehicle detection dataset for buses and on the online LISA vehicle detection dataset. The two proposed approaches achieve false detection rates (FDRs) of 3.05% and 3.49%, respectively, on the self-recorded dataset, making them suitable for real-time applications.

1. Introduction

Although bus accidents are relatively rare around the globe, approximately 60,000 buses are still involved in traffic accidents in the United States every year. These accidents lead to 14,000 non-fatal injuries and 300 fatal injuries [1]. Similarly, every year in Europe approximately 20,000 buses are involved in accidents that cause approximately 30,000 (fatal and non-fatal) injuries [2]. These accidents occur mostly due to thrill-seeking driving, speeding, fatigue, stress, and aggressive driver behavior [3,4]. Accidents involving buses and other road users, such as pedestrians, bicyclists, motorcyclists, or car drivers and passengers, usually cause more severe injuries to these road users [5,6,7,8].
The collision detection systems of cars mostly focus on front- and rear-end collision scenarios [9,10,11,12]. In addition, different drowsiness detection techniques have been proposed to detect car drivers’ sleep deprivation and prevent possible collisions [13,14]. At the same time, buses operate in a complicated environment with a significant number of unintended obstacles, such as pulling out from bus stops, passengers unloading, pedestrians crossing in front of buses, and bus stop structures [15,16,17]. Additionally, buses have higher chances of side collisions due to constrained spaces and limited maneuverability [15]. Researchers have found that the task demand on bus drivers is especially high at turns [16,17].
Further, heavy vehicles and buses, which have more blind spots than cars and other road users in these environments, are at higher risk of collisions [18,19,20]. Many countries have initiated improvements to heavy vehicle and bus safety through the installation of additional mirrors. Yet, there are still some blind-spot areas where drivers cannot see other road users [21,22]. In addition, buses may have many passengers on board, and a significant number of on-board passenger incidents have been reported due to sudden braking or stopping [23]. These challenges may entail different collision detection requirements for public/transit buses than for cars. A blind-spot collision detection system can be designed for buses to predict impending collisions in their proximity and to reduce operational interruptions. It could provide adequate time for the driver to brake smoothly or take other precautionary measures to avoid imminent collision threats, as well as avoid injuries and trauma inside the bus.
Over the past few years, many types of collision detection techniques have been proposed [9,10,24,25,26]. Among these, vision-based collision detection techniques provide reliable detection of vehicles across a large area [9,10,26], because cameras provide a wide field of view. Several vision-based blind-spot collision detection techniques for cars and other vehicles have been proposed [9,10,11,12,27,28,29]. In vision-based techniques, the position of the camera plays a significant role. Depending on the position of the installed camera, vision-based blind-spot collision detection systems are categorized as rear camera-based [11,30] or side camera-based systems [26,27,28,29]. Rear camera-based vision systems detect vehicles by acquiring images from a rear fish-eye camera. The major drawback of using a rear fish-eye camera is that the captured vehicle suffers from severe radial distortion, leading to large differences in appearance across positions [11].
In contrast, side camera-based vision systems have the camera installed directly below or next to the side mirrors, facing the blind spot to detect approaching vehicles. In these systems, the vehicle appearance changes drastically with its position; however, they have the advantage of high-resolution images for vehicle detection [11].
In vision-based blind-spot vehicle detection techniques, deep convolutional neural network (CNN) models often achieve better performance [10,12] than conventional machine learning models (based on appearance, histogram of oriented gradients (HOG) features, etc.) [11,27,28]. This is because convolutional layers can extract and learn richer features from the raw RGB channels than traditional hand-crafted descriptors such as HOG. However, blind-spot vehicle detection remains challenging on account of the large variations in appearance and structure, especially the ubiquitous occlusions that further increase intra-class variation.
Recently, deep learning techniques have proven to be a game changer in object detection. Many deep learning models have been proposed to detect objects of different types and sizes in images [31,32,33]. Among these models, two-stage object detectors show better accuracy than one-stage object detectors [34,35,36]. Therefore, two-stage object detectors, such as faster R-CNN [31], appear more suitable for blind-spot vehicle detection. In faster R-CNN, a self-designed CNN or a pre-trained network (such as VGG16, ResNet-50, or ResNet-101) is used to extract a feature map [37,38]. These pre-trained networks are trained on large datasets and have been shown to outperform simple convolutional neural networks (CNNs). In medical applications, it has been reported that multi-CNN architectures perform much better in residual feature extraction and classification than single CNNs [39,40,41].
In this paper, we propose a novel blind-spot vehicle detection technique for commercial vehicles based on multiple convolutional neural networks (CNNs) and faster R-CNN. Two different CNN-based approaches/models with faster R-CNN as the object detector are proposed for blind-spot vehicle detection. In the first approach/model, two self-designed CNNs are used to extract features, and their outputs are concatenated and fed to another self-designed CNN. Next, faster R-CNN uses these high-level features for vehicle detection. In the second approach/model, two ResNet networks (ResNet-50 and ResNet-101) are concatenated with the self-designed CNN to extract features. Finally, these extracted features are fed to the faster R-CNN for blind-spot vehicle detection. The scientific contributions of this research are as follows:
1. Design of two high-level CNN-based feature descriptors for blind-spot vehicle detection for heavy vehicles;
2. Design of a fusion technique for different high-level feature descriptors and its integration with faster R-CNN, along with a performance comparison with existing state-of-the-art approaches;
3. Introduction of a fusion technique for pre-trained high-level feature descriptors for object detection applications.

2. Related Work

Recent deep convolutional neural network (CNN) based algorithms have demonstrated extraordinary performance in various vision tasks [42,43,44,45]. Convolutional neural networks extract features from raw images through a large amount of training, with high flexibility and generalization capability. The first CNN-based object detection and classification systems were presented in 2013 [46,47]. Since then, many deep learning-based object detection and classification models have been proposed, including the region-based convolutional neural network (R-CNN) [48], fast R-CNN [49], faster R-CNN [31], the single shot multibox detector (SSD) [50], R-FCN [51], you only look once (YOLO) [32], and YOLOv2 [33].
R-CNN models achieve promising detection performance and are a commonly employed paradigm for object detection [48]. Their pipeline consists of several essential steps: region proposal generation with selective search (SS), CNN feature extraction, and classification and bounding-box regression of the selected objects based on the obtained CNN features. However, training the network incurs large time and computation costs due to the repeated extraction of CNN features for thousands of object proposals [52].
In fast R-CNN [49], the feature extraction process is accelerated by sharing the forward-pass computation. However, because region proposals are still generated by selective search (SS), it remains slow and requires significant computational capacity to train. In faster R-CNN [31], region proposal generation using SS was replaced by proposal generation using a CNN (the region proposal network). This reduces the computational burden and makes the network more efficient and faster than R-CNN and fast R-CNN.
YOLO [32] frames object detection as a regression problem over spatially separated bounding boxes and associated class probabilities. In YOLO, a single CNN predicts the bounding boxes and the class probabilities for these boxes. It utilizes a custom network based on the GoogLeNet architecture. An improved model called YOLOv2 [33] achieves comparable results on standard tasks. YOLOv2 employs a new backbone called Darknet-19, which has 19 convolutional layers and 5 max-pooling layers and requires only 5.58 billion operations to process an image. However, the YOLOv2 network still lacks some important elements: it has no residual blocks, skip connections, or up-sampling.
The YOLOv3 network is the advanced version of YOLOv2 and incorporates all of these elements. Its backbone, Darknet-53, is a 53-layer network trained on ImageNet; for object detection, 53 more layers are stacked onto it, giving a 106-layer fully convolutional architecture [53]. Recently, two newer versions of YOLO were introduced, named YOLOv4 and YOLOv5 [54,55]. Other than YOLO, there are also other one-stage object detectors, such as SSD [50] and RetinaNet [34].
Recent studies show that two-stage object detectors obtain better accuracy than one-stage object detectors [34,35], making faster R-CNN a suitable candidate for blind-spot vehicle detection. However, in these object detectors, the accuracy of the whole system depends profoundly on the feature set obtained from the neural networks. In recent object detectors, it has also been proposed to collect features from different stages of the neural network to improve system performance [56,57]. In medical applications, it has been demonstrated that the use of multiple feature extractors can significantly improve system accuracy [39,40,41].
Thus, to increase system accuracy, this research proposes blind-spot vehicle detection approaches based on multiple CNNs. Along with the fusion of self-designed convolutional neural networks, system performance is also investigated using a fusion approach for pre-trained convolutional neural networks.

3. Proposed Methodology

The proposed methodology comprises several steps, including pre-processing of datasets, anchor boxes estimation, data augmentation, and multi CNN network design, as shown in Figure 1.

3.1. Pre-Processing

For the self-recorded dataset, image labels were created using the MATLAB 2019a “Ground Truth Labeller App”, whereas for the online dataset, ground truths were provided with the image set. Next, images were resized to 224 × 224 × 3 to improve the computational performance of the proposed deep neural networks.
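For illustration, a minimal Python sketch of this resizing step is shown below. It assumes OpenCV-style images and ground-truth boxes in [x, y, w, h] pixel format; the function name and box format are illustrative rather than taken from the authors' MATLAB implementation, and the boxes are rescaled so they remain aligned with the resized images.

```python
import cv2
import numpy as np

def resize_image_and_boxes(image, boxes, target_size=(224, 224)):
    """Resize an image to the network input size and rescale its
    ground-truth boxes accordingly (boxes given as [x, y, w, h] in pixels)."""
    h, w = image.shape[:2]
    resized = cv2.resize(image, target_size)            # target_size is (width, height)
    sx, sy = target_size[0] / w, target_size[1] / h     # per-axis scale factors
    boxes = np.asarray(boxes, dtype=np.float32).copy()
    boxes[:, [0, 2]] *= sx                              # scale x and width
    boxes[:, [1, 3]] *= sy                              # scale y and height
    return resized, boxes
```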

3.2. Anchor Boxes Estimation

Anchor boxes are important parameters of deep learning object detectors. The shape, scale, and number of anchor boxes impact the efficiency and accuracy of the object detector. Figure 2 shows the plot of the aspect ratio against the box area for the self-recorded dataset.
The anchor box plot reveals that many vehicles have a similar size and shape. However, the vehicle shapes are still spread out, indicating the difficulty of choosing anchor boxes manually. Therefore, the clustering algorithm presented in [33] was used to estimate the anchor boxes; it groups similarly shaped boxes together using an intersection-over-union (IoU) based distance metric.
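The sketch below illustrates this clustering idea in NumPy, following the YOLOv2-style k-means on (width, height) pairs with distance = 1 − IoU. The number of anchors k, the iteration limit, and the initialization are assumptions for illustration, not values reported in the paper.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between boxes and centroids given only (w, h), i.e., boxes
    aligned at a common corner, as in YOLOv2 anchor estimation."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def estimate_anchor_boxes(wh, k=5, iters=100, seed=0):
    """Cluster the (w, h) pairs of all ground-truth boxes into k anchor
    shapes using k-means with distance = 1 - IoU."""
    rng = np.random.default_rng(seed)
    centroids = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(1.0 - iou_wh(wh, centroids), axis=1)
        new_centroids = np.array(
            [wh[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
             for i in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids  # k estimated anchor (width, height) pairs
```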

3.3. Data Augmentation

In this work, data augmentation is performed to minimize the over-fitting problem and to improve the proposed networks’ robustness against noise. A random brightness augmentation technique is used to perturb the images: each image is randomly darkened or brightened, with the darkening and brightening factors drawn randomly from [0.5, 1.0] and [1.0, 1.5], respectively.
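A minimal sketch of this augmentation, assuming uint8 RGB images and an equal chance of darkening versus brightening (the exact sampling procedure is not specified in the paper):

```python
import numpy as np

def random_brightness(image, rng=None):
    """Randomly darken (factor in [0.5, 1.0]) or brighten (factor in [1.0, 1.5])
    an RGB image given as a uint8 array."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        factor = rng.uniform(0.5, 1.0)   # darkening
    else:
        factor = rng.uniform(1.0, 1.5)   # brightening
    out = image.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)
```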

3.4. Proposed CNNs and Their Integration with Faster R-CNN

Initially, the same images are fed to two different deep learning networks to extract high-level features. Subsequently, these high-level features are fed to another CNN architecture that combines and smooths them. Finally, faster R-CNN based object detection is performed to detect impending collisions. The layer-wise connection of the deep learning architectures and their integration with faster R-CNN are shown in Figure 3.

3.4.1. Proposed High Level Feature Descriptors Architecture

Two different approaches are used to extract deep features: (1) self-designed convolutional neural networks and (2) pre-trained convolutional networks, as shown in Figure 3. Additional details of these feature descriptors are given below.

Self-Designed High-Level Feature Descriptors

In the first approach, multiple self-designed convolutional neural networks are connected with the faster R-CNN network. The layer-wise connection of the two self-designed CNNs (named DConNet and VeDConNet) is shown in Figure 4. Initially, DConNet and VeDConNet are used to extract deep features, and their outputs are provided to a third 2D CNN architecture for feature addition and smoothing.
Both the DConNet and VeDConNet architectures consist of five convolutional blocks. In DConNet, each of the five blocks is composed of two 2D convolutional layers, each followed by a ReLU layer, with a max-pooling layer at the end of the block. In VeDConNet, the initial two blocks are similar to those of DConNet: they consist of two 2D convolutional layers, each followed by a ReLU activation function, with a max-pooling layer after the second ReLU. The other three blocks of VeDConNet comprise four convolutional layers, each followed by a ReLU layer, with a max-pooling layer after the fourth ReLU activation function.
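Since the authors built these networks in MATLAB, the following PyTorch sketch only re-expresses the block structure described above, using the filter counts [64, 128, 256, 512, 512] reported in Section 4.2; details not stated in the paper (e.g., padding) are assumptions.

```python
import torch
import torch.nn as nn

def _block(in_ch, out_ch, n_convs):
    """n_convs x (3x3 conv + ReLU) followed by a 2x2 max-pooling layer."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

def make_backbone(convs_per_block, filters=(64, 128, 256, 512, 512)):
    blocks, in_ch = [], 3
    for out_ch, n_convs in zip(filters, convs_per_block):
        blocks.append(_block(in_ch, out_ch, n_convs))
        in_ch = out_ch
    return nn.Sequential(*blocks)

# DConNet: two convs in every block; VeDConNet: four convs in the last three blocks.
dconnet   = make_backbone((2, 2, 2, 2, 2))
vedconnet = make_backbone((2, 2, 4, 4, 4))

x = torch.randn(1, 3, 224, 224)
print(dconnet(x).shape, vedconnet(x).shape)  # both: (1, 512, 7, 7)
```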

Pre-Trained Feature Descriptors

In the second approach, two pre-trained convolutional networks (i.e., ResNet 101 and ResNet 50) are linked with the third CNN architecture, which is further connected with the faster R-CNN network for vehicle detection. The features are obtained from the ReLU Res4b22 layer of ResNet 101 and the ReLU 40 layer of ResNet 50, as shown in Figure 5.
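These two activations correspond to the end of the conv4 stage of the respective ResNets, so both yield 1024-channel feature maps of matching size. A hedged torchvision sketch (assuming torchvision ≥ 0.13 for the weights argument) that truncates the pre-trained networks at the equivalent point, i.e., after layer3:

```python
import torch
import torch.nn as nn
import torchvision

def resnet_conv4_features(resnet):
    """Truncate a torchvision ResNet after layer3 (the conv4 stage), roughly
    where the ReLU Res4b22 / ReLU 40 activations used in the paper live;
    the output is a 1024-channel map (14 x 14 for a 224 x 224 input)."""
    return nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
                         resnet.layer1, resnet.layer2, resnet.layer3)

r50  = resnet_conv4_features(torchvision.models.resnet50(weights="IMAGENET1K_V1"))
r101 = resnet_conv4_features(torchvision.models.resnet101(weights="IMAGENET1K_V1"))

x = torch.randn(1, 3, 224, 224)
print(r50(x).shape, r101(x).shape)  # both: (1, 1024, 14, 14)
```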

3.4.2. Feature Addition and Smoothing

The high-level features obtained from the two self-designed/pre-trained CNN architectures are added together through the addition layer, as shown in Figure 3. Let $F_1(x)$ and $F_2(x)$ be the outputs of the first and second deep neural networks; their addition $H(x)$ is then given as:
$$H(x) = F_1(x) + F_2(x)$$
The addition layer is followed by a convolutional layer and a ReLU activation function for feature smoothing.
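A short PyTorch sketch of this fusion step (element-wise addition followed by a 3 × 3 convolution with 512 filters and ReLU, per Section 4.2); the module name and padding choice are illustrative:

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Element-wise addition H(x) = F1(x) + F2(x) followed by a 3x3 convolution
    (512 filters) and ReLU for feature smoothing."""
    def __init__(self, in_channels, out_channels=512):
        super().__init__()
        self.smooth = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, f1, f2):
        return self.smooth(f1 + f2)   # both feature maps must share the same shape

# e.g., fusing the two 512-channel maps produced by DConNet and VeDConNet:
fusion = FeatureFusion(in_channels=512)
h = fusion(torch.randn(1, 512, 7, 7), torch.randn(1, 512, 7, 7))
print(h.shape)  # (1, 512, 7, 7)
```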

3.4.3. Integration with Faster R-CNN

As shown in Figure 3, faster R-CNN takes the high-level features from the ReLU layer to perform blind-spot vehicle detection. The obtained feature map is fed to the region proposal network (RPN) and the ROI pooling layer of faster R-CNN. The loss function of faster R-CNN can be divided into two parts, the R-CNN loss [49] and the RPN loss [31], which are shown in the equations below:
$$L(p, u, t^u, v) = L_{cls}(p, u) + \lambda L_{reg}(t^u, v)$$
$$L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)$$
The detailed description of the faster R-CNN architecture and the above equations is given in references [31,49,58].
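To make the integration concrete, the sketch below plugs a fused backbone of this kind (here, the pre-trained ResNet variant of the second approach) into torchvision's FasterRCNN. This is an illustrative re-implementation rather than the authors' MATLAB pipeline, and the anchor sizes shown are placeholders rather than the clustered anchors of Section 3.2.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

def conv4_trunk(resnet):
    # ResNet truncated after layer3 (conv4 stage), 1024-channel output
    return nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
                         resnet.layer1, resnet.layer2, resnet.layer3)

class FusedBackbone(nn.Module):
    """Second approach: ResNet-50 and ResNet-101 conv4 features are added and
    smoothed by a 3x3 conv + ReLU before being handed to faster R-CNN."""
    def __init__(self):
        super().__init__()
        self.net1 = conv4_trunk(torchvision.models.resnet50(weights="IMAGENET1K_V1"))
        self.net2 = conv4_trunk(torchvision.models.resnet101(weights="IMAGENET1K_V1"))
        self.smooth = nn.Sequential(nn.Conv2d(1024, 512, 3, padding=1),
                                    nn.ReLU(inplace=True))
        self.out_channels = 512          # required by torchvision's FasterRCNN

    def forward(self, x):
        return self.smooth(self.net1(x) + self.net2(x))

# Anchor sizes/ratios below are placeholders; the paper estimates them by clustering.
anchors = AnchorGenerator(sizes=((32, 64, 128, 256),),
                          aspect_ratios=((0.5, 1.0, 2.0),))
roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)
model = FasterRCNN(FusedBackbone(), num_classes=2,      # vehicle + background
                   rpn_anchor_generator=anchors, box_roi_pool=roi_pool)
```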

4. Results and Discussion

In this section, the vehicle detection using the proposed deep learning models is discussed in detail. We compared the performance of both approaches with each other and with the state-of-the-art benchmark approaches. This section also includes the dataset description along with the details of the proposed network implementation.

4.1. Dataset

A blind-spot collision dataset was recorded by attaching cameras to the side mirrors of a bus. The placement of cameras is shown in Figure 6.
The dataset was recorded in Ipoh, Seri Iskandar and along Ipoh-Lumut highway in Perak, Malaysia. Ipoh is a city in northwestern Malaysia, whereas Seri Iskandar is located about 40 km southwest of Ipoh. Universiti Teknologi PETRONAS is also located in the new township of Seri Iskandar. Data were recorded in multiple round trips from Seri Iskandar to Ipoh for different lighting conditions. In addition, data were recorded in the cities of Ipoh and Seri Iskandar for dense traffic scenarios. Moreover, Malaysia has a tropical climate and the rainfall remains high year-round, thus allowing us to easily record data in different weather conditions. Finally, a set of 3000 images from the self-recorded dataset was selected in which vehicles appeared in blind-spot areas.
To the best of our knowledge, there is no publicly available online dataset for heavy vehicles. Therefore, a publicly available online dataset named “Laboratory for Intelligent and Safe Automobiles (LISA)” [59] for cars was used to validate the proposed method. In the LISA dataset, the camera was installed at the front of the car. The detailed description of both datasets is given in Table 1. Both datasets are divided randomly into 80% for training and 20% for testing.

4.2. Network Implementation Details

The proposed work was implemented on an Intel® Xeon(R) E-2124G CPU @ 3.40 GHz (installed memory 32 GB) with an NVIDIA Quadro P4000 (GP104GL) graphics card. MATLAB 2019a was used as the platform to investigate the proposed methodology.
In the first approach, both CNN-based feature extraction architectures (i.e., DConNet and VeDConNet) have five blocks with N convolutional filters per block, where N = [64, 128, 256, 512, 512] from the input to the output. Moreover, after the addition layer, there was also a convolutional layer with a total of 512 filters. For all these convolutional layers, the filter size was 3 × 3, and ReLU was used as the activation function. The stride and pool size of the max-pooling layers were both 2 × 2.
In the second approach, standard pre-trained weights were used for ResNet 101 and ResNet 50. Moreover, after the addition layer, there was a convolutional layer with a total of 512 filters and ReLU as the activation function.
In both approaches, we used an SGDM optimizer with a learning rate of $10^{-3}$ and a momentum of 0.9. The batch size was set to 20 samples, and the verbose frequency was set to 20. Negative training samples were defined as samples that overlap with the ground-truth boxes by 0 to 0.3, whereas positive training samples were defined as samples that overlap with the ground-truth boxes by 0.6 to 1.0.
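Expressed in PyTorch terms, the reported optimizer settings would look roughly as follows. The training function, epoch count, and dataset object are assumptions for illustration (the authors trained in MATLAB), and the 0-0.3/0.6-1.0 overlap ranges have rough analogues in the fg/bg IoU threshold arguments of torchvision's FasterRCNN if that detector is used.

```python
import torch
from torch.utils.data import DataLoader

def collate(batch):
    # torchvision detection models take lists of images and lists of targets
    return tuple(zip(*batch))

def train(model, train_dataset, num_epochs=10):
    """Training loop matching the reported settings: SGDM, lr = 1e-3,
    momentum = 0.9, batch size 20. `train_dataset` is assumed to yield
    (image, target) pairs in torchvision detection format
    ({"boxes": ..., "labels": ...})."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loader = DataLoader(train_dataset, batch_size=20, shuffle=True,
                        collate_fn=collate)
    model.train()
    for _ in range(num_epochs):
        for images, targets in loader:
            loss_dict = model(list(images), list(targets))  # RPN + R-CNN losses
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```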

4.3. Evaluation Metrics

The existing state-of-the-art approaches measure performance in terms of the true positive rate (TPR), false detection rate (FDR), and frame rate [59,60,61,62]. Therefore, the same parameters are used to evaluate the performance of the proposed models. TPR (also known as sensitivity) measures the ability to correctly detect blind-spot vehicles. FDR refers to the proportion of false blind-spot vehicle detections among all detections. The frame rate is defined as the total number of frames processed in one second [60]. If TP, FN, and FP represent the numbers of true positives, false negatives, and false positives, respectively, then the formulas for TPR and FDR are given as:
$$\mathrm{TPR}\,(\%) = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} \times 100$$
$$\mathrm{FDR}\,(\%) = \frac{\mathrm{FP}}{\mathrm{TP} + \mathrm{FP}} \times 100$$
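These two metrics reduce to a few lines of code; the counts in the usage example below are purely illustrative, not values from the experiments.

```python
def tpr_fdr(tp, fn, fp):
    """True positive rate and false detection rate, in percent."""
    tpr = 100.0 * tp / (tp + fn)
    fdr = 100.0 * fp / (tp + fp)
    return tpr, fdr

# illustrative counts only
print(tpr_fdr(tp=950, fn=50, fp=30))  # (95.0, ~3.06)
```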

4.4. Results Analysis

The proposed approaches/models appeared to be successful in detecting the vehicles for both self-recorded and online datasets. A few of the images from blind-spot detection are shown in Figure 7.
Figure 7 shows that the proposed CNN-based models successfully detected different types of vehicles, including light and heavy vehicles and motorbikes, in different scenarios and lighting conditions. The proposed models could recognize multiple vehicles simultaneously, as shown in Figure 7a,b. These figures also show the presence of shadows along with the vehicles, which highlights the significance of the proposed vehicle detection algorithm: it was capable of differentiating remarkably well between real vehicles and their shadows, leading to a notable reduction in possible false detections.
Furthermore, Figure 7c,d show that the proposed technique detects a motorcyclist approaching and driving very close to the bus. A small mistake by the bus driver in such scenarios could lead to a fatal accident. Therefore, blind-spot collision detection systems are very important for heavy vehicles.
Similarly, vehicle detection on the online LISA dataset [59] is shown in Figure 8, which demonstrates that our models successfully detected all types of vehicles in different scenarios. Figure 8a,b show the detection of vehicles in dense traffic; the proposed models reliably detected multiple vehicles simultaneously in dense scenarios, even in the presence of vehicle shadows on the road. Figure 8c,d show the detection of vehicles on a highway, and Figure 8e,f show the detection of vehicles in urban areas. In both figures, lane markers are present on the road and were successfully ignored by the proposed systems. Furthermore, Figure 8f shows a person crossing the road, which could lead to a false detection; however, our models managed to identify the vehicle and successfully differentiated between the person and the vehicle. In the LISA dataset, labels are provided only for vehicles; therefore, the proposed model only detects vehicles.
A visual analysis of the true positive rate (TPR) and false detection rate (FDR) of the proposed approaches on different sets of data is presented in Figure 9; the figure shows that both approaches delivered reliable outcomes for the self-recorded as well as the online datasets. The TPR obtained by faster R-CNN with the pre-trained fused (ResNet 101 and ResNet 50) high-level feature descriptors is slightly higher than that of faster R-CNN with the proposed fused (DConNet and VeDConNet) feature descriptors. However, faster R-CNN with the proposed feature descriptors yields a lower FDR for the self-recorded dataset and a comparable FDR for the LISA-Urban dataset.
The frame rate (frames per second) for each dataset used in both approaches is given in Table 2, which shows that the first model has a comparatively better frame rate. The pre-trained model (i.e., faster R-CNN with high-level feature descriptors from ResNet 101 and ResNet 50) took more time to compute features than the model of the first approach (i.e., faster R-CNN with high-level feature descriptors from DConNet and VeDConNet). Hence, the model of the first approach can provide significant performance in vehicle detection scenarios where less computation time is required.
The detailed comparisons of different parameters, including TPR, FDR, and frame rate from the existing state-of-the-art techniques and our proposed models, are presented in Table 3. In addition, the graphical representation of true positive and false detection rates (i.e., TPR and FDR) of both models and their comparisons with the existing state-of-the-art approaches are given in Figure 10.
From Table 3, it can be deduced that our models achieved significantly better results than the existing methods (both deep learning and machine learning models). The deep learning model presented by S. Roychowdhury et al. (2018) [61] achieved 100% and 98% TPR for the LISA-Urban and LISA-Sunny datasets, respectively. The proposed model (i.e., faster R-CNN with high-level feature descriptors of DConNet and VeDConNet) achieved a higher TPR for the LISA-Sunny dataset and a very close TPR for the LISA-Urban dataset. Our models outperformed all the existing methods in terms of FDR. A very low false detection rate was obtained for all three online datasets (LISA-Dense, LISA-Sunny, and LISA-Urban) compared to the existing machine/deep learning techniques. Moreover, higher TPR values were obtained for all three LISA datasets compared to the existing machine learning techniques.
From Figure 10, one can see that, for the first model, the FDR is less than 4% for all datasets, making it suitable for real-time applications. Further, the TPR values are almost constant across all datasets, showing that the model achieved reliable results for all types of scenarios.

4.5. Discussion

The proposed approaches successfully detected different types of vehicles, such as motorcycles, cars, and trucks. In addition, both approaches proved to be reliable in the dense traffic conditions of the online LISA dataset. The fusion of pre-trained networks provided higher accuracy for both the self-recorded and online datasets compared to the first approach, in which two self-designed CNNs are used. However, the first approach achieved a higher frame rate than the second approach.
For the online datasets, both approaches obtained accuracy that is higher than or comparable to the existing state-of-the-art approaches, as given in Table 3. For LISA-Dense, the highest TPR of 98.06% was obtained by the second proposed approach, followed by the first approach with 97.87%. The machine learning approaches proposed by M. Muzammel et al. (2017) [60], R. K. Satzoda (2016) [62], and S. Sivaraman (2010) [59] reported TPR values of 95.01%, 94.50%, and 95%, respectively. S. Roychowdhury et al. (2018) [61] did not report results for the LISA-Dense dataset. For LISA-Urban, the highest TPR was obtained by S. Roychowdhury et al. (2018) [61], followed by the proposed second approach, while the lowest TPR of 91.70% was obtained by S. Sivaraman [59].
Figure 10 shows that the fusion of features significantly improved the performance of faster R-CNN. A notable reduction in false detections was found for the online datasets compared to the deep learning [61] and machine learning approaches [59,60,62]. A system with a lower false detection rate will produce fewer false warnings and thus increase drivers’ trust in the system. It has been found in the literature that collision warnings reduce the attention resources required to process the target correctly [63]. In addition, collision warnings facilitate the sensory processing of the target [64,65]. Finally, our fusion results are in line with the studies in [39,40,41].
Regarding the comparison between the two approaches, the model of the first approach obtained a lower FDR than the model of the second approach for the self-recorded and LISA-Urban datasets. In addition, the model of the first approach has a higher frame rate for all datasets than the model of the second approach. For the other TPR and FDR values, the second approach outperformed the first. Therefore, there is a slight trade-off between performance and computation time.

5. Conclusions and Future Work

In this research, we propose deep neural architectures for blind-spot vehicle detection for heavy vehicles. Two different models for feature extraction are used with the faster R-CNN network. Furthermore, the high-level features obtained from both networks are fused together to improve network performance. The proposed models successfully detected blind-spot vehicles with reliable accuracy on both the self-recorded and publicly available datasets. Moreover, the fusion of feature extraction networks improved the results significantly, and a notable increase in performance was observed. In addition, we compared our fusion models with state-of-the-art machine learning and deep learning benchmark approaches. Our proposed work outperformed the existing approaches for vehicle detection in various scenarios, including dense traffic, urban surroundings, with and without pedestrians, shadows, and different weather conditions. The proposed models are capable of being used not only for buses but also for other heavy vehicles such as trucks, trailers, and oil tankers. This research work is limited to the integration of only two convolutional neural networks with faster R-CNN. In the future, more than two convolutional neural networks may be integrated with faster R-CNN, and a parametric study of accuracy and frame rate may be performed.

Author Contributions

Conceptualization, M.Z.Y.; data curation, M.M. and M.N.M.S.; formal analysis, M.A.A.; investigation, M.N.M.S.; methodology, M.M.; supervision, M.Z.Y.; validation, F.S.; visualization, M.A.A.; writing—original draft, M.M. and F.S.; writing—review and editing, M.Z.Y., M.N.M.S., F.S. and M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by Ministry of Education Malaysia under Higher Institutional Centre of Excellence (HICoE) Scheme awarded to the Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS (UTP), Malaysia; and, in part, by the Yayasan Universiti Teknologi PETRONAS (YUTP) Fund under Grant 015LC0-239.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We express our gratitude and acknowledgment to the Centre for Intelligent Signal and Imaging Research (CISIR) and Electrical and Electronic Engineering Department, Universiti Teknologi PETRONAS (UTP), Malaysia.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN    Convolutional neural network
HOG    Histogram of oriented gradients
SS     Selective search
YOLO   You only look once
LISA   Laboratory for Intelligent and Safe Automobiles
SGDM   Stochastic gradient descent with momentum
TPR    True positive rate
FDR    False detection rate

References

  1. Feng, S.; Li, Z.; Ci, Y.; Zhang, G. Risk factors affecting fatal bus accident severity: Their impact on different types of bus drivers. Accid. Anal. Prev. 2016, 86, 29–39. [Google Scholar] [CrossRef] [PubMed]
  2. Evgenikos, P.; Yannis, G.; Folla, K.; Bauer, R.; Machata, K.; Brandstaetter, C. Characteristics and causes of heavy goods vehicles and buses accidents in Europe. Transp. Res. Procedia 2016, 14, 2158–2167. [Google Scholar] [CrossRef]
  3. Öz, B.; Özkan, T.; Lajunen, T. Professional and non-professional drivers’ stress reactions and risky driving. Transp. Res. Part F Traffic Psychol. Behav. 2010, 13, 32–40. [Google Scholar] [CrossRef]
  4. Useche, S.A.; Montoro, L.; Alonso, F.; Pastor, J.C. Psychosocial work factors, job stress and strain at the wheel: Validation of the copenhagen psychosocial questionnaire (COPSOQ) in professional drivers. Front. Psychol. 2019, 10, 1531. [Google Scholar] [CrossRef]
  5. Craig, J.L.; Lowman, A.; Schneeberger, J.D.; Burnier, C.; Lesh, M. Transit Vehicle Collision Characteristics for Connected Vehicle Applications Research: 2009-2014 Analysis of Collisions Involving Transit Vehicles and Applicability of Connected Vehicle Solutions; Technical Report, United States; Joint Program Office for Intelligent Transportation Systems: Washington, DC, USA, 2016.
  6. Charters, K.E.; Gabbe, B.J.; Mitra, B. Pedestrian traffic injury in Victoria, Australia. Injury 2018, 49, 256–260. [Google Scholar] [CrossRef]
  7. Orsi, C.; Montomoli, C.; Otte, D.; Morandi, A. Road accidents involving bicycles: Configurations and injuries. Int. J. Inj. Control. Saf. Promot. 2017, 24, 534–543. [Google Scholar] [CrossRef] [PubMed]
  8. Waseem, M.; Ahmed, A.; Saeed, T.U. Factors affecting motorcyclists’ injury severities: An empirical assessment using random parameters logit model with heterogeneity in means and variances. Accid. Anal. Prev. 2019, 123, 12–19. [Google Scholar] [CrossRef] [PubMed]
  9. Elimalech, Y.; Stein, G. Safety System for a Vehicle to Detect and Warn of a Potential Collision. U.S. Patent 10,699,138, 30 June 2020. [Google Scholar]
  10. Lee, Y.; Ansari, I.; Shim, J. Rear-approaching vehicle detection using frame similarity base on faster R-CNN. Int. J. Eng. Technol. 2018, 7, 177–180. [Google Scholar] [CrossRef]
  11. Ra, M.; Jung, H.G.; Suhr, J.K.; Kim, W.Y. Part-based vehicle detection in side-rectilinear images for blind-spot detection. Expert Syst. Appl. 2018, 101, 116–128. [Google Scholar] [CrossRef]
  12. Zhao, Y.; Bai, L.; Lyu, Y.; Huang, X. Camera-based blind spot detection with a general purpose lightweight neural network. Electronics 2019, 8, 233. [Google Scholar] [CrossRef]
  13. Abraham, S.; Luciya Joji, T.; Yuvaraj, D. Enhancing vehicle safety with drowsiness detection and collision avoidance. Int. J. Pure Appl. Math. 2018, 120, 2295–2310. [Google Scholar]
  14. Shameen, Z.; Yusoff, M.Z.; Saad, M.N.M.; Malik, A.S.; Muzammel, M. Electroencephalography (EEG) based drowsiness detection for drivers: A review. ARPN J. Eng. Appl. Sci 2018, 13, 1458–1464. [Google Scholar]
  15. McNeil, S.; Duggins, D.; Mertz, C.; Suppe, A.; Thorpe, C. A performance specification for transit bus side collision warning system. In Proceedings of the ITS2002, 9th World Congress on Intelligent Transport Systems, Chicago, IL, USA, 14–17 October 2002. [Google Scholar]
  16. Pecheux, K.K.; Strathman, J.; Kennedy, J.F. Test and Evaluation of Systems to Warn Pedestrians of Turning Buses. Transp. Res. Rec. 2016, 2539, 159–166. [Google Scholar] [CrossRef]
  17. Wei, C.; Becic, E.; Edwards, C.; Graving, J.; Manser, M. Task analysis of transit bus drivers’ left-turn maneuver: Potential countermeasures for the reduction of collisions with pedestrians. Saf. Sci. 2014, 68, 81–88. [Google Scholar] [CrossRef]
  18. Prati, G.; Marín Puchades, V.; De Angelis, M.; Fraboni, F.; Pietrantoni, L. Factors contributing to bicycle–motorised vehicle collisions: A systematic literature review. Transp. Rev. 2018, 38, 184–208. [Google Scholar] [CrossRef]
  19. Silla, A.; Leden, L.; Rämä, P.; Scholliers, J.; Van Noort, M.; Bell, D. Can cyclist safety be improved with intelligent transport systems? Accid. Anal. Prev. 2017, 105, 134–145. [Google Scholar] [CrossRef]
  20. Frampton, R.J.; Millington, J.E. Vulnerable Road User Protection from Heavy Goods Vehicles Using Direct and Indirect Vision Aids. Sustainability 2022, 14, 3317. [Google Scholar] [CrossRef]
  21. Girbes, V.; Armesto, L.; Dols, J.; Tornero, J. Haptic feedback to assist bus drivers for pedestrian safety at low speed. IEEE Trans. Haptics 2016, 9, 345–357. [Google Scholar] [CrossRef]
  22. Girbés, V.; Armesto, L.; Dols, J.; Tornero, J. An active safety system for low-speed bus braking assistance. IEEE Trans. Intell. Transp. Syst. 2016, 18, 377–387. [Google Scholar] [CrossRef]
  23. Zhang, W.B.; DeLeon, R.; Burton, F.; McLoed, B.; Chan, C.; Wang, X.; Johnson, S.; Empey, D. Develop Performance Specifications for Frontal Collision Warning System for Transit buses. In Proceedings of the 7th World Congress On Intelligent Systems, Turin, Italy, 6–9 November 2000. [Google Scholar]
  24. Wisultschew, C.; Mujica, G.; Lanza-Gutierrez, J.M.; Portilla, J. 3D-LIDAR based object detection and tracking on the edge of IoT for railway level crossing. IEEE Access 2021, 9, 35718–35729. [Google Scholar] [CrossRef]
  25. Muzammel, M.; Yusoff, M.Z.; Malik, A.S.; Saad, M.N.M.; Meriaudeau, F. Motorcyclists safety system to avoid rear end collisions based on acoustic signatures. In Proceedings of the Thirteenth International Conference on Quality Control by Artificial Vision 2017, Tokyo, Japan, 14–16 May 2017; Volume 10338, pp. 287–294. [Google Scholar]
  26. Goodall, N.; Ohlms, P.B. Evaluation of a Transit Bus Collision Avoidance Warning System in Virginia; Virginia Transportation Research Council (VTRC): Charlottesville, VA, USA, 2022. [Google Scholar]
  27. Tseng, D.C.; Hsu, C.T.; Chen, W.S. Blind-spot vehicle detection using motion and static features. Int. J. Mach. Learn. Comput. 2014, 4, 516. [Google Scholar] [CrossRef]
  28. Wu, B.F.; Huang, H.Y.; Chen, C.J.; Chen, Y.H.; Chang, C.W.; Chen, Y.L. A vision-based blind spot warning system for daytime and nighttime driver assistance. Comput. Electr. Eng. 2013, 39, 846–862. [Google Scholar] [CrossRef]
  29. Singh, S.; Meng, R.; Nelakuditi, S.; Tong, Y.; Wang, S. SideEye: Mobile assistant for blind spot monitoring. In Proceedings of the 2014 international conference on computing, networking and communications (ICNC), Honolulu, HI, USA, 3–6 February 2014; pp. 408–412. [Google Scholar]
  30. Dooley, D.; McGinley, B.; Hughes, C.; Kilmartin, L.; Jones, E.; Glavin, M. A blind-zone detection method using a rear-mounted fisheye camera with combination of vehicle detection methods. IEEE Trans. Intell. Transp. Syst. 2015, 17, 264–278. [Google Scholar] [CrossRef]
  31. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  32. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  33. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
  34. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  35. Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7310–7311. [Google Scholar]
  36. Du, L.; Zhang, R.; Wang, X. Overview of two-stage object detection algorithms. J. Phys. Conf. Ser. 2020, 1544, 012033. [Google Scholar] [CrossRef]
  37. Theckedath, D.; Sedamkar, R. Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks. SN Comput. Sci. 2020, 1, 1–7. [Google Scholar] [CrossRef]
  38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  39. Yang, L.; Jiang, D.; Xia, X.; Pei, E.; Oveneke, M.C.; Sahli, H. Multimodal measurement of depression using deep learning models. In Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge, Mountain View, CA, USA, 23 October 2017; pp. 53–59. [Google Scholar]
  40. Muzammel, M.; Salam, H.; Othmani, A. End-to-end multimodal clinical depression recognition using deep neural networks: A comparative analysis. Comput. Methods Programs Biomed. 2021, 211, 106433. [Google Scholar] [CrossRef]
  41. Mendels, G.; Levitan, S.I.; Lee, K.Z.; Hirschberg, J. Hybrid Acoustic-Lexical Deep Learning Approach for Deception Detection. In Proceedings of the Interspeech, Stockholm, Sweden, 20–24 August 2017; pp. 1472–1476. [Google Scholar]
  42. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  43. Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732. [Google Scholar]
  44. Guo, X.; Singh, S.; Lee, H.; Lewis, R.L.; Wang, X. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. Adv. Neural Inf. Process. Syst. 2014, 27, 3338–3346. [Google Scholar]
  45. Cui, Z.; Chang, H.; Shan, S.; Zhong, B.; Chen, X. Deep network cascade for image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 49–64. [Google Scholar]
  46. Han, Y.; Jiang, T.; Ma, Y.; Xu, C. Pretraining convolutional neural networks for image-based vehicle classification. Adv. Multimed. 2018, 2018, 3138278. [Google Scholar] [CrossRef]
  47. Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv 2013, arXiv:1312.6229. [Google Scholar]
  48. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  49. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  50. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
  51. Dai, J.; Li, Y.; He, K.; Sun, J. R-FCN: Object Detection via Region-Based Fully Convolutional Networks. arXiv 2016, arXiv:1605.06409. [Google Scholar]
  52. Chu, W.; Liu, Y.; Shen, C.; Cai, D.; Hua, X.S. Multi-task vehicle detection with region-of-interest voting. IEEE Trans. Image Process. 2017, 27, 432–441. [Google Scholar] [CrossRef] [PubMed]
  53. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  54. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  55. Jocher, G.; Nishimura, K.; Mineeva, T.; Vilariño, R. Yolov5. Code Repository. 2020. Available online: https://github.com/ultralytics/yolov5 (accessed on 28 June 2022).
  56. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  57. Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
  58. Xu, X.; Zhao, M.; Shi, P.; Ren, R.; He, X.; Wei, X.; Yang, H. Crack Detection and Comparison Study Based on Faster R-CNN and Mask R-CNN. Sensors 2022, 22, 1215. [Google Scholar] [CrossRef] [PubMed]
  59. Sivaraman, S.; Trivedi, M.M. A general active-learning framework for on-road vehicle recognition and tracking. IEEE Trans. Intell. Transp. Syst. 2010, 11, 267–276. [Google Scholar] [CrossRef]
  60. Muzammel, M.; Yusoff, M.Z.; Meriaudeau, F. Rear-end vision-based collision detection system for motorcyclists. J. Electron. Imaging 2017, 26, 1–14. [Google Scholar] [CrossRef]
  61. Roychowdhury, S.; Muppirisetty, L.S. Fast proposals for image and video annotation using modified echo state networks. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1225–1230. [Google Scholar]
  62. Satzoda, R.K.; Trivedi, M.M. Multipart vehicle detection using symmetry-derived analysis and active learning. IEEE Trans. Intell. Transp. Syst. 2015, 17, 926–937. [Google Scholar] [CrossRef]
  63. Muzammel, M.; Yusoff, M.Z.; Meriaudeau, F. Event-related potential responses of motorcyclists towards rear end collision warning system. IEEE Access 2018, 6, 31609–31620. [Google Scholar] [CrossRef]
  64. Fort, A.; Collette, B.; Bueno, M.; Deleurence, P.; Bonnard, A. Impact of totally and partially predictive alert in distracted and undistracted subjects: An event related potential study. Accid. Anal. Prev. 2013, 50, 578–586. [Google Scholar] [CrossRef]
  65. Bueno, M.; Fabrigoule, C.; Deleurence, P.; Ndiaye, D.; Fort, A. An electrophysiological study of the impact of a Forward Collision Warning System in a simulator driving task. Brain Res. 2012, 1470, 69–79. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Steps of the proposed approaches to detect blind-spot vehicles using faster R-CNN object detection.
Figure 2. Anchor boxes plot to identify sizes and shapes of different vehicles for faster R-CNN object detection. Each blue circle indicates the label box area versus the label box aspect ratio.
Figure 3. Layer wise integration of proposed models with faster R-CNN for blind-spot vehicle detection.
Figure 4. Proposed 2D CNN architectures to extract deep features for blind-spot vehicle detection.
Figure 5. Pre-trained Resnet 50 and Resnet 101 networks for extracting deep features.
Figure 6. Cameras mounted on the bus mirrors to detect the presence of vehicles in blind spots.
Figure 7. Different types of vehicle detection from self-recorded dataset: (a) three different vehicles in a parallel lane with the bus; (b) one truck in a parallel lane and two cars in the opposite lane; (c) motorcycle at a certain distance; (d) motorcycle very close to the bus.
Figure 8. Vehicle detection from the online LISA dataset: (a) five vehicles detected in dense traffic scenario; (b) six vehicles detected in a dense traffic condition; (c) vehicle detection on highway; (d) vehicle detection on highway; (e) vehicle detection in urban area; and (f) differentiating vehicle and pedestrian.
Figure 9. TPR (%) and FDR (%) analysis of proposed approaches for self-recorded and online LISA datasets.
Figure 10. TPR (%) and FDR (%) analysis of proposed experiments for self-recorded and online LISA datasets.
Table 1. Utilized dataset characteristics to validate the proposed deep CNN based approaches for blind-spot collision detection.
Dataset | Data Description | Source of Recording | Total Images
Self-Recorded Dataset for Blind Spot Collision Detection | Different road scenarios with multiple vehicles and various traffic and lighting conditions. | Bus | 3000
LISA-Dense [59] | Multiple vehicles, dense traffic, daytime, highway. | Car | 1600
LISA-Sunny [59] | Multiple vehicles, medium traffic, daytime, highway. | Car | 300
LISA-Urban [59] | Single vehicle, urban scenario, cloudy morning. | Car | 300
Total | | | 5200
Table 2. Analysis of both models in terms of frame rate to validate the proposed deep neural architectures.
Proposed Models | Dataset | Frame Rate (fps)
Model 1 | Self-Recorded | 1.03
Model 1 | LISA-Dense | 1.10
Model 1 | LISA-Urban | 1.39
Model 1 | LISA-Sunny | 1.14
Model 2 | Self-Recorded | 0.89
Model 2 | LISA-Dense | 0.94
Model 2 | LISA-Urban | 1.12
Model 2 | LISA-Sunny | 1.00
Table 3. Comparisons of proposed approaches with existing state-of-the-art approaches in terms of true positive rate (TPR), false detection rate (FDR), and frame rate.
Reference | Dataset | TPR (%) | FDR (%) | Frame Rate (fps)
Proposed Approach 2 | Self-Recorded | 98.72 | 3.49 | 0.89
Proposed Approach 2 | LISA-Dense | 98.06 | 3.12 | 0.94
Proposed Approach 2 | LISA-Urban | 99.45 | 1.67 | 1.12
Proposed Approach 2 | LISA-Sunny | 99.34 | 2.78 | 1.00
Proposed Approach 1 | Self-Recorded | 98.19 | 3.05 | 1.03
Proposed Approach 1 | LISA-Dense | 97.87 | 3.98 | 1.10
Proposed Approach 1 | LISA-Urban | 99.02 | 1.66 | 1.39
Proposed Approach 1 | LISA-Sunny | 98.89 | 3.17 | 1.14
S. Roychowdhury et al. (2018) [61] | LISA-Urban | 100.00 | 4.50 | 1.10
S. Roychowdhury et al. (2018) [61] | LISA-Sunny | 98.00 | 4.10 | 1.10
M. Muzammel et al. (2017) [60] | LISA-Dense | 95.01 | 5.01 | 29.04
M. Muzammel et al. (2017) [60] | LISA-Urban | 94.00 | 6.60 | 25.06
M. Muzammel et al. (2017) [60] | LISA-Sunny | 97.00 | 6.03 | 37.50
R. K. Satzoda (2016) [62] | LISA-Dense | 94.50 | 6.80 | 15.50
R. K. Satzoda (2016) [62] | LISA-Sunny | 98.00 | 9.00 | 25.40
S. Sivaraman (2010) [59] | LISA-Dense | 95.00 | 6.40 | –
S. Sivaraman (2010) [59] | LISA-Urban | 91.70 | 25.50 | –
S. Sivaraman (2010) [59] | LISA-Sunny | 99.80 | 8.50 | –
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
