Blind-Spot Collision Detection System for Commercial Vehicles Using Multi Deep CNN Architecture
Abstract
1. Introduction
1. Design of two high-level CNN-based feature descriptors for blind-spot vehicle detection in heavy vehicles;
2. Design of a fusion technique for different high-level feature descriptors and its integration with Faster R-CNN (a minimal fusion sketch follows this list), together with a performance comparison against existing state-of-the-art approaches;
3. Introduction of a fusion technique for pre-trained high-level feature descriptors for object detection applications.
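The sketch below is a minimal, illustrative rendering of the fusion idea named in contributions 2 and 3, not the authors' published architecture: it assumes two CNN branches (standing in for the self-designed and pre-trained descriptors) that produce feature maps of identical shape, fused by element-wise addition and passed through a smoothing convolution before serving as the shared feature map consumed by Faster R-CNN's region proposal network and detection head. The class name, layer counts, and channel widths are placeholders.

```python
import torch
import torch.nn as nn


class FusedBackbone(nn.Module):
    """Illustrative fusion of two high-level feature descriptors (placeholder widths)."""

    def __init__(self, out_channels: int = 256):
        super().__init__()
        # Branch A: stands in for the self-designed descriptor.
        self.branch_a = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Branch B: stands in for a second (e.g., pre-trained) descriptor
        # producing feature maps of the same spatial size and depth.
        self.branch_b = nn.Sequential(
            nn.Conv2d(3, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Smoothing convolution applied after element-wise addition,
        # loosely mirroring the "features addition and smoothness" step.
        self.smooth = nn.Conv2d(out_channels, out_channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.branch_a(x) + self.branch_b(x)  # element-wise feature addition
        return self.smooth(fused)                    # smoothed fused feature map


if __name__ == "__main__":
    # The fused map would then replace the single-backbone feature map
    # that Faster R-CNN's RPN and detection head normally consume.
    features = FusedBackbone()(torch.randn(1, 3, 224, 224))
    print(features.shape)  # torch.Size([1, 256, 56, 56])
```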
2. Related Work
3. Proposed Methodology
3.1. Pre-Processing
3.2. Anchor Boxes Estimation
3.3. Data Augmentation
3.4. Proposed CNNs and Their Integration with Faster R-CNN
3.4.1. Proposed High Level Feature Descriptors Architecture
Self-Designed High-Level Feature Descriptors
Pre-Trained Feature Descriptors
3.4.2. Features Addition and Smoothness
3.4.3. Integration with Faster R-CNN
4. Results and Discussion
4.1. Dataset
4.2. Network Implementation Details
4.3. Evaluation Metrics
4.4. Results Analysis
4.5. Discussion
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
CNN | Convolutional neural network |
HOG | Histogram of oriented gradients |
SS | Selective search |
YOLO | You only look once |
LISA | Laboratory for Intelligent and Safe Automobiles |
SGDM | Stochastic gradient descent with momentum |
TPR | True positive rate |
FDR | False detection rate |
References
- Feng, S.; Li, Z.; Ci, Y.; Zhang, G. Risk factors affecting fatal bus accident severity: Their impact on different types of bus drivers. Accid. Anal. Prev. 2016, 86, 29–39. [Google Scholar] [CrossRef] [PubMed]
- Evgenikos, P.; Yannis, G.; Folla, K.; Bauer, R.; Machata, K.; Brandstaetter, C. Characteristics and causes of heavy goods vehicles and buses accidents in Europe. Transp. Res. Procedia 2016, 14, 2158–2167. [Google Scholar] [CrossRef]
- Öz, B.; Özkan, T.; Lajunen, T. Professional and non-professional drivers’ stress reactions and risky driving. Transp. Res. Part F Traffic Psychol. Behav. 2010, 13, 32–40. [Google Scholar] [CrossRef]
- Useche, S.A.; Montoro, L.; Alonso, F.; Pastor, J.C. Psychosocial work factors, job stress and strain at the wheel: Validation of the copenhagen psychosocial questionnaire (COPSOQ) in professional drivers. Front. Psychol. 2019, 10, 1531. [Google Scholar] [CrossRef]
- Craig, J.L.; Lowman, A.; Schneeberger, J.D.; Burnier, C.; Lesh, M. Transit Vehicle Collision Characteristics for Connected Vehicle Applications Research: 2009-2014 Analysis of Collisions Involving Transit Vehicles and Applicability of Connected Vehicle Solutions; Technical Report, United States; Joint Program Office for Intelligent Transportation Systems: Washington, DC, USA, 2016.
- Charters, K.E.; Gabbe, B.J.; Mitra, B. Pedestrian traffic injury in Victoria, Australia. Injury 2018, 49, 256–260. [Google Scholar] [CrossRef]
- Orsi, C.; Montomoli, C.; Otte, D.; Morandi, A. Road accidents involving bicycles: Configurations and injuries. Int. J. Inj. Control. Saf. Promot. 2017, 24, 534–543. [Google Scholar] [CrossRef] [PubMed]
- Waseem, M.; Ahmed, A.; Saeed, T.U. Factors affecting motorcyclists’ injury severities: An empirical assessment using random parameters logit model with heterogeneity in means and variances. Accid. Anal. Prev. 2019, 123, 12–19. [Google Scholar] [CrossRef] [PubMed]
- Elimalech, Y.; Stein, G. Safety System for a Vehicle to Detect and Warn of a Potential Collision. U.S. Patent 10,699,138, 30 June 2020. [Google Scholar]
- Lee, Y.; Ansari, I.; Shim, J. Rear-approaching vehicle detection using frame similarity base on faster R-CNN. Int. J. Eng. Technol. 2018, 7, 177–180. [Google Scholar] [CrossRef]
- Ra, M.; Jung, H.G.; Suhr, J.K.; Kim, W.Y. Part-based vehicle detection in side-rectilinear images for blind-spot detection. Expert Syst. Appl. 2018, 101, 116–128. [Google Scholar] [CrossRef]
- Zhao, Y.; Bai, L.; Lyu, Y.; Huang, X. Camera-based blind spot detection with a general purpose lightweight neural network. Electronics 2019, 8, 233. [Google Scholar] [CrossRef]
- Abraham, S.; Luciya Joji, T.; Yuvaraj, D. Enhancing vehicle safety with drowsiness detection and collision avoidance. Int. J. Pure Appl. Math. 2018, 120, 2295–2310. [Google Scholar]
- Shameen, Z.; Yusoff, M.Z.; Saad, M.N.M.; Malik, A.S.; Muzammel, M. Electroencephalography (EEG) based drowsiness detection for drivers: A review. ARPN J. Eng. Appl. Sci. 2018, 13, 1458–1464. [Google Scholar]
- McNeil, S.; Duggins, D.; Mertz, C.; Suppe, A.; Thorpe, C. A performance specification for transit bus side collision warning system. In Proceedings of the ITS2002, 9th World Congress on Intelligent Transport Systems, Chicago, IL, USA, 14–17 October 2002. [Google Scholar]
- Pecheux, K.K.; Strathman, J.; Kennedy, J.F. Test and Evaluation of Systems to Warn Pedestrians of Turning Buses. Transp. Res. Rec. 2016, 2539, 159–166. [Google Scholar] [CrossRef]
- Wei, C.; Becic, E.; Edwards, C.; Graving, J.; Manser, M. Task analysis of transit bus drivers’ left-turn maneuver: Potential countermeasures for the reduction of collisions with pedestrians. Saf. Sci. 2014, 68, 81–88. [Google Scholar] [CrossRef]
- Prati, G.; Marín Puchades, V.; De Angelis, M.; Fraboni, F.; Pietrantoni, L. Factors contributing to bicycle–motorised vehicle collisions: A systematic literature review. Transp. Rev. 2018, 38, 184–208. [Google Scholar] [CrossRef]
- Silla, A.; Leden, L.; Rämä, P.; Scholliers, J.; Van Noort, M.; Bell, D. Can cyclist safety be improved with intelligent transport systems? Accid. Anal. Prev. 2017, 105, 134–145. [Google Scholar] [CrossRef]
- Frampton, R.J.; Millington, J.E. Vulnerable Road User Protection from Heavy Goods Vehicles Using Direct and Indirect Vision Aids. Sustainability 2022, 14, 3317. [Google Scholar] [CrossRef]
- Girbes, V.; Armesto, L.; Dols, J.; Tornero, J. Haptic feedback to assist bus drivers for pedestrian safety at low speed. IEEE Trans. Haptics 2016, 9, 345–357. [Google Scholar] [CrossRef]
- Girbés, V.; Armesto, L.; Dols, J.; Tornero, J. An active safety system for low-speed bus braking assistance. IEEE Trans. Intell. Transp. Syst. 2016, 18, 377–387. [Google Scholar] [CrossRef]
- Zhang, W.B.; DeLeon, R.; Burton, F.; McLoed, B.; Chan, C.; Wang, X.; Johnson, S.; Empey, D. Develop Performance Specifications for Frontal Collision Warning System for Transit Buses. In Proceedings of the 7th World Congress on Intelligent Transport Systems, Turin, Italy, 6–9 November 2000. [Google Scholar]
- Wisultschew, C.; Mujica, G.; Lanza-Gutierrez, J.M.; Portilla, J. 3D-LIDAR based object detection and tracking on the edge of IoT for railway level crossing. IEEE Access 2021, 9, 35718–35729. [Google Scholar] [CrossRef]
- Muzammel, M.; Yusoff, M.Z.; Malik, A.S.; Saad, M.N.M.; Meriaudeau, F. Motorcyclists safety system to avoid rear end collisions based on acoustic signatures. In Proceedings of the Thirteenth International Conference on Quality Control by Artificial Vision 2017, Tokyo, Japan, 14–16 May 2017; Volume 10338, pp. 287–294. [Google Scholar]
- Goodall, N.; Ohlms, P.B. Evaluation of a Transit Bus Collision Avoidance Warning System in Virginia; Virginia Transportation Research Council (VTRC): Charlottesville, VA, USA, 2022. [Google Scholar]
- Tseng, D.C.; Hsu, C.T.; Chen, W.S. Blind-spot vehicle detection using motion and static features. Int. J. Mach. Learn. Comput. 2014, 4, 516. [Google Scholar] [CrossRef]
- Wu, B.F.; Huang, H.Y.; Chen, C.J.; Chen, Y.H.; Chang, C.W.; Chen, Y.L. A vision-based blind spot warning system for daytime and nighttime driver assistance. Comput. Electr. Eng. 2013, 39, 846–862. [Google Scholar] [CrossRef]
- Singh, S.; Meng, R.; Nelakuditi, S.; Tong, Y.; Wang, S. SideEye: Mobile assistant for blind spot monitoring. In Proceedings of the 2014 international conference on computing, networking and communications (ICNC), Honolulu, HI, USA, 3–6 February 2014; pp. 408–412. [Google Scholar]
- Dooley, D.; McGinley, B.; Hughes, C.; Kilmartin, L.; Jones, E.; Glavin, M. A blind-zone detection method using a rear-mounted fisheye camera with combination of vehicle detection methods. IEEE Trans. Intell. Transp. Syst. 2015, 17, 264–278. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7310–7311. [Google Scholar]
- Du, L.; Zhang, R.; Wang, X. Overview of two-stage object detection algorithms. J. Phys. Conf. Ser. 2020, 1544, 012033. [Google Scholar] [CrossRef]
- Theckedath, D.; Sedamkar, R. Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks. SN Comput. Sci. 2020, 1, 1–7. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Yang, L.; Jiang, D.; Xia, X.; Pei, E.; Oveneke, M.C.; Sahli, H. Multimodal measurement of depression using deep learning models. In Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge, Mountain View, CA, USA, 23 October 2017; pp. 53–59. [Google Scholar]
- Muzammel, M.; Salam, H.; Othmani, A. End-to-end multimodal clinical depression recognition using deep neural networks: A comparative analysis. Comput. Methods Programs Biomed. 2021, 211, 106433. [Google Scholar] [CrossRef]
- Mendels, G.; Levitan, S.I.; Lee, K.Z.; Hirschberg, J. Hybrid Acoustic-Lexical Deep Learning Approach for Deception Detection. In Proceedings of the Interspeech, Stockholm, Sweden, 20–24 August 2017; pp. 1472–1476. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732. [Google Scholar]
- Guo, X.; Singh, S.; Lee, H.; Lewis, R.L.; Wang, X. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. Adv. Neural Inf. Process. Syst. 2014, 27, 3338–3346. [Google Scholar]
- Cui, Z.; Chang, H.; Shan, S.; Zhong, B.; Chen, X. Deep network cascade for image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 49–64. [Google Scholar]
- Han, Y.; Jiang, T.; Ma, Y.; Xu, C. Pretraining convolutional neural networks for image-based vehicle classification. Adv. Multimed. 2018, 2018, 3138278. [Google Scholar] [CrossRef]
- Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv 2013, arXiv:1312.6229. [Google Scholar]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
- Dai, J.; Li, Y.; He, K.; Sun, J. R-FCN: Object Detection via Region-Based Fully Convolutional Networks. arXiv 2016, arXiv:1605.06409. [Google Scholar]
- Chu, W.; Liu, Y.; Shen, C.; Cai, D.; Hua, X.S. Multi-task vehicle detection with region-of-interest voting. IEEE Trans. Image Process. 2017, 27, 432–441. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Jocher, G.; Nishimura, K.; Mineeva, T.; Vilariño, R. Yolov5. Code Repository. 2020. Available online: https://github.com/ultralytics/yolov5 (accessed on 28 June 2022).
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
- Xu, X.; Zhao, M.; Shi, P.; Ren, R.; He, X.; Wei, X.; Yang, H. Crack Detection and Comparison Study Based on Faster R-CNN and Mask R-CNN. Sensors 2022, 22, 1215. [Google Scholar] [CrossRef] [PubMed]
- Sivaraman, S.; Trivedi, M.M. A general active-learning framework for on-road vehicle recognition and tracking. IEEE Trans. Intell. Transp. Syst. 2010, 11, 267–276. [Google Scholar] [CrossRef]
- Muzammel, M.; Yusoff, M.Z.; Meriaudeau, F. Rear-end vision-based collision detection system for motorcyclists. J. Electron. Imaging 2017, 26, 1–14. [Google Scholar] [CrossRef]
- Roychowdhury, S.; Muppirisetty, L.S. Fast proposals for image and video annotation using modified echo state networks. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1225–1230. [Google Scholar]
- Satzoda, R.K.; Trivedi, M.M. Multipart vehicle detection using symmetry-derived analysis and active learning. IEEE Trans. Intell. Transp. Syst. 2015, 17, 926–937. [Google Scholar] [CrossRef]
- Muzammel, M.; Yusoff, M.Z.; Meriaudeau, F. Event-related potential responses of motorcyclists towards rear end collision warning system. IEEE Access 2018, 6, 31609–31620. [Google Scholar] [CrossRef]
- Fort, A.; Collette, B.; Bueno, M.; Deleurence, P.; Bonnard, A. Impact of totally and partially predictive alert in distracted and undistracted subjects: An event related potential study. Accid. Anal. Prev. 2013, 50, 578–586. [Google Scholar] [CrossRef]
- Bueno, M.; Fabrigoule, C.; Deleurence, P.; Ndiaye, D.; Fort, A. An electrophysiological study of the impact of a Forward Collision Warning System in a simulator driving task. Brain Res. 2012, 1470, 69–79. [Google Scholar] [CrossRef] [PubMed]
Dataset | Data Description | Source of Recording | Total Images |
---|---|---|---|
Self-Recorded Dataset for Blind Spot Collision Detection | Different road scenarios with multiple vehicles and various traffic and lighting conditions. | Bus | 3000 |
LISA-Dense [59] | Multiple vehicles, dense traffic, daytime, highway. | Car | 1600 |
LISA-Sunny [59] | Multiple vehicles, medium traffic, daytime, highway. | Car | 300 |
LISA-Urban [59] | Single vehicle, urban scenario, cloudy morning. | Car | 300 |
Total | | | 5200 |
Proposed Models | Dataset | Frame Rate (fps) |
---|---|---|
Model 1 | Self-Recorded | 1.03 |
Model 1 | LISA-Dense | 1.10 |
Model 1 | LISA-Urban | 1.39 |
Model 1 | LISA-Sunny | 1.14 |
Model 2 | Self-Recorded | 0.89 |
Model 2 | LISA-Dense | 0.94 |
Model 2 | LISA-Urban | 1.12 |
Model 2 | LISA-Sunny | 1.00 |
Reference | Dataset | TPR (%) | FDR (%) | Frame Rate (fps) |
---|---|---|---|---|
Proposed Approach 2 | Self-Recorded | 98.72 | 3.49 | 0.89 |
Proposed Approach 2 | LISA-Dense | 98.06 | 3.12 | 0.94 |
Proposed Approach 2 | LISA-Urban | 99.45 | 1.67 | 1.12 |
Proposed Approach 2 | LISA-Sunny | 99.34 | 2.78 | 1.00 |
Proposed Approach 1 | Self-Recorded | 98.19 | 3.05 | 1.03 |
Proposed Approach 1 | LISA-Dense | 97.87 | 3.98 | 1.10 |
Proposed Approach 1 | LISA-Urban | 99.02 | 1.66 | 1.39 |
Proposed Approach 1 | LISA-Sunny | 98.89 | 3.17 | 1.14 |
S. Roychowdhury et al. (2018) [61] | LISA-Urban | 100.00 | 4.50 | 1.10 |
S. Roychowdhury et al. (2018) [61] | LISA-Sunny | 98.00 | 4.10 | 1.10 |
M. Muzammel et al. (2017) [60] | LISA-Dense | 95.01 | 5.01 | 29.04 |
M. Muzammel et al. (2017) [60] | LISA-Urban | 94.00 | 6.60 | 25.06 |
M. Muzammel et al. (2017) [60] | LISA-Sunny | 97.00 | 6.03 | 37.50 |
R. K. Satzoda (2016) [62] | LISA-Dense | 94.50 | 6.80 | 15.50 |
R. K. Satzoda (2016) [62] | LISA-Sunny | 98.00 | 9.00 | 25.40 |
S. Sivaraman (2010) [59] | LISA-Dense | 95.00 | 6.40 | — |
S. Sivaraman (2010) [59] | LISA-Urban | 91.70 | 25.50 | — |
S. Sivaraman (2010) [59] | LISA-Sunny | 99.80 | 8.50 | — |
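For reference, the TPR and FDR values in the table above are assumed to follow the standard detection-metric definitions in terms of true positives (TP), false positives (FP), and false negatives (FN); the authors' exact formulation may differ:

$$\mathrm{TPR} = \frac{TP}{TP + FN} \times 100\%, \qquad \mathrm{FDR} = \frac{FP}{FP + TP} \times 100\%$$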
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).