
Railway Traffic Object Detection Using Differential Feature Fusion Convolution Neural Network

Published: 01 March 2021

Abstract

Railway shunting accidents, in which trains collide with obstacles, often occur because of human error or fatigue. It is therefore necessary to detect traffic objects in front of the train and inform the driver in time to take action. To detect such objects in railway scenes, we propose an object-detection method based on a differential feature fusion convolutional neural network (DFF-Net). DFF-Net comprises two modules: a prior object-detection module and an object-detection module. The prior module produces initial anchor boxes for the subsequent detection module. Taking these anchor boxes as input, the object-detection module applies a differential feature fusion sub-module that enriches the semantic information used for detection, improving performance, particularly for small objects. In experiments on a railway traffic dataset, the proposed method was significantly more accurate and more efficient than current state-of-the-art detectors for object detection on railway tracks. Evaluations on PASCAL VOC2007 and VOC2012 likewise showed that the proposed method significantly outperformed state-of-the-art methods.
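To make the fusion idea concrete, here is a minimal sketch, assuming a PyTorch implementation, of how a feature fusion block of this general kind might combine a shallow, high-resolution feature map with an upsampled deeper, semantically richer one; this illustrates the usual pattern behind small-object gains, not DFF-Net's exact design, and all class names, layer choices, and channel widths below are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionBlock(nn.Module):
    """Hypothetical fusion block: merges a shallow (fine detail) map with a
    deep (strong semantics) map so small objects keep spatial precision
    while gaining semantic context. Illustrative only."""

    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        self.lateral = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)  # project shallow map
        self.reduce = nn.Conv2d(deep_ch, out_ch, kernel_size=1)      # project deep map
        self.smooth = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        lat = self.lateral(shallow)
        # Upsample the deep map to the shallow map's spatial size.
        top = F.interpolate(self.reduce(deep), size=lat.shape[-2:],
                            mode="bilinear", align_corners=False)
        # Fuse (here by element-wise addition) and smooth to reduce aliasing.
        return F.relu(self.smooth(lat + top))

if __name__ == "__main__":
    fuse = FeatureFusionBlock(shallow_ch=256, deep_ch=512, out_ch=256)
    shallow = torch.randn(1, 256, 64, 64)  # e.g. an earlier backbone stage
    deep = torch.randn(1, 512, 32, 32)     # e.g. a later backbone stage
    print(fuse(shallow, deep).shape)       # torch.Size([1, 256, 64, 64])
```

In a detector of this kind, the fused map would then feed the detection head that refines the anchor boxes produced by the prior module; how DFF-Net actually differences and combines its feature maps is specified in the paper itself.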

      Published In

      IEEE Transactions on Intelligent Transportation Systems, Volume 22, Issue 3
      March 2021, 611 pages

      Publisher

      IEEE Press

      Qualifiers

      • Research-article
