A Novel and Practical Scheme for Resolving the Quality of Samples in Background Modeling
Abstract
1. Introduction
2. Related Work
3. Methodology
3.1. Bi-Variance
3.2. NoIV-1
3.3. Algorithm
Algorithm 1: The implementation of bi-variance and NoIV-1.
4. Experiment
- In the basic scenario, the original code of each method was executed without modification. Because the methods begin testing at different points (KDE and SuBSENSE test after their training frames, ViBe starts from the first frame, and PBAS leaves the first frame blank), we uniformly took the training number given in each scene's configuration file of CDnet 2014 as the starting point of the test.
- The second scenario extended the basic one by using bi-variance to determine the "right" number of training frames.
- The third scenario combined the basic one with both of our methods, bi-variance and NoIV-1.
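The starting-point rule in the basic scenario can be sketched as follows. The per-scene counts below are hypothetical placeholders; the real values come from each scene's CDnet 2014 configuration file:

```python
# A minimal sketch of the starting-point rule: evaluation begins right after
# the training frames configured for each scene. The counts here are
# hypothetical placeholders, not the actual CDnet 2014 values.
training_counts = {"highway": 300, "office": 300}  # hypothetical values

def first_test_frame(scene: str, default_training: int = 0) -> int:
    """Return the 1-indexed frame at which evaluation starts for a scene."""
    return training_counts.get(scene, default_training) + 1

print(first_test_frame("highway"))  # → 301
```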
4.1. Quantitative Results and Analysis
4.2. Qualitative Results and Analysis
4.3. Convergent Frames and Analysis
- the convergent frame differed from scene to scene;
- there was a large gap between the minimum convergent frame (32) and the maximum (380) across all scenes;
- challenging scenes, such as badWeather, dynamicBackground, PTZ, and turbulence, took longer to converge.
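The gap noted above can be checked directly against the per-scene values reported in the convergent-frame table; a small subset is enough to reproduce the extremes:

```python
# Subset of the per-scene convergent frames from the paper's table; canoe
# (dynamicBackground) and turbulence1 mark the two extremes.
convergent_frames = {
    "canoe": 32, "blizzard": 38, "skating": 125,
    "boats": 237, "turbulence0": 373, "turbulence1": 380,
}
lo, hi = min(convergent_frames.values()), max(convergent_frames.values())
print(lo, hi, hi - lo)  # → 32 380 348
```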
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Uddin, M.A.; Joolee, J.B.; Alam, A.; Lee, Y.K. Human action recognition using adaptive local motion descriptor in spark. IEEE Access 2017, 5, 21157–21167.
- Lin, L.; Zhu, M. Efficient Tracking of Moving Target Based on an Improved Fast Differential Evolution Algorithm. IEEE Access 2018, 6, 6820–6828.
- Singh, J.P.; Jain, S.; Arora, S.; Singh, U.P. Vision-Based Gait Recognition: A Survey. IEEE Access 2018.
- Chou, K.P.; Prasad, M.; Wu, D.; Sharma, N.; Li, D.L.; Lin, Y.F.; Blumenstein, M.; Lin, W.C.; Lin, C.T. Robust Feature-Based Automated Multi-View Human Action Recognition System. IEEE Access 2018, 6, 15283–15296.
- Sheikh, Y.; Shah, M. Bayesian modeling of dynamic scenes for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1778–1792.
- Wang, Y.; Jodoin, P.M.; Porikli, F.; Konrad, J.; Benezeth, Y.; Ishwar, P. CDnet 2014: An expanded change detection benchmark dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 23–28 June 2014; pp. 387–394.
- Stauffer, C.; Grimson, W.E.L. Adaptive background mixture models for real-time tracking. In Proceedings of the Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999; Volume 2, pp. 246–252.
- Li, S.Z. Markov random field models in computer vision. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 1994; pp. 361–370.
- Elgammal, A.; Harwood, D.; Davis, L. Non-parametric model for background subtraction. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2000; pp. 751–767.
- Barnich, O.; Van Droogenbroeck, M. ViBe: A universal background subtraction algorithm for video sequences. IEEE Trans. Image Process. 2011, 20, 1709–1724.
- St-Charles, P.L.; Bilodeau, G.A.; Bergevin, R. SuBSENSE: A universal change detection method with local adaptive sensitivity. IEEE Trans. Image Process. 2015, 24, 359–373.
- Yang, L.; Li, J.; Luo, Y.; Zhao, Y.; Cheng, H.; Li, J. Deep Background Modeling Using Fully Convolutional Network. IEEE Trans. Intell. Transp. Syst. 2018, 19, 254–262.
- Hofmann, M.; Tiefenbacher, P.; Rigoll, G. Background segmentation with feedback: The pixel-based adaptive segmenter. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Providence, RI, USA, 16–21 June 2012; pp. 38–43.
- Zeng, D.; Zhu, M. Background Subtraction Using Multiscale Fully Convolutional Network. IEEE Access 2018, 6, 16010–16021.
- Varga, D.; Havasi, L.; Szirányi, T. Pedestrian detection in surveillance videos based on CS-LBP feature. In Proceedings of the 2015 International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS), Budapest, Hungary, 3–5 June 2015; pp. 413–417.
- Chiranjeevi, P.; Sengupta, S. Robust detection of moving objects in video sequences through rough set theory framework. Image Vis. Comput. 2012, 30, 829–842.
- Wang, S.; Wang, Y.; Chen, Y.; Pan, P.; Sun, Z.; He, G. Robust PCA Using Matrix Factorization for Background/Foreground Separation. IEEE Access 2018, 6, 18945–18953.
- Fang, W.; Zhang, T.; Zhao, C.; Soomro, D.B.; Taj, R.; Hu, H. Background Subtraction Based on Random Superpixels Under Multiple Scales for Video Analytics. IEEE Access 2018, 6, 33376–33386.
- Kim, W.; Jung, C. Illumination-invariant background subtraction: Comparative review, models, and prospects. IEEE Access 2017, 5, 8369–8384.
- Cuevas, C.; Martínez, R.; García, N. Detection of stationary foreground objects: A survey. Comput. Vis. Image Underst. 2016, 152, 41–57.
- Maddalena, L.; Petrosino, A. Background subtraction for moving object detection in RGBD data: A survey. J. Imaging 2018, 4, 71.
- Bouwmans, T. Traditional and recent approaches in background modeling for foreground detection: An overview. Comput. Sci. Rev. 2014, 11, 31–66.
- Bouwmans, T.; Javed, S.; Sultana, M.; Jung, S.K. Deep Neural Network Concepts for Background Subtraction: A Systematic Review and Comparative Evaluation. arXiv 2018, arXiv:1811.05255.
- Zhong, J.; Sclaroff, S. Segmenting foreground objects from a dynamic textured background via a robust Kalman filter. In Proceedings of the Ninth IEEE International Conference on Computer Vision 2003, Nice, France, 13–16 October 2003; pp. 44–50.
- Wang, H.; Suter, D. Background subtraction based on a robust consensus method. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, 20–24 August 2006; Volume 1, pp. 223–226.
- Han, G.; Cai, X.; Wang, J. Object detection based on combination of visible and thermal videos using a joint sample consensus background model. J. Softw. 2013, 8, 987–995.
- St-Charles, P.L.; Bilodeau, G.A.; Bergevin, R. Universal background subtraction using word consensus models. IEEE Trans. Image Process. 2016, 25, 4768–4781.
- Luo, H.; Li, B.; Zhou, Z. Improved background subtraction based on word consensus models. In Proceedings of the 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 6–9 November 2017; pp. 689–694.
- Javed, S.; Oh, S.; Jung, S. IPBAS: Improved pixel based adaptive background segmenter for background subtraction. In Proceedings of the Conference on Human Computer Interaction, Lisbon, Portugal, 15–17 July 2014.
- Kim, K.; Chalidabhongse, T.H.; Harwood, D.; Davis, L. Real-time foreground–background segmentation using codebook model. Real-Time Imaging 2005, 11, 172–185.
- Wren, C.R.; Azarbayejani, A.; Darrell, T.; Pentland, A.P. Pfinder: Real-time tracking of the human body. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 780–785.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 142–158.
- Varadarajan, S.; Miller, P.; Zhou, H. Region-based Mixture of Gaussians modelling for foreground detection in dynamic scenes. Pattern Recogn. 2015, 48, 3488–3503.
- Toyama, K.; Krumm, J.; Brumitt, B.; Meyers, B. Wallflower: Principles and practice of background maintenance. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1, pp. 255–261.
- Elgammal, A.; Duraiswami, R.; Harwood, D.; Davis, L.S. Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proc. IEEE 2002, 90, 1151–1163.
- Bloisi, D.; Iocchi, L. Independent multimodal background subtraction. CompIMAGE 2012, 39–44.
- St-Charles, P.L.; Bilodeau, G.A. Improving background subtraction using local binary similarity patterns. In Proceedings of the 2014 IEEE Winter Conference on Applications of Computer Vision (WACV), Steamboat Springs, CO, USA, 24–26 March 2014; pp. 509–515.
- Soatto, S.; Doretto, G.; Wu, Y.N. Dynamic textures. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 439–446.
- López-Rubio, F.J.; López-Rubio, E. Features for stochastic approximation based foreground detection. Comput. Vis. Image Underst. 2015, 133, 30–50.
- Elgammal, A. Figure-ground segmentation–pixel-based. In Visual Analysis of Humans; Springer: Berlin/Heidelberg, Germany, 2011; pp. 31–51.
- Crivelli, T.; Bouthemy, P.; Cernuschi-Frías, B.; Yao, J.F. Simultaneous motion detection and background reconstruction with a conditional mixed-state Markov random field. Int. J. Comput. Vis. 2011, 94, 295–316.
- Haines, T.S.; Xiang, T. Background subtraction with Dirichlet processes. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 99–113.
- Makantasis, K.; Nikitakis, A.; Doulamis, A.D.; Doulamis, N.D.; Papaefstathiou, I. Data-driven background subtraction algorithm for in-camera acceleration in thermal imagery. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2090–2104.
- Tavakkoli, A.; Nicolescu, M.; Bebis, G.; Nicolescu, M. Non-parametric statistical background modeling for efficient foreground region detection. Mach. Vis. Appl. 2009, 20, 395–409.
- Mumtaz, A.; Zhang, W.; Chan, A.B. Joint motion segmentation and background estimation in dynamic scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 368–375.
- Paragios, N.; Ramesh, V. A MRF-Based Approach for Real-Time Subway Monitoring. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; pp. 1034–1040.
- Li, L.; Huang, W.; Gu, I.Y.H.; Tian, Q. Statistical modeling of complex backgrounds for foreground object detection. IEEE Trans. Image Process. 2004, 13, 1459–1472.
- Noh, S.; Jeon, M. A new framework for background subtraction using multiple cues. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 493–506.
- Fisher, R.A. XV.—The correlation between relatives on the supposition of Mendelian inheritance. Earth Environ. Sci. Trans. R. Soc. Edinb. 1919, 52, 399–433.
- Gaughan, E. Introduction to Analysis; American Mathematical Soc.: Providence, RI, USA, 1998; Volume 1.
- Thompson, S.; Gardner, M. Calculus Made Easy; St. Martin's Press: New York, NY, USA, 1998; Volume 1.
- Spivak, M. Calculus. World Student Series; Addison Wesley: Boston, MA, USA, 1967.
| Scenario | Times | Recall | Precision | F-Measure |
|---|---|---|---|---|
| basic | - | 0.677320 | 0.460483 | 0.489317 |
| Bi-Variance only | - | 0.702139 | 0.457460 | 0.475546 |
| Bi-Variance and NoIV-1 | 1 | 0.734267 | 0.500651 | 0.525397 |
| | 2 | 0.744162 | 0.514061 | 0.536388 |
| | 4 | 0.750847 | 0.582704 | 0.586779 |
| | 7 | 0.749623 | 0.521745 | 0.542624 |
| ratio | - | 10.86% | 26.54% | 19.92% |
| Scenario | Times | Recall | Precision | F-Measure |
|---|---|---|---|---|
| basic | - | 0.536554 | 0.518769 | 0.442704 |
| Bi-Variance only | - | 0.536887 | 0.602184 | 0.499898 |
| Bi-Variance and NoIV-1 | 1 | 0.540093 | 0.602429 | 0.501373 |
| | 2 | 0.550113 | 0.614631 | 0.510812 |
| | 4 | 0.581746 | 0.633618 | 0.533265 |
| | 7 | 0.563545 | 0.621112 | 0.518979 |
| ratio | - | 8.42% | 22.14% | 20.46% |
| Scenario | Times | Recall | Precision | F-Measure |
|---|---|---|---|---|
| basic | - | 0.703802 | 0.538357 | 0.556289 |
| Bi-Variance only | - | 0.712157 | 0.577949 | 0.579673 |
| Bi-Variance and NoIV-1 | 1 | 0.718419 | 0.614902 | 0.607733 |
| | 2 | 0.724719 | 0.624375 | 0.614255 |
| | 4 | 0.738673 | 0.635568 | 0.625628 |
| | 7 | 0.732899 | 0.630746 | 0.620714 |
| ratio | - | 4.95% | 18.01% | 12.46% |
| Scenario | Times | Recall | Precision | F-Measure |
|---|---|---|---|---|
| basic | - | 0.706819 | 0.742927 | 0.681875 |
| Bi-Variance only | - | 0.709695 | 0.762506 | 0.693169 |
| Bi-Variance and NoIV-1 | 1 | 0.717996 | 0.774682 | 0.699669 |
| | 2 | 0.730749 | 0.777897 | 0.706701 |
| | 4 | 0.823209 | 0.783015 | 0.767367 |
| | 7 | 0.738729 | 0.780936 | 0.712979 |
| ratio | - | 16.47% | 5.39% | 12.54% |
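The "ratio" rows in the tables above correspond to the relative improvement of the best setting (Bi-Variance and NoIV-1, Times = 4) over the basic scenario. For example, with the first results table's values:

```python
# Relative improvement (in %) of the best variant over the basic scenario,
# computed from the first results table; it reproduces that table's
# "ratio" row.
basic = {"recall": 0.677320, "precision": 0.460483, "f_measure": 0.489317}
best = {"recall": 0.750847, "precision": 0.582704, "f_measure": 0.586779}

def improvement_pct(base: float, new: float) -> float:
    """Relative improvement of `new` over `base`, as a percentage."""
    return (new - base) / base * 100.0

ratios = {k: round(improvement_pct(basic[k], best[k]), 2) for k in basic}
print(ratios)  # → {'recall': 10.86, 'precision': 26.54, 'f_measure': 19.92}
```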
| Category | Scene (Convergent Frame) |
|---|---|
| badWeather | blizzard (38), skating (125), snowFall (66), wetSnow (139) |
| baseline | highway (64), office (60), pedestrians (58), PETS2006 (51) |
| cameraJitter | badminton (116), boulevard (92), sidewalk (88), traffic (96) |
| dynamicBackground | boats (237), canoe (32), fall (87), fountain01 (67), fountain02 (124), overpass (141) |
| intermittentObjectMotion | abandonedBox (154), parking (76), sofa (62), streetLight (38), tramstop (172), winterDriveway (132) |
| lowFramerate | port_0_17fps (233), tramCrossroad_1fps (89), tunnelExit_0_35fps (238), turnpike_0_5fps (118) |
| nightVideos | bridgeEntry (218), busyBoulevard (163), fluidHighway (43), streetCornerAtNight (152), tramStation (116), winterStreet (214) |
| PTZ | continuousPan (60), intermittentPan (195), twoPositionPTZCam (79), zoomInZoomOut (59) |
| shadow | backdoor (48), bungalows (37), busStation (166), copyMachine (85), cubicle (128), peopleInShade (80) |
| thermal | corridor (44), diningRoom (47), lakeSide (42), library (40), park (104) |
| turbulence | turbulence0 (373), turbulence1 (380), turbulence2 (149), turbulence3 (338) |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, G.; Yuan, Z.; Tong, Q.; Wang, Q. A Novel and Practical Scheme for Resolving the Quality of Samples in Background Modeling. Sensors 2019, 19, 1352. https://doi.org/10.3390/s19061352