research-article

A survey of variational and CNN-based optical flow techniques

Published: 01 March 2019

Abstract

Dense motion estimates obtained from optical flow techniques play a significant role in many image processing and computer vision tasks. Remarkable progress has been made both in theory and in practical application. In this paper, we provide a systematic review of recent optical flow techniques, focusing on the variational method and on approaches based on Convolutional Neural Networks (CNNs), the two categories that have led to state-of-the-art performance. We discuss recent modifications and extensions of the original models and highlight remaining challenges. For the first time, we provide an overview of recent CNN-based optical flow methods and discuss their potential and current limitations.
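As background for the variational method named in the abstract, the classical formulation of Horn and Schunck minimizes an energy that combines a brightness-constancy data term with a smoothness term (standard notation, not reproduced from this abstract):

```latex
E(u, v) = \int_{\Omega} \left( I_x u + I_y v + I_t \right)^2
        + \alpha^2 \left( |\nabla u|^2 + |\nabla v|^2 \right) \, \mathrm{d}x \, \mathrm{d}y
```

Here $(u, v)$ is the flow field, $I_x$, $I_y$, $I_t$ are the spatial and temporal image derivatives, and $\alpha$ weights the smoothness (regularization) term against the data term.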

Highlights

Introducing optical flow: the basic concepts, the characteristics of the variational and CNN-based techniques, and the evaluation measures.
Discussing developments of the variational method, analyzing its main challenges and the corresponding strategies for addressing them.
Describing the concept of the CNN-based technique, and giving a detailed discussion of its open issues.
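To make the variational idea above concrete, the following is a minimal, illustrative sketch of the classical Horn–Schunck iteration (fixed-point updates with a Jacobi-style neighborhood average approximating the smoothness term). The function name and parameters are our own, and this is a teaching sketch rather than any specific method surveyed here.

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=200):
    """Minimal Horn-Schunck optical flow between two grayscale frames.

    Returns (u, v), the per-pixel horizontal and vertical displacement.
    """
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    # Spatial derivatives averaged over both frames; temporal derivative.
    Ix = 0.5 * (np.gradient(im1, axis=1) + np.gradient(im2, axis=1))
    Iy = 0.5 * (np.gradient(im1, axis=0) + np.gradient(im2, axis=0))
    It = im2 - im1

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def avg(f):
        # 4-neighbor average, a discrete stand-in for the smoothness term.
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = avg(u), avg(v)
        # Residual of the linearized brightness-constancy constraint.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

For a smooth periodic pattern translated one pixel to the right, the recovered horizontal flow converges toward +1 in the interior, while the vertical flow stays near zero.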



Published In

Image Communication, Volume 72, Issue C, March 2019, 148 pages.
Publisher: Elsevier Science Inc., United States.


Author Tags

1. Optical flow
2. Variational method
3. CNN-based method
4. Evaluation measures
5. Challenges
