Research Article

Salient Object Segmentation via Effective Integration of Saliency and Objectness

Published: 01 August 2017 Publication History

Abstract

This paper proposes an effective salient object segmentation method based on the graph-based integration of saliency and objectness. Starting from a superpixel segmentation of the input image, a graph is built in which superpixels are represented by regular vertices and background seed vertices, with the addition of a terminal vertex. The edge weights on the graph are defined by integrating the differences of appearance, saliency, and objectness between superpixels. The object probability of each superpixel is then measured by finding the shortest path from the corresponding vertex to the terminal vertex on the graph, and the resulting object probability map generally highlights salient objects and suppresses background regions better than either the saliency map or the objectness map alone. Finally, the object probability map is used to initialize the salient object and background models, and is incorporated into a graph cut framework to obtain the final salient object segmentation. Extensive experiments on three public benchmark datasets show that the proposed method consistently improves salient object segmentation performance and outperforms state-of-the-art salient object segmentation methods. The experiments also demonstrate that the proposed graph-based integration is more effective than other fusion schemes and is robust to saliency maps generated by various saliency models.
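The shortest-path step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact edge-weight formula (combining appearance, saliency, and objectness differences) is defined in the paper, so weights are taken here as given inputs, and the function name, interface, and the zero-cost link between the terminal vertex and the background seeds are assumptions.

```python
import heapq

def object_probability(n_sp, edges, bg_seeds):
    """For each of n_sp superpixel vertices, return the shortest-path
    distance to a terminal vertex that is attached to the background
    seed vertices. A larger distance suggests a higher object probability.

    edges: dict mapping (i, j) -> nonnegative weight between superpixels
    bg_seeds: indices of background seed superpixels
    """
    term = n_sp  # terminal vertex gets the next free index
    adj = {v: [] for v in range(n_sp + 1)}
    for (i, j), w in edges.items():
        adj[i].append((j, w))
        adj[j].append((i, w))
    # Assumed convention: terminal connects to background seeds at zero cost
    for s in bg_seeds:
        adj[term].append((s, 0.0))
        adj[s].append((term, 0.0))
    # Dijkstra from the terminal yields every vertex's shortest path to it
    dist = {v: float("inf") for v in adj}
    dist[term] = 0.0
    heap = [(0.0, term)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return [dist[v] for v in range(n_sp)]

# On a chain of four superpixels whose last one is a background seed,
# distances to the terminal grow with remoteness from the background:
probs = object_probability(4, {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}, [3])
# → [3.0, 2.0, 1.0, 0.0]
```

The resulting per-superpixel values would then be normalized into the object probability map that seeds the subsequent graph cut stage.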


Published In

IEEE Transactions on Multimedia, Volume 19, Issue 8, Aug. 2017, 273 pages

Publisher

IEEE Press

            Qualifiers

            • Research-article

            Contributors

            Other Metrics

            Bibliometrics & Citations

            Bibliometrics

            Article Metrics

            • Downloads (Last 12 months)0
            • Downloads (Last 6 weeks)0
            Reflects downloads up to 29 Jan 2025

            Other Metrics

            Citations

            Cited By

            View all

            View Options

            View options

            Figures

            Tables

            Media

            Share

            Share

            Share this Publication link

            Share on social media