| Peer-Reviewed

Research on Video Saliency Detection Via Contrast and Self-Adaptive Transfer

Received: 20 April 2017    Published: 20 April 2017
Abstract

Although salient motion detection has achieved great success in recent years, video saliency detection remains challenging for non-stationary videos and videos containing slowly-moving objects, and these failure cases significantly degrade subsequent applications. A more robust, stable, and precise method is therefore urgently needed to overcome these limitations. Inspired by a basic rule of the human visual system, namely that human attention is easily attracted by two independent factors, the motion saliency cue and the color saliency cue, this paper develops a novel salient motion detection method that fuses motion saliency with color saliency and refines the preliminary saliency map by self-adaptive transfer via a newly designed intra-frame correlation. Comprehensive experiments against state-of-the-art methods on four publicly available benchmarks demonstrate the superiority of the proposed method in both robustness and detection precision.
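To make the fusion idea in the abstract concrete, the following is a minimal, hypothetical sketch of combining a motion saliency cue with a color saliency cue into a preliminary saliency map. It is not the paper's actual algorithm: the color cue here is a crude global color-contrast measure, the motion cue is simple frame differencing, and the fusion weight `alpha` and all function names are illustrative assumptions.

```python
import numpy as np

def color_saliency(frame):
    # Global color-contrast saliency: per-pixel distance of the color
    # from the frame's mean color (a crude stand-in for contrast-based
    # methods such as global-contrast salient region detection).
    mean = frame.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(frame - mean, axis=2)
    return sal / (sal.max() + 1e-8)

def motion_saliency(prev, curr):
    # Motion saliency from simple frame differencing; the paper uses
    # richer motion cues, but the fusion step is analogous.
    diff = np.abs(curr.astype(float) - prev.astype(float)).mean(axis=2)
    return diff / (diff.max() + 1e-8)

def fuse(ms, cs, alpha=0.5):
    # Preliminary saliency map as a convex combination of the two cues.
    sal = alpha * ms + (1 - alpha) * cs
    return sal / (sal.max() + 1e-8)

# Two synthetic 64x64 RGB frames: a bright patch shifts right by 4 pixels.
prev = np.zeros((64, 64, 3)); prev[20:30, 20:30] = 1.0
curr = np.zeros((64, 64, 3)); curr[20:30, 24:34] = 1.0

sal = fuse(motion_saliency(prev, curr), color_saliency(curr))
print(sal.shape, round(float(sal.max()), 3))
```

In this toy example the moving bright patch scores highest because both cues agree there; the paper's contribution lies in refining such a preliminary map via self-adaptive transfer with intra-frame correlation, which this sketch does not attempt.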

Published in Science Discovery (Volume 5, Issue 2)
DOI 10.11648/j.sd.20170502.14
Page(s) 100-107
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

Saliency Detection, Contrast, Self-Adaptive, Saliency-Transfer

References
[1] Zhang D, Javed O, Shah M. Video object segmentation through spatially accurate and temporally dense extraction of primary object regions [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2013: 628-635.
[2] Chen C, Li S, Qin H, et al. Real-time and robust object tracking in video via low-rank coherency analysis in feature space [J]. Pattern Recognition, 2015, 48 (9): 2885-2905.
[3] Zhou F, Bing Kang S, Cohen M. F. Time-mapping using space-time saliency [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 3358-3365.
[4] Liang D, Kaneko S, Hashimoto M, et al. Robust object detection in severe imaging conditions using co-occurrence background model [J]. International Journal of Optomechatronics, 2014, 8 (1): 14-29.
[5] St-Charles P. L, Bilodeau G. A, Bergevin R. Subsense: A universal change detection method with local adaptive sensitivity [J]. IEEE Transactions on Image Processing, 2015, 24 (1): 359-373.
[6] Chen C, Li S, Qin H, et al. Robust salient motion detection in non-stationary videos via novel integrated strategies of spatio-temporal coherency clues and low-rank analysis [J]. Pattern Recognition, 2016, 52: 410-432.
[7] Gao Z, Cheong L. F, Wang Y. X. Block-sparse RPCA for salient motion detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36 (10): 1975-1987.
[8] Cheng M. M, Mitra N. J, Huang X, et al. Global contrast based salient region detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37 (3): 569-582.
[9] Gastal E. S. L, Oliveira M. M. Domain transform for edge-aware image and video processing [C]// ACM Transactions on Graphics (TOG). ACM, 2011, 30 (4): 69.
[10] Achanta R, Shaji A, Smith K, et al. SLIC superpixels compared to state-of-the-art superpixel methods [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34 (11): 2274-2282.
[11] Fang Y, Wang Z, Lin W, et al. Video saliency incorporating spatiotemporal cues and uncertainty weighting [J]. IEEE Transactions on Image Processing, 2014, 23 (9): 3910-3921.
[12] Wang W, Shen J, Porikli F. Saliency-aware geodesic video object segmentation [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 3395-3402.
[13] Liu C. Beyond pixels: exploring new representations and applications for motion analysis [D]. Massachusetts Institute of Technology, 2009.
[14] Wang W, Shen J, Shao L. Consistent video saliency using local gradient flow optimization and global refinement [J]. IEEE Transactions on Image Processing, 2015, 24 (11): 4185-4196.
[15] Tsai D, Flagg M, Nakazawa A, et al. Motion coherent tracking using multi-label MRF optimization [J]. International Journal of Computer Vision, 2012, 100 (2): 190-202.
[16] Li F, Kim T, Humayun A, et al. Video segmentation by tracking many figure-ground segments [C]// Proceedings of the IEEE International Conference on Computer Vision. 2013: 2192-2199.
[17] Brox T, Malik J. Object segmentation by long term analysis of point trajectories [C]// European Conference on Computer Vision. Springer Berlin Heidelberg, 2010: 282-295.
[18] Fukuchi K, Miyazato K, Kimura A, et al. Saliency-based video segmentation with graph cuts and sequentially updated priors [C]// IEEE International Conference on Multimedia and Expo. IEEE, 2009: 638-641.
[19] Kim H, Kim Y, Sim J. Y, et al. Spatiotemporal saliency detection for video sequences based on random walk with restart [J]. IEEE Transactions on Image Processing, 2015, 24 (8): 2552-2564.
[20] Itti L, Koch C, Niebur E. A Model of Saliency-Based Visual Attention for Rapid Scene Analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20 (11): 1254-1259.
[21] Hou X, Zhang L. Saliency detection: A spectral residual approach [C]// IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2007: 1-8.
[22] Achanta R, Estrada F, Wils P, et al. Salient region detection and segmentation [C]// International Conference on Computer Vision Systems. Springer Berlin Heidelberg, 2008: 66-75.
[23] Cheng M. M, Mitra N. J, Huang X, et al. Global contrast based salient region detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37 (3): 569-582.
[24] Perazzi F, Krähenbühl P, Pritch Y, et al. Saliency filters: Contrast based filtering for salient region detection [C]// IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012: 733-740.
Cite This Article
  • APA Style

    Wang Yongguang, Hao Aimin, Li Shuai. (2017). Research on Video Saliency Detection Via Contrast and Self-Adaptive Transfer. Science Discovery, 5(2), 100-107. https://doi.org/10.11648/j.sd.20170502.14


    ACS Style

    Wang Yongguang; Hao Aimin; Li Shuai. Research on Video Saliency Detection Via Contrast and Self-Adaptive Transfer. Sci. Discov. 2017, 5(2), 100-107. doi: 10.11648/j.sd.20170502.14


    AMA Style

    Wang Yongguang, Hao Aimin, Li Shuai. Research on Video Saliency Detection Via Contrast and Self-Adaptive Transfer. Sci Discov. 2017;5(2):100-107. doi: 10.11648/j.sd.20170502.14




Author Information
  • Wang Yongguang, State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China

  • Hao Aimin, State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China

  • Li Shuai, State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
