DOI: 10.1145/3664647.3681239

Unraveling Motion Uncertainty for Local Motion Deblurring

Published: 28 October 2024

Abstract

In real-world photography, local motion blur often arises from the interplay between moving objects and a stationary background during exposure. Existing deblurring methods struggle with local motion deblurring due to (i) the presence of arbitrary localized blurs with uncertain extents and (ii) a limited ability to accurately identify specific blurs caused by ambiguous motion boundaries. These limitations often lead to suboptimal blur-map estimates and final deblurred images. To this end, we propose a novel method named Motion-Uncertainty-Guided Network (MUGNet), which harnesses a probabilistic representational model to explicitly address the intricacies stemming from motion uncertainty. Specifically, MUGNet consists of two key components, a motion-uncertainty quantification (MUQ) module and a motion-masked separable attention (M2SA) module, which serve complementary purposes. The MUQ module learns a conditional distribution for accurate and reliable blur-map estimation, while the M2SA module enhances the representation of regions affected by local motion blur and of the static background by promoting extensive global interactions. Extensive experiments demonstrate the superiority of MUGNet. The code is publicly available at: https://github.com/zeyuxiao1997/MUGNet.
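The MUQ module, as described, learns a conditional distribution over blur maps rather than a point estimate. One standard way to realize such probabilistic estimation is to predict a per-pixel mean and log-variance and sample via the reparameterization trick; the sample can then be binarized into a motion mask of the kind that could gate attention toward blurred regions, as M2SA uses motion masks. The sketch below is an illustrative NumPy toy under these assumptions, not the authors' implementation; all function and variable names are hypothetical.

```python
import numpy as np

def sample_blur_map(mu, log_var, rng):
    # Reparameterization trick: m = mu + sigma * eps, with eps ~ N(0, I).
    # In a real autodiff framework this keeps sampling differentiable
    # w.r.t. mu and log_var; NumPy is used here only for illustration.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
# Hypothetical network outputs for a 4x4 image:
mu = np.full((4, 4), 0.7)        # per-pixel blur-probability mean
log_var = np.full((4, 4), -4.0)  # low variance -> confident estimate
m = sample_blur_map(mu, log_var, rng)
# Binarize into a motion mask that could gate attention to blurred regions.
mask = (m > 0.5).astype(np.float32)
```

In the actual MUGNet, the distribution parameters would be produced by a learned network conditioned on the blurry input, and a variational objective would regularize them; the sketch shows only the sampling step.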


    Published In

    MM '24: Proceedings of the 32nd ACM International Conference on Multimedia
    October 2024
    11719 pages
    ISBN:9798400706868
    DOI:10.1145/3664647
    This work is licensed under a Creative Commons Attribution International 4.0 License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. image deblurring
    2. image restoration
    3. local deblurring
    4. motion uncertainty

    Qualifiers

    • Research-article

    Conference

    MM '24
    Sponsor:
    MM '24: The 32nd ACM International Conference on Multimedia
    October 28 - November 1, 2024
    Melbourne VIC, Australia

    Acceptance Rates

    MM '24 Paper Acceptance Rate 1,150 of 4,385 submissions, 26%;
    Overall Acceptance Rate 2,145 of 8,556 submissions, 25%
