Stochastic Recursive Gradient Support Pursuit and Its Sparse Representation Applications
Abstract
1. Introduction
1.1. Stochastic Hard Thresholding Methods
1.2. Our Contributions
- (1)
- It is non-trivial to analyze the statistical estimation performance of SRGSP under mild assumptions, and our theoretical results show that SRGSP attains a fast linear convergence rate.
- (2)
- Benefiting from fewer hard thresholding operations than existing algorithms such as SVRGHT, our algorithm has a much lower average per-iteration cost than SVRGHT, which leads to faster convergence.
- (3)
- Moreover, applying the hard thresholding operator to the current variable less frequently retains more gradient optimization information, which improves empirical performance. Stochastic recursive gradient support pursuit thus points to a new direction: reducing the cost of hard thresholding operations while maintaining or even improving performance.
- (4)
- We also evaluate the empirical performance of our SRGSP method on sparse linear regression and sparse logistic regression tasks, as well as on real-world applications such as image denoising and face recognition. Our experimental results demonstrate the efficiency and effectiveness of SRGSP.
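As background for the hard thresholding operations discussed in the contributions above, the following is a minimal sketch of the standard top-k hard thresholding operator used throughout this literature (the function name and example values are ours, not from the paper):

```python
import numpy as np

def hard_threshold(x, k):
    """Project x onto the set of k-sparse vectors: keep the k
    largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]  # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

# Example: keep the 2 largest-magnitude entries of a 5-vector.
x = np.array([3.0, -1.0, 0.5, -4.0, 2.0])
print(hard_threshold(x, 2))  # entries 3.0 and -4.0 survive; the rest become 0
```

Each application of this operator discards the gradient information stored in the zeroed coordinates, which is why reducing the number of hard thresholding steps, as SRGSP does, can both lower per-iteration cost and preserve optimization progress.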
2. Related Work
2.1. Notation
2.2. Sparse Representation-Based Image Denoising
2.3. Sparse Representation-Based Face Recognition
3. Our Stochastic Recursive Gradient Support Pursuit Method
Algorithm 1: Stochastic Recursive Gradient Support Pursuit (SRGSP)
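The listing of Algorithm 1 is not reproduced in this version of the text. As an illustration only, the sketch below combines a SARAH-style recursive gradient estimator with a single hard thresholding step per epoch on a sparsity-constrained least-squares objective. It follows the general idea stated in the contributions (infrequent hard thresholding on top of recursive gradients), but the loop structure, step size, and all names are our assumptions, not the authors' exact Algorithm 1:

```python
import numpy as np

def srgsp_sketch(A, b, k, epochs=15, inner=100, lr=0.1, seed=None):
    """Schematic SARAH-style recursive-gradient loop with one
    hard-thresholding step per epoch, for f(x) = ||Ax - b||^2 / (2n)
    subject to ||x||_0 <= k. Illustration only, not Algorithm 1 verbatim."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        # Full gradient at the epoch's reference point (the SARAH anchor).
        v = A.T @ (A @ x - b) / n
        w_prev, w = x.copy(), x - lr * v
        for _ in range(inner):
            i = rng.integers(n)
            # Recursive gradient estimator: correct v with one sample.
            g_new = A[i] * (A[i] @ w - b[i])
            g_old = A[i] * (A[i] @ w_prev - b[i])
            v = g_new - g_old + v
            w_prev, w = w, w - lr * v
        # A single hard-thresholding (support-pruning) step per epoch.
        top = np.argpartition(np.abs(w), -k)[-k:]
        x = np.zeros(d)
        x[top] = w[top]
    return x
```

On a noiseless, well-conditioned synthetic problem this sketch recovers a k-sparse signal; the key point mirrored from the paper is that hard thresholding is applied once per epoch rather than after every stochastic step.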
4. Convergence Analysis
4.1. Convergence Property of Our Sub-solver
4.2. Convergence Property of SRGSP
5. Experimental Results
5.1. Baseline Methods
5.2. Synthetic Data
5.3. Real-World Data
5.4. Image Denoising
5.5. Face Recognition
5.5.1. Datasets
5.5.2. Experimental Setup
5.5.3. Results on Real-World Face Data
6. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A
Sparse Representation-Based Classification (SRC)
Algorithm A1: Sparse Representation-Based Classification (SRC)
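Algorithm A1's listing is likewise not included here. For orientation, the following sketch implements the classic SRC decision rule (robust face recognition via sparse representation, cited in the references below): sparse-code a test sample over the training dictionary, then assign it to the class whose atoms reconstruct it with the smallest residual. The OMP sub-solver and all names are our own simplified choices and may differ from Algorithm A1:

```python
import numpy as np

def omp(D, y, k):
    """Plain orthogonal matching pursuit: greedy k-sparse coding of y
    over a column-normalized dictionary D."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(D, labels, y, k):
    """SRC decision rule: sparse-code y over the training dictionary,
    then pick the class whose atoms reconstruct y with least residual."""
    x = omp(D, y, k)
    labels = np.asarray(labels)
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)  # keep only class-c coefficients
        res = np.linalg.norm(y - D @ xc)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

A test sample lying in the span of one class's atoms receives a sparse code concentrated on that class, so its class-wise residual is near zero while other classes reconstruct it poorly.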
References
- Zhang, Z.; Xu, Y.; Yang, J.; Li, X.; Zhang, D. A survey of sparse representation: Algorithms and applications. IEEE Access 2015, 3, 490–530.
- Liu, S.; Hu, Q.; Li, P.; Zhao, J.; Wang, C.; Zhu, Z. Speckle Suppression Based on Sparse Representation with Non-Local Priors. Remote Sens. 2018, 10, 439.
- Tu, B.; Zhang, X.; Kang, X.; Zhang, G.; Wang, J.; Wu, J. Hyperspectral Image Classification via Fusing Correlation Coefficient and Joint Sparse Representation. IEEE Geosci. Remote Sens. Lett. 2018, 15, 340–344.
- Liu, S.; Liu, M.; Li, P.; Zhao, J.; Zhu, Z.; Wang, X. SAR Image Denoising via Sparse Representation in Shearlet Domain Based on Continuous Cycle Spinning. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2985–2992.
- Shao, L.; Yan, R.; Li, X.; Liu, Y. From heuristic optimization to dictionary learning: A review and comprehensive comparison of image denoising algorithms. IEEE Trans. Cybern. 2014, 44, 1001–1013.
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K.O. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
- Yan, R.; Shao, L.; Cvetkovic, S.D.; Klijn, J. Improved nonlocal means based on pre-classification and invariant block matching. J. Disp. Technol. 2012, 8, 212–218.
- Elad, M.; Aharon, M. Image denoising via learned dictionaries and sparse representation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 895–900.
- Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV), Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279.
- Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cogn. Neurosci. 1991, 3, 71–86.
- Belhumeur, P.N.; Hespanha, J.P.; Kriegman, D.J. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 711–720.
- Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
- Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted L1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
- Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
- Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415.
- Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; pp. 40–44.
- Needell, D.; Vershynin, R. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. arXiv 2007, arXiv:0712.1360.
- Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249.
- Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Commun. ACM 2010, 53, 93–100.
- Blumensath, T.; Davies, M. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274.
- Yuan, X.; Li, P.; Zhang, T. Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization. Available online: http://proceedings.mlr.press/v32/yuan14.pdf (accessed on 29 August 2020).
- Bahmani, S.; Raj, B.; Boufounos, P.T. Greedy sparsity-constrained optimization. J. Mach. Learn. Res. 2013, 14, 807–841.
- Nguyen, N.; Needell, D.; Woolf, T. Linear Convergence of Stochastic Iterative Greedy Algorithms with Sparse Constraints. IEEE Trans. Inf. Theory 2017, 63, 6869–6895.
- Li, X.; Zhao, T.; Arora, R.; Liu, H.; Haupt, J. Stochastic Variance Reduced Optimization for Nonconvex Sparse Learning. Available online: http://proceedings.mlr.press/v48/lid16.pdf (accessed on 29 August 2020).
- Johnson, R.; Zhang, T. Accelerating Stochastic Gradient Descent Using Predictive Variance Reduction. Available online: http://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction (accessed on 29 August 2020).
- Shen, J.; Li, P. A tight bound of hard thresholding. J. Mach. Learn. Res. 2017, 18, 7650–7691.
- Chen, J.; Gu, Q. Accelerated Stochastic Block Coordinate Gradient Descent for Sparsity Constrained Nonconvex Optimization. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, New York, NY, USA, 25–29 June 2016.
- Gao, H.; Huang, H. Stochastic Second-Order Method for Large-Scale Nonconvex Sparse Learning Models. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018), Stockholm, Sweden, 13–19 July 2018; pp. 2128–2134.
- Chen, J.; Gu, Q. Fast newton hard thresholding pursuit for sparsity constrained nonconvex optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 15–17 August 2017; ACM: New York, NY, USA, 2017; pp. 757–766.
- Shang, F.; Liu, Y.; Cheng, J.; Zhuo, J. Fast stochastic variance reduced gradient method with momentum acceleration for machine learning. arXiv 2017, arXiv:1703.07948.
- Liang, G.; Tong, Q.; Zhu, C.; Bi, J. An Effective Hard Thresholding Method Based on Stochastic Variance Reduction for Nonconvex Sparse Learning. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 1585–1592.
- Shang, F.; Zhou, K.; Liu, H.; Cheng, J.; Tsang, I.; Zhang, L.; Tao, D.; Jiao, L. VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning. IEEE Trans. Knowl. Data Eng. 2020, 32, 188–202.
- Liu, Y.; Shang, F.; Liu, H.; Kong, L.; Jiao, L.; Lin, Z. Accelerated Variance Reduction Stochastic ADMM for Large-Scale Machine Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
- Liu, X.; Wei, B.; Shang, F.; Liu, H. Loopless Semi-Stochastic Gradient Descent with Less Hard Thresholding for Sparse Learning. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; ACM: New York, NY, USA, 2019; pp. 881–890.
- Nguyen, L.M.; Liu, J.; Scheinberg, K.; Takáč, M. SARAH: A novel method for machine learning problems using stochastic recursive gradient. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 2613–2621.
- Zhao, Y.B. Optimal k-thresholding algorithms for sparse optimization problems. SIAM J. Optim. 2020, 30, 31–55.
- Engan, K.; Rao, B.D.; Kreutz-Delgado, K. Frame design using FOCUSS with method of optimal directions (MOD). In Proceedings of the NORSIG, Oslo, Norway, 9–11 September 1999; pp. 65–69.
- Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
- Ahmed, N.; Natarajan, T.; Rao, K.R. Discrete cosine transform. IEEE Trans. Comput. 1974, 100, 90–93.
- Zhou, K.; Shang, F.; Cheng, J. A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 5975–5984.
- Shang, F.; Jiao, L.; Zhou, K.; Cheng, J.; Ren, Y.; Jin, Y. ASVRG: Accelerated Proximal SVRG. In Proceedings of the Asian Conference on Machine Learning, Beijing, China, 14–16 November 2018; pp. 815–830.
- Yuan, H.; Lian, X.; Li, C.J.; Liu, J.; Hu, W. Efficient Smooth Non-Convex Stochastic Compositional Optimization via Stochastic Recursive Gradient Descent. In Advances in Neural Information Processing Systems; NIPS: Vancouver, BC, Canada, 2019; pp. 6926–6935.
- Zhou, P.; Yuan, X.T.; Yan, S.; Feng, J. Faster First-Order Methods for Stochastic Non-Convex Optimization on Riemannian Manifolds. IEEE Trans. Pattern Anal. Mach. Intell. 2019.
- Karimi, H.; Nutini, J.; Schmidt, M. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases; Springer: Berlin/Heidelberg, Germany, 2016; pp. 795–811.
- Guleryuz, O.G. Nonlinear approximation based image recovery using adaptive sparse reconstructions. In Proceedings of the 2003 International Conference on Image Processing, Barcelona, Spain, 14–17 September 2003; pp. 713–716.
- Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
- Martinez, A.M. The AR face database. Available online: https://ci.nii.ac.jp/naid/10011462458/ (accessed on 29 August 2020).
- Georghiades, A.S.; Belhumeur, P.N.; Kriegman, D.J. From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 643–660.
- Shang, F.; Cheng, J.; Liu, Y.; Luo, Z.Q.; Lin, Z. Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2066–2080.
- Liu, Y.; Shang, F.; Fan, W.; Cheng, J.; Cheng, H. Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion. Available online: http://papers.nips.cc/paper/5476-generalized-higher-order-orthogonal-iteration-for-tensor-decomposition-and-completion (accessed on 29 August 2020).
- Liu, Y.; Shang, F.; Fan, W.; Cheng, J.; Cheng, H. Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2551–2563.
σ | Algorithms | Peppers | Cameraman | House | Man | Hill | Boat
---|---|---|---|---|---|---|---
5 | SRGSP | 34.45/0.9259 | 34.12/0.9195 | 35.64/0.9322 | 34.15/0.9149 | 34.25/0.9132 | 34.68/0.9256
5 | LSSG-HT | 33.56/0.9022 | 32.65/0.8932 | 34.12/0.9123 | 33.26/0.8865 | 33.35/0.8995 | 33.32/0.9203
5 | SVRGHT | 33.95/0.9135 | 33.25/0.9065 | 34.01/0.9023 | 33.95/0.8997 | 33.39/0.9012 | 33.61/0.9165
5 | SG-HT | 25.56/0.7801 | 26.62/0.7725 | 27.56/0.7835 | 27.65/0.7832 | 27.85/0.7532 | 25.89/0.7710
5 | GraSP | 27.89/0.8832 | 24.72/0.8632 | 30.85/0.8857 | 28.25/0.7755 | 28.65/0.8278 | 26.91/0.7897
5 | OMP | 34.02/0.8932 | 33.25/0.8706 | 34.23/0.8769 | 32.35/0.8562 | 33.35/0.8623 | 33.73/0.9143
10 | SRGSP | 32.64/0.8942 | 32.05/0.8867 | 33.95/0.8932 | 32.13/0.8549 | 32.85/0.8535 | 32.56/0.8691
10 | LSSG-HT | 31.68/0.8822 | 30.22/0.7823 | 33.15/0.8734 | 31.56/0.8305 | 31.56/0.8462 | 31.98/0.8565
10 | SVRGHT | 31.95/0.8835 | 29.56/0.7656 | 33.05/0.8721 | 31.35/0.8497 | 31.12/0.8342 | 31.56/0.8479
10 | SG-HT | 24.48/0.7532 | 24.98/0.7272 | 26.85/0.7373 | 26.95/0.7326 | 26.89/0.7265 | 25.56/0.7235
10 | GraSP | 27.56/0.8596 | 24.25/0.8323 | 30.15/0.8657 | 27.65/0.7651 | 28.26/0.7578 | 26.54/0.7589
10 | OMP | 30.25/0.7656 | 29.56/0.7685 | 29.52/0.7685 | 28.36/0.7265 | 29.65/0.7552 | 29.89/0.7551
15 | SRGSP | 30.57/0.8759 | 29.10/0.8707 | 32.81/0.8622 | 30.34/0.8249 | 30.65/0.7990 | 30.53/0.8191
15 | LSSG-HT | 29.95/0.8432 | 27.26/0.7102 | 32.12/0.8234 | 29.96/0.8105 | 30.24/0.7895 | 30.35/0.7956
15 | SVRGHT | 30.35/0.8585 | 27.84/0.6927 | 32.26/0.8513 | 30.09/0.8197 | 30.34/0.7897 | 30.34/0.8079
15 | SG-HT | 23.07/0.7311 | 22.92/0.6872 | 25.53/0.7073 | 26.12/0.7066 | 26.57/0.6721 | 24.89/0.6635
15 | GraSP | 27.04/0.8449 | 23.46/0.8086 | 29.65/0.8357 | 26.97/0.7455 | 27.87/0.7178 | 26.11/0.7207
15 | OMP | 27.76/0.6602 | 27.32/0.6006 | 27.80/0.5769 | 27.69/0.6504 | 27.70/0.6592 | 26.37/0.6051
25 | SRGSP | 28.19/0.8232 | 27.37/0.8186 | 30.39/0.8224 | 28.10/0.7470 | 28.53/0.7164 | 28.18/0.7445
25 | LSSG-HT | 27.35/0.7785 | 26.35/0.5121 | 29.56/0.7806 | 27.85/0.7531 | 27.86/0.6842 | 27.96/0.7095
25 | SVRGHT | 27.85/0.7606 | 26.85/0.5232 | 29.48/0.7797 | 27.92/0.7323 | 28.09/0.6998 | 27.82/0.7012
25 | SG-HT | 22.34/0.6386 | 22.40/0.5810 | 24.68/0.5974 | 25.15/0.6030 | 25.57/0.5757 | 24.13/0.5711
25 | GraSP | 26.12/0.8078 | 24.66/0.7715 | 28.50/0.8034 | 27.09/0.7188 | 27.00/0.6617 | 25.54/0.6748
25 | OMP | 23.40/0.4696 | 23.23/0.4304 | 23.35/0.3838 | 23.30/0.4458 | 23.31/0.4457 | 21.91/0.4065
35 | SRGSP | 26.55/0.7851 | 25.84/0.7674 | 28.62/0.7909 | 26.84/0.6966 | 27.29/0.6642 | 26.74/0.6920
35 | LSSG-HT | 25.16/0.7023 | 25.32/0.6812 | 27.65/0.7126 | 26.48/0.6610 | 26.56/0.6215 | 26.25/0.6126
35 | SVRGHT | 25.94/0.7390 | 25.46/0.6907 | 27.46/0.7046 | 26.52/0.6627 | 26.85/0.6371 | 26.05/0.6025
35 | SG-HT | 21.68/0.5514 | 21.77/0.4840 | 23.78/0.4916 | 24.28/0.5159 | 24.68/0.4950 | 23.37/0.4870
35 | GraSP | 23.96/0.7546 | 22.91/0.7080 | 26.33/0.7549 | 25.67/0.6639 | 26.29/0.6269 | 25.02/0.6442
35 | OMP | 20.46/0.3529 | 20.24/0.3264 | 20.40/0.2780 | 20.42/0.3215 | 20.38/0.3121 | 18.94/0.2882
45 | SRGSP | 25.26/0.7501 | 24.76/0.7258 | 27.53/0.7601 | 25.83/0.6575 | 26.40/0.6247 | 25.71/0.6519
45 | LSSG-HT | 24.05/0.6502 | 24.12/0.6321 | 26.64/0.6532 | 25.32/0.5962 | 25.62/0.5933 | 25.51/0.5321
45 | SVRGHT | 24.44/0.6722 | 24.24/0.6151 | 26.28/0.6375 | 25.46/0.6009 | 25.95/0.5807 | 24.71/0.5172
45 | SG-HT | 21.05/0.4754 | 21.24/0.4570 | 23.04/0.4120 | 23.38/0.4386 | 23.80/0.4207 | 22.63/0.4135
45 | GraSP | 23.30/0.7264 | 22.70/0.6848 | 25.76/0.7330 | 25.03/0.6355 | 25.71/0.6014 | 24.43/0.6146
45 | OMP | 18.27/0.2724 | 18.20/0.2604 | 18.24/0.2134 | 18.20/0.2384 | 18.18/0.2261 | 16.77/0.2176
55 | SRGSP | 24.28/0.7254 | 23.90/0.6946 | 26.28/0.7203 | 25.16/0.6271 | 25.74/0.5988 | 24.84/0.6179
55 | LSSG-HT | 23.35/0.6212 | 23.01/0.5621 | 25.43/0.5963 | 24.52/0.5632 | 25.01/0.5423 | 24.32/0.4623
55 | SVRGHT | 23.62/0.6183 | 23.22/0.5519 | 25.23/0.5708 | 24.74/0.5480 | 25.27/0.5349 | 23.61/0.4462
55 | SG-HT | 20.51/0.4127 | 20.60/0.3484 | 22.25/0.3449 | 22.59/0.3774 | 22.98/0.3635 | 21.91/0.3546
55 | GraSP | 22.86/0.7050 | 22.36/0.6646 | 25.04/0.7050 | 24.57/0.6136 | 25.30/0.5854 | 23.94/0.5918
55 | OMP | 16.50/0.2203 | 16.43/0.2174 | 16.44/0.1665 | 16.48/0.1852 | 16.47/0.1720 | 15.02/0.1667
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shang, F.; Wei, B.; Liu, Y.; Liu, H.; Wang, S.; Jiao, L. Stochastic Recursive Gradient Support Pursuit and Its Sparse Representation Applications. Sensors 2020, 20, 4902. https://doi.org/10.3390/s20174902