A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network
Abstract
1. Introduction
2. Methods
2.1. Design of Image Fusion Method
2.1.1. Image Fusion Process Design
2.1.2. Design of Image Fusion Loss Function
2.1.3. The FTSGAN Network Structure Design
2.2. Design of Face Recognition Method
3. Experiments and Discussion
3.1. Preparation of Experiments
3.2. Experimental Procedure
3.3. FTSGAN Training Process
3.4. Face Recognition Authentication Process
3.5. Analysis of Experimental Results
3.5.1. Experimental Analysis of Face Image Fusion
3.5.2. Face Recognition Effect Analysis
- (1) Selection of training face dataset
- (2) Experimental results
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ali, W.; Tian, W.; Din, S.U. Classical and modern face recognition approaches: A complete review. Multimed. Tools Appl. 2021, 80, 4825–4880.
- Harrington, K.J.; Aroldi, F.; Sacco, J.J.; Milhem, M.M.; Curti, B.D.; Vanderwalde, A.M.; Baum, S.; Samson, A.; Pavlick, A.C.; Chesney, J.A.; et al. Abstract LB180: Clinical biomarker studies with two fusion-enhanced versions of oncolytic HSV (RP1 and RP2) alone and in combination with nivolumab in cancer patients indicate potent immune activation. Cancer Res. 2021, 81, LB180.
- Yi, W.; Zeng, Y.; Wang, Y.; Deng, J.; Su, W.; Yuan, Z. An improved IHS fusion method of GF-2 remote sensing images. In Proceedings of the International Conference on Signal Image Processing and Communication (ICSIPC 2021), Chengdu, China, 16–18 April 2021; Volume 11848, pp. 238–246.
- Mo, Y.; Kang, X.; Duan, P.; Sun, B.; Li, S. Attribute filter based infrared and visible image fusion. Inf. Fusion 2021, 75, 41–54.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Commun. ACM 2020, 63, 139–144.
- Omar, Z.; Stathaki, T. Image fusion: An overview. In Proceedings of the 2014 5th International Conference on Intelligent Systems, Hunan, China, 15–16 June 2014; pp. 306–310.
- Shahdoosti, H.R.; Ghassemian, H. Spatial PCA as a new method for image fusion. In Proceedings of the 16th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP 2012), Shiraz, Iran, 2–3 May 2012; pp. 90–94.
- Li, B.; Wei, J. Remote sensing image fusion based on IHS transform, wavelet transform, and HPF. Image Process. Pattern Recognit. Remote Sens. 2003, 4898, 25–30.
- Kamel, B.; Bonnin, P.; de Cabrol, A. Data image fusion using combinatorial maps. Appl. Digit. Image Process. XXVIII 2005, 5909, 481–488.
- Luo, R.C.; Kay, M.G. A tutorial on multisensor integration and fusion. In Proceedings of the 16th Annual Conference of IEEE Industrial Electronics Society, Pacific Grove, CA, USA, 27–30 November 1990; Volume 1, pp. 707–722.
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of Computer Vision—ECCV 2014, Zurich, Switzerland, 6–12 September 2014; Volume 8689, pp. 818–833.
- Ma, J.; Xu, H.; Jiang, J.; Mei, X.; Zhang, X.-P. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995.
- Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf. Fusion 2019, 48, 11–26.
- Azarang, A.; Manoochehri, H.E.; Kehtarnavaz, N. Convolutional autoencoder-based multispectral image fusion. IEEE Access 2019, 7, 35673–35683.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Wirsing, E. On the theorem of Gauss-Kusmin-Lévy and a Frobenius-type theorem for function spaces. Acta Arith. 1974, 24, 507–528.
- Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434.
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784.
- Zeiler, M.D.; Krishnan, D.; Taylor, G.W.; Fergus, R. Deconvolutional networks. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2528–2535.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Rodriguez, J.D.; Perez, A.; Lozano, J.A. Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 569–575.
- Wong, T. Performance evaluation of classification algorithms by k-fold and leave-one-out cross-validation. Pattern Recognit. 2015, 48, 2839–2846.
- Martinez, A.M.; Kak, A.C. PCA versus LDA. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 228–233.
- Bansal, A.; Mehta, K.; Arora, S. Face recognition using PCA and LDA algorithm. In Proceedings of the 2012 Second International Conference on Advanced Computing & Communication Technologies, Rohtak, India, 7–8 January 2012; pp. 251–254.
- Zhao, W.; Krishnaswamy, A.; Chellappa, R.; Swets, D.L.; Weng, J. Discriminant analysis of principal components for face recognition. Face Recognit. Theory Appl. 1998, 163, 73–85.
- Borade, S.N.; Deshmukh, R.R.; Ramu, S. Face recognition using fusion of PCA and LDA: Borda count approach. In Proceedings of the 2016 24th Mediterranean Conference on Control and Automation (MED), Athens, Greece, 21–24 June 2016; pp. 1164–1167.
- Marcialis, G.L.; Roli, F. Fusion of LDA and PCA for face verification. In Proceedings of the International Workshop on Biometric Authentication, Copenhagen, Denmark, 1 June 2002; pp. 30–37.
- Zuo, W.; Zhang, D.; Yang, J.; Wang, K. BDPCA plus LDA: A novel fast feature extraction technique for face recognition. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2006, 36, 946–953.
- Panetta, K.; Wan, Q.; Agaian, S.; Rajeev, S.; Kamath, S.; Rajendran, R.; Rao, S. A comprehensive database for benchmarking imaging systems. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 509–520.
- Liu, Y.; Chen, X.; Cheng, J.; Peng, H.; Wang, Z. Infrared and visible image fusion with convolutional neural networks. Int. J. Wavelets Multiresolut. Inf. Process. 2018, 16, 1850018.
- Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion 2016, 31, 100–109.
- Xu, H.; Ma, J.; Zhang, X. MEF-GAN: Multi-exposure image fusion via generative adversarial networks. IEEE Trans. Image Process. 2020, 29, 7203–7216.
- Ram Prabhakar, K.; Sai Srikar, V.; Venkatesh Babu, R. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4714–4722.
- Roberts, J.W.; van Aardt, J.; Ahmed, F.B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J. Appl. Remote Sens. 2008, 2, 023522.
- Eskicioglu, A.M.; Fisher, P.S. Image quality measures and their performance. IEEE Trans. Commun. 1995, 43, 2959–2965.
- Xydeas, C.A.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
| Equipment | Graphics Board | Central Processing Unit |
|---|---|---|
| Parameters | NVIDIA RTX 2060 (notebook), 6 GB | Intel Core i7-9750H |
| Data Source | Sex (Male/Female) | Occlusion (Glasses/No Glasses) | Visible-Light Images | Infrared Images | Aligned or Not | Resolution (pixels) |
|---|---|---|---|---|---|---|
| Tufts University face data | 20/20 | 20 pairs/20 pairs | 480 (40 × 12) | 480 (40 × 12) | Aligned | 128 × 128 |
| Experimental Step | Experimental Project |
|---|---|
| 1 | Preprocessing of the Tufts University dataset (a loading sketch follows this table). |
| 2 | The FTSGAN is trained on the processed data; the trained model then generates the fused face image dataset. |
| 3 | The fused face database is fed to the designed face recognition algorithm for identity verification. |
| 4 | Summary of experimental performance. |
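For step 1, preprocessing amounts to loading the aligned 128 × 128 IR/visible pairs described in the dataset table above. The sketch below is one hypothetical way to do this with Pillow and NumPy; the `tufts_pairs/ir` and `tufts_pairs/vis` directory layout and the PNG filenames are assumptions for illustration, not the dataset's actual structure.

```python
from pathlib import Path
import numpy as np
from PIL import Image

def load_pair(ir_path, vis_path, size=(128, 128)):
    """Load one aligned IR/visible pair as grayscale float arrays in [-1, 1]."""
    def prep(path):
        img = Image.open(path).convert("L").resize(size, Image.BILINEAR)
        return np.asarray(img, dtype=np.float32) / 127.5 - 1.0
    return prep(ir_path), prep(vis_path)

# Hypothetical layout: matching filenames under ir/ and vis/ subdirectories.
root = Path("tufts_pairs")
pairs = [load_pair(p, root / "vis" / p.name)
         for p in sorted((root / "ir").glob("*.png"))]
```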
Training Process of Visible and Infrared Fusion

Parameter descriptions: D, G: the discriminator and the generator; L_D, L_G: the loss functions of the discriminator and the generator after training.

In each training iteration:
- Sample N paired infrared and visible images.
- Train the discriminator on the N pairs.
- For the next five epochs, repeat training the discriminator D.
- Concatenate (stack) the N paired infrared and visible images.
- Train the generator on the stacked pairs to obtain the fused images.
- For the next five epochs, repeat training the generator G.

Every 20 epochs:
- Evaluate the image quality of the fused images produced by the generator (a minimal PyTorch sketch of this schedule follows).
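To make the alternating schedule concrete, the sketch below renders it as a minimal PyTorch loop. The generator/discriminator stand-ins, the synthetic loader, and the plain BCE adversarial losses are all assumptions for illustration; they are not the FTSGAN architecture or the loss design of Section 2.1.2.

```python
import torch
import torch.nn as nn

# Placeholder networks -- stand-ins with the right interfaces, NOT FTSGAN:
# G maps a stacked 2-channel IR+VIS image to a fused image; D scores a
# 1-channel image as real (visible) vs. fused.
G = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, stride=2, padding=1),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Synthetic stand-in for a loader of paired 128x128 IR/visible batches.
loader = [(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
          for _ in range(8)]

def run_epoch(step):
    """One epoch updating only the discriminator ('d') or the generator ('g')."""
    for ir, vis in loader:
        fused = G(torch.cat([ir, vis], dim=1))  # stack IR+VIS along channels
        if step == "d":
            # Discriminator step: visible images are "real", fused are "fake".
            real, fake = D(vis), D(fused.detach())
            loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
            opt = opt_d
        else:
            # Generator step: push the discriminator to score fused as real.
            fake = D(fused)
            loss = bce(fake, torch.ones_like(fake))
            opt = opt_g
        opt.zero_grad()
        loss.backward()
        opt.step()

for epoch in range(40):
    # Alternate five-epoch blocks: train D, then G, as in the box above.
    run_epoch("d" if (epoch // 5) % 2 == 0 else "g")
    if (epoch + 1) % 20 == 0:
        pass  # every 20 epochs: assess fused-image quality (e.g., SSIM, entropy)
```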
| Method | CNN [30] | GTF [31] | MEF-GAN [32] | FusionGAN [13] | DeepFuse [33] | FTSGAN |
|---|---|---|---|---|---|---|
| Average fusion time (GPU) | 0.14 s/p | 0.12 s/p | 0.08 s/p | 0.04 s/p | 0.03 s/p | 0.02 s/p |
| Average fusion time (CPU) | 7.2 s/p | 4.2 s/p | 4.3 s/p | 0.46 s/p | 0.34 s/p | 0.16 s/p |
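As a reading aid for the table above, where "s/p" appears to denote seconds per picture, here is a minimal sketch of how per-image fusion time can be measured; the model and inputs are placeholders, and `torch.cuda.synchronize()` guards against under-counting asynchronous GPU kernels.

```python
import time
import torch
import torch.nn as nn

def avg_seconds_per_picture(model, images, device="cpu"):
    """Mean per-image inference time over single-image tensors (the s/p unit)."""
    model = model.to(device).eval()
    if device == "cuda":
        torch.cuda.synchronize()      # drain pending kernels before timing
    start = time.perf_counter()
    with torch.no_grad():
        for img in images:
            model(img.to(device))
    if device == "cuda":
        torch.cuda.synchronize()      # ensure all kernels have finished
    return (time.perf_counter() - start) / len(images)

# Placeholder fusion model and synthetic stacked IR+VIS inputs (batch of 1).
model = nn.Conv2d(2, 1, 3, padding=1)
images = [torch.randn(1, 2, 128, 128) for _ in range(50)]
print(f"{avg_seconds_per_picture(model, images):.3f} s/p")
```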
| K-Fold | Fusion Accuracy | Visible Accuracy | Ft | Vt |
|---|---|---|---|---|
| Fold 1 | 72.92% | 66.67% | 0.98 s | 0.75 s |
| Fold 2 | 93.75% | 91.67% | 0.77 s | 0.55 s |
| Fold 3 | 97.92% | 95.83% | 0.70 s | 0.61 s |
| Fold 4 | 95.83% | 93.75% | 0.82 s | 0.69 s |
| Fold 5 | 100% | 100% | 0.46 s | 0.45 s |
| Fold 6 | 95.83% | 93.75% | 0.76 s | 0.60 s |
| Average | 96.67% | 95% | 0.70 s | 0.58 s |
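The 6-fold figures above follow the standard k-fold protocol [21,22]. As a hypothetical illustration of how such per-fold and average accuracies are produced, the sketch below runs scikit-learn's KFold over a PCA + LDA classifier in the spirit of the recognition stage of Section 2.2 [23,24]; the features are synthetic stand-ins rather than the actual face data, so its printed accuracies are meaningless chance-level numbers.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in: 240 flattened 128x128 faces, 40 subjects x 6 images.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 128 * 128))
y = np.repeat(np.arange(40), 6)

clf = make_pipeline(PCA(n_components=60), LinearDiscriminantAnalysis())

accs = []
kfold = KFold(n_splits=6, shuffle=True, random_state=0)
for fold, (train, test) in enumerate(kfold.split(X), start=1):
    clf.fit(X[train], y[train])          # fit PCA+LDA on 5 folds
    acc = clf.score(X[test], y[test])    # score on the held-out fold
    accs.append(acc)
    print(f"Fold {fold}: {acc:.2%}")
print(f"Average: {np.mean(accs):.2%}")
```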
| K-Fold | Fusion Accuracy | Visible Accuracy | Ft | Vt |
|---|---|---|---|---|
| Fold 1 | 60.42% | 56.25% | 0.63 s | 0.59 s |
| Fold 2 | 85.42% | 83.33% | 0.55 s | 0.49 s |
| Fold 3 | 97.92% | 95.26% | 0.62 s | 0.50 s |
| Fold 4 | 93.75% | 91.67% | 0.66 s | 0.49 s |
| Fold 5 | 100% | 100% | 0.35 s | 0.33 s |
| Fold 6 | 87.5% | 85.42% | 0.72 s | 0.67 s |
| Average | 92.92% | 91.14% | 0.58 s | 0.50 s |
| K-Fold | Fusion Accuracy | Visible Accuracy | Ft | Vt |
|---|---|---|---|---|
| Fold 1 | 68.75% | 64.58% | 0.86 s | 0.55 s |
| Fold 2 | 88.21% | 85.42% | 0.80 s | 0.60 s |
| Fold 3 | 97.92% | 95.41% | 0.88 s | 0.63 s |
| Fold 4 | 94.72% | 92.43% | 0.72 s | 0.55 s |
| Fold 5 | 100% | 100% | 0.79 s | 0.66 s |
| Fold 6 | 91.32% | 89.21% | 0.76 s | 0.60 s |
| Average | 94.43% | 92.49% | 0.79 s | 0.61 s |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).