Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning
Abstract
1. Introduction
2. Related Works
2.1. Sensor Pattern Noise Used for Forensics
2.2. Methods Based on Deep Learning
3. The Proposed Method
3.1. Image Preprocessing
3.1.1. Clipping into Image Patches
3.1.2. Filtering with High-Pass Filters
3.2. CNN-Based Model Training
4. Experiments
4.1. Dataset
4.2. Experiment Setup
4.3. Experimental Results
4.3.1. Different Numbers of High-Pass Filters
4.3.2. Different Quality Factors of Natural Images
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
1. Rocha, A.; Scheirer, W.; Boult, T.; Goldenstein, S. Vision of the unseen: Current trends and challenges in digital image and video forensics. ACM Comput. Surv. 2011, 43, 26–40.
2. Stamm, M.C.; Wu, M.; Liu, K.R. Information forensics: An overview of the first decade. IEEE Access 2013, 1, 167–200.
3. Rahmouni, N.; Nozick, V.; Yamagishi, J.; Echizen, I. Distinguishing Computer Graphics from Natural Images Using Convolution Neural Networks. In Proceedings of the IEEE International Workshop on Information Forensics and Security (WIFS), Rennes, France, 4–7 December 2017; pp. 1–6.
4. Wang, X.; Liu, Y.; Xu, B.; Li, L.; Xue, J. A statistical feature based approach to distinguish PRCG from photographs. Comput. Vis. Image Underst. 2014, 128, 84–93.
5. Li, Z.; Zhang, Z.; Shi, Y. Distinguishing computer graphics from photographic images using a multiresolution approach based on local binary patterns. Secur. Commun. Netw. 2015, 7, 2153–2159.
6. Wang, J.; Li, T.; Shi, Y.Q.; Lian, S.; Ye, J. Forensics feature analysis in quaternion wavelet domain for distinguishing photographic images and computer graphics. Multimed. Tools Appl. 2017, 76, 23721–23737.
7. Peng, F.; Zhou, D.L.; Long, M.; Sun, X.M. Discrimination of natural images and computer generated graphics based on multi-fractal and regression analysis. AEU Int. J. Electron. Commun. 2017, 71, 72–81.
8. Yao, Y.; Shi, Y.; Weng, S.; Guan, B. Deep Learning for Detection of Object-Based Forgery in Advanced Video. Symmetry 2018, 10, 3.
9. Bayar, B.; Stamm, M.C. A Deep Learning Approach to Universal Image Manipulation Detection Using a New Convolutional Layer. In Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security, Vigo, Galicia, Spain, 20–22 June 2016; pp. 5–10.
10. Bondi, L.; Baroffio, L.; Güera, D.; Bestagini, P.; Delp, E.J.; Tubaro, S. First Steps Toward Camera Model Identification with Convolutional Neural Networks. IEEE Signal Process. Lett. 2017, 24, 259–263.
11. Tuama, A.; Comby, F.; Chaumont, M. Camera model identification with the use of deep convolutional neural networks. In Proceedings of the IEEE International Workshop on Information Forensics and Security (WIFS), Abu Dhabi, UAE, 4–7 December 2016; pp. 1–6.
12. Xu, G.; Wu, H.Z.; Shi, Y.Q. Structural Design of Convolutional Neural Networks for Steganalysis. IEEE Signal Process. Lett. 2016, 23, 708–712.
13. Ye, J.; Ni, J.; Yi, Y. Deep Learning Hierarchical Representations for Image Steganalysis. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2545–2557.
14. Rao, Y.; Ni, J. A deep learning approach to detection of splicing and copy-move forgeries in images. In Proceedings of the IEEE International Workshop on Information Forensics and Security (WIFS), Abu Dhabi, UAE, 4–7 December 2016; pp. 1–6.
15. Pandey, R.C.; Singh, S.K.; Shukla, K.K. Passive forensics in image and video using noise features: A review. Digit. Investig. 2016, 19, 1–28.
16. Orozco, A.L.S.; Corripio, J.R. Image source acquisition identification of mobile devices based on the use of features. Multimed. Tools Appl. 2016, 75, 7087–7111.
17. Villalba, L.J.G.; Orozco, A.L.S.; López, R.R.; Castro, J.H. Identification of smartphone brand and model via forensic video analysis. Expert Syst. Appl. 2016, 55, 59–69.
18. Gando, G.; Yamada, T.; Sato, H.; Oyama, S.; Kurihara, M. Fine-tuning deep convolutional neural networks for distinguishing illustrations from photographs. Expert Syst. Appl. 2016, 66, 295–301.
19. Level-Design Reference Database. 2017. Available online: http://level-design.org/referencedb (accessed on 25 February 2018).
20. Dang-Nguyen, D.T.; Pasquini, C.; Conotter, V.; Boato, G. RAISE: A raw images dataset for digital image forensics. In Proceedings of the 6th ACM Multimedia Systems Conference (MMSys), Portland, OR, USA, 18–20 March 2015; pp. 219–224.
21. Rezende, E.R.S.D.; Ruppert, G.C.S.; Carvalho, T. Detecting Computer Generated Images with Deep Convolutional Neural Networks. In Proceedings of the 30th SIBGRAPI Conference on Graphics, Patterns and Images, Niteroi, Brazil, 17–20 October 2017; pp. 71–78.
22. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
23. Fridrich, J.; Kodovsky, J. Rich Models for Steganalysis of Digital Images. IEEE Trans. Inf. Forensics Secur. 2012, 7, 868–882.
24. Qian, Y.; Dong, J.; Wang, W.; Tan, T. Deep learning for steganalysis via convolutional neural networks. In Proceedings of SPIE Media Watermarking, Security, and Forensics 2015, San Francisco, CA, USA, 9–11 February 2015; Volume 9409, p. 94090J.
25. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; pp. 448–456.
26. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML), Haifa, Israel, 21–24 June 2010; pp. 807–814.
27. CGvsPhoto GitHub Project. 2017. Available online: https://github.com/NicoRahm/CGvsPhoto (accessed on 25 February 2018).
28. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678.
29. Zhu, J.Y.; Krähenbühl, P.; Shechtman, E.; Efros, A.A. Learning a Discriminative Model for the Perception of Realism in Composite Images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
30. Jensen, S.; Selvik, A.L. Using 3D Graphics to Train Object Detection Systems. Master's Thesis, Norwegian University of Science and Technology, Trondheim, Norway, 2016.
| Method | Image Patches (Model of 50 Epochs) | Image Patches (Model of 80 Epochs) | Full-Size Images (Model of 50 Epochs) | Full-Size Images (Model of 80 Epochs) |
| --- | --- | --- | --- | --- |
| The proposed method, HPF × 3 | 99.98% | 99.95% | 100% | 100% |
| The proposed method, HPF × 1 | 99.87% | 99.77% | 100% | 99.83% |
| The proposed method, HPF × 0 | 88.28% | 87.77% | 93.37% | 93.12% |
| Rahmouni et al. [3] | 84.8% ¹ | 94.4% ² | 93.2% | |
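The rows above correspond to Section 4.3.1: the proposed pipeline clips each full-size image into patches (Section 3.1.1) and passes every patch through high-pass filters (Section 3.1.2) before CNN training, and accuracy drops sharply when no high-pass filtering is applied (HPF × 0). The sketch below illustrates that preprocessing in Python; the patch size and the three residual kernels, borrowed from the steganalysis rich-model literature (Fridrich and Kodovsky; Qian et al.), are assumptions for illustration rather than the exact values used by the authors.

```python
# Hypothetical sketch of the preprocessing in Section 3.1: clip an image into
# patches, then compute high-pass residuals for each patch. The kernels and
# patch size below are illustrative assumptions, not the paper's exact choice.
import numpy as np
from scipy.ndimage import convolve

KV_5X5 = (1.0 / 12.0) * np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1]], dtype=np.float32)

FIRST_ORDER = np.array([[0,  0, 0],
                        [0, -1, 1],
                        [0,  0, 0]], dtype=np.float32)

SECOND_ORDER = np.array([[0,  1, 0],
                         [1, -4, 1],
                         [0,  1, 0]], dtype=np.float32)

HIGH_PASS_KERNELS = [KV_5X5, FIRST_ORDER, SECOND_ORDER]  # the "HPF × 3" setting


def clip_into_patches(image, patch_size=256):
    """Clip a full-size image into non-overlapping square patches (Section 3.1.1)."""
    h, w = image.shape[:2]
    return [image[r:r + patch_size, c:c + patch_size]
            for r in range(0, h - patch_size + 1, patch_size)
            for c in range(0, w - patch_size + 1, patch_size)]


def high_pass_residuals(patch):
    """Filter a patch with each kernel and stack the residuals as CNN input channels."""
    gray = patch.astype(np.float32)
    if gray.ndim == 3:          # reduce RGB to a single luminance-like channel
        gray = gray.mean(axis=2)
    residuals = [convolve(gray, k, mode='nearest') for k in HIGH_PASS_KERNELS]
    return np.stack(residuals, axis=-1)   # shape: (patch_size, patch_size, 3)
```

A plausible way to obtain the full-size-image accuracies from patch-level CNN outputs is to aggregate the per-patch decisions of each image, for example by majority voting; the exact aggregation rule is not reproduced in this excerpt.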
| Quality Factor | Image Patches (Model of 50 Epochs) | Image Patches (Model of 80 Epochs) | Full-Size Images (Model of 50 Epochs) | Full-Size Images (Model of 80 Epochs) |
| --- | --- | --- | --- | --- |
| 75 | 99.52% | 99.71% | 100% | 100% |
| 85 | 99.95% | 99.98% | 100% | 100% |
| 95 | 99.99% | 99.99% | 100% | 100% |
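This table corresponds to Section 4.3.2, where the natural images are JPEG-compressed at different quality factors before evaluation. A minimal sketch for preparing such variants is shown below; the directory layout, the PNG source format, and the function name are placeholders rather than the authors' actual data-preparation script.

```python
# Hypothetical sketch: recompress natural images at several JPEG quality
# factors (75, 85, 95) so the detector can be evaluated on each variant.
from pathlib import Path
from PIL import Image

QUALITY_FACTORS = [75, 85, 95]


def make_jpeg_variants(src_dir, dst_root):
    """Write a JPEG copy of every source image for each quality factor."""
    for qf in QUALITY_FACTORS:
        out_dir = Path(dst_root) / f"qf_{qf}"
        out_dir.mkdir(parents=True, exist_ok=True)
        for img_path in sorted(Path(src_dir).glob("*.png")):  # source format assumed
            img = Image.open(img_path).convert("RGB")
            img.save(out_dir / f"{img_path.stem}.jpg", "JPEG", quality=qf)


# Example usage (paths are placeholders):
# make_jpeg_variants("raise_natural_images", "natural_jpeg_variants")
```

The near-identical accuracies across quality factors suggest that, at least in the range tested, detection performance is not strongly affected by JPEG compression strength.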
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Citation: Yao, Y.; Hu, W.; Zhang, W.; Wu, T.; Shi, Y.-Q. Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning. Sensors 2018, 18, 1296. https://doi.org/10.3390/s18041296