Supervised single-image super-resolution (SISR) reconstruction models are trained on pairs of low-resolution images (ILR) and their corresponding high-resolution images (IHR). During training, each ILR is obtained by bicubic downscaling of its IHR counterpart, so the model effectively learns to invert bicubic downscaling, producing less realistic results that are limited to this specific degradation. Generating realistic textures is non-trivial: the recovered details are either blurred or not reminiscent of commonly observed textures. SISR reconstruction with faithful ground-truth texture and no external information remains an open problem, especially when the degradation model is unknown. We propose an unsupervised internal learning method that trains a small convolutional neural network (CNN) on the internal statistics of a single image. We use the power of deep generative models to capture the latent representation of patches within the test image across two scales and train a downscaling CNN Dw to downscale the image by matching these latent distributions. Dw thus implements the downscaling operation with the correct image-specific degradation and is subsequently used to generate the training dataset. The obtained results show the effectiveness of our image-degradation estimation method in extracting internal image statistics for a perceptually better super-resolution reconstruction.
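The supervised dataset construction criticized in the abstract can be sketched as follows: each low-resolution image ILR is produced by bicubic downscaling of its high-resolution counterpart IHR, so the trained model only ever sees this one fixed degradation. This is a minimal illustration assuming a Pillow-based pipeline; the function name `make_lr` and the toy image are illustrative, not from the paper.

```python
# Standard supervised SISR dataset construction: ILR is obtained by
# bicubic downscaling of IHR. The abstract argues this fixed degradation
# rarely matches the real, image-specific degradation.
import numpy as np
from PIL import Image

def make_lr(ihr: Image.Image, scale: int = 4) -> Image.Image:
    """Bicubic downscaling by an integer factor (the assumed degradation)."""
    w, h = ihr.size
    return ihr.resize((w // scale, h // scale), Image.BICUBIC)

# Toy high-resolution image; random noise stands in for a real photo.
ihr = Image.fromarray(np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8))
ilr = make_lr(ihr, scale=4)
print(ilr.size)  # (16, 16)
```

The paper's contribution is to replace this fixed bicubic operator with a learned, image-specific downscaler Dw estimated from the test image's own statistics.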
Keywords: Super resolution, Data modeling, Image analysis, Databases, Visualization, Image quality