Search Results (245)

Search Parameters:
Keywords = pan-sharpening

30 pages, 82967 KiB  
Article
Pansharpening Techniques: Optimizing the Loss Function for Convolutional Neural Networks
by Rocco Restaino
Remote Sens. 2025, 17(1), 16; https://doi.org/10.3390/rs17010016 - 25 Dec 2024
Viewed by 331
Abstract
Pansharpening is a traditional image fusion problem where the reference image (or ground truth) is not accessible. Machine-learning-based algorithms designed for this task require an extensive optimization phase of network parameters, which must be performed using unsupervised learning techniques. The learning phase can either rely on a companion problem where ground truth is available, such as by reproducing the task at a lower scale or using a pretext task, or it can use a reference-free cost function. This study focuses on the latter approach, where performance depends not only on the accuracy of the quality measure but also on the mathematical properties of these measures, which may introduce challenges related to computational complexity and optimization. The evaluation of the most recognized no-reference image quality measures led to the proposal of a novel criterion, the Regression-based QNR (RQNR), which has not been previously used. To mitigate computational challenges, an approximate version of the relevant indices was employed, simplifying the optimization of the cost functions. The effectiveness of the proposed cost functions was validated through the reduced-resolution assessment protocol applied to a public dataset (PairMax) containing images of diverse regions of the Earth's surface.
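The RQNR criterion builds on the widely used QNR framework. As background, the classical definitions from the literature (not the paper's regression-based variant) combine a spectral distortion index and a spatial distortion index, both computed from the universal image quality index Q:

```latex
D_\lambda = \sqrt[p]{\frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N}
            \left| Q(\hat{M}_i, \hat{M}_j) - Q(M_i, M_j) \right|^{p}}, \qquad
D_s = \sqrt[q]{\frac{1}{N} \sum_{i=1}^{N}
            \left| Q(\hat{M}_i, P) - Q(M_i, \tilde{P}) \right|^{q}},
\qquad
\mathrm{QNR} = (1 - D_\lambda)^{\alpha} \, (1 - D_s)^{\beta}
```

Here the hatted M are the fused bands, M the original MS bands, P the Pan image, and P-tilde the Pan degraded to the MS scale; the exponents are typically all set to 1.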

22 pages, 18328 KiB  
Article
A Three-Branch Pansharpening Network Based on Spatial and Frequency Domain Interaction
by Xincan Wen, Hongbing Ma and Liangliang Li
Remote Sens. 2025, 17(1), 13; https://doi.org/10.3390/rs17010013 - 24 Dec 2024
Viewed by 302
Abstract
Pansharpening technology plays a crucial role in remote sensing image processing by integrating low-resolution multispectral (LRMS) images and high-resolution panchromatic (PAN) images to generate high-resolution multispectral (HRMS) images. This process addresses the limitations of satellite sensors, which cannot directly capture HRMS images. Despite significant developments achieved by deep learning-based pansharpening methods over traditional approaches, most existing techniques either fail to account for the modal differences between LRMS and PAN images, relying on direct concatenation, or use similar network structures to extract spectral and spatial information. Additionally, many methods neglect the extraction of common features between LRMS and PAN images and lack network architectures specifically designed to extract spectral features. To address these limitations, this study proposed a novel three-branch pansharpening network that leverages both spatial and frequency domain interactions, resulting in improved spectral and spatial fidelity in the fusion outputs. The proposed method was validated on three datasets, including IKONOS, WorldView-3 (WV3), and WorldView-4 (WV4). The results demonstrate that the proposed method surpasses several leading techniques, achieving superior performance in both visual quality and quantitative metrics.
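A minimal sketch of what a frequency-domain branch interacting with a spatial branch can look like, using torch.fft; the layer choices and shapes here are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class FrequencyBranch(nn.Module):
    """Illustrative spatial/frequency interaction block (not the paper's design).

    Features are mapped to the Fourier domain, filtered there with 1x1
    convolutions acting on real/imaginary parts, mapped back, and merged
    with an ordinary spatial convolution branch.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.freq_conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)
        self.spatial_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")          # complex spectrum
        spec = torch.cat([spec.real, spec.imag], dim=1)  # (b, 2c, h, w//2+1)
        spec = self.freq_conv(spec)                      # learnable global filter
        real, imag = spec.chunk(2, dim=1)
        x_freq = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        x_spat = self.spatial_conv(x)                    # local spatial branch
        return self.fuse(torch.cat([x_freq, x_spat], dim=1))

# Toy usage on an 8-channel feature map from concatenated LRMS/PAN features.
feats = torch.randn(1, 8, 64, 64)
print(FrequencyBranch(8)(feats).shape)  # torch.Size([1, 8, 64, 64])
```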

28 pages, 22965 KiB  
Review
Benchmarking of Multispectral Pansharpening: Reproducibility, Assessment, and Meta-Analysis
by Luciano Alparone and Andrea Garzelli
Viewed by 384
Abstract
The term pansharpening denotes the process by which the geometric resolution of a multiband image is increased by means of a co-registered broadband panchromatic observation of the same scene having greater spatial resolution. Over time, the benchmarking of pansharpening methods has revealed itself to be more challenging than the development of new methods. Their recent proliferation in the literature is mostly due to the lack of a standardized assessment. In this paper, we draw guidelines for correct and fair comparative evaluation of pansharpening methods, focusing on the reproducibility of results and resorting to concepts of meta-analysis. As a major outcome of this study, an improved version of the additive wavelet luminance proportional (AWLP) pansharpening algorithm offers all of the favorable characteristics of an ideal benchmark, namely, performance, speed, absence of adjustable running parameters, reproducibility of results with varying datasets and landscapes, and automatic correction of the path radiance term introduced by the atmosphere. The proposed benchmarking protocol employs the haze-corrected AWLP-H and exploits meta-analysis for cross-comparisons among different experiments. After assessment on five different datasets, it was found to provide reliable and consistent results in ranking different fusion methods.
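For orientation, AWLP-style fusion injects the Pan detail layer into each MS band in proportion to that band's share of the luminance, which is what preserves the spectral shape of each pixel. The sketch below is a rough illustration under stated assumptions (a Gaussian lowpass stands in for the à trous wavelet approximation, and the per-band haze term is taken as given), not the benchmark implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def awlp_like(ms_up, pan, haze=None, sigma=2.0):
    """Sketch of AWLP-style proportional detail injection.

    ms_up : (N, H, W) MS bands already interpolated to the Pan grid.
    pan   : (H, W) panchromatic band.
    haze  : optional (N,) path-radiance estimates (the "-H" correction);
            how AWLP-H estimates them is described in the paper, not here.
    """
    ms = ms_up.astype(float)
    if haze is not None:
        ms = ms - haze[:, None, None]           # haze-corrected radiances
    detail = pan - gaussian_filter(pan, sigma)  # Pan detail layer (wavelet stand-in)
    luminance = ms.mean(axis=0) + 1e-9          # intensity from the MS bands
    fused = ms + (ms / luminance) * detail      # proportional injection
    if haze is not None:
        fused = fused + haze[:, None, None]     # restore the offsets
    return fused

# Toy usage with random data already resampled to the Pan grid.
ms_up = np.random.rand(4, 128, 128)
pan = np.random.rand(128, 128)
print(awlp_like(ms_up, pan).shape)  # (4, 128, 128)
```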

15 pages, 3905 KiB  
Article
Conditional Skipping Mamba Network for Pan-Sharpening
by Yunxuan Tang, Huaguang Li, Peng Liu and Tong Li
Symmetry 2024, 16(12), 1681; https://doi.org/10.3390/sym16121681 - 19 Dec 2024
Viewed by 491
Abstract
Pan-sharpening aims to generate high-resolution multispectral (HRMS) images by combining high-resolution panchromatic (PAN) images with low-resolution multispectral (LRMS) data, while maintaining the symmetry of spatial and spectral characteristics. Traditional convolutional neural networks (CNNs) struggle with global dependency modeling due to local receptive fields, and Transformer-based models are computationally expensive. Recent Mamba models offer linear complexity and effective global modeling. However, existing Mamba-based methods lack sensitivity to local feature variations, leading to suboptimal fine-detail preservation. To address this, we propose a Conditional Skipping Mamba Network (CSMN), which enhances global-local feature fusion symmetrically through two modules: (1) the Adaptive Mamba Module (AMM), which improves global perception using adaptive spatial-frequency integration; and (2) the Cross-domain Mamba Module (CDMM), optimizing cross-domain spectral-spatial representation. Experimental results on the IKONOS and WorldView-2 datasets demonstrate that CSMN surpasses existing state-of-the-art methods in achieving superior spectral consistency and preserving spatial details, with performance that is more symmetric in fine-detail preservation.
(This article belongs to the Section Computer)

19 pages, 26046 KiB  
Article
Downscaling Land Surface Temperature via Assimilation of LandSat 8/9 OLI and TIRS Data and Hypersharpening
by Luciano Alparone and Andrea Garzelli
Remote Sens. 2024, 16(24), 4694; https://doi.org/10.3390/rs16244694 - 16 Dec 2024
Viewed by 414
Abstract
Land surface temperature (LST) plays a pivotal role in many environmental sectors. Unfortunately, thermal bands produced by instruments that are onboard satellites have limited spatial resolutions; this seriously impairs their potential usefulness. In this study, we propose an automatic procedure for the spatial downscaling of the two 100 m thermal infrared (TIR) bands of LandSat 8/9, captured by the TIR spectrometer (TIRS), by exploiting the bands of the optical instrument. The problem of fusion of heterogeneous data is approached as hypersharpening: each of the two sharpening images is synthesized following data assimilation concepts, with the linear combination of 30 m optical bands and the 15 m panchromatic (Pan) image that maximizes the correlation with each thermal channel at its native 100 m scale. The TIR bands resampled at 15 m are sharpened, each by its own synthetic Pan. On two different scenes of an OLI-TIRS image, the proposed approach is compared with 100 m to 15 m pansharpening, carried out solely by means of the Pan image of OLI, and with the two high-resolution assimilated thermal images that are used for hypersharpening the two TIRS bands. Besides visual evaluations of the temperature maps, statistical indexes measuring radiometric and spatial consistencies are provided and discussed. The superiority of the proposed approach is highlighted: the classical pansharpening approach is radiometrically accurate but weak in the consistency of spatial enhancement. Conversely, the assimilated TIR bands, though adequately sharp, lose more than 20% of radiometric consistency. Our proposal trades off the benefits of its counterparts in a unique method.
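The regression step described above can be sketched as follows: fit the weights at the native 100 m scale, then reuse them on the fine-scale stack to synthesize a band-specific sharpening image. Ordinary least squares is used here as a stand-in for the correlation-maximizing estimator; the function names and toy shapes are assumptions:

```python
import numpy as np

def fit_synthetic_pan(optical_coarse, tir_coarse):
    """Fit tir ~ w . optical + b at the TIR's native scale.

    optical_coarse : (K, H, W) optical bands (and Pan) degraded to the TIR scale.
    tir_coarse     : (H, W) one thermal band at its native resolution.
    """
    K = optical_coarse.shape[0]
    X = optical_coarse.reshape(K, -1).T            # (pixels, K)
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # add intercept
    coef, *_ = np.linalg.lstsq(X, tir_coarse.ravel(), rcond=None)
    return coef                                    # K weights + bias

def synthesize_pan(optical_fine, coef):
    """Apply the coarse-scale weights to the fine-scale optical stack."""
    K, H, W = optical_fine.shape
    X = optical_fine.reshape(K, -1).T
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ coef).reshape(H, W)                # synthetic high-res "Pan"

# Toy usage: 9 predictors at the coarse grid, the same stack at the fine grid.
opt_coarse = np.random.rand(9, 30, 30)
tir_coarse = np.random.rand(30, 30)
opt_fine = np.random.rand(9, 200, 200)
w = fit_synthetic_pan(opt_coarse, tir_coarse)
print(synthesize_pan(opt_fine, w).shape)  # (200, 200)
```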
(This article belongs to the Special Issue Remote Sensing for Land Surface Temperature and Related Applications)

15 pages, 6962 KiB  
Article
Perceptual Quality Assessment for Pansharpened Images Based on Deep Feature Similarity Measure
by Zhenhua Zhang, Shenfu Zhang, Xiangchao Meng, Liang Chen and Feng Shao
Remote Sens. 2024, 16(24), 4621; https://doi.org/10.3390/rs16244621 - 10 Dec 2024
Viewed by 549
Abstract
Pan-sharpening aims to generate high-resolution (HR) multispectral (MS) images by fusing HR panchromatic (PAN) and low-resolution (LR) MS images covering the same area. However, due to the lack of real HR MS reference images, accurately evaluating the quality of a fused image without a reference is challenging. On the one hand, most methods evaluate the quality of the fused image using full-reference indices on simulated data generated under the popular Wald's protocol; however, whether such results carry over to full-resolution data fusion remains controversial. On the other hand, the few existing no-reference methods mostly depend on manually crafted features and cannot fully capture the subtle spatial/spectral distortions of the fused image. Therefore, this paper proposes a perceptual quality assessment method based on a deep feature similarity measure. The proposed network includes a spatial/spectral feature extraction and similarity measure (FESM) branch and an overall evaluation network. The Siamese FESM branch extracts the spatial and spectral deep features and calculates the similarity of the corresponding pair of deep features to obtain the spatial and spectral feature parameters, and then, the overall evaluation network realizes the overall quality assessment. Moreover, we propose to quantify both the overall precision of all the training samples and the variations among different fusion methods in a batch, thereby enhancing the network's accuracy and robustness. The proposed method was trained and tested on a large subjective evaluation dataset comprising 13,620 fused images. The experimental results demonstrate its effectiveness and competitive performance.
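A toy sketch of the Siamese idea: a shared-weight encoder embeds the fused image and a reference input, and the cosine similarity of the embeddings plays the role of a feature parameter fed to the evaluator. The two-layer CNN here is a placeholder assumption, not the paper's trained FESM branch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseFeatureSimilarity(nn.Module):
    """Shared-weight encoder + cosine similarity of embeddings (toy stand-in)."""

    def __init__(self, in_ch: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        fa = self.encoder(a).flatten(1)   # same weights for both inputs: Siamese
        fb = self.encoder(b).flatten(1)
        return F.cosine_similarity(fa, fb, dim=1)  # one score per image pair

# Toy usage: a spectral branch comparing fused output vs. upsampled MS.
fused = torch.randn(2, 4, 64, 64)
ms_up = torch.randn(2, 4, 64, 64)
print(SiameseFeatureSimilarity(4)(fused, ms_up))  # tensor of 2 similarities
```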

27 pages, 11681 KiB  
Article
HyperGAN: A Hyperspectral Image Fusion Approach Based on Generative Adversarial Networks
by Jing Wang, Xu Zhu, Linhai Jing, Yunwei Tang, Hui Li, Zhengqing Xiao and Haifeng Ding
Remote Sens. 2024, 16(23), 4389; https://doi.org/10.3390/rs16234389 - 24 Nov 2024
Viewed by 543
Abstract
The objective of hyperspectral pansharpening is to fuse low-resolution hyperspectral images (LR-HSI) with corresponding panchromatic (PAN) images to generate high-resolution hyperspectral images (HR-HSI). Despite advancements in hyperspectral (HS) pansharpening using deep learning, the rich spectral details and large data volume of HS images place higher demands on models for effective spectral extraction and processing. In this paper, we present HyperGAN, a hyperspectral image fusion approach based on Generative Adversarial Networks. Unlike previous methods that deepen the network to capture spectral information, HyperGAN widens the structure with a Wide Block for multi-scale learning, effectively capturing global and local details from upsampled HSI and PAN images. While LR-HSI provides rich spectral data, PAN images offer spatial information. We introduce the Efficient Spatial and Channel Attention Module (ESCA) to integrate these features and add an energy-based discriminator to enhance model performance by learning directly from the Ground Truth (GT), improving fused image quality. We validated our method on various scenes, including the Pavia Center, Eastern Tianshan, and Chikusei. Results show that HyperGAN outperforms state-of-the-art methods in visual and quantitative evaluations.
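The ESCA module's internals are defined in the paper; as a rough illustration of the general pattern its name suggests, the sketch below combines a squeeze-and-excitation channel gate (spectral emphasis) with a spatial gate (pixel emphasis). All layer choices here are assumptions:

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    """Generic spatial+channel attention block (illustrative, not ESCA itself)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                 # reweight spectral channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)         # reweight spatial positions

x = torch.randn(1, 16, 32, 32)
print(SpatialChannelAttention(16)(x).shape)  # torch.Size([1, 16, 32, 32])
```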

19 pages, 44218 KiB  
Article
Testing the Impact of Pansharpening Using PRISMA Hyperspectral Data: A Case Study Classifying Urban Trees in Naples, Italy
by Miriam Perretta, Gabriele Delogu, Cassandra Funsten, Alessio Patriarca, Eros Caputi and Lorenzo Boccia
Remote Sens. 2024, 16(19), 3730; https://doi.org/10.3390/rs16193730 - 8 Oct 2024
Cited by 1 | Viewed by 1120
Abstract
Urban trees support vital ecological functions and help with the mitigation of and adaptation to climate change. Yet, their monitoring and management require significant public resources. Remote sensing could facilitate these tasks. Recent hyperspectral satellite programs such as PRISMA have enabled more advanced remote sensing applications, such as species classification. However, PRISMA data's spatial resolution (30 m) could limit its utility in urban areas. Improving hyperspectral data resolution with pansharpening using the PRISMA coregistered panchromatic band (spatial resolution of 5 m) could solve this problem. This study addresses the need to improve hyperspectral data resolution and tests the pansharpening method by classifying exemplative urban tree species in Naples (Italy) using a convolutional neural network and a ground truth dataset, with the aim of comparing results from the original 30 m data to data refined to a 5 m resolution. An evaluation of accuracy metrics shows that pansharpening improves classification quality in dense urban areas with complex topography. In fact, pansharpened data led to significantly higher accuracy for all the examined species. Specifically, the Pinus pinea and Tilia x europaea classes showed an increase of 10% to 20% in their F1 scores. Pansharpening is seen as a practical solution to enhance PRISMA data usability in urban environments.

21 pages, 57724 KiB  
Article
MDSCNN: Remote Sensing Image Spatial–Spectral Fusion Method via Multi-Scale Dual-Stream Convolutional Neural Network
by Wenqing Wang, Fei Jia, Yifei Yang, Kunpeng Mu and Han Liu
Remote Sens. 2024, 16(19), 3583; https://doi.org/10.3390/rs16193583 - 26 Sep 2024
Cited by 1 | Viewed by 970
Abstract
Pansharpening refers to enhancing the spatial resolution of multispectral images through panchromatic images while preserving their spectral features. However, both traditional and deep learning methods often introduce distortions in the spatial or spectral dimensions. This paper proposes a remote sensing spatial–spectral fusion method based on a multi-scale dual-stream convolutional neural network, which includes feature extraction, feature fusion, and image reconstruction modules for each scale. For feature fusion, we propose a multi-cascade module to better fuse image features. We also design a new loss function aimed at enforcing a high degree of consistency between fused images and reference images in terms of spatial details and spectral information. To validate its effectiveness, we conduct thorough experimental analyses on two widely used remote sensing datasets: GeoEye-1 and Ikonos. Compared with nine leading pansharpening techniques, the proposed method demonstrates superior performance in multiple key evaluation metrics.

20 pages, 8709 KiB  
Article
Automatic Fine Co-Registration of Datasets from Extremely High Resolution Satellite Multispectral Scanners by Means of Injection of Residues of Multivariate Regression
by Luciano Alparone, Alberto Arienzo and Andrea Garzelli
Remote Sens. 2024, 16(19), 3576; https://doi.org/10.3390/rs16193576 - 25 Sep 2024
Cited by 1 | Viewed by 732
Abstract
This work presents two pre-processing patches to automatically correct the residual local misalignment of datasets acquired by very/extremely high resolution (VHR/EHR) satellite multispectral (MS) scanners: one for, e.g., GeoEye-1 and Pléiades, featuring two separate instruments for MS and panchromatic (Pan) data, the other for WorldView-2/3, featuring three instruments, two of which are visible and near-infra-red (VNIR) MS scanners. The misalignment arises because the two/three instruments onboard GeoEye-1 / WorldView-2 (four onboard WorldView-3) share the same optics and, thus, cannot have parallel optical axes. Consequently, they image the same swath area from different positions along the orbit. Local height changes (hills, buildings, trees, etc.) give rise to local shifts among corresponding points in the datasets. The latter would be accurately aligned only if the digital elevation surface model were known with sufficient spatial resolution, which is hardly feasible everywhere because of the extremely high resolution, with Pan pixels of less than 0.5 m. The refined co-registration is achieved by injecting the residue of the multivariate linear regression of each scanner towards lowpass-filtered Pan. Experiments with two and three instruments show that an almost perfect alignment is achieved. MS pansharpening is also shown to greatly benefit from the improved alignment. The proposed alignment procedures are real-time, fully automated, and do not require any additional or ancillary information, relying uniquely on the unimodality of the MS and Pan sensors.

24 pages, 30982 KiB  
Article
A Multi-Stage Progressive Pansharpening Network Based on Detail Injection with Redundancy Reduction
by Xincan Wen, Hongbing Ma and Liangliang Li
Sensors 2024, 24(18), 6039; https://doi.org/10.3390/s24186039 - 18 Sep 2024
Viewed by 730
Abstract
In the field of remote sensing image processing, pansharpening technology stands as a critical advancement. This technology aims to enhance multispectral images that possess low resolution by integrating them with high-spatial-resolution panchromatic images, ultimately producing multispectral images with high resolution that are abundant in both spatial and spectral details. However, there remains potential for improving the quality of both the spectral and spatial domains of the images fused by deep-learning-based pansharpening methods. This work proposes a new method for the task of pansharpening: the Multi-Stage Progressive Pansharpening Network with Detail Injection with Redundancy Reduction Mechanism (MSPPN-DIRRM). This network is divided into three levels, each of which is optimized for the extraction of spectral and spatial data at different scales. Dedicated spectral feature and spatial detail extraction modules are used at each stage. Moreover, a new image reconstruction module, the DRRM, is introduced in this work; it eliminates both spatial and channel redundancy and improves the fusion quality. The effectiveness of the proposed model is further supported by experimental results using both simulated data and real data from the QuickBird, GaoFen1, and WorldView2 satellites; these results show that the proposed model outperforms deep-learning-based methods in both visual and quantitative assessments. Among various evaluation metrics, performance improves by 0.92–18.7% compared to the latest methods.
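As background, the detail-injection paradigm named in the title can be written generically as follows; this is the textbook formulation, not the paper's stage-specific gains:

```latex
\hat{M}_k = \tilde{M}_k + g_k \left( P - P_{\mathrm{L}} \right), \qquad k = 1, \dots, N
```

Here the hatted M is the fused band, M-tilde is the k-th MS band interpolated to the Pan grid, P_L is a lowpass version of the Pan image, and g_k is the band-wise injection gain; a multi-stage progressive network repeats this refinement across successive scales.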
(This article belongs to the Section Sensing and Imaging)

17 pages, 5435 KiB  
Article
HyperKon: A Self-Supervised Contrastive Network for Hyperspectral Image Analysis
by Daniel La’ah Ayuba, Jean-Yves Guillemaut, Belen Marti-Cardona and Oscar Mendez
Remote Sens. 2024, 16(18), 3399; https://doi.org/10.3390/rs16183399 - 12 Sep 2024
Cited by 1 | Viewed by 1120
Abstract
The use of a pretrained image classification model (trained on cats and dogs, for example) as a perceptual loss function for hyperspectral super-resolution and pansharpening tasks is surprisingly effective. However, RGB-based networks do not take full advantage of the spectral information in hyperspectral data. This inspired the creation of HyperKon, a dedicated hyperspectral Convolutional Neural Network backbone built with self-supervised contrastive representation learning. HyperKon uniquely leverages the high spectral continuity, range, and resolution of hyperspectral data through a spectral attention mechanism. We also perform a thorough ablation study on different kinds of layers, showing how well each handles hyperspectral data. Notably, HyperKon achieves a remarkable 98% Top-1 retrieval accuracy and surpasses traditional RGB-trained backbones in both pansharpening and image classification tasks. These results highlight the potential of hyperspectral-native backbones and herald a paradigm shift in hyperspectral image analysis.
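HyperKon's exact training recipe (augmentations, spectral-attention backbone) is specified in the paper; the generic self-supervised contrastive objective such methods build on, an InfoNCE/NT-Xent-style loss over paired views of the same patches, can be sketched as:

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Minimal InfoNCE-style contrastive loss (generic, not HyperKon's exact loss).

    z1, z2: (B, D) embeddings of two augmented views of the same patches.
    Matching rows are positives; all other rows in the batch are negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (B, B) scaled cosine similarities
    targets = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 patch embeddings of dimension 128 per view.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```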
(This article belongs to the Special Issue Advances in Hyperspectral Remote Sensing Image Processing)

18 pages, 2642 KiB  
Article
An Unsupervised CNN-Based Pansharpening Framework with Spectral-Spatial Fidelity Balance
by Matteo Ciotola, Giuseppe Guarino and Giuseppe Scarpa
Remote Sens. 2024, 16(16), 3014; https://doi.org/10.3390/rs16163014 - 16 Aug 2024
Cited by 1 | Viewed by 916
Abstract
In recent years, deep learning techniques for pansharpening multiresolution images have gained increasing interest. Due to the lack of ground truth data, most deep learning solutions rely on synthetic reduced-resolution data for supervised training. This approach has limitations due to the statistical mismatch between real full-resolution and synthetic reduced-resolution data, which affects the models' generalization capacity. Consequently, there has been a shift towards unsupervised learning frameworks for deep-learning-based pansharpening techniques. Unsupervised schemes require defining sophisticated loss functions with at least two components: one for spectral quality, ensuring consistency between the pansharpened image and the input multispectral component, and another for spatial quality, ensuring consistency between the output and the panchromatic input. Despite promising results, there has been limited investigation into the interaction and balance of these loss terms to ensure stability and accuracy. This work explores how unsupervised spatial and spectral consistency losses can be reliably combined while preserving output quality. By examining these interactions, we propose a general rule for balancing the two loss components to enhance the stability and performance of unsupervised pansharpening models. Experiments on three state-of-the-art algorithms using WorldView-3 images demonstrate that methods trained with the proposed framework achieve good performance in terms of visual quality and numerical indexes.
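The two-term structure described above can be sketched as follows; the average-pooling degradation, the L1 terms, and the balance factor lam are illustrative assumptions, and the paper's contribution is precisely how such a balance should be chosen:

```python
import torch
import torch.nn.functional as F

def unsupervised_pansharpening_loss(fused, ms, pan, ratio=4, lam=1.0):
    """Generic two-term unsupervised pansharpening loss (a sketch, not the paper's).

    Spectral term: the fused image, degraded back to the MS scale, should match
    the input MS. Spatial term: the band mean of the fused image should match
    the Pan. `lam` is the spectral/spatial balance under study.
    """
    fused_lr = F.avg_pool2d(fused, ratio)                  # crude MTF stand-in
    spectral = F.l1_loss(fused_lr, ms)                     # consistency with MS input
    spatial = F.l1_loss(fused.mean(1, keepdim=True), pan)  # consistency with Pan input
    return spectral + lam * spatial

fused = torch.rand(1, 4, 128, 128, requires_grad=True)
ms = torch.rand(1, 4, 32, 32)
pan = torch.rand(1, 1, 128, 128)
print(unsupervised_pansharpening_loss(fused, ms, pan).item())
```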
(This article belongs to the Special Issue Weakly Supervised Deep Learning in Exploiting Remote Sensing Big Data)

26 pages, 6739 KiB  
Article
Pansharpening Based on Multimodal Texture Correction and Adaptive Edge Detail Fusion
by Danfeng Liu, Enyuan Wang, Liguo Wang, Jón Atli Benediktsson, Jianyu Wang and Lei Deng
Remote Sens. 2024, 16(16), 2941; https://doi.org/10.3390/rs16162941 - 11 Aug 2024
Viewed by 885
Abstract
Pansharpening refers to the process of fusing multispectral (MS) images with panchromatic (PAN) images to obtain high-resolution multispectral (HRMS) images. However, due to the low correlation and similarity between MS and PAN images, as well as inaccuracies in spatial information injection, HRMS images often suffer from significant spectral and spatial distortions. To address these issues, a pansharpening method based on multimodal texture correction and adaptive edge detail fusion is proposed in this paper. To obtain a texture-corrected (TC) image that is highly correlated and similar to the MS image, the target-adaptive CNN-based pansharpening (A-PNN) method is introduced. By constructing a multimodal texture correction model, intensity, gradient, and A-PNN-based deep plug-and-play correction constraints are established between the TC and source images. Additionally, an adaptive degradation filter algorithm is proposed to ensure the accuracy of these constraints. Since the TC image obtained can effectively replace the PAN image and considering that the MS image contains valuable spatial information, an adaptive edge detail fusion algorithm is also proposed. This algorithm adaptively extracts detailed information from the TC and MS images to apply edge protection. Given the limited spatial information in the MS image, its spatial information is proportionally enhanced before the adaptive fusion. The fused spatial information is then injected into the upsampled multispectral (UPMS) image to produce the final HRMS image. Extensive experimental results demonstrated that compared with other methods, the proposed algorithm achieved superior results in terms of both subjective visual effects and objective evaluation metrics.

22 pages, 7835 KiB  
Article
Towards Robust Pansharpening: A Large-Scale High-Resolution Multi-Scene Dataset and Novel Approach
by Shiying Wang, Xuechao Zou, Kai Li, Junliang Xing, Tengfei Cao and Pin Tao
Remote Sens. 2024, 16(16), 2899; https://doi.org/10.3390/rs16162899 - 8 Aug 2024
Cited by 1 | Viewed by 1959
Abstract
Pansharpening, a pivotal task in remote sensing, involves integrating low-resolution multispectral images with high-resolution panchromatic images to synthesize an image that is both high-resolution and retains multispectral information. These pansharpened images enhance precision in land cover classification, change detection, and environmental monitoring within remote sensing data analysis. While deep learning techniques have shown significant success in pansharpening, existing methods often face limitations in their evaluation, focusing on restricted satellite data sources, single scene types, and low-resolution images. This paper addresses this gap by introducing PanBench, a high-resolution multi-scene dataset covering all mainstream satellites and comprising 5898 pairs of samples. Each pair includes a four-channel (RGB + near-infrared) multispectral image of 256 × 256 pixels and a mono-channel panchromatic image of 1024 × 1024 pixels. To avoid irreversible loss of spectral information and achieve a high-fidelity synthesis, we propose a Cascaded Multiscale Fusion Network (CMFNet) for pansharpening. Multispectral images are progressively upsampled while panchromatic images are downsampled. Corresponding multispectral features and panchromatic features at the same scale are then fused in a cascaded manner to obtain more robust features. Extensive experiments validate the effectiveness of CMFNet.
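A structural sketch of the cascaded idea: MS features are progressively upsampled, PAN features downsampled, and the two streams fused at each matching scale, with each fusion conditioned on the previous one. The channel widths and plain convolutions are assumptions; CMFNet's actual blocks are defined in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadedFusionSketch(nn.Module):
    """Skeleton of cascaded multiscale MS/PAN fusion (illustrative only)."""

    def __init__(self, ch: int = 32, scales: int = 2):
        super().__init__()
        self.ms_in = nn.Conv2d(4, ch, 3, padding=1)
        self.pan_in = nn.Conv2d(1, ch, 3, padding=1)
        self.fusers = nn.ModuleList(
            [nn.Conv2d(3 * ch, ch, 3, padding=1) for _ in range(scales + 1)]
        )
        self.out = nn.Conv2d(ch, 4, 3, padding=1)

    def forward(self, ms, pan):
        f_ms, f_pan = self.ms_in(ms), self.pan_in(pan)
        scales = len(self.fusers) - 1
        pan_pyr = [f_pan]                       # Pan pyramid, fine to coarse
        for _ in range(scales):
            pan_pyr.append(F.avg_pool2d(pan_pyr[-1], 2))
        fused = torch.zeros_like(f_ms)
        for i, fuse in enumerate(self.fusers):  # coarse -> fine cascade
            if i > 0:
                f_ms = F.interpolate(f_ms, scale_factor=2, mode="bilinear",
                                     align_corners=False)
                fused = F.interpolate(fused, scale_factor=2, mode="bilinear",
                                      align_corners=False)
            fused = fuse(torch.cat([f_ms, pan_pyr[scales - i], fused], dim=1))
        return self.out(fused)

# Toy usage at a 1:4 resolution ratio (scales=2, i.e., two doublings).
ms, pan = torch.rand(1, 4, 32, 32), torch.rand(1, 1, 128, 128)
print(CascadedFusionSketch(scales=2)(ms, pan).shape)  # torch.Size([1, 4, 128, 128])
```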