
Advances in Synthetic Aperture Radar (SAR) Data Processing and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 July 2025

Special Issue Editors


Guest Editor
School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071, China
Interests: synthetic aperture radar (SAR) imaging; inverse synthetic aperture radar (ISAR) imaging; synthetic aperture ladar (SAL) imaging; inverse synthetic aperture ladar (ISAL) imaging; signal processing; new system SAR

Guest Editor
Department of Science and Technology, University Parthenope, 80143 Naples, Italy
Interests: synthetic aperture radar (SAR) image processing; SAR image restoration; SAR image reconstruction; deep learning

Guest Editor
School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China
Interests: radar signal processing; machine learning; 6G and their applications
School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
Interests: radar signal processing; radar image processing; radar point cloud processing; radar and laser in remote sensing applications

Guest Editor
Hangzhou Institute of Technology, Xidian University, Hangzhou 311231, China
Interests: synthetic aperture ladar (SAL) imaging

Special Issue Information

Dear Colleagues,

Synthetic aperture radar (SAR) is an important sensor in remote sensing and plays a key role in agriculture and forestry monitoring, ocean observation, military reconnaissance, and other fields. Many airborne and spaceborne SAR systems have been developed over the past decades, yielding large volumes of SAR data. However, compared with research on SAR imaging algorithms, research on SAR data processing and applications remains insufficient. This Special Issue therefore invites scholars to publish articles on the latest progress in SAR data processing and applications.

Specifically, the Special Issue covers (but is not limited to) advanced radar technology, new SAR data processing methods, recent advances in SAR applications, and new SAR signal and information processing algorithms.

Suggested themes and article types for submissions.

Potential topics for this Special Issue include, but are not limited to:

  • Novel SAR data processing methods;
  • Novel techniques for SAR applications;
  • Artificial intelligence in SAR data processing and applications;
  • On-board real-time processing of SAR images;
  • Image recognition and interpretation methods for SAR;
  • Quality evaluation of SAR images;
  • Novel imaging methods for SAR;
  • New applications of polarimetric SAR in Earth observation;
  • Interferometric/tomographic SAR techniques and applications in 3D mapping;
  • Other related topics.

Prof. Dr. Liang Guo
Dr. Sergio Vitale
Prof. Dr. Junli Liang
Dr. Rui Guo
Dr. Hongfei Yin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • SAR data processing methods
  • SAR applications
  • SAR image recognition and interpretation
  • SAR image quality evaluation
  • artificial intelligence in SAR data processing and applications
  • SAR image processing
  • polarimetric SAR
  • interferometric SAR
  • tomographic SAR

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

21 pages, 3722 KiB  
Article
Laser Phase Noise Compensation Method Based on Dual Reference Channels in Inverse Synthetic Aperture Lidar
by Dengfeng Liu, Chen Xu, Yutang Li, Anpeng Song, Jian Li, Kai Jin, Xi Luo and Kai Wei
Remote Sens. 2025, 17(1), 30; https://doi.org/10.3390/rs17010030 - 26 Dec 2024
Abstract
Laser phase noise is a critical factor that limits the range and performance of coherent lidar systems, especially in high-resolution applications such as inverse synthetic aperture lidar (ISAL), which demands stringent coherence. The effective suppression of laser phase noise is essential to enable high-resolution imaging over long distances. This paper presents a phase noise compensation technique utilizing dual reference channels (DRCs) based on concatenated generated phase (CGP) principles. The proposed method uses two reference channels with different delay lengths: a long-delay channel for coarse phase noise compensation and a short-delay channel for fine adjustments. We performed ISAL imaging experiments on stationary and rotating targets using a seed laser with a 3.41 MHz linewidth, achieving round-trip distances exceeding 110 times the laser coherence length. Imaging quality closely matched that of a 100 Hz narrow-linewidth laser, approaching theoretical resolution limits. Compared to prior methods based on residual error linear estimation, the DRC method enhances compensation speed tenfold while maintaining accuracy. These results highlight the efficacy of the proposed DRC method in mitigating laser phase noise, significantly improving ISAL imaging performance.
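The concatenated-generated-phase idea behind the reference channels can be illustrated with a toy simulation: a delayed interferometric channel measures the phase difference φ(t) − φ(t − τ), and summing those differences reconstructs the phase at integer multiples of the delay. The NumPy sketch below uses a single channel and illustrative names only; it is not the paper's DRC pipeline, which combines a long and a short delay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy laser phase noise: a random-walk process with phi[0] = 0.
n = 4096
phi = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, 0.05, n - 1))])

def reference_channel(phase, delay):
    """An interferometric reference channel with a fiber delay of `delay`
    samples measures phase(t) - phase(t - delay)."""
    return phase[delay:] - phase[:-delay]

# CGP idea: the phase at multiples of the delay equals the cumulative sum of
# the measured differences, so it can be reconstructed exactly on that grid
# and then used to compensate the echo phase.
d = 256
m = reference_channel(phi, d)

grid = np.arange(1, n // d) * d      # sample indices k * d
phi_hat = np.cumsum(m[grid - d])     # sum_{j=1..k} [phi(j*d) - phi((j-1)*d)]
residual = phi[grid] - phi_hat       # ~0 on the grid
print(float(np.max(np.abs(residual))))
```

Between grid points the phase must still be interpolated or refined, which is where a second, shorter delay (as in the DRC method) helps.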

20 pages, 10333 KiB  
Article
NST-YOLO11: ViT Merged Model with Neuron Attention for Arbitrary-Oriented Ship Detection in SAR Images
by Yiyang Huang, Di Wang, Boxuan Wu and Daoxiang An
Remote Sens. 2024, 16(24), 4760; https://doi.org/10.3390/rs16244760 - 20 Dec 2024
Abstract
Due to the significant discrepancies in the distribution of ships in nearshore and offshore areas, the wide range of their size, and the randomness of target orientation at sea, traditional detection models in the field of computer vision struggle to achieve performance in SAR image ship target detection comparable to that in optical image detection. This paper proposes an oriented ship target detection model based on the YOLO11 algorithm, Neural Swin Transformer-YOLO11 (NST-YOLO11). The proposed model integrates an improved Swin Transformer module called Neural Swin-T and a Cross-Stage connected Spatial Pyramid Pooling-Fast (CS-SPPF) module. By introducing a spatial/channel unified attention mechanism with neuron suppression in the spatial domain, the information redundancy generated by the local window self-attention module in the Swin Transformer Block is cut off. Furthermore, the idea of cross-stage partial (CSP) connections is applied to the fast spatial pyramid pooling (SPPF) module, effectively enhancing the ability to retain information in multi-scale feature extraction. Experiments conducted on the Rotated Ship Detection Dataset in SAR Images (RSDD-SAR) and the SAR Ship Detection Dataset (SSDD+), together with comparisons against other oriented detection models, demonstrate that the proposed NST-YOLO11 achieves state-of-the-art detection performance and exhibits outstanding generalization ability and robustness.
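The CS-SPPF structure can be sketched in a few lines: SPPF chains three stride-1 max poolings and concatenates all stages along the channel axis, while the cross-stage partial connection lets half the channels bypass the pooling branch. The NumPy sketch below is a conceptual illustration only (the 1x1 fusion convolutions of the real module are omitted, and all names are illustrative), not the NST-YOLO11 implementation.

```python
import numpy as np

def maxpool_s1(x, k=5):
    """Stride-1 max pooling with 'same' padding on a (C, H, W) feature map."""
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    C, H, W = x.shape
    out = np.full_like(x, -np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, xp[:, dy:dy + H, dx:dx + W])
    return out

def sppf(x, k=5):
    """Spatial Pyramid Pooling-Fast: three chained poolings, concatenated
    with the input along the channel axis (4x the channels)."""
    p1 = maxpool_s1(x, k)
    p2 = maxpool_s1(p1, k)
    p3 = maxpool_s1(p2, k)
    return np.concatenate([x, p1, p2, p3], axis=0)

def cs_sppf(x, k=5):
    """Cross-stage partial connection around SPPF: half the channels bypass
    the pooling branch and are re-joined afterwards."""
    c = x.shape[0] // 2
    bypass, branch = x[:c], x[c:]
    return np.concatenate([bypass, sppf(branch, k)], axis=0)

x = np.random.default_rng(1).normal(size=(8, 16, 16))
y = cs_sppf(x)
print(y.shape)  # (4 + 4*4, 16, 16) -> (20, 16, 16)
```

The bypass path is what "retains information": it reaches the output untouched by the pooling cascade.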

29 pages, 96249 KiB  
Article
SAR-MINF: A Novel SAR Image Descriptor and Matching Method for Large-Scale Multidegree Overlapping Tie Point Automatic Extraction
by Shuo Li, Xiongwen Yang, Xiaolei Lv and Jian Li
Remote Sens. 2024, 16(24), 4696; https://doi.org/10.3390/rs16244696 - 16 Dec 2024
Abstract
The automatic extraction of large-scale tie points (TPs) for Synthetic Aperture Radar (SAR) images is crucial for generating SAR Digital Orthophoto Maps (DOMs). This task involves matching SAR images under various conditions, such as different resolutions, incidence angles, and orbital directions, which is highly challenging. To address the feature extraction challenges of different SAR images, we propose a Gamma Modulated Phase Congruency (GMPC) model. This improved phase congruency model is defined by a Gamma Modulation Filter (GMF) and an adaptive noise model. Additionally, to reduce layover interference in SAR images, we introduce a GMPC-Harris feature point extraction method with layover perception. We also propose a matching method based on the SAR Modality Independent Neighborhood Fusion (SAR-MINF) descriptor, which fuses feature information from different neighborhoods. Furthermore, we present a graph-based overlap extraction algorithm and establish an automated workflow for large-scale TP extraction. Experiments show that the proposed SAR-MINF matching method increases the Correct Match Rate (CMR) by an average of 31.2% and the matching accuracy by an average of 57.8% compared with other prevalent SAR image matching algorithms. The proposed TP extraction algorithm can extract full-degree TPs with an accuracy of less than 0.5 pixels for more than 98% of 2-degree TPs and over 95% of multidegree TPs, meeting the requirements of DOM production.
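The graph-based overlap extraction step can be illustrated with a minimal sketch: treat each image footprint as a node, connect footprints that overlap, and read off connected groups, which determine the overlap degree of candidate tie points. This toy Python example (axis-aligned bounding boxes, union-find grouping; all names are illustrative assumptions, not the paper's algorithm) shows the idea:

```python
def overlaps(a, b):
    """Axis-aligned bounding boxes (xmin, ymin, xmax, ymax) intersect?"""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def overlap_groups(boxes):
    """Union-find over the pairwise-overlap graph; returns connected groups
    of image indices that share (possibly transitive) overlap."""
    parent = list(range(len(boxes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if overlaps(boxes[i], boxes[j]):
                parent[find(j)] = find(i)

    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Three images along one strip plus one isolated scene.
footprints = [(0, 0, 10, 10), (8, 0, 18, 10), (16, 0, 26, 10), (40, 40, 50, 50)]
print(overlap_groups(footprints))  # [[0, 1, 2], [3]]
```

Real SAR footprints are quadrilaterals in geographic coordinates rather than axis-aligned boxes, but the grouping logic is the same.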

31 pages, 2697 KiB  
Article
Single-Scene SAR Image Data Augmentation Based on SBR and GAN for Target Recognition
by Shangchen Feng, Xikai Fu, Yanlin Feng and Xiaolei Lv
Remote Sens. 2024, 16(23), 4427; https://doi.org/10.3390/rs16234427 - 26 Nov 2024
Abstract
High-performance neural networks for synthetic aperture radar (SAR) automatic target recognition (ATR) often encounter the challenge of data scarcity. The lack of sufficient labeled SAR image datasets leads to the consideration of using simulated data to supplement the dataset. On the one hand, electromagnetic computation simulations provide high amplitude accuracy but are inefficient for large-scale datasets due to their complex computations and physical models. On the other hand, ray tracing simulations offer high geometric accuracy and computational efficiency but struggle with low amplitude correctness, hindering accurate numerical feature extraction. Furthermore, the emergence of generative adversarial networks (GANs) provides a way to generate simulated datasets, trying to balance computational efficiency with image quality. Nevertheless, the simulated SAR images generated based on random noise lack constraints, and it is also difficult to generate images that exceed the parameter conditions of the real image’s training set. Hence, it is essential to integrate physics-based simulation techniques into GANs to enhance the generalization ability of the imaging parameters. In this paper, we present the SingleScene-SAR Simulator, an efficient framework for SAR image simulation that operates under limited real SAR data. This simulator integrates rasterized shooting and bouncing rays (SBR) with cycle GAN, effectively achieving both amplitude correctness and geometric accuracy. The simulated images are appropriate for augmenting datasets in target recognition networks. Firstly, the SingleScene-SAR Simulator employs a rasterized SBR algorithm to generate radar cross section (RCS) images of target models. Secondly, a specific training pattern for cycle GAN is established to translate noisy RCS images into simulated SAR images that closely resemble real ones. Finally, these simulated images are utilized for data augmentation. Experimental results based on the constructed dataset show that with only one scene SAR image containing 30 target chips, the SingleScene-SAR Simulator can efficiently produce simulated SAR images that exhibit high similarity in both spatial and statistical distributions compared with real images. By employing simulated SAR images for data augmentation, the accuracy of target recognition networks can be consistently and significantly enhanced.
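The cycle-consistency constraint at the heart of the cycle GAN stage can be illustrated with toy linear "generators" in place of the real CNNs: translating an RCS chip to the SAR domain and back should return the original chip. Everything below is an illustrative assumption, not the SingleScene-SAR Simulator's code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 'generators' as fixed linear maps on flattened 8x8 chips. In the real
# pipeline these are CNNs: G translates SBR-derived RCS images to the SAR
# domain, F translates back. Here F is the exact inverse of G, so the
# cycle-consistency loss is numerically zero.
G = np.eye(64) + rng.normal(0.0, 0.05, (64, 64))  # RCS -> SAR
F = np.linalg.inv(G)                              # SAR -> RCS

x = rng.normal(size=64)       # a flattened RCS chip from the SBR stage
cycle = F @ (G @ x)           # translate to the SAR domain and back

l_cyc = np.mean(np.abs(cycle - x))  # cycle-consistency (L1) loss
print(l_cyc)
```

During actual training, this L1 term is minimized jointly with the adversarial losses, which is what lets unpaired RCS and SAR chips constrain each other.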

