
ULAN: A Universal Local Adversarial Network for SAR Target Recognition Based on Layer-Wise Relevance Propagation

College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China
* Author to whom correspondence should be addressed.
Submission received: 13 November 2022 / Revised: 12 December 2022 / Accepted: 16 December 2022 / Published: 21 December 2022
(This article belongs to the Special Issue Adversarial Attacks and Defenses for Remote Sensing Data)

Abstract

Recent studies have proven that synthetic aperture radar (SAR) automatic target recognition (ATR) models based on deep neural networks (DNNs) are vulnerable to adversarial examples. However, existing attacks easily fail when the adversarial perturbation cannot be fully fed to the victim model, a situation we call perturbation offset. Moreover, since background clutter occupies most of the area of a SAR image and has low relevance to the recognition result, fooling models with global perturbations is quite inefficient. This paper proposes a semi-white-box attack network called Universal Local Adversarial Network (ULAN) to generate universal adversarial perturbations (UAPs) for the target regions of SAR images. In the proposed method, we calculate the model's attention heatmaps through layer-wise relevance propagation (LRP), which are used to locate the target regions of SAR images that have high relevance to the recognition results. In particular, we utilize a generator based on U-Net to learn the mapping from noise to UAPs and craft adversarial examples by adding the generated local perturbations to the target regions. Experiments indicate that the proposed method effectively prevents perturbation offset and achieves attack performance comparable to that of conventional global UAPs while perturbing only a quarter or less of the SAR image area.

1. Introduction

Synthetic aperture radar (SAR) is widely used in military and civilian fields for its ability to image targets with high resolution under all-time and all-weather conditions [1,2,3]. However, unlike natural images, it is difficult for humans to intuitively understand SAR images without resorting to interpretation techniques. The most popular interpretation method at present is the SAR automatic target recognition (SAR-ATR) technology based on deep neural networks (DNNs) [4,5,6,7,8]. With their powerful representation capabilities, DNNs outperform traditional supervised methods in image classification tasks. Yet, some researchers have proved that DNN-based SAR target recognition models are vulnerable to adversarial examples [9].
Szegedy et al. [10] first proposed the concept of adversarial examples, that is, a well-designed tiny perturbation can lead to the misclassification of a well-trained recognition model. This discovery has made adversarial attacks one of the biggest threats to artificial intelligence (AI) security. Thus far, researchers have proposed a series of adversarial attack methods, which can be divided into two categories from the perspective of prior knowledge: white-box attacks and black-box attacks. Under white-box conditions, the attacker has extensive access to the victim model and can exploit abundant prior information to craft adversarial examples. Typical white-box methods are gradient-based attacks [11,12], boundary-based attacks [13], saliency map-based attacks [14], etc. Conversely, under black-box conditions, the biggest challenge for attackers is that they can only access the output information of the victim model or even less. Representative black-box methods are probability label-based attacks [15,16], decision-based attacks [17], and transferability-based attacks [18]. While the above methods achieve impressive attack performance, they all fool DNNs with data-dependent perturbations, i.e., each input requires its own adversarial perturbation, which is hard to realize in real-world deployments. Moosavi-Dezfooli et al. [19] first proposed a universal adversarial perturbation (UAP) that can deceive DNNs independently of the input data. Subsequently, the work in [20] designed a universal adversarial network to learn the mapping from noise to UAPs and demonstrated the transferability of UAPs across different network structures. Mopuri et al. [21] argue that it is difficult for attackers to obtain the training dataset of the victim model, so to reduce the dependence on the dataset, they proposed a data-free method to generate UAPs by destroying the features extracted by convolutional layers. Another data-free work [22] used class impressions to simulate a real data distribution, generating UAPs with high transferability. In the field of remote sensing, Xu et al. [23] were the first to investigate adversarial attack and defense in safety-critical remote sensing tasks. Meanwhile, they also proposed the mixup attack [24] to craft universal adversarial examples for remote sensing data. Furthermore, researchers [25] have successfully attacked an advanced YOLOv2 detector in the real world with just a printed patch. Thus, further study of adversarial examples, especially UAPs, is necessary for both attackers and defenders.
With the wide application of DNNs in the field of SAR-ATR, researchers have embarked on investigating the adversarial examples of SAR images. In terms of data-dependent perturbations, Li et al. [26] used the FGSM and BIM algorithms to produce abundant adversarial examples for a CNN-based SAR image classification model and comprehensively analyzed various factors affecting the attack success rate. The work in [27] presented a Fast C&W algorithm for real-time attacks that introduces an encoder network to generate adversarial examples through the one-step forward mapping of SAR images. To enhance the universality of adversarial perturbations, Wang et al. [28] utilized the method proposed in [19] to craft UAPs for SAR images and achieved high attack success rates. In addition, the latest research [29] has broken through the limitations of the digital domain and implemented the UAP of SAR images in the signal domain by transmitting a two-dimensional jamming signal.
Although the above methods perform well in fooling SAR target recognition models, they are vulnerable and inefficient in practical applications. Specifically, existing attack methods work on the assumption that the adversarial perturbations can be fully fed to the victim model, but this is not always true in practice: in many cases the perturbations fed to the model are incomplete, causing the adversarial attacks to fail. We attribute this failure to the vulnerability of adversarial attacks and call this situation perturbation offset. For ease of understanding, we detail a specific example in Figure 1. On the other hand, we calculate the model's attention heatmaps [30] through layer-wise relevance propagation (LRP) [31], which we use to analyze the relevance of each pixel in the SAR image to the recognition result. The pixel-wise attention heatmaps can be found in Section 4.3. They show that the background regions of SAR images have little relevance to the model's outputs, and the features that greatly impact the recognition results are mainly concentrated in the target regions. However, existing attack methods fool DNN models with global perturbations, so considerable time and computing resources are spent designing perturbations for low-relevance background regions, which is undoubtedly inefficient. Therefore, the vulnerability and inefficiency of adversarial attacks remain to be addressed in real-world implementations.
In this paper, we propose a semi-white-box [32] attack network called Universal Local Adversarial Network (ULAN) to generate UAPs for the target regions of SAR images. Specifically, we first calculate the model's attention heatmaps through LRP to locate the target regions in SAR images that have high relevance to the recognition results. Then, we utilize U-Net [33] to learn the mapping from noise to UAPs and craft the adversarial examples by adding the generated local perturbations to the target regions. In this way, attackers can focus perturbations on the high-relevance target regions, which significantly improves the efficiency of adversarial attacks. Meanwhile, the proposed method also allows the adversarial perturbations to be fed to the victim model as completely as possible, preventing perturbation offset to the greatest extent.
The main contributions of this paper are summarized as follows.
(1)
We are the first to evaluate the adversarial attacks against DNN-based SAR-ATR models in the case of perturbation offset and analyze the relevance of each pixel in SAR images to the recognition results. Our research reveals the vulnerability and inefficiency of existing adversarial attacks in SAR target recognition tasks.
(2)
This paper designs a generative network to craft UAPs for the target regions of SAR images under semi-white-box conditions. The proposed method requires model information only during the training phase. Once the network is trained and given inputs, it can generate adversarial examples in real time for the victim model through one-step forward mapping without requiring access to the model itself anymore. Thus, our method possesses higher application potential than traditional iterative methods.
(3)
Experiments indicate that the proposed method not only prevents perturbation offset effectively but also achieves comparable attack performance to the conventional global UAPs by perturbing only a quarter or less of the SAR image area. Furthermore, we evaluate the attack performance of ULAN under small sample conditions. The results show that given five images per class, our method can cause a misclassification rate of over 70% in non-targeted attacks and make the probability of victim models outputting specified results in targeted attacks close to 80%.
The rest of this paper is organized as follows. Section 2 introduces the relevant preparation knowledge. In Section 3, we describe the proposed method in detail. The experimental results are shown in Section 4. The discussions and conclusions are given in Section 5 and Section 6, respectively.

2. Preliminary

2.1. Universal Adversarial Perturbations for SAR Target Recognition

Suppose $x_n \in [0,255]^{W \times H}$ is an 8-bit gray-scale image from the SAR image dataset $X$ and $f(\cdot)$ is a DNN-based $k$-class SAR target recognition model without a softmax output layer. Given a sample $x_n$ as input to $f(\cdot)$, the output is a $k$-dimensional logits vector $f(x_n) = [f(x_n)_1, f(x_n)_2, \ldots, f(x_n)_k]$, where $f(x_n)_i \in \mathbb{R}$ denotes the score of $x_n$ belonging to class $i$. Let $C_p = \arg\max_i (f(x_n)_i)$ represent the model's predicted class for $x_n$. The universal adversarial perturbation (UAP) is a single perturbation that attacks the model independently of the input data. In brief, adding the UAP to most of the samples in the dataset yields adversarial examples that easily fool the model:
$$\arg\max_i \big(f(x_n + \delta)_i\big) \neq C_p \ \ \text{for most } x_n \in X, \quad \text{s.t. } \|\delta\|_p \le \xi \qquad (1)$$
where $\delta$ is a UAP, the $L_p$-norm is defined as $\|\delta\|_p = \big(\sum_i |\delta_i|^p\big)^{1/p}$, and $\xi$ controls the magnitude of $\delta$.
Meanwhile, adversarial attacks can be divided into non-targeted and targeted attacks in terms of attack modes. As the name suggests, the former makes DNN models misclassify, while the latter induces models to output specified results. From a military perspective, targeted attacks are more challenging and threatening than non-targeted attacks. In other words, UAPs can reduce the probability that DNN models correctly recognize samples in non-targeted attack scenarios; conversely, they increase the probability of models identifying samples as target classes in targeted attack scenarios. Therefore, we transform (1) into the following optimization problems:
  • For non-targeted attacks:
    $$\mathrm{minimize} \ \ \frac{\sum_{n=1}^{N} D\big(\arg\max_i (f(x_n + \delta)_i) == C_{tr}\big)}{N}, \quad \text{s.t. } \|\delta\|_p \le \xi \qquad (2)$$
  • For targeted attacks:
    $$\mathrm{maximize} \ \ \frac{\sum_{n=1}^{N} D\big(\arg\max_i (f(x_n + \delta)_i) == C_{ta}\big)}{N}, \quad \text{s.t. } \|\delta\|_p \le \xi \qquad (3)$$
where the discriminant function $D(\cdot)$ equals one if the equation holds and zero otherwise, $C_{tr}$ and $C_{ta}$ represent the true and target classes of the input data, and $N$ is the total number of images in the dataset. By traversing all the samples in the dataset, (2) designs a minor $\delta$ that minimizes the probability of DNN models correctly recognizing samples in non-targeted attacks. In contrast, the $\delta$ designed by (3) maximizes the probability of models identifying samples as target classes in targeted attacks. Obviously, the above optimization problems are exactly the opposite of a DNN's training process; the corresponding loss functions are given in Section 3.4.
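To make the objectives in (2) and (3) concrete, the sketch below estimates how often a single fixed perturbation fools a classifier over a dataset. It is a minimal PyTorch illustration, not the paper's code: the classifier `f`, the data loader, and the assumption that inputs are scaled to [0, 1] are all placeholders.

```python
import torch

def fooling_rate(f, loader, delta, targeted=False, target_class=None):
    """Measure how often one fixed perturbation `delta` flips (non-targeted)
    or forces (targeted) the prediction, mirroring the counts in (2)/(3)."""
    hits, total = 0, 0
    f.eval()
    with torch.no_grad():
        for x, y in loader:                        # x: batch of SAR chips, y: true labels
            x_adv = torch.clamp(x + delta, 0.0, 1.0)
            pred = f(x_adv).argmax(dim=1)
            if targeted:
                hits += (pred == target_class).sum().item()
            else:
                hits += (pred != y).sum().item()
            total += x.size(0)
    return hits / total
```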

2.2. Attention Heatmaps

When humans make judgments, they can reasonably allocate their attention to different features of an object and obtain the desired semantic information efficiently. Coincidentally, recent studies have shown that DNNs have similar characteristics when making decisions [30]. For example, in image classification tasks, the pixels surrounding target regions tend to have a much greater impact on the classification results than others. Researchers typically utilize attention heatmaps to visualize the contribution of each pixel to the network output.
Nowadays, many algorithms have been proposed to calculate DNNs’ attention heatmaps. In this paper, we employ layer-wise relevance propagation (LRP) [31] to obtain the pixel-wise attention heatmaps, which is actually a backward visualization method [34,35,36] that obtains a heatmap by calculating the relevance between adjacent layers from outputs to inputs. Figure A1 displays the heatmaps of six DNNs calculated by LRP. The results indicate that the hotspots are mainly concentrated in the target regions, and the heatmaps of different DNNs have similar structures, i.e., attention heatmaps may be the semantic features shared by DNNs. Destroying the common semantic feature of DNNs is a promising idea to enhance the transferability of adversarial examples. We will detail the principle of LRP in Section 3.2.

3. The Proposed Universal Local Adversarial Network (ULAN)

The framework of the Universal Local Adversarial Network (ULAN) is shown in Figure 2. To describe the training process of ULAN more clearly, we divide it into four steps. The first step uses a generator to learn the mapping from normally distributed noise to universal adversarial perturbations (UAPs). Next, the second step calculates the pixel-wise attention heatmaps of the surrogate model through layer-wise relevance propagation (LRP). Then, the third step utilizes the UAPs and attention heatmaps to craft adversarial examples of SAR images. Finally, the fourth step computes the training loss and updates the generator's parameters through backward propagation. Note that the victim model is a white box in the training phase but a black box in the testing phase; thus, we calculate the heatmap of the surrogate model as an alternative to the victim network's heatmap. The remainder of this section introduces each of the above steps in detail.

3.1. Structure of Generative Network

In order to craft UAPs independently of the input data, this paper trains a generative network $G(\cdot)$ to transform normally distributed noise $Z$ into a UAP $\delta$ as follows:

$$\delta = G(Z), \quad Z \sim \mathcal{N}(0, 1) \qquad (4)$$

where $Z$ and $\delta$ have the same size, denoted as $w \times h$. Meanwhile, we set the size of SAR images to $W \times H$. Since the generated $\delta$ is a local perturbation, its size satisfies $w \times h \le W \times H$.
The characteristics of SAR images should be taken into account when choosing the generative network. First of all, a SAR image mainly consists of the target and background clutter. Yet, the features that have a great impact on the recognition results are mainly concentrated in the target region, which occupies only a tiny part of the SAR image. Second, compared to natural images, the professionalism and confidentiality of SAR images make them challenging to access. This means that we need to consider adversarial attacks under small sample conditions, so a lightweight generator is necessary to prevent network overfitting. For these reasons, this paper takes U-Net as the generator to craft UAPs.
Figure 3 shows the detailed U-Net structure. It was first proposed to segment biomedical images [33] and mainly consists of an encoder and a decoder. The encoder extracts features by down-sampling the input data, while the decoder recovers the data by up-sampling feature maps. The biggest difference between the U-Net and other common encoder-decoder models is that the former introduces a skip connection operation to fuse features from different layers. Specifically, both the encoder and the decoder consist of four sub-blocks. The encoder block contains two 3 × 3 convolutional layers and a 2 × 2 max-pooling layer, while the decoder block contains a 2 × 2 transposed convolutional layer and two 3 × 3 convolutional layers. Note that the last layer of the decoder utilizes a 1 × 1 convolutional layer to make the number of input and output channels identical. The network parameters are given in Table 1.
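As a concrete reference for the encoder-decoder with skip connections described above, the following PyTorch sketch shows a compact U-Net-style generator that maps a noise map to a local perturbation of the same size. The channel widths and depth here are illustrative and do not reproduce the exact settings of Table 1.

```python
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Compact U-Net-style generator mapping an h x w noise map to a local UAP."""
    def __init__(self, ch=(16, 32, 64)):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1, self.enc2 = block(1, ch[0]), block(ch[0], ch[1])
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(ch[1], ch[2])
        self.up2 = nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2)
        self.dec2 = block(ch[2], ch[1])
        self.up1 = nn.ConvTranspose2d(ch[1], ch[0], 2, stride=2)
        self.dec1 = block(ch[1], ch[0])
        self.out = nn.Conv2d(ch[0], 1, 1)                      # 1x1 conv keeps one output channel

    def forward(self, z):
        e1 = self.enc1(z)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.out(d1)

# usage: map normal noise to a 44 x 44 local perturbation
# z = torch.randn(1, 1, 44, 44); delta = UNetGenerator()(z)
```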

3.2. Layer-Wise Relevance Propagation (LRP)

To analyze the relevance of each pixel in SAR images to the recognition results, we must first obtain the DNN model's attention heatmaps. In this paper, we apply layer-wise relevance propagation (LRP) [31], which takes the model's logits outputs as input and produces the pixel-wise attention heatmaps of the surrogate model $f_s(\cdot)$. For ease of explanation, we suppose $f_s(\cdot)$ is an $l$-layer DNN without the softmax output layer. Figure 4 illustrates the network's forward propagation and LRP.
The left of Figure 4 shows a standard forward propagation, which takes a SAR image $x$ as input and outputs a logits vector $f_s(x)$. A common mapping from one layer to the next can be expressed as follows:
$$x_i^{(l-1)} = \sigma\big(z_i^{(l-1)}\big) \qquad (5)$$

$$z_{ij}^{(l-1)} = w_{ij}^{(l-1)} x_i^{(l-1)} \qquad (6)$$

$$z_j^{(l)} = \sum_i z_{ij}^{(l-1)} + b_j^{(l)} \qquad (7)$$

where $z_i^{(l-1)}$ and $x_i^{(l-1)}$ denote the pre-activation and post-activation of the corresponding node (the superscript and subscript denote layer and node indices, respectively), $\sigma(\cdot)$ is an activation function, $w_{ij}^{(l-1)}$ and $z_{ij}^{(l-1)}$ can be understood as the weight and local pre-activation between nodes $x_i^{(l-1)}$ and $z_j^{(l)}$, and $b_j^{(l)}$ is a bias term. The activation function $\sigma(\cdot)$ is usually nonlinear, such as the hyperbolic tangent $\tanh$ or the rectification function $\mathrm{ReLU}$, which enhances the network's representation capacity. Note that the input and output layers typically do not include activation functions, and the output $f_s(x) = [f_s(x)_1, f_s(x)_2, \ldots, f_s(x)_{N_l}]$ is a logits vector without softmax operations.
As for LRP, given a target class output $f_s(x)_j$ as input, its output is a pixel-wise attention heatmap reflecting the image regions most relevant to $f_s(x)_j$. Specifically, we sequentially decompose the relevance of each node to the target class output $f_s(x)_j$ from the neural network's output layer to the input layer. Meanwhile, the backward propagation of the relevance must satisfy the following conservation property:

$$f_s(x)_j = R_j^{(l)} = \sum_i R_i^{(l-1)} = \cdots = \sum_n R_n^{(1)} \qquad (8)$$
A common decomposition is to allocate the relevance according to the ratio of local to global pre-activations in the forward propagation, as follows:

$$R_{ij}^{(l-1,l)} = \frac{z_{ij}^{(l-1)}}{z_j^{(l)}} \cdot R_j^{(l)} \qquad (9)$$

where $R_{ij}^{(l-1,l)}$ denotes the relevance assigned from node $R_j^{(l)}$ to node $R_i^{(l-1)}$. This decomposition approximately satisfies the conservation property in (8):

$$\sum_i R_{ij}^{(l-1,l)} = R_j^{(l)} \cdot \left(1 - \frac{b_j^{(l)}}{z_j^{(l)}}\right) \approx R_j^{(l)} \qquad (10)$$

Additionally, considering that $R_{ij}^{(l-1,l)}$ approaches infinity as $z_j^{(l)}$ goes to zero, (9) can be modified by introducing a stabilizing term $\epsilon \ge 0$ as follows:

$$R_{ij}^{(l-1,l)} = \frac{z_{ij}^{(l-1)}}{z_j^{(l)} + \epsilon \cdot \mathrm{sign}\big(z_j^{(l)}\big)} \cdot R_j^{(l)} \qquad (11)$$
In summary, we can calculate the relevance of each node for the target class output through the following recursion formula and backward-pass the relevance until reaching the input layer.
$$R_i^{(l-1)} = \sum_j \frac{z_{ij}^{(l-1)}}{z_j^{(l)} + \epsilon \cdot \mathrm{sign}\big(z_j^{(l)}\big)} \cdot R_j^{(l)} \qquad (12)$$
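The recursion in (11) and (12) can be implemented layer by layer. The sketch below applies the epsilon-stabilized rule to a single fully connected PyTorch layer; handling convolutions, pooling, and normalization layers requires analogous rules and is omitted here, so this is an illustration rather than a complete LRP implementation.

```python
import torch
import torch.nn.functional as F

def lrp_epsilon_linear(layer, a, R_out, eps=1e-6):
    """Back-propagate relevance through one nn.Linear layer with the
    epsilon-stabilized rule of (11)-(12). `a` holds the layer's inputs and
    `R_out` the relevance of its outputs, both shaped (batch, features)."""
    z = F.linear(a, layer.weight, layer.bias)                 # z_j = sum_i z_ij + b_j
    stab = eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
    s = R_out / (z + stab)                                    # R_j / (z_j + eps * sign(z_j))
    c = torch.matmul(s, layer.weight)                         # sum_j w_ij * s_j
    return a * c                                              # R_i = a_i * sum_j w_ij * s_j

# repeating this from the output layer down to the input image yields the
# pixel-wise attention heatmap used to locate target regions
```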

3.3. Adversarial Examples of SAR Images

To add the local perturbations generated in Section 3.1 to the target regions of SAR images, we determine the perturbation location through the attention heatmaps calculated in Section 3.2. Specifically, we take the attention heatmap centroid as the perturbation center and design a perturbation function to craft the adversarial examples.
First of all, the coordinates of the image centroid can be calculated by the following formula [37]:
$$(u_c, v_c) = \left(\frac{M_{10}}{M_{00}}, \frac{M_{01}}{M_{00}}\right) \qquad (13)$$
where $M_{00}$ is the zero-order moment of the image, and $M_{10}$ and $M_{01}$ are its first-order moments. These are special cases of the image moments, which are generally defined as:

$$M_{\alpha\beta} = \iint u^{\alpha} v^{\beta} f(u, v)\, \mathrm{d}u\, \mathrm{d}v \qquad (14)$$
For a digital image, we regard the pixel coordinates as a two-dimensional random variable $(u, v)$ and the value of each pixel as the density at that point. Thus, a gray-scale image can be represented by a two-dimensional gray-scale density function $V(u, v)$, and its moments can be expressed as:

$$M_{\alpha\beta} = \sum_u \sum_v V(u, v) \cdot u^{\alpha} \cdot v^{\beta} \qquad (15)$$
Note that the premise here is a two-dimensional gray-scale image, so we first convert the attention heatmap $h_{map}$ to a single-channel gray-scale image and then preprocess it with Gaussian blur and binarization algorithms [38].
Then, we take the attention heatmap centroid as the perturbation center, so the pixel coordinates corresponding to $\delta(0, 0)$, i.e., the perturbation origin, can be derived as:

$$(u_o, v_o) = \left(u_c + \Delta u - \left\lfloor \frac{w}{2} \right\rfloor, \ v_c + \Delta v - \left\lfloor \frac{h}{2} \right\rfloor\right) \qquad (16)$$

where $w$ and $h$ are the width and height of $\delta$, $\lfloor w/2 \rfloor$ and $\lfloor h/2 \rfloor$ represent the displacement between the perturbation center and the perturbation origin in the horizontal and vertical directions, and $\lfloor \cdot \rfloor$ denotes rounding down. Meanwhile, this paper adds a two-dimensional random noise $(\Delta u, \Delta v) \sim U(-5, 5)$ to the centroid coordinates to improve the generalization of our attack.
Next, we add the UAP $\delta$ to the perturbed region through the following perturbation function. Let $Pert(u_o, v_o, \delta, W, H)$ be a function that takes as input the perturbation origin coordinates $(u_o, v_o)$, a UAP $\delta$, and the size of SAR images $W \times H$ and outputs an adversarial perturbation $\delta' \in \mathbb{R}^{W \times H}$ of the same size as the SAR images, defined for $0 \le u \le W-1$ and $0 \le v \le H-1$ as:

$$\delta'(u, v) = \begin{cases} \delta(u - u_o, v - v_o), & \text{if } u_o \le u \le u_o + w - 1 \ \text{and} \ v_o \le v \le v_o + h - 1 \\ 0, & \text{otherwise} \end{cases} \qquad (17)$$

In brief, the adversarial perturbation $\delta' = Pert(u_o, v_o, \delta, W, H)$ equals zero at all pixels except those in the perturbed region.
Finally, the adversarial example $x'$ can be expressed as:

$$x' = \mathrm{Clip}_{[0,255]}(x + \delta') \qquad (18)$$

The clipping operation restricts the pixel values of $x'$ to the interval $[0, 255]$, ensuring that $x'$ is still an 8-bit gray-scale image.
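Putting Section 3.3 together, the following sketch computes the heatmap centroid from the moments in (15), places a local UAP at the origin given by (16) and (17), and clips the result as in (18). The float tensor shapes, the boundary guard that keeps the patch inside the image, and the assumption that the heatmap has already been blurred and binarized are all details of this illustration rather than the paper's exact code.

```python
import torch

def place_local_uap(x, delta, heatmap, jitter=5):
    """Craft one adversarial example: heatmap centroid -> perturbation origin
    -> paste the h x w UAP -> clip to the 8-bit range. `x` and `heatmap` are
    H x W float tensors, `delta` is h x w."""
    H, W = x.shape
    h, w = delta.shape
    # image moments of the (non-negative) heatmap -> centroid, Eq. (13)/(15)
    v_idx = torch.arange(H, dtype=torch.float32)
    u_idx = torch.arange(W, dtype=torch.float32)
    m00 = heatmap.sum()
    u_c = (heatmap.sum(dim=0) * u_idx).sum() / m00          # M10 / M00
    v_c = (heatmap.sum(dim=1) * v_idx).sum() / m00          # M01 / M00
    # random offset improves generalization, as in Eq. (16)
    du, dv = torch.randint(-jitter, jitter + 1, (2,))
    u_o = int(u_c) + int(du) - w // 2
    v_o = int(v_c) + int(dv) - h // 2
    u_o = max(0, min(W - w, u_o))                           # keep the patch inside the image
    v_o = max(0, min(H - h, v_o))
    x_adv = x.clone()
    x_adv[v_o:v_o + h, u_o:u_o + w] += delta                # Eq. (17)
    return torch.clamp(x_adv, 0, 255)                       # Eq. (18)
```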

3.4. Design of Loss Functions

To effectively fool the DNN model with a minor perturbation, we design a loss function $L_t$ consisting of an attack loss $L_a$ and a norm loss $L_n$. This section introduces them in detail.
For the non-targeted attack: In this paper, we design the attack loss $L_a$ on the basis of the standard cross-entropy loss:

$$\mathrm{loss}\big(f_v(x'), C_{tr}\big) = -\log \frac{\exp\big(f_v(x')_{C_{tr}}\big)}{\sum_j \exp\big(f_v(x')_j\big)} \qquad (19)$$
where $f_v(x')$ is the logits output of the victim model. The above formula implicitly contains the following softmax operation:

$$\mathrm{softmax}\big(f_v(x')_i\big) = \frac{\exp\big(f_v(x')_i\big)}{\sum_j \exp\big(f_v(x')_j\big)} \in [0, 1] \qquad (20)$$
Obviously, the cross-entropy loss in (19) has been widely used in network training to improve the DNN model's classification accuracy by increasing the confidence of the true class. Conversely, according to (2), the non-targeted attack can minimize the classification accuracy by decreasing the confidence of the true class, i.e., increasing the confidence of the others, and thus, the attack loss $L_a$ can be expressed as:

$$L_a\big(f_v(x'), C_{tr}\big) = -\log \frac{\sum_{j \ne C_{tr}} \exp\big(f_v(x')_j\big)}{\sum_j \exp\big(f_v(x')_j\big)} = -\log\left(1 - \frac{\exp\big(f_v(x')_{C_{tr}}\big)}{\sum_j \exp\big(f_v(x')_j\big)}\right) \qquad (21)$$
Meanwhile, a norm loss $L_n$ is introduced to limit the perturbation magnitude. We use the traditional $L_p$-norm to measure the degree of image distortion as follows:

$$L_n(x, x') = \|x' - x\|_p = \Big(\sum_i |\Delta x_i|^p\Big)^{1/p} \qquad (22)$$
Then, we apply the linear weighted sum method to balance the relationship between $L_a$ and $L_n$, so the total loss $L_t$ can be represented as:

$$L_t = L_a\big(f_v(x'), C_{tr}\big) + \omega \cdot L_n(x, x') = \omega \cdot \|x' - x\|_p - \log\left(1 - \frac{\exp\big(f_v(x')_{C_{tr}}\big)}{\sum_j \exp\big(f_v(x')_j\big)}\right) \qquad (23)$$

where $\omega \ge 0$ is a constant that measures the relative importance of the attack's effectiveness and the attack's stealthiness.
For the targeted attack: According to (3), a targeted attack aims to maximize the probability that the victim model recognizes samples as target classes. In other words, we need to increase the confidence of the target class. Thus, the attack loss $L_a$ of targeted attacks can be expressed as:

$$L_a\big(f_v(x'), C_{ta}\big) = -\log \frac{\exp\big(f_v(x')_{C_{ta}}\big)}{\sum_j \exp\big(f_v(x')_j\big)} \qquad (24)$$

The norm loss $L_n$ is the same as (22), so the total loss $L_t$ of the targeted attack can be derived as follows:

$$L_t = L_a\big(f_v(x'), C_{ta}\big) + \omega \cdot L_n(x, x') = \omega \cdot \|x' - x\|_p - \log \frac{\exp\big(f_v(x')_{C_{ta}}\big)}{\sum_j \exp\big(f_v(x')_j\big)} \qquad (25)$$
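The two total losses in (23) and (25) differ only in the attack term. A hedged PyTorch sketch of both is given below; the batched tensor shapes and the small epsilon inside the logarithm are implementation assumptions of this illustration.

```python
import torch
import torch.nn.functional as F

def ulan_loss(logits, labels, x_adv, x_clean, omega=0.5, targeted=False):
    """Total loss L_t = L_a + omega * L_n from (23)/(25). For targeted attacks
    `labels` holds target classes C_ta; otherwise it holds true classes C_tr."""
    p = F.softmax(logits, dim=1)
    p_cls = p.gather(1, labels.view(-1, 1)).squeeze(1)        # softmax confidence of C_tr / C_ta
    if targeted:
        l_a = -torch.log(p_cls + 1e-12)                       # raise target class confidence, Eq. (24)
    else:
        l_a = -torch.log(1.0 - p_cls + 1e-12)                 # lower true class confidence, Eq. (21)
    l_n = (x_adv - x_clean).flatten(1).norm(p=2, dim=1)       # L2 distortion, Eq. (22)
    return (l_a + omega * l_n).mean()
```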

4. Experiments

4.1. Dataset and Implementation Details

4.1.1. Dataset

The moving and stationary target acquisition and recognition (MSTAR) dataset [39] published by the U.S. Defense Advanced Research Projects Agency (DARPA) is employed in our experiments. MSTAR was collected by a high-resolution spotlight SAR and contains SAR images of Soviet military vehicle targets at different azimuth and depression angles. All the experiments were performed under standard operating conditions (SOC), covering ten ground target classes: self-propelled howitzers (2S1), infantry fighting vehicles (BMP2), armored reconnaissance vehicles (BRDM2), wheeled armored transport vehicles (BTR60, BTR70), bulldozers (D7), main battle tanks (T62, T72), cargo trucks (ZIL131), and self-propelled artillery (ZSU234). The training dataset contains 2747 images collected at a 17° depression angle, and the testing dataset contains 2426 images captured at a 15° depression angle. More details about the dataset are given in Table A1, and Figure A2 shows the optical images and corresponding SAR images of the ten ground target classes.

4.1.2. Implementation Details

Due to the different sizes of SAR images in MSTAR, we first center-cropped the images to 128 × 128. In practice, however, the target is not necessarily located in the center of the SAR image. Thus, we randomly cropped the center-cropped images to 88 × 88 and finally standardized them to zero mean and unit variance. For the victim models, we adopted six common DNNs, A-ConvNets-BN [40], VGG16-BN [41], GoogLeNet [42], InceptionV3 [43], ResNet50 [44], and ResNeXt50 [45], which were trained on the MSTAR dataset and achieved classification accuracies of over 97%. The surrogate model employed a well-trained VGG16-BN network to approximate the pixel-wise attention heatmap of the victim model. During the training phase, we formed the validation dataset by uniformly sampling 10% of the data from the training dataset and used the Adam optimizer [46] with a learning rate of 0.001, 15 training epochs, and a batch size of 32. The size of UAPs defaults to 44 × 44, the norm type defaults to the $L_2$-norm, and the weight coefficient $\omega$ defaults to 0.5. These parameter settings have been experimentally proven to achieve excellent attack performance; we discuss the influence of the parameters on UAPs in Section 4.7.
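For reference, the cropping and standardization pipeline described above could be expressed with torchvision transforms as sketched below; the normalization statistics shown are placeholders, since the paper does not report the exact mean and standard deviation used.

```python
import torchvision.transforms as T

# Center-crop the variable-size MSTAR chips to 128 x 128, randomly crop to
# 88 x 88 so the target is not always centered, then standardize.
train_transform = T.Compose([
    T.CenterCrop(128),
    T.RandomCrop(88),
    T.ToTensor(),
    T.Normalize(mean=[0.5], std=[0.5]),   # placeholder statistics
])
```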
Considering that most of the current research aims to craft global adversarial perturbations for SAR images, few scholars have focused on universal or local perturbations. Therefore, in the comparative experiments, we took the methods proposed in [20,47] as baselines to compare with ULAN. Note that the baseline methods generate global UAPs for SAR images, while our method only needs to perturb local regions. All code was written in PyTorch, and the experimental environment consisted of Windows 10 with an NVIDIA GeForce RTX 2080 Ti GPU and a 3.6 GHz Intel Core i9-9900K CPU.

4.2. Evaluation Metrics

This paper takes into account two factors to comprehensively evaluate the performance of adversarial attacks: the attack’s effectiveness and the attack’s stealthiness. In the experiments, we crafted adversarial examples for all samples in the SAR image dataset, so the victim model’s classification accuracy directly reflects the attack effectiveness of UAPs:
$$\mathrm{Acc} = \begin{cases} \dfrac{\sum_{n=1}^{N} D\big(\arg\max_i (f(x_n + \delta')_i) == C_{tr}\big)}{N}, & \text{Non-targeted Attack} \\[2ex] \dfrac{\sum_{C_{ta}=1}^{k} \sum_{n=1}^{N} D\big(\arg\max_i (f(x_n + \delta')_i) == C_{ta}\big)}{k \times N}, & \text{Targeted Attack} \end{cases} \qquad (26)$$
where $C_{tr}$ and $C_{ta}$ represent the true and target classes of the input data, $k$ is the number of target classes, and $D(\cdot)$ is the discriminant function. In non-targeted attacks, the Acc metric reflects the probability that victim models correctly recognize adversarial examples; the lower the classification accuracy of the victim model on adversarial examples, the better the non-targeted attack. In targeted attacks, the Acc metric represents the probability of victim models identifying adversarial examples as target classes; the higher the Acc metric, the stronger the targeted attack. In conclusion, the non-targeted attack's effectiveness is inversely proportional to the Acc metric, and the targeted attack's effectiveness is proportional to it. Moreover, to verify the reliability of attacks, we also compared the confidence level of target classes before and after the attack.
When evaluating the attack stealthiness, in addition to using the $L_p$-norm to measure the degree of image distortion, we also introduce the structural similarity (SSIM) [48], a metric more in line with human visual perception, defined as:

$$\mathrm{SSIM} = \frac{1}{N} \sum_{n=1}^{N} \frac{\big(2\mu_{x_n}\mu_{x'_n} + C_1\big)\big(2\sigma_{x_n x'_n} + C_2\big)}{\big(\mu_{x_n}^2 + \mu_{x'_n}^2 + C_1\big)\big(\sigma_{x_n}^2 + \sigma_{x'_n}^2 + C_2\big)} \qquad (27)$$

where $x'_n = (x_n + \delta')$ is the adversarial example of $x_n$, $\mu_{x_n}$, $\mu_{x'_n}$ and $\sigma_{x_n}$, $\sigma_{x'_n}$ are the means and standard deviations of the corresponding images, $\sigma_{x_n x'_n}$ is their covariance, and $C_1$, $C_2$ are constants used to keep the metric stable. Equation (27) averages the SSIM value between every sample in the dataset and its corresponding adversarial example and ranges from $-1$ to $1$. The higher the SSIM, the more imperceptible the UAPs and the better the attack's stealthiness.
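Equation (27) uses global image statistics rather than the sliding-window SSIM. A direct implementation for one image pair might look like the sketch below; the values of C1 and C2 follow the common (0.01·255)² and (0.03·255)² convention, which the paper does not state explicitly.

```python
import torch

def global_ssim(x, x_adv, C1=6.5025, C2=58.5225):
    """Global-statistics SSIM of Eq. (27) for one pair of 8-bit images given as
    float tensors; averaging over the dataset is left to the caller."""
    mu_x, mu_y = x.mean(), x_adv.mean()
    var_x = x.var(unbiased=False)
    var_y = x_adv.var(unbiased=False)
    cov = ((x - mu_x) * (x_adv - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```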

4.3. Attention Heatmaps for DNN-Based SAR Target Recognition Models

For the six victim models mentioned in Section 4.1.2, given ten SAR images from different target classes as input, they all correctly classified the targets with high confidence. Then, we calculated pixel-wise attention heatmaps for the victim models by LRP, as shown in Figure A3. The results are similar to the natural image in Figure A1, i.e., the pixels that have a great impact on the SAR image classifiers are mainly concentrated in the target regions. Furthermore, we found that the attention heatmaps of different models have similar structures, which proves the feasibility of our method. Specifically, since the victim model is a black box in the testing phase, attackers are unable to directly obtain its attention heatmaps through LRP. However, due to the similarity of attention heatmaps between different DNN models, we can calculate a white-box surrogate model’s attention heatmap as an alternative. Meanwhile, since the attention heatmap of VGG16-BN best matches the target shape and has the clearest boundary, the surrogate model adopts a well-trained VGG16-BN network to approximate the attention heatmap of the victim model.

4.4. Adversarial Attacks without Perturbation Offset

In this experiment, we evaluated the non-targeted and targeted attack performance of each method without perturbation offset. Specifically, we first cropped the SAR images to 88 × 88, as mentioned in Section 4.1.2, and then crafted adversarial examples by adding well-designed perturbations to the cropped images, which ensured that the perturbations could be fully fed to the victim model. Note that the structures and parameters of the model were known in the training phase, while these details were unavailable in the testing phase. Moreover, we emphasize that the UAPs generated by the baseline methods cover the entire SAR image, whereas our method only needs to perturb the target regions. The results of the non-targeted and targeted attacks are shown in Table 2 and Table 3, respectively. Four metrics in the tables evaluate the attack performance: the classification accuracy and target class confidence before and after the attack, the $L_2$-norm of the image distortion, and the SSIM between clean and adversarial examples.
In the non-targeted attack, the classification accuracy of each DNN model on the testing dataset exceeds 95%, and the true class confidence is over 0.9. After the attack, however, the average decrease in classification accuracy exceeds 70%, and the maximum drop in the true class confidence reaches 0.85. From the perspective of attack effectiveness, the UAN performs the best, followed by ULAN and U-Net, with the ResNet Generator the worst. Yet, the biggest drawback of the baseline methods is that they need to perturb global regions of size 88 × 88, whereas our method perturbs target regions of size 44 × 44. Even though ULAN only perturbs a quarter of the SAR image area, it achieves attack performance comparable to the global UAPs. We speculate that this is because the features within the target regions have stronger relevance to the recognition results than the others, so a focused perturbation on the target region is more efficient than a global one. In terms of the attack's stealthiness, Table 2 lists the $L_2$-norm of the image distortion caused by each method and the SSIM between the adversarial examples and the clean SAR images. An interesting phenomenon is that ULAN sometimes causes a larger image distortion yet still performs better on the SSIM metric than the baseline methods. We attribute this to the fact that the human eye is more sensitive to wide-ranging minor perturbations than to small-range focused ones, which explains the superior performance of our method on the SSIM metric. It also illustrates that local perturbations can enhance the imperceptibility of adversarial attacks.
In the targeted attack, we regard the target category as the correct class, so the classification accuracy of DNN models on the clean testing dataset simply reflects the data distribution, i.e., each category accounts for about one-tenth of the total dataset. According to Table 3, adversarial examples lead to a sharp rise in the Acc metric: the average increase reaches 75%, and the maximum rise in the target class confidence exceeds 0.8. This means that the generated UAPs can induce DNN models to output specified results with high confidence. In general, ULAN is slightly inferior to UAN and U-Net regarding the attack's effectiveness but performs much better than the baseline methods regarding the attack's stealthiness. Thus, we believe that for a fixed SSIM value, ULAN can achieve the best attack performance.
To visualize the adversarial examples generated by different methods, we take the VGG16-BN-based SAR-ATR model as the victim network and display the adversarial examples for the non-targeted and targeted attacks in Figure 5 and Figure 6, respectively. The prediction and confidence output by the victim model are listed at the top of each adversarial example, and the bottom of each figure shows the sizes of the corresponding image and perturbation. As we can see, the UAPs generated by the baseline methods fully cover the SAR images fed to the model, while ULAN locates and perturbs the target (green box) region effectively. Meanwhile, according to Figure 5 and Figure 6, there are apparent shadow and texture traces in the adversarial examples crafted by the baseline methods, which also suggests that global perturbations are more perceptible than local ones. In summary, compared to the baseline methods, our method achieves good attack performance with smaller perturbed regions and lower perceptibility.

4.5. Adversarial Attacks with Perturbation Offset

We now evaluate the adversarial attacks in the case of perturbation offset. Specifically, we first recover the adversarial examples generated in Section 4.4 to 128 × 128 and next obtain the input data by randomly cropping the recovered images to 88 × 88 again. In this way, we cause a mismatch between the input and perturbed regions. As shown in Figure 1, the input and perturbed regions correspond to the red and green box regions such that the adversarial perturbations cannot be fed to the victim model completely, and thus, the perturbation offset condition is constructed. The results of non-targeted and targeted attacks in the case of perturbation offset are shown in Table 4 and Table 5, respectively.
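As an illustration, the offset condition can be simulated with two torchvision transforms: re-embed the 88 × 88 adversarial example into a 128 × 128 canvas and crop it again at a random position. Zero padding is an assumption of this sketch; the experiments restore the images to their original 128 × 128 surroundings.

```python
import torchvision.transforms as T

# Pad the 88 x 88 adversarial example to 128 x 128 (88 + 2*20 = 128) and
# re-crop at a random location so the input region no longer matches the
# perturbed region exactly.
offset_transform = T.Compose([
    T.Pad(20),
    T.RandomCrop(88),
])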
The experimental results suggest that perturbation offset severely impacts the attack performance of baseline methods. In non-targeted attacks, the Acc metric of baseline methods deteriorates rapidly, the average increase exceeds 20 % , and the maximum increase in true class confidence reaches 0.4 . A similar situation also occurs in targeted attacks, where the UAPs generated by baseline methods are likely to be ineffective in the case of perturbation offset. The average decrease of the Acc metric exceeds 40 % , and the maximum drop in the target class confidence reaches 0.5 . In contrast, the attack performance of our method is hardly affected under the same experiment condition. Detailed experimental data are displayed in Table 4 and Table 5.
In summary, the global UAPs generated by baseline methods are vulnerable to perturbation offset. They might be ineffective unless the victim model accurately takes the perturbed region as input. However, the local perturbations generated by ULAN only cover the target regions of SAR images so that they can be fed to the model as completely as possible regardless of the input regions, which effectively prevents perturbation offset.

4.6. Adversarial Attacks under Small Sample Conditions

Thus far, we have assumed that attackers have full access to the images used to train the victim model. However, the professionalism and confidentiality of SAR images make them challenging to access in practice. In other words, it is difficult for attackers to obtain sufficient data to support the training of attack networks. Therefore, we now evaluate the adversarial attacks under more restrictive assumptions about the attacker's access to training data.
We consider an extreme situation where attack networks are trained on a subset containing only 50 samples (5 per class). Specifically, we uniformly sample 50 images from the full training dataset to form the subset and compare the attack performance of attack networks trained on the subset and full training dataset against different DNN models. The results of non-targeted and targeted attacks based on different size datasets are shown in Table 6 and Table 7, respectively.
As we can see, the reduction in training data seriously impacts the attack performance of the UAN and the ResNet Generator. Although a slight deterioration in the Acc metric can be tolerated, the average decrease in the SSIM metric is nearly 0.2. This means that the above methods severely sacrifice the attack's stealthiness for better attack effectiveness, which makes the generated adversarial examples easy for defenders to detect. However, ULAN and U-Net still maintain good attack effectiveness and stealthiness under small sample conditions. The average change in the Acc metric in both attack modes is less than 8%, and the mean decrease in the SSIM metric is within 0.07.
The above results might be attributed to the skip connection structure of the network and the relatively fixed structure of SAR images. The decoders of ULAN and U-Net fuse features from different layers through skip connections, which helps the generator learn the data distribution sufficiently. Moreover, the low dependence on training data can also be attributed to the fixed structure of the SAR image itself, whose semantic features are more easily extracted and represented than those of natural images. Thus, our approach can work well in situations where attackers have difficulty obtaining sufficient training data.

4.7. Influence of Parameters

This section evaluates the attack performance of ULAN trained on different parameter settings, providing guidance for attackers to achieve superior attack performance. The parameters mainly include the perturbation size w × h , the weight coefficient ω , and the type of L p -norm.

4.7.1. Perturbation Size w × h

To investigate the influence of the perturbation size w × h on the attack performance, we trained ULAN with seven size settings: 22 × 22, 33 × 33, 44 × 44, 55 × 55, 66 × 66, 77 × 77, and 88 × 88. We then evaluated the attack performance on the testing dataset, and the results are shown in Figure 7. As expected, for both non-targeted and targeted attacks, a larger perturbation size improves the attack effectiveness, while the attack stealthiness becomes worse. Meanwhile, we find that once the perturbation size exceeds 55 × 55, the SSIM metric of each DNN model shown in Figure 7b,d continues to decrease, while the corresponding Acc metric shown in Figure 7a,c tends to a stable value. We speculate that perturbation offset inevitably occurs as the perturbation size increases, so only part of the perturbation can be fed to the victim model and the attack effectiveness no longer improves. Therefore, the advised perturbation size in this paper is between 44 × 44 and 55 × 55.
Furthermore, ULAN has superior attack performance even in the case of perturbation offset, which is quite different from baseline methods. Specifically, according to Table 4 and Table 5, a large number of global UAPs generated by baseline methods fail to attack the victim model in the case of perturbation offset. Yet, when the perturbation size reaches 88 × 88 , more than 80 % of the adversarial examples generated by ULAN still work well. This is because the perturbation size is too large to prevent perturbation offset during the training phase. In other words, ULAN itself is trained in the case of perturbation offset. Thus, there is no doubt that a well-trained ULAN has already been equipped with the ability to fool models effectively in the case of perturbation offset.

4.7.2. Weight Coefficient ω

The weight coefficient ω is a constant measuring the relative importance of attack effectiveness and stealthiness, which has a great impact on the attack performance. We now train ULAN on nine different weight coefficients, 0.1 , 0.2 , 0.3 , 0.4 , 0.5 , 0.6 , 0.7 , 0.8 , and 0.9 , and report attack results on the testing dataset in Figure 8. We can see that for non-targeted attacks, the Acc and SSIM metrics increase as ω becomes larger. In targeted attacks, the Acc metric declines as ω grows, while the SSIM metric is still increasing. Since the non-targeted attack effectiveness is inversely proportional to the Acc metric, and the targeted attack effectiveness is proportional to this metric, the effectiveness of adversarial attacks becomes worse as ω increases. However, in both attack modes, the SSIM metric is always proportional to the attack stealthiness such that UAPs become more imperceptible as ω gets larger. Meanwhile, Figure 8a,c suggests that the Acc metric of each DNN model cannot converge to a stable value, and the corresponding SSIM metric shown in Figure 8b,d is also constantly changing. Thus, for superior attack performance, attackers are supposed to choose an appropriate weight as needed in the training phase of ULAN.

4.7.3. Type of L p -Norm

Thus far, we have adopted the $L_2$-norm to measure the image distortion caused by adversarial attacks. However, there are other distance metrics, such as the $L_\infty$-norm and the $L_1$-norm.
In this section, we evaluate the attack performance of ULAN trained with different distance metrics: the $L_2$-norm and the $L_\infty$-norm. Note that the values of image distortion calculated by the two metrics differ by several orders of magnitude, so we set the weight $\omega$ to 0.5 for the $L_2$-norm and to 10 for the $L_\infty$-norm. The results of non-targeted and targeted attacks are shown in Table 8. We find that ULAN trained with the $L_2$-norm performs better in terms of both attack effectiveness and stealthiness. Therefore, to obtain a more threatening attack network, the advised distance metric in this paper is the $L_2$-norm.

5. Discussion

The above research demonstrates that our method can efficiently attack DNN models on the MSTAR dataset. To further investigate the generality of the proposed method in SAR target recognition tasks, we also conduct experiments on the FUSAR-Ship dataset [49]. Specifically, we select four kinds of sub-class targets for the experiments, and the details of the dataset are displayed in Table A2. Considering that the size of the SAR images is 512 × 512, we set the input size of the models to 384 × 384, the perturbation size to 96 × 96, and the weight coefficient $\omega$ to 0.1. For the victim models, we adopt four common DNNs: GoogLeNet [42], InceptionV3 [43], ResNet50 [44], and ResNeXt50 [45]. The attack results of ULAN against DNN models on the FUSAR-Ship dataset are shown in Table A3. The experiments suggest that our method can fool DNN models on the FUSAR-Ship dataset by perturbing the target regions of SAR images. Meanwhile, the results in Table A4 indicate that the adversarial examples generated by ULAN effectively prevent perturbation offset. In summary, the method proposed in this paper has promising applications in adversarial attacks against DNN-based SAR-ATR models.

6. Conclusions

In this paper, a semi-white-box attack network called Universal Local Adversarial Network (ULAN) is proposed to generate UAPs for the target regions of SAR images. Focusing perturbations on the high-relevance target regions significantly improves the efficiency of adversarial attacks. In addition, the local perturbation can be fed to the victim model as completely as possible regardless of the input region, so perturbation offset is effectively prevented. Since ULAN, as a generative network, requires model information only during the training phase, once trained it can craft adversarial examples for the DNN-based SAR-ATR model in real time through one-step forward mapping without further access to the model, which gives it better feasibility than traditional iterative methods. Experimental results demonstrate that the proposed method prevents perturbation offset effectively and achieves attack performance comparable to conventional global UAPs by perturbing only a quarter or less of the SAR image area. Moreover, our experiments also indicate that ULAN is insensitive to the amount of training data, so it still works well under small sample conditions. Potential future work could consider replacing the victim model with a distillation model to construct a black-box attack network. It would also be of great interest to enhance the transferability of adversarial examples between different DNN models.

Author Contributions

Conceptualization, M.D. (Meng Du) and D.B.; methodology, M.D. (Meng Du); software, M.D. (Meng Du); validation, D.B., X.X. and Z.W.; formal analysis, D.B. and M.D. (Mingyang Du); investigation, M.D. (Mingyang Du); resources, D.B.; data curation, M.D. (Meng Du); writing—original draft preparation, M.D. (Meng Du); writing—review and editing, M.D. (Meng Du) and D.B.; visualization, M.D. (Meng Du); supervision, D.B.; project administration, D.B.; funding acquisition, D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 62071476.

Institutional Review Board Statement

The study does not involve humans or animals.

Informed Consent Statement

The study does not involve humans.

Data Availability Statement

The experiments in this paper use public datasets, so no data are reported in this work.

Conflicts of Interest

The authors declare that they have no conflict of interest to report regarding the present study.

Appendix A

Figure A1. Attention heatmaps for AlexNet [50], MobileNet [51], ResNet18 [44], VGG11-BN, VGG16, and VGG16-BN [41].
Figure A2. Optical images (top) and SAR images (bottom) of ten ground target classes.
Table A1. Details of MSTAR under SOC, including target class, serial, depression angle, and sample numbers.
Target Class | Serial    | Training Data (Depression Angle / Number) | Testing Data (Depression Angle / Number)
2S1          | b01       | 17° / 299                                 | 15° / 274
BMP2         | 9566      | 17° / 233                                 | 15° / 196
BRDM2        | E-71      | 17° / 298                                 | 15° / 274
BTR60        | k10yt7532 | 17° / 256                                 | 15° / 195
BTR70        | c71       | 17° / 233                                 | 15° / 196
D7           | 92v13015  | 17° / 299                                 | 15° / 274
T62          | A51       | 17° / 299                                 | 15° / 273
T72          | 132       | 17° / 232                                 | 15° / 196
ZIL131       | E12       | 17° / 299                                 | 15° / 274
ZSU234       | d08       | 17° / 299                                 | 15° / 274
Figure A3. Pixel-wise attention heatmaps for DNN-based SAR-ATR models. The true class of the SAR image is listed at the top, and the DNN structure is shown on the left.
Table A2. Details of FUSAR-Ship, including target classes and sample numbers.
Target Class | Training Number | Testing Number
BulkCarrier  | 97              | 25
CargoShip    | 126             | 32
Fishing      | 75              | 19
Tanker       | 36              | 10
Table A3. Adversarial attacks of ULAN against DNN models on the FUSAR-Ship dataset. We report attack results on the testing dataset.
Mode       | Victim      | Acc (Clean / Adv / Gap)   | Confidence (Clean / Adv / Gap) | $L_2$-Norm | SSIM
Non-target | GoogLeNet   | 77.65% / 29.41% / −48.24% | 0.77 / 0.30 / −0.47            | 8.87       | 0.99
Non-target | InceptionV3 | 77.65% / 34.12% / −43.53% | 0.77 / 0.34 / −0.43            | 7.59       | 0.97
Non-target | ResNet50    | 72.94% / 29.41% / −43.53% | 0.73 / 0.30 / −0.43            | 4.49       | 0.98
Non-target | ResNeXt50   | 64.71% / 28.24% / −36.47% | 0.63 / 0.30 / −0.33            | 20.35      | 0.96
Non-target | Mean        | 73.24% / 30.30% / −42.94% | 0.73 / 0.31 / −0.42            | 10.33      | 0.98
Target     | GoogLeNet   | 25.88% / 79.71% / +53.83% | 0.26 / 0.79 / +0.53            | 10.95      | 0.97
Target     | InceptionV3 | 24.12% / 75.00% / +50.88% | 0.25 / 0.74 / +0.49            | 7.53       | 0.98
Target     | ResNet50    | 24.71% / 68.53% / +43.82% | 0.25 / 0.67 / +0.42            | 14.73      | 0.97
Target     | ResNeXt50   | 23.82% / 67.65% / +43.83% | 0.24 / 0.67 / +0.43            | 14.79      | 0.97
Target     | Mean        | 24.63% / 72.72% / +48.09% | 0.25 / 0.72 / +0.47            | 12.00      | 0.97
Table A4. Adversarial attacks of ULAN against DNN models on the FUSAR-Ship dataset in the case of perturbation offset. We report attack results on the testing dataset.
Mode       | Victim      | Acc (No-Offset / Offset / Gap) | Confidence (No-Offset / Offset / Gap)
Non-target | GoogLeNet   | 29.41% / 30.59% / +1.18%       | 0.30 / 0.31 / +0.01
Non-target | InceptionV3 | 34.12% / 41.18% / +7.06%       | 0.34 / 0.39 / +0.05
Non-target | ResNet50    | 29.41% / 31.24% / +1.82%       | 0.30 / 0.32 / +0.02
Non-target | ResNeXt50   | 28.24% / 31.76% / +3.53%       | 0.30 / 0.32 / +0.02
Non-target | Mean        | 30.30% / 33.69% / +3.40%       | 0.31 / 0.34 / +0.03
Target     | GoogLeNet   | 79.71% / 78.24% / −1.47%       | 0.79 / 0.76 / −0.03
Target     | InceptionV3 | 75.00% / 69.41% / −5.59%       | 0.74 / 0.66 / −0.08
Target     | ResNet50    | 68.53% / 60.00% / −8.53%       | 0.67 / 0.61 / −0.06
Target     | ResNeXt50   | 67.65% / 61.47% / −6.18%       | 0.67 / 0.60 / −0.07
Target     | Mean        | 72.72% / 67.28% / −5.44%       | 0.72 / 0.66 / −0.06

References

  1. Zhang, F.; Yao, X.; Tang, H.; Yin, Q.; Hu, Y.; Lei, B. Multiple mode SAR raw data simulation and parallel acceleration for Gaofen-3 mission. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2115–2126.
  2. Brown, W.M. Synthetic aperture radar. IEEE Trans. Aerosp. Electron. Syst. 1967, AES-3, 217–229.
  3. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43.
  4. Zhang, Z.; Wang, H.; Xu, F.; Jin, Y.Q. Complex-valued convolutional neural network and its application in polarimetric SAR image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188.
  5. Chen, S.; Wang, H.; Xu, F.; Jin, Y.Q. Target classification using the deep convolutional networks for SAR images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817.
  6. Ding, J.; Chen, B.; Liu, H.; Huang, M. Convolutional neural network with data augmentation for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368.
  7. Du, C.; Chen, B.; Xu, B.; Guo, D.; Liu, H. Factorized discriminative conditional variational auto-encoder for radar HRRP target recognition. Signal Process. 2019, 158, 176–189.
  8. Vint, D.; Anderson, M.; Yang, Y.; Ilioudis, C.; Di Caterina, G.; Clemente, C. Automatic Target Recognition for Low Resolution Foliage Penetrating SAR Images Using CNNs and GANs. Remote Sens. 2021, 13, 596.
  9. Huang, T.; Zhang, Q.; Liu, J.; Hou, R.; Wang, X.; Li, Y. Adversarial attacks on deep-learning-based SAR image target recognition. J. Netw. Comput. Appl. 2020, 162, 102632.
  10. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
  11. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572.
  12. Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security; Chapman and Hall/CRC: London, UK, 2018; pp. 99–112.
  13. Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2574–2582.
  14. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbrücken, Germany, 21–24 March 2016; pp. 372–387.
  15. Su, J.; Vargas, D.V.; Sakurai, K. One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 2019, 23, 828–841.
  16. Chen, P.Y.; Zhang, H.; Sharma, Y.; Yi, J.; Hsieh, C.J. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA, 3 November 2017; pp. 15–26.
  17. Chen, J.; Jordan, M.I.; Wainwright, M.J. HopSkipJumpAttack: A query-efficient decision-based attack. In Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 18–21 May 2020; pp. 1277–1294.
  18. Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; Yuille, A.L. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–19 June 2019; pp. 2730–2739.
  19. Moosavi-Dezfooli, S.M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1765–1773.
  20. Hayes, J.; Danezis, G. Learning universal adversarial perturbations with generative models. In Proceedings of the 2018 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 24 May 2018; pp. 43–49.
  21. Mopuri, K.R.; Garg, U.; Babu, R.V. Fast feature fool: A data independent approach to universal adversarial perturbations. arXiv 2017, arXiv:1707.05572.
  22. Mopuri, K.R.; Uppala, P.K.; Babu, R.V. Ask, acquire, and attack: Data-free UAP generation using class impressions. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 19–34.
  23. Xu, Y.; Du, B.; Zhang, L. Assessing the threat of adversarial examples on deep neural networks for remote sensing scene classification: Attacks and defenses. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1604–1617.
  24. Xu, Y.; Ghamisi, P. Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
  25. Thys, S.; Van Ranst, W.; Goedemé, T. Fooling automated surveillance cameras: Adversarial patches to attack person detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
  26. Li, H.; Huang, H.; Chen, L.; Peng, J.; Huang, H.; Cui, Z.; Mei, X.; Wu, G. Adversarial examples for CNN-based SAR image classification: An experience study. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1333–1347.
  27. Du, C.; Huo, C.; Zhang, L.; Chen, B.; Yuan, Y. Fast C&W: A Fast Adversarial Attack Algorithm to Fool SAR Target Recognition with Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
  28. Wang, L.; Wang, X.; Ma, S.; Zhang, Y. Universal adversarial perturbation of SAR images for deep learning based target classification. In Proceedings of the 2021 IEEE 4th International Conference on Electronics Technology (ICET), Chengdu, China, 7–10 May 2021; pp. 1272–1276.
  29. Xia, W.; Liu, Z.; Li, Y. SAR-PeGA: A Generation Method of Adversarial Examples for SAR Image Target Recognition Network. IEEE Trans. Aerosp. Electron. Syst. 2022, 1–11.
  30. Chen, S.; He, Z.; Sun, C.; Yang, J.; Huang, X. Universal adversarial attack on attention and the resulting dataset DAmageNet. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2188–2197.
  31. Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.R.; Samek, W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 2015, 10, e0130140.
  32. Xiao, C.; Li, B.; Zhu, J.Y.; He, W.; Liu, M.; Song, D. Generating adversarial examples with adversarial networks. arXiv 2018, arXiv:1801.02610.
  33. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
  34. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 818–833. [Google Scholar]
  35. Zhou, J.; Troyanskaya, O.G. Predicting effects of noncoding variants with deep learning–based sequence model. Nat. Methods 2015, 12, 931–934. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv 2013, arXiv:1312.6034. [Google Scholar] [CrossRef]
  37. Teague, M.R. Image analysis via the general theory of moments. Josa 1980, 70, 920–930. [Google Scholar] [CrossRef]
  38. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  39. Keydel, E.R.; Lee, S.W.; Moore, J.T. MSTAR extended operating conditions: A tutorial. Algorithms Synth. Aperture Radar Imag. III 1996, 2757, 228–242. [Google Scholar] [CrossRef]
  40. Junfan, Z.; Hao, S.; Lin, L.; Kefeng, J.; Gangyao, K. Sparse Adversarial Attack of SAR Image. J. Signal Process. 2021, 37, 11. [Google Scholar]
  41. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
  42. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef] [Green Version]
  43. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
  44. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  45. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar] [CrossRef] [Green Version]
  46. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]
  47. Poursaeed, O.; Katsman, I.; Gao, B.; Belongie, S. Generative adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4422–4431. [Google Scholar] [CrossRef]
  48. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  49. Hou, X.; Ao, W.; Song, Q.; Lai, J.; Wang, H.; Xu, F. FUSAR-Ship: Building a high-resolution SAR-AIS matchup dataset of Gaofen-3 for ship detection and recognition. Sci. China Inf. Sci. 2020, 63, 1–19. [Google Scholar] [CrossRef] [Green Version]
  50. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  51. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef]
Figure 1. Adversarial attacks without (top) and with (bottom) perturbation offset. Suppose the green box marks the perturbed region. An attack without perturbation offset means that the perturbed region is fed to the model exactly as crafted. If the model instead takes as input the red box region, which is offset from the perturbed region, the incomplete adversarial perturbation is likely to make the attack fail.
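For readers who want to reproduce the offset condition sketched in Figure 1, the following minimal PyTorch-style sketch pastes a local UAP into a SAR scene and then crops the window actually fed to the model with a spatial shift, so that only part of the perturbation reaches the model. The function names, window sizes, and offsets are our own illustrative assumptions, not the authors' code.

```python
import torch

def apply_local_uap(scene, uap, top, left):
    """Paste a w×h local UAP into the SAR scene at (top, left) and clamp to [0, 1]."""
    adv = scene.clone()
    h, w = uap.shape[-2:]
    region = adv[..., top:top + h, left:left + w]
    adv[..., top:top + h, left:left + w] = (region + uap).clamp(0.0, 1.0)
    return adv

def model_input_with_offset(adv, crop_top, crop_left, size, dy=0, dx=0):
    """Crop the size×size window fed to the model. With (dy, dx) != (0, 0) the crop is
    shifted from its nominal position, so the perturbation enters the model incompletely
    (the "perturbation offset" case at the bottom of Figure 1)."""
    return adv[..., crop_top + dy:crop_top + dy + size,
               crop_left + dx:crop_left + dx + size]
```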
Figure 2. Framework of ULAN. The generator G(·) crafts the local UAP δ. The attention heatmap (hmap) of the surrogate model f_s(·) locates the target (green box) region. Attackers obtain the adversarial example x by adding δ to the target region and utilize it to attack the victim model f_v(·). Finally, the total loss L_t, formed by the attack loss L_a and the norm loss L_n, is used to update G(·).
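The pipeline in Figure 2 amounts to one repeated training step: generate δ from noise, add it to the target region located via the LRP heatmap, query the attacked model, and update G(·) with a weighted sum of an attack loss and a norm loss. The sketch below is a hedged reconstruction of that step in PyTorch; the exact loss definitions, the weight ω, and how the target-region corner is derived from the heatmap are assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def ulan_train_step(G, model, images, labels, noise, region, omega, optimizer,
                    targeted=False, target_class=0):
    """One generator update: craft a local UAP, paste it into the LRP-located target
    region, and minimize (attack loss + omega * norm loss)."""
    top, left = region                       # target-region corner from the heatmap
    delta = G(noise)                         # local UAP, shape (1, 1, h, w)
    h, w = delta.shape[-2:]

    adv = images.clone()
    patch = adv[..., top:top + h, left:left + w]
    adv[..., top:top + h, left:left + w] = (patch + delta).clamp(0.0, 1.0)

    logits = model(adv)
    if targeted:   # pull predictions toward the chosen target class
        loss_attack = F.cross_entropy(logits, torch.full_like(labels, target_class))
    else:          # push predictions away from the true class
        loss_attack = -F.cross_entropy(logits, labels)
    loss_norm = delta.flatten(1).norm(p=2, dim=1).mean()   # keep the UAP small
    loss = loss_attack + omega * loss_norm

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```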
Figure 3. The structure of U-Net.
Figure 4. Forward propagation (left) and LRP (right) of the surrogate model f_s(·).
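Figure 4 relies on layer-wise relevance propagation [31] to redistribute the model output back to input pixels. As a reference point, a minimal LRP-ε backward rule for a single fully connected layer can be written as below; the relevance rules actually applied to f_s(·) may differ from layer to layer, so this is a sketch of the general idea only.

```python
import torch

def lrp_epsilon_linear(a, weight, bias, relevance_out, eps=1e-6):
    """Epsilon-rule relevance redistribution for a linear layer:
    R_i = a_i * sum_j w_ji * R_j / (z_j + eps * sign(z_j)), with z_j = sum_i a_i * w_ji + b_j.
    a: (B, in) layer input, weight: (out, in), relevance_out: (B, out)."""
    z = a @ weight.t() + bias            # forward pre-activations, shape (B, out)
    z = z + eps * torch.sign(z)          # stabilizer to avoid division by zero
    s = relevance_out / z                # shape (B, out)
    return a * (s @ weight)              # relevance of the inputs, shape (B, in)
```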
Figure 5. (a) The original SAR image in MSTAR. (b) The clean SAR image fed to the model. The first row shows the adversarial examples for non-targeted attacks, and the second row shows the UAPs generated by the different methods: ULAN (c), UAN (d), U-Net (e), and ResNet Generator (f). The prediction and confidence output by the victim model are listed at the top of each adversarial example, and the sizes of the corresponding image and perturbation are given at the bottom of the figure.
Figure 6. (a) The original SAR image in MSTAR. (b) The clean SAR image fed to the model. From top to bottom, the target classes are BRDM2, ZIL131, and ZSU234. For each target class, the first row shows the adversarial examples for targeted attacks, and the second row shows the UAPs generated by the different methods: ULAN (c), UAN (d), U-Net (e), and ResNet Generator (f). The prediction and confidence output by the victim model are listed at the top of each adversarial example, and the sizes of the corresponding image and perturbation are given at the bottom of the figure.
Figure 7. The influence of the perturbation size w × h on the attack performance. The Acc and SSIM metrics of non-targeted attacks are shown in (a,b), and the corresponding metrics of targeted attacks are shown in (c,d).
Figure 8. The influence of the weight coefficient ω on the attack performance. The Acc and SSIM metrics of non-targeted attacks are shown in (a,b), and the corresponding metrics of targeted attacks are shown in (c,d).
Table 1. The network parameters. Here, we set w × h to 32 × 32 and abbreviate the combination of two convolutional layers as DoubleConv. The parameters of the convolutional layer represent the number of input and output channels and the kernel size, respectively. The parameter of the max-pooling layer represents the kernel size.
Layer | Shape
Input | 1 × 32 × 32
DoubleConv(1, 64, 3) + Max pool(2) | 64 × 16 × 16
DoubleConv(64, 128, 3) + Max pool(2) | 128 × 8 × 8
DoubleConv(128, 256, 3) + Max pool(2) | 256 × 4 × 4
DoubleConv(256, 512, 3) + Max pool(2) | 512 × 2 × 2
DoubleConv(512, 1024, 3) | 1024 × 2 × 2
ConvTrans(1024, 512, 2) + DoubleConv(1024, 512, 3) | 512 × 4 × 4
ConvTrans(512, 256, 2) + DoubleConv(512, 256, 3) | 256 × 8 × 8
ConvTrans(256, 128, 2) + DoubleConv(256, 128, 3) | 128 × 16 × 16
ConvTrans(128, 64, 2) + DoubleConv(128, 64, 3) | 64 × 32 × 32
Conv(64, 1, 1) | 1 × 32 × 32
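For orientation, the layer list in Table 1 corresponds to a standard U-Net [33] with four down-/up-sampling stages and skip connections. The following PyTorch sketch reproduces the listed shapes for a 1 × 32 × 32 input; the use of batch normalization inside each DoubleConv block and the absence of an output activation are our assumptions, not details stated in the table.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # Two 3×3 conv + BN + ReLU blocks ("DoubleConv" in Table 1); padding keeps the spatial size.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    """Maps a 1×32×32 noise patch to a 1×32×32 local UAP (shapes as listed in Table 1)."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = double_conv(1, 64), double_conv(64, 128)
        self.enc3, self.enc4 = double_conv(128, 256), double_conv(256, 512)
        self.bottleneck = double_conv(512, 1024)
        self.pool = nn.MaxPool2d(2)
        self.up4, self.dec4 = nn.ConvTranspose2d(1024, 512, 2, stride=2), double_conv(1024, 512)
        self.up3, self.dec3 = nn.ConvTranspose2d(512, 256, 2, stride=2), double_conv(512, 256)
        self.up2, self.dec2 = nn.ConvTranspose2d(256, 128, 2, stride=2), double_conv(256, 128)
        self.up1, self.dec1 = nn.ConvTranspose2d(128, 64, 2, stride=2), double_conv(128, 64)
        self.out = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # 64×32×32
        e2 = self.enc2(self.pool(e1))          # 128×16×16
        e3 = self.enc3(self.pool(e2))          # 256×8×8
        e4 = self.enc4(self.pool(e3))          # 512×4×4
        b = self.bottleneck(self.pool(e4))     # 1024×2×2
        d4 = self.dec4(torch.cat([self.up4(b), e4], dim=1))    # 512×4×4
        d3 = self.dec3(torch.cat([self.up3(d4), e3], dim=1))   # 256×8×8
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))   # 128×16×16
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # 64×32×32
        return self.out(d1)                    # 1×32×32
```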
Table 2. Non-targeted attacks of ULAN (ours), UAN [20], U-Net, and ResNet Generator [47] against DNN models on the MSTAR dataset. We report attack results on the testing dataset.
Method | Victim | Acc (Clean) | Acc (Adv) | Acc (Gap) | Confidence (Clean) | Confidence (Adv) | Confidence (Gap) | L2-Norm | SSIM
ULAN | A-Conv | 98.19% | 31.53% | −66.66% | 0.93 | 0.31 | −0.62 | 2.03 | 0.96
ULAN | VGG16 | 96.17% | 16.94% | −79.23% | 0.95 | 0.17 | −0.78 | 2.45 | 0.95
ULAN | GoogLeNet | 97.28% | 16.90% | −80.38% | 0.96 | 0.17 | −0.79 | 3.11 | 0.95
ULAN | InceptionV3 | 92.91% | 23.00% | −69.91% | 0.91 | 0.23 | −0.68 | 2.30 | 0.96
ULAN | ResNet50 | 96.17% | 16.08% | −80.09% | 0.96 | 0.16 | −0.80 | 3.65 | 0.94
ULAN | ResNeXt50 | 96.37% | 17.35% | −79.02% | 0.96 | 0.18 | −0.78 | 3.84 | 0.94
ULAN | Mean | 96.18% | 20.30% | −75.88% | 0.95 | 0.20 | −0.74 | 2.90 | 0.95
UAN | A-Conv | 98.23% | 29.06% | −69.17% | 0.94 | 0.29 | −0.65 | 2.54 | 0.93
UAN | VGG16 | 95.75% | 10.47% | −85.28% | 0.95 | 0.11 | −0.84 | 3.63 | 0.86
UAN | GoogLeNet | 97.11% | 11.91% | −85.20% | 0.96 | 0.12 | −0.84 | 3.68 | 0.88
UAN | InceptionV3 | 92.87% | 14.59% | −78.28% | 0.92 | 0.15 | −0.77 | 2.64 | 0.93
UAN | ResNet50 | 96.21% | 14.39% | −81.82% | 0.96 | 0.14 | −0.82 | 5.57 | 0.73
UAN | ResNeXt50 | 96.78% | 10.96% | −85.82% | 0.96 | 0.11 | −0.85 | 4.56 | 0.82
UAN | Mean | 96.16% | 15.23% | −80.93% | 0.95 | 0.15 | −0.80 | 3.77 | 0.86
U-Net | A-Conv | 98.52% | 28.07% | −70.45% | 0.94 | 0.28 | −0.66 | 2.33 | 0.95
U-Net | VGG16 | 95.59% | 12.94% | −82.65% | 0.94 | 0.13 | −0.81 | 2.68 | 0.93
U-Net | GoogLeNet | 97.32% | 15.87% | −81.45% | 0.97 | 0.17 | −0.80 | 2.87 | 0.93
U-Net | InceptionV3 | 93.16% | 22.59% | −70.57% | 0.92 | 0.21 | −0.71 | 2.30 | 0.95
U-Net | ResNet50 | 95.67% | 19.95% | −75.72% | 0.95 | 0.20 | −0.75 | 3.57 | 0.91
U-Net | ResNeXt50 | 96.58% | 13.27% | −83.31% | 0.96 | 0.14 | −0.82 | 3.43 | 0.91
U-Net | Mean | 96.14% | 18.78% | −77.36% | 0.95 | 0.19 | −0.76 | 2.86 | 0.93
ResG | A-Conv | 98.06% | 35.04% | −63.02% | 0.93 | 0.33 | −0.60 | 2.07 | 0.94
ResG | VGG16 | 95.63% | 18.51% | −77.12% | 0.95 | 0.18 | −0.77 | 4.34 | 0.82
ResG | GoogLeNet | 97.32% | 19.62% | −77.70% | 0.97 | 0.20 | −0.77 | 2.67 | 0.94
ResG | InceptionV3 | 93.45% | 21.48% | −71.97% | 0.92 | 0.21 | −0.71 | 2.66 | 0.93
ResG | ResNet50 | 96.08% | 35.00% | −61.08% | 0.96 | 0.35 | −0.61 | 3.70 | 0.91
ResG | ResNeXt50 | 96.70% | 17.52% | −79.18% | 0.96 | 0.18 | −0.78 | 3.19 | 0.92
ResG | Mean | 96.21% | 24.53% | −71.68% | 0.95 | 0.24 | −0.71 | 3.11 | 0.91
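The columns of Tables 2 and 3 (accuracy, confidence, perturbation L2-norm, and SSIM) can in principle be computed as in the sketch below, where labels holds the true classes for non-targeted evaluation or the target class for targeted evaluation. skimage's structural_similarity is used here as a stand-in for the SSIM of [48], and a softmax output with tensors on the CPU in [0, 1] is assumed.

```python
import torch
from skimage.metrics import structural_similarity as ssim

@torch.no_grad()
def evaluate_attack(model, clean, adv, labels):
    """Accuracy and mean confidence on the reference class, plus the mean perturbation
    L2 norm and the mean SSIM between clean and adversarial images."""
    probs = torch.softmax(model(adv), dim=1)
    acc = (probs.argmax(dim=1) == labels).float().mean().item()
    conf = probs[torch.arange(len(labels)), labels].mean().item()
    l2 = (adv - clean).flatten(1).norm(p=2, dim=1).mean().item()
    sim = sum(ssim(c.squeeze().numpy(), a.squeeze().numpy(), data_range=1.0)
              for c, a in zip(clean, adv)) / len(clean)
    return acc, conf, l2, sim
```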
Table 3. Targeted attacks of ULAN (ours), UAN [20], U-Net, and ResNet Generator [47] against DNN models on the MSTAR dataset. We report attack results on the testing dataset.
Method | Victim | Acc (Clean) | Acc (Adv) | Acc (Gap) | Confidence (Clean) | Confidence (Adv) | Confidence (Gap) | L2-Norm | SSIM
ULAN | A-Conv | 9.99% | 85.45% | +75.46% | 0.10 | 0.81 | +0.71 | 4.28 | 0.90
ULAN | VGG16 | 9.98% | 90.21% | +80.23% | 0.10 | 0.89 | +0.79 | 4.71 | 0.90
ULAN | GoogLeNet | 10.02% | 81.65% | +71.63% | 0.10 | 0.80 | +0.70 | 4.47 | 0.92
ULAN | InceptionV3 | 10.05% | 80.06% | +70.01% | 0.10 | 0.79 | +0.69 | 4.08 | 0.93
ULAN | ResNet50 | 9.95% | 85.31% | +75.36% | 0.10 | 0.84 | +0.74 | 5.54 | 0.90
ULAN | ResNeXt50 | 9.98% | 86.53% | +76.55% | 0.10 | 0.86 | +0.76 | 5.15 | 0.90
ULAN | Mean | 10.00% | 84.87% | +74.87% | 0.10 | 0.83 | +0.73 | 4.71 | 0.91
UAN | A-Conv | 9.97% | 90.73% | +80.76% | 0.10 | 0.87 | +0.77 | 3.56 | 0.88
UAN | VGG16 | 10.02% | 93.84% | +83.82% | 0.10 | 0.93 | +0.83 | 4.99 | 0.80
UAN | GoogLeNet | 10.03% | 90.70% | +80.67% | 0.10 | 0.90 | +0.80 | 4.80 | 0.85
UAN | InceptionV3 | 9.98% | 91.10% | +81.12% | 0.10 | 0.90 | +0.80 | 4.87 | 0.84
UAN | ResNet50 | 10.09% | 90.41% | +80.32% | 0.10 | 0.90 | +0.80 | 5.46 | 0.80
UAN | ResNeXt50 | 9.98% | 91.77% | +81.79% | 0.10 | 0.91 | +0.81 | 5.60 | 0.79
UAN | Mean | 10.01% | 91.43% | +81.41% | 0.10 | 0.90 | +0.80 | 4.88 | 0.83
U-Net | A-Conv | 9.99% | 91.63% | +81.64% | 0.10 | 0.88 | +0.78 | 3.71 | 0.87
U-Net | VGG16 | 10.02% | 94.15% | +84.13% | 0.10 | 0.93 | +0.83 | 5.09 | 0.82
U-Net | GoogLeNet | 10.00% | 91.33% | +81.33% | 0.10 | 0.89 | +0.79 | 4.64 | 0.88
U-Net | InceptionV3 | 9.95% | 91.77% | +81.82% | 0.10 | 0.90 | +0.80 | 4.91 | 0.87
U-Net | ResNet50 | 10.01% | 87.58% | +77.57% | 0.10 | 0.87 | +0.77 | 5.34 | 0.83
U-Net | ResNeXt50 | 10.00% | 91.88% | +81.88% | 0.10 | 0.91 | +0.81 | 5.28 | 0.83
U-Net | Mean | 10.00% | 91.39% | +81.40% | 0.10 | 0.90 | +0.80 | 4.83 | 0.85
ResG | A-Conv | 9.98% | 90.23% | +80.25% | 0.10 | 0.87 | +0.77 | 3.94 | 0.87
ResG | VGG16 | 10.05% | 88.19% | +78.14% | 0.10 | 0.86 | +0.76 | 7.25 | 0.69
ResG | GoogLeNet | 10.02% | 77.74% | +67.72% | 0.10 | 0.77 | +0.67 | 4.99 | 0.83
ResG | InceptionV3 | 9.90% | 84.27% | +74.37% | 0.10 | 0.82 | +0.72 | 4.99 | 0.84
ResG | ResNet50 | 10.04% | 88.08% | +78.04% | 0.10 | 0.87 | +0.77 | 6.70 | 0.75
ResG | ResNeXt50 | 9.98% | 83.89% | +73.91% | 0.10 | 0.83 | +0.73 | 6.73 | 0.75
ResG | Mean | 10.00% | 85.40% | +75.41% | 0.10 | 0.84 | +0.74 | 5.77 | 0.79
Table 4. Non-targeted attacks of ULAN (ours), UAN [20], U-Net, and ResNet Generator [47] against DNN models on the MSTAR dataset in the case of perturbation offset. We report attack results on the testing dataset.
Method | Victim | Acc (No Offset) | Acc (Offset) | Acc (Gap) | Confidence (No Offset) | Confidence (Offset) | Confidence (Gap)
ULAN | A-Conv | 31.53% | 32.09% | +0.56% | 0.31 | 0.33 | +0.02
ULAN | VGG16 | 16.94% | 17.31% | +0.37% | 0.17 | 0.18 | +0.01
ULAN | GoogLeNet | 16.90% | 18.92% | +2.02% | 0.17 | 0.19 | +0.02
ULAN | InceptionV3 | 23.00% | 23.50% | +0.50% | 0.23 | 0.24 | +0.01
ULAN | ResNet50 | 16.08% | 16.24% | +0.16% | 0.16 | 0.16 | +0.00
ULAN | ResNeXt50 | 17.35% | 17.44% | +0.09% | 0.18 | 0.18 | +0.00
ULAN | Mean | 20.30% | 20.92% | +0.62% | 0.20 | 0.21 | +0.01
UAN | A-Conv | 29.06% | 59.65% | +30.59% | 0.29 | 0.53 | +0.24
UAN | VGG16 | 10.47% | 26.26% | +15.79% | 0.11 | 0.26 | +0.15
UAN | GoogLeNet | 11.91% | 40.40% | +28.49% | 0.12 | 0.39 | +0.27
UAN | InceptionV3 | 14.59% | 36.31% | +21.72% | 0.15 | 0.35 | +0.20
UAN | ResNet50 | 14.39% | 23.87% | +9.48% | 0.14 | 0.24 | +0.10
UAN | ResNeXt50 | 10.96% | 41.59% | +30.63% | 0.11 | 0.41 | +0.30
UAN | Mean | 15.23% | 38.01% | +22.78% | 0.15 | 0.36 | +0.21
U-Net | A-Conv | 28.07% | 51.48% | +23.41% | 0.28 | 0.47 | +0.19
U-Net | VGG16 | 12.94% | 46.25% | +33.31% | 0.13 | 0.46 | +0.33
U-Net | GoogLeNet | 15.87% | 43.57% | +27.70% | 0.17 | 0.43 | +0.26
U-Net | InceptionV3 | 22.59% | 46.17% | +23.58% | 0.21 | 0.44 | +0.23
U-Net | ResNet50 | 19.95% | 50.62% | +30.67% | 0.20 | 0.50 | +0.30
U-Net | ResNeXt50 | 13.27% | 46.21% | +32.94% | 0.14 | 0.46 | +0.32
U-Net | Mean | 18.78% | 47.38% | +28.60% | 0.19 | 0.46 | +0.27
ResG | A-Conv | 35.04% | 60.10% | +25.06% | 0.33 | 0.55 | +0.22
ResG | VGG16 | 18.51% | 29.39% | +10.88% | 0.18 | 0.29 | +0.11
ResG | GoogLeNet | 19.62% | 48.76% | +29.14% | 0.20 | 0.48 | +0.28
ResG | InceptionV3 | 21.48% | 39.20% | +17.72% | 0.21 | 0.38 | +0.17
ResG | ResNet50 | 35.00% | 51.65% | +16.65% | 0.35 | 0.51 | +0.16
ResG | ResNeXt50 | 17.52% | 57.05% | +39.53% | 0.18 | 0.56 | +0.38
ResG | Mean | 24.53% | 47.69% | +23.16% | 0.24 | 0.46 | +0.22
Table 5. Targeted attacks of ULAN (ours), UAN [20], U-Net, and ResNet Generator [47] against DNN models on the MSTAR dataset in the case of perturbation offset. We report attack results on the testing dataset.
Method | Victim | Acc (No Offset) | Acc (Offset) | Acc (Gap) | Confidence (No Offset) | Confidence (Offset) | Confidence (Gap)
ULAN | A-Conv | 85.45% | 82.23% | −3.22% | 0.81 | 0.79 | −0.02
ULAN | VGG16 | 90.21% | 86.72% | −3.49% | 0.89 | 0.86 | −0.03
ULAN | GoogLeNet | 81.65% | 78.71% | −2.94% | 0.80 | 0.78 | −0.02
ULAN | InceptionV3 | 80.06% | 73.54% | −6.52% | 0.79 | 0.72 | −0.07
ULAN | ResNet50 | 85.31% | 82.32% | −2.99% | 0.84 | 0.81 | −0.03
ULAN | ResNeXt50 | 86.53% | 82.82% | −3.71% | 0.86 | 0.82 | −0.04
ULAN | Mean | 84.87% | 81.06% | −3.81% | 0.83 | 0.80 | −0.04
UAN | A-Conv | 90.73% | 43.80% | −46.93% | 0.87 | 0.42 | −0.45
UAN | VGG16 | 93.84% | 56.72% | −37.12% | 0.93 | 0.56 | −0.37
UAN | GoogLeNet | 90.70% | 49.18% | −41.52% | 0.90 | 0.49 | −0.41
UAN | InceptionV3 | 91.10% | 41.56% | −49.54% | 0.90 | 0.41 | −0.49
UAN | ResNet50 | 90.41% | 43.48% | −46.93% | 0.90 | 0.43 | −0.47
UAN | ResNeXt50 | 91.77% | 50.22% | −41.55% | 0.91 | 0.50 | −0.41
UAN | Mean | 91.43% | 47.49% | −43.93% | 0.90 | 0.47 | −0.43
U-Net | A-Conv | 91.63% | 45.49% | −46.14% | 0.88 | 0.43 | −0.45
U-Net | VGG16 | 94.15% | 58.46% | −35.69% | 0.93 | 0.58 | −0.35
U-Net | GoogLeNet | 91.33% | 44.81% | −46.52% | 0.89 | 0.44 | −0.45
U-Net | InceptionV3 | 91.77% | 44.46% | −47.31% | 0.90 | 0.44 | −0.46
U-Net | ResNet50 | 87.58% | 40.07% | −47.51% | 0.87 | 0.40 | −0.47
U-Net | ResNeXt50 | 91.88% | 50.59% | −41.29% | 0.91 | 0.50 | −0.41
U-Net | Mean | 91.39% | 47.31% | −44.08% | 0.90 | 0.47 | −0.43
ResG | A-Conv | 90.23% | 44.28% | −45.95% | 0.87 | 0.42 | −0.45
ResG | VGG16 | 88.19% | 51.23% | −36.96% | 0.86 | 0.51 | −0.35
ResG | GoogLeNet | 77.74% | 46.76% | −30.98% | 0.77 | 0.46 | −0.31
ResG | InceptionV3 | 84.27% | 42.35% | −41.92% | 0.82 | 0.41 | −0.41
ResG | ResNet50 | 88.08% | 46.32% | −41.76% | 0.87 | 0.46 | −0.41
ResG | ResNeXt50 | 83.89% | 41.28% | −42.61% | 0.83 | 0.41 | −0.42
ResG | Mean | 85.40% | 45.37% | −40.03% | 0.84 | 0.45 | −0.39
Table 6. Non-targeted attacks of ULAN (ours), UAN [20], U-Net, and ResNet Generator [47] against DNN models on the MSTAR dataset under small sample conditions. We report attack results on the testing dataset.
Method | Victim | Acc (Full Dataset) | Acc (Subset) | Acc (Gap) | SSIM (Full Dataset) | SSIM (Subset) | SSIM (Gap)
ULAN | A-Conv-BN | 31.53% | 27.78% | −3.75% | 0.96 | 0.94 | −0.02
ULAN | VGG16-BN | 16.94% | 25.68% | +8.74% | 0.95 | 0.91 | −0.04
ULAN | GoogLeNet | 16.90% | 17.23% | +0.33% | 0.95 | 0.95 | 0.00
ULAN | InceptionV3 | 23.00% | 19.50% | −3.50% | 0.96 | 0.95 | −0.01
ULAN | ResNet50 | 16.08% | 17.11% | +1.03% | 0.94 | 0.93 | −0.01
ULAN | ResNeXt50 | 17.35% | 23.41% | +6.06% | 0.94 | 0.92 | −0.02
ULAN | Mean | 20.30% | 21.79% | +1.49% | 0.95 | 0.93 | −0.02
UAN | A-Conv-BN | 29.06% | 26.67% | −2.39% | 0.93 | 0.78 | −0.15
UAN | VGG16-BN | 10.47% | 20.77% | +10.30% | 0.86 | 0.55 | −0.31
UAN | GoogLeNet | 11.91% | 18.63% | +6.72% | 0.88 | 0.66 | −0.22
UAN | InceptionV3 | 14.59% | 21.52% | +6.93% | 0.93 | 0.83 | −0.10
UAN | ResNet50 | 14.39% | 15.38% | +0.99% | 0.73 | 0.63 | −0.10
UAN | ResNeXt50 | 10.96% | 14.92% | +3.96% | 0.82 | 0.62 | −0.20
UAN | Mean | 15.23% | 19.65% | +4.42% | 0.86 | 0.68 | −0.18
U-Net | A-Conv-BN | 28.07% | 26.75% | −1.32% | 0.95 | 0.95 | 0.00
U-Net | VGG16-BN | 12.94% | 28.40% | +15.46% | 0.93 | 0.90 | −0.03
U-Net | GoogLeNet | 15.87% | 12.61% | −3.26% | 0.93 | 0.89 | −0.04
U-Net | InceptionV3 | 22.59% | 16.16% | −6.43% | 0.95 | 0.93 | −0.02
U-Net | ResNet50 | 19.95% | 17.64% | −2.31% | 0.91 | 0.77 | −0.14
U-Net | ResNeXt50 | 13.27% | 11.54% | −1.73% | 0.91 | 0.89 | −0.02
U-Net | Mean | 18.78% | 18.85% | +0.07% | 0.93 | 0.89 | −0.04
ResG | A-Conv-BN | 35.04% | 42.25% | +7.21% | 0.94 | 0.88 | −0.06
ResG | VGG16-BN | 18.51% | 18.01% | −0.50% | 0.82 | 0.86 | +0.04
ResG | GoogLeNet | 19.62% | 10.63% | −8.99% | 0.94 | 0.85 | −0.09
ResG | InceptionV3 | 21.48% | 59.40% | +37.92% | 0.93 | 0.93 | 0.00
ResG | ResNet50 | 35.00% | 20.20% | −14.80% | 0.91 | 0.86 | −0.05
ResG | ResNeXt50 | 17.52% | 12.16% | −5.36% | 0.92 | 0.80 | −0.12
ResG | Mean | 24.53% | 27.11% | +2.58% | 0.91 | 0.86 | −0.05
Table 7. Targeted attacks of ULAN (ours), UAN [20], U-Net, and ResNet Generator [47] against DNN models on the MSTAR dataset under small sample conditions. We report attack results on the testing dataset.
Method | Victim | Acc (Full Dataset) | Acc (Subset) | Acc (Gap) | SSIM (Full Dataset) | SSIM (Subset) | SSIM (Gap)
ULAN | A-Conv-BN | 85.45% | 77.77% | −7.68% | 0.90 | 0.90 | 0.00
ULAN | VGG16-BN | 90.21% | 80.08% | −10.13% | 0.90 | 0.89 | −0.01
ULAN | GoogLeNet | 81.65% | 80.75% | −0.90% | 0.92 | 0.91 | −0.01
ULAN | InceptionV3 | 80.06% | 73.38% | −6.68% | 0.93 | 0.92 | −0.01
ULAN | ResNet50 | 85.31% | 75.32% | −9.99% | 0.90 | 0.90 | 0.00
ULAN | ResNeXt50 | 86.53% | 80.39% | −6.14% | 0.90 | 0.90 | 0.00
ULAN | Mean | 84.87% | 77.95% | −6.92% | 0.91 | 0.90 | −0.01
UAN | A-Conv-BN | 90.73% | 86.29% | −4.44% | 0.88 | 0.80 | −0.08
UAN | VGG16-BN | 93.84% | 92.56% | −1.28% | 0.80 | 0.59 | −0.21
UAN | GoogLeNet | 90.70% | 89.46% | −1.24% | 0.85 | 0.70 | −0.15
UAN | InceptionV3 | 91.10% | 89.21% | −1.89% | 0.84 | 0.71 | −0.13
UAN | ResNet50 | 90.41% | 88.58% | −1.83% | 0.80 | 0.60 | −0.20
UAN | ResNeXt50 | 91.77% | 90.11% | −1.66% | 0.79 | 0.61 | −0.18
UAN | Mean | 91.43% | 89.37% | −2.06% | 0.83 | 0.67 | −0.16
U-Net | A-Conv-BN | 91.63% | 88.64% | −2.99% | 0.87 | 0.86 | −0.01
U-Net | VGG16-BN | 94.15% | 85.56% | −8.59% | 0.82 | 0.73 | −0.09
U-Net | GoogLeNet | 91.33% | 85.84% | −5.49% | 0.88 | 0.81 | −0.07
U-Net | InceptionV3 | 91.77% | 89.06% | −2.71% | 0.87 | 0.79 | −0.08
U-Net | ResNet50 | 87.58% | 82.77% | −4.81% | 0.83 | 0.77 | −0.06
U-Net | ResNeXt50 | 91.88% | 71.81% | −20.07% | 0.83 | 0.74 | −0.09
U-Net | Mean | 91.39% | 83.95% | −7.44% | 0.85 | 0.78 | −0.07
ResG | A-Conv-BN | 90.23% | 83.79% | −6.44% | 0.87 | 0.80 | −0.07
ResG | VGG16-BN | 88.19% | 72.38% | −15.81% | 0.69 | 0.31 | −0.38
ResG | GoogLeNet | 77.74% | 57.79% | −19.95% | 0.83 | 0.71 | −0.12
ResG | InceptionV3 | 84.27% | 79.72% | −4.55% | 0.84 | 0.69 | −0.15
ResG | ResNet50 | 88.08% | 70.97% | −17.11% | 0.75 | 0.63 | −0.12
ResG | ResNeXt50 | 83.89% | 72.61% | −11.28% | 0.75 | 0.62 | −0.13
ResG | Mean | 85.40% | 72.88% | −12.52% | 0.79 | 0.63 | −0.16
Table 8. Adversarial attacks that adopt different types of L p -norm as the distance metric. We report attack results on the testing dataset.
Mode | Victim | Acc (L2-Norm) | Acc (L∞-Norm) | SSIM (L2-Norm) | SSIM (L∞-Norm)
Non-target | A-Conv | 31.53% | 28.85% | 0.96 | 0.92
Non-target | VGG16 | 16.94% | 21.19% | 0.95 | 0.88
Non-target | GoogLeNet | 16.90% | 17.60% | 0.95 | 0.91
Non-target | InceptionV3 | 23.00% | 24.65% | 0.96 | 0.91
Non-target | ResNet50 | 16.08% | 14.10% | 0.94 | 0.88
Non-target | ResNeXt50 | 17.35% | 18.43% | 0.94 | 0.90
Non-target | Mean | 20.30% | 20.80% | 0.95 | 0.90
Target | A-Conv | 85.45% | 84.33% | 0.90 | 0.85
Target | VGG16 | 90.21% | 87.25% | 0.90 | 0.83
Target | GoogLeNet | 81.65% | 81.39% | 0.92 | 0.85
Target | InceptionV3 | 80.06% | 78.72% | 0.93 | 0.86
Target | ResNet50 | 85.31% | 83.03% | 0.90 | 0.82
Target | ResNeXt50 | 86.53% | 82.07% | 0.90 | 0.85
Target | Mean | 84.87% | 82.80% | 0.91 | 0.84
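Table 8 compares attacks that bound the perturbation with the L2-norm against attacks that bound it with the L∞-norm. A minimal sketch of the two corresponding projection operators is given below; the budget eps is a placeholder rather than a value used in the paper, and the function is our own illustration of the distinction, not the authors' implementation.

```python
import torch

def project(delta, norm="l2", eps=3.0):
    """Project a batch of perturbations (B, C, H, W) onto an L2 ball (rescale any sample
    whose norm exceeds eps) or an L-infinity ball (clip every element to [-eps, eps])."""
    if norm == "l2":
        n = delta.flatten(1).norm(p=2, dim=1).clamp(min=1e-12)
        factor = (eps / n).clamp(max=1.0).view(-1, 1, 1, 1)
        return delta * factor
    return delta.clamp(-eps, eps)   # "linf"
```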