Article

A Parallel Image Denoising Network Based on Nonparametric Attention and Multiscale Feature Fusion

Jing Mao, Lianming Sun, Jie Chen and Shunyuan Yu

1 Graduate School of Environmental Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan
2 Department of Information Systems Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan
3 School of Electronic and Information Engineering, Ankang University, Ankang 725000, China
* Author to whom correspondence should be addressed.
Submission received: 3 December 2024 / Revised: 3 January 2025 / Accepted: 5 January 2025 / Published: 7 January 2025
(This article belongs to the Section Sensing and Imaging)

Abstract: Convolutional neural networks have achieved excellent results in image denoising; however, some problems remain: (1) most single-branch models cannot fully exploit image features and often suffer from information loss, and (2) many deep CNNs extract edge features inadequately and suffer from performance saturation. To solve these problems, this paper proposes a two-branch convolutional image denoising network based on nonparametric attention and multiscale feature fusion, aiming to improve denoising performance while better recovering image edge and texture information. First, ordinary convolutional layers extract shallow features of the noise in the image. Then, two branch networks with different, complementary structures extract deep features from the noise information, overcoming the insufficient feature extraction of single-branch models. The upper branch uses densely connected blocks to extract local noise features. The lower branch uses multiple dilated convolution residual blocks with different dilation rates to enlarge the receptive field and capture more contextual information, yielding global noise features; this addresses both insufficient edge feature extraction and the performance saturation of deep CNNs. A nonparametric attention mechanism is introduced into the two-branch feature extraction module, enabling the network to attend to and learn the key information in the feature maps and improving its learning performance. The enhanced features are then processed by a multiscale feature fusion module that combines image features at different depths into more robust fused features. Finally, the shallow and deep features are summed through a long skip connection and passed through an ordinary convolutional layer to output a residual image. Set12, BSD68, Set5, and CBSD68 with added Gaussian white noise of different intensities, together with the real-noise SIDD dataset, are used as test sets, and the proposed method is compared with several mainstream denoising methods. The experimental results show that the proposed algorithm achieves better objective metrics on all test sets and outperforms the comparison algorithms. It not only achieves a good denoising effect but also effectively retains the edge and texture information of the original image, providing a new idea for the study of deep neural networks in the field of image denoising.

1. Introduction

In the process of image acquisition and transmission, the original image is often affected by noise introduced by the system equipment and the transmission channel, which leads to the loss of effective image information and in turn degrades subsequent analysis and processing tasks such as image segmentation, target recognition, and edge extraction. Accordingly, image denoising has become a classic problem and a popular research topic in computer vision and image processing. An efficient image-denoising algorithm removes the noise while ensuring that the structural information of the processed image is not altered, which helps other image-processing tasks and finds further use in remote sensing, medical imaging, surveillance, and other fields [1,2].
Current image-denoising algorithms can be classified into two main types: conventional denoising algorithms and deep learning-based denoising algorithms. Conventional methods mainly exploit the structural properties of the image itself, for example algorithms based on the nonlocal self-similarity of the image or on sparse representations, as well as filter-based methods such as Gaussian filtering [3,4], bilateral filtering [5,6], and median filtering [7,8]. Nonlocal self-similarity algorithms exploit the fact that patches in a natural image resemble one another: they search the whole image for blocks similar to the block centered on the current pixel and process these similar blocks jointly. Representative methods are nonlocal means (NLM) [9,10,11], block-matching and 3D filtering (BM3D) [12], and weighted nuclear norm minimization (WNNM) [13]. Classical sparse representation-based denoising methods include the dictionary learning algorithm K-SVD [14] and nonlocally centralized sparse representation (NCSR) [15,16]. However, such methods must first obtain prior information about the image and then solve the model iteratively with optimization algorithms. The complex optimization process of traditional denoising methods therefore costs considerable time and computation, requires manual parameter tuning, and generalizes poorly. Traditional methods are also prone to image blurring and loss of detail.
With the growing performance and computational power of computers, researchers have gradually introduced deep learning into image processing, and it has been applied broadly in computer vision [17,18,19,20,21,22]. The main idea of deep learning denoising is to train a deep neural network end-to-end on a large number of noisy and clean image pairs, which yields excellent performance. Schmidt and Roth proposed the cascade of shrinkage fields (CSF) [23] approach, which unifies random field-based models and unfolds half-quadratic optimization into a single learning framework. Chen et al. [24] proposed a trainable nonlinear reaction–diffusion (TNRD) model. Burger et al. [25] implemented image denoising with a multilayer perceptron (MLP). Zhang et al. [26] proposed the deep denoising convolutional neural network DnCNN, which for the first time applied batch normalization [27] and residual learning [28] to image denoising and handled uniform Gaussian noise effectively. Subsequently, Zhang et al. [29] proposed FFDNet, which takes the noise level and the noisy image as joint inputs and trains a single model to process noisy images at different noise levels. To further optimize denoising performance, Tian et al. [30] proposed an enhanced convolutional network, ECNDNet, which combines dilated convolution with ordinary convolution to enlarge the receptive field of the network. The authors of [31] introduced residual optimization into a convolutional neural network, which alleviated the progressive vanishing of gradients when the number of layers grows. The authors of [32] first trained a flexible and efficient CNN denoiser as a baseline and inserted it as a module into an iterative HQS-based algorithm that can solve various image restoration problems. The authors of [33] proposed a hybrid model combining a transformer encoder with a convolutional decoder that achieves state-of-the-art denoising performance on real images at relatively low computational cost. The authors of [34] proposed a robust deformed CNN that exploits deformable learned kernels and stacked convolutional architectures to extract more representative noise features. The authors of [35] designed a dual network with a sparse mechanism that extracts complementary features to recover clear images and works on real noisy images.
Although the above deep learning-based denoising algorithms produce good results, problems remain. The edge and texture information of an image is very important for its recovery, but a denoising network that treats all acquired information equally does not focus on the edge and texture information of the input, which results in poor recovery of the edge regions of the denoised image. How to extract edge and texture features from the limited available features is therefore a key difficulty for denoising networks. To address this, Hu et al. [36] proposed a channel attention mechanism to learn the correlation between channels. Woo et al. [37] proposed CBAM to learn the correlations between feature maps over both channels and spatial locations. These two attention mechanisms generate weights through global pooling operations and convolution. Yang et al. [38] proposed SimAM, a simple parameter-free attention module that uses statistical laws to learn channel and spatial correlations at each position of the feature map without any parameters. Beyond single-branch convolutional networks, BRDNet [39], with a two-branch structure, has also been proposed; it widens the network by combining two sub-networks to obtain more features and improves training speed and effectiveness by applying batch renormalization, residual learning, and dilated convolution simultaneously. Z. Cai et al. [40] proposed a two-stage image denoising model in which the input image is first processed by a specialized denoiser and the resulting intermediate denoised image is passed to a kernel prediction network that estimates a denoising kernel for each pixel; its robustness to noise parameters surpasses comparable blind denoisers while approaching state-of-the-art denoising quality for camera sensor noise.
Based on the previous research, this paper presents a parallel denoising network with nonparametric attention and multiscale feature fusion (NAMFPDNet). The main work is as follows:
(1) To address the blurred edge information and unclear textures of recovered images, a dual-branch image denoising network (NAMFPDNet), built on a residual denoising backbone, is proposed based on a nonparametric attention mechanism and multiscale feature fusion.
(2) A dual-branch deep feature extraction module was designed: the upper branch adopts densely connected blocks to extract the local features of image noise, while the lower branch combines ordinary convolution with dilated convolution into residual blocks that extract the global information of image noise, strengthening the feature extraction capability of the network. Compared with a single-branch structure, the dual-branch network solves both the insufficient feature extraction of single-branch models and the performance saturation of deep CNNs.
(3) We used SimAM, a parameter-free attention mechanism, to design a parameter-free attention module that focuses on critical regions in important channels of the feature map from both the spatial and channel aspects, so that the network can recover clear edges and texture details.
(4) We designed a multiscale feature fusion module that deeply fuses global and local features using three convolutional layers of different kernel sizes. Compared with the traditional single-scale convolution operation, multiscale feature fusion better recovers image contour and texture information.

2. Theory and Methodology

This paper designs a parallel image denoising network based on nonparametric attention and multiscale feature fusion (NAMFPDNet). The network follows the idea of DnCNN [26], using residual learning combined with batch normalization. The denoising network structure is shown in Figure 1. The input of the network is the noisy image X and the output is the residual image Ṽ learned by the NAMFPDNet. The goal is to learn a residual Ṽ = NAMFPDNet(X) that approximates the noise V, and then to remove Ṽ from the noisy image X.
The network first applies a Conv3×3 + ReLU layer to perform initial sampling on the image X; the convolution kernel size is 3 × 3, with 1 input channel and 64 output channels. This initial feature extraction module produces the initial features F0. F0 is then passed through the parallel feature extraction module (PFEM), whose upper and lower branch networks run in parallel to obtain the deep features of the image, which are concatenated into the deep features F1. F1 is passed through the multiscale feature fusion module (MFM) to obtain the fused features F2. F2 is then passed through the nonparametric attention module (NAM), which focuses on the critical regions of the important channels in the feature map, both spatially and channel-wise, so that the network can recover clear edges and texture details, yielding the enhanced features F3.
The shallow features F0 and the enhanced deep features F3 are merged by a long skip connection and passed to the residual reconstruction module, which applies a single convolutional layer (kernel size 3 × 3, 64 input channels, 1 output channel) to reconstruct the noise residual. The noise residual map Ṽ = NAMFPDNet(X) learned by the network is given by Equation (1). The long skip connection integrates shallow and deep feature information, which stabilizes training and improves denoising performance. Finally, Ṽ is subtracted from the original noisy image to obtain the denoised image.
$$\mathrm{NAMFPDNet}(X) = \mathrm{Conv}_{3\times 3}(F_0 + F_3) \quad (1)$$
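As a concrete illustration of this residual formulation, the following is a minimal MATLAB inference sketch, assuming a trained dlnetwork object named net; file and variable names are placeholders, not the authors' code:

```matlab
% Residual-learning inference: the network predicts the noise map,
% which is then subtracted from the noisy input (names illustrative).
X   = im2single(imread('noisy.png'));   % noisy grayscale image in [0,1]
dlX = dlarray(X, 'SSCB');               % spatial-spatial-channel-batch
dlV = predict(net, dlX);                % estimated residual (noise) map
Y   = extractdata(dlX - dlV);           % denoised image
```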

2.1. Parallel Feature Extraction Module

The parallel feature extraction module (PFEM) employs a structure in which the upper and lower branch networks are connected in parallel, as shown in Figure 2.
The upper branch network uses a connectivity pattern similar to DenseNet and mainly consists of three tightly connected blocks (TCBs) in series that extract the local features of the noisy image. These local features are then processed by the nonparametric attention module (NAM) together with local residual connections. NAM learns the correlation of each position, over both spatial and channel dimensions, of the extracted features and adaptively produces a weight for each position that is multiplied with the features, thereby emphasizing important local features and suppressing invalid ones. The output of the upper branch network is f1.
The structure of the TCB is shown in Figure 3; each layer in the TCB receives the outputs of all preceding convolutional layers as input. This dense connectivity not only alleviates the vanishing-gradient problem but also provides powerful feature extraction capability and enhances feature propagation. The TCB consists of five convolutional layers with a 3 × 3 kernel; the parameter settings of the convolutional layers are listed in Table 1. A 1 × 1 convolutional layer at the end of the TCB reduces the number of channels, so that the output feature map has 64 channels, effectively reducing computation. A sketch of such a block follows.
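As a minimal sketch (not the authors' exact code), the dense connectivity of one TCB can be assembled as a MATLAB layer graph; 'same' padding is assumed here so that feature maps stay spatially aligned for concatenation, and all layer names are illustrative:

```matlab
% One tightly connected block (TCB): five Conv3x3+BN+ReLU stages, each
% seeing the block input plus all previous stage outputs, then a 1x1
% convolution squeezing the channels back to 64.
lg = layerGraph(imageInputLayer([40 40 64], 'Name','tcb_in', 'Normalization','none'));
for k = 1:5
    conv = sprintf('conv%d', k);
    lg = addLayers(lg, [ ...
        convolution2dLayer(3, 64, 'Padding','same', 'Name',conv)
        batchNormalizationLayer('Name',[conv '_bn'])
        reluLayer('Name',[conv '_relu'])]);
    if k == 1
        lg = connectLayers(lg, 'tcb_in', conv);
    else
        catName = sprintf('cat%d', k);                  % dense concat
        lg = addLayers(lg, depthConcatenationLayer(k, 'Name',catName));
        lg = connectLayers(lg, 'tcb_in', [catName '/in1']);
        for j = 1:k-1
            lg = connectLayers(lg, sprintf('conv%d_relu', j), ...
                               sprintf('%s/in%d', catName, j+1));
        end
        lg = connectLayers(lg, catName, conv);
    end
end
lg = addLayers(lg, convolution2dLayer(1, 64, 'Name','squeeze'));  % 1x1 reduction
lg = connectLayers(lg, 'conv5_relu', 'squeeze');
```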
The lower branch network uses four dilated convolution residual blocks (DCRBs) in series, followed by NAM for feature enhancement. The lower branch is designed to compensate, through a different structure, for the structural image information damaged and the noise information lost in the upper branch. The output of the lower branch network is f2.
The structure of the DCRB is shown in Figure 4. The DCRB mainly consists of a series of dilated convolutions with dilation rates of 1, 2, and 3, respectively; the convolution kernel size is 3 × 3, with 64 kernels. The specific parameter settings are listed in Table 2.
Combining dilated convolution with ordinary convolution forms a sparse structure that expands the receptive field of the network without additional learnable parameters, which avoids the feature-extraction saturation caused by using a single kernel size in deep networks and effectively improves denoising performance. Using different dilation rates prevents the gridding (lattice) effect caused by a single dilation rate. Local residual connections are also added inside the DCRB to further enhance the feature extraction capability of the module and thus the model performance. A sketch of one DCRB follows.
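A minimal sketch of one DCRB under the Table 2 settings ('same' padding and the BN + ReLU placement are assumptions; names are illustrative):

```matlab
% One dilated convolution residual block (DCRB): 3x3 convolutions with
% dilation factors 1, 2 and 3, plus a local residual connection that
% adds the block input back to the output.
lg = layerGraph(imageInputLayer([40 40 64], 'Name','dcrb_in', 'Normalization','none'));
lg = addLayers(lg, [ ...
    convolution2dLayer(3, 64, 'Padding','same', 'DilationFactor',1, 'Name','dconv1')
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    convolution2dLayer(3, 64, 'Padding','same', 'DilationFactor',2, 'Name','dconv2')
    batchNormalizationLayer('Name','bn2')
    reluLayer('Name','relu2')
    convolution2dLayer(3, 64, 'Padding','same', 'DilationFactor',3, 'Name','dconv3')
    additionLayer(2, 'Name','local_res')]);   % dconv3 feeds local_res/in1
lg = connectLayers(lg, 'dcrb_in', 'dconv1');
lg = connectLayers(lg, 'dcrb_in', 'local_res/in2');   % local skip path
```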

2.2. The Nonparametric Attention Module

Yang et al. [38] proposed a simple attention module (SimAM) derived from statistical laws: a 3-D attention module, motivated by human visual neurons, that covers both spatial and channel attention. Specifically, an energy function grounded in neuroscience theories is optimized to find the importance of each neuron: the linear separability between the target neuron and the other neurons in the same channel determines whether the neuron should be attended to. From the closed-form solution of this energy function, the minimum energy of a neuron is obtained as shown in Equation (2).
$$e_t^{*} = \frac{4(\hat{\delta}^{2} + \lambda)}{(t - \hat{\mu})^{2} + 2\hat{\delta}^{2} + 2\lambda} \quad (2)$$
where $\hat{\mu} = \frac{1}{M-1}\sum_{i=1}^{M-1} x_i$ and $\hat{\delta}^{2} = \frac{1}{M-1}\sum_{i=1}^{M-1}\left(x_i - \hat{\mu}\right)^{2}$.
X denotes the input feature map, X ∈ R^(H×W×C), and M = H × W is the number of neurons in a channel; t denotes the target neuron and x_i the other neurons in the same channel. μ̂ denotes the mean of all neurons within the channel and δ̂² their variance. λ is set to 10⁻⁴. A lower energy means that the neuron is more distinct from the other neurons and more deserving of attention, so the importance of a neuron is given by 1/e_t*.
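Taking the reciprocal of Equation (2) yields the closed form that is actually computed per pixel (this is the quantity used in Step 4 of the implementation below):

```latex
\frac{1}{e_t^{*}}
  = \frac{(t-\hat{\mu})^{2} + 2\hat{\delta}^{2} + 2\lambda}{4(\hat{\delta}^{2} + \lambda)}
  = \frac{(t-\hat{\mu})^{2}}{4(\hat{\delta}^{2} + \lambda)} + 0.5
```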
Since attention is applied through weighting, the SimAM output is given in Equation (3):
$$\tilde{X} = \mathrm{Sigmoid}\!\left(\frac{1}{e_t^{*}}\right) \odot X \quad (3)$$
Based on SimAM theory, the specific steps of the nonparametric attention module (NAM) designed in this paper are as follows:
Input: X, the input feature map, X ∈ R^(H×W×C);
Output: the enhanced feature map X̃.
Step 1: Compute the mean of X over each channel, i.e., squeeze the feature map along the spatial directions to obtain the mean of each H × W plane.
Step 2: Compute the squared deviation from this mean at every position within each channel, obtaining D ∈ R^(H×W×C).
Step 3: Compute the variance of each channel: sum D over each H × W plane and divide by n, where n = H × W − 1, obtaining t ∈ R^(1×1×C) as the channel attention information.
Step 4: Compute the inverse energy of each pixel as D/(4(t + λ)) + 0.5, i.e., 1/e_t*.
Step 5: Enhance the feature map with the sigmoid function: the output X̃ is computed from Equation (3) as the augmented feature map.
To implement NAM, a custom neural network layer named NAM, inheriting from MATLAB's nnet.layer.Layer base class, was defined; NAM is implemented in its forward propagation method, which follows the steps above and is shown in Figure 5.
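Since Figure 5 is not reproduced in this text version, the following is a minimal sketch of such a layer, consistent with Steps 1–5 (the property defaults and variable names are assumptions, not the authors' exact code):

```matlab
% NAM custom layer: the predict method implements Steps 1-5 above.
classdef NAM < nnet.layer.Layer
    properties
        Lambda = 1e-4;                    % regularization coefficient
    end
    methods
        function layer = NAM(name)
            layer.Name = name;
            layer.Description = 'Nonparametric (SimAM-style) attention';
        end
        function Z = predict(layer, X)
            [H, W, ~, ~] = size(X);       % X: H x W x C (x batch)
            n  = H * W - 1;
            mu = mean(X, [1 2]);          % Step 1: per-channel mean
            D  = (X - mu).^2;             % Step 2: squared deviations
            t  = sum(D, [1 2]) ./ n;      % Step 3: per-channel variance
            Einv = D ./ (4 * (t + layer.Lambda)) + 0.5;  % Step 4: 1/e_t*
            Z = X .* (1 ./ (1 + exp(-Einv)));            % Step 5: Eq. (3)
        end
    end
end
```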
Compared with the channel attention mechanism SE and the mixed attention mechanism CBAM: although SE and CBAM can greatly improve network accuracy, they introduce additional parameters because their implementations rely on fully connected layers and pooling layers for weight allocation. SimAM, by contrast, provides the network with three-dimensional attention weights without adding any parameters, as shown in Table 3. This allows SimAM to greatly reduce model complexity and computational cost while maintaining high performance.

2.3. Multiscale Feature Fusion Module

This paper uses multiscale feature fusion to extract image features at different scales. The structure of the multiscale feature fusion module (MFM) is shown in Figure 6. f1 and f2 denote the feature maps output by the upper and lower branch networks, respectively. The two feature maps are first concatenated, then features are extracted by parallel convolutional layers with kernel sizes of 1 × 1, 3 × 3, and 5 × 5, each with 64 kernels; the extracted features are finally summed. Compared with traditional single-scale convolution, this multiscale fusion better recovers image contour and texture information. A sketch of the module follows.
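A minimal layer-graph sketch of the MFM (layer names are illustrative; 'same' padding is assumed so the three branch outputs can be summed):

```matlab
% Multiscale feature fusion: concatenate f1 and f2, apply parallel
% 1x1 / 3x3 / 5x5 convolutions (64 kernels each), then sum the results.
lg = layerGraph(depthConcatenationLayer(2, 'Name','concat'));   % f1, f2 in
lg = addLayers(lg, convolution2dLayer(1, 64, 'Padding','same', 'Name','conv1x1'));
lg = addLayers(lg, convolution2dLayer(3, 64, 'Padding','same', 'Name','conv3x3'));
lg = addLayers(lg, convolution2dLayer(5, 64, 'Padding','same', 'Name','conv5x5'));
lg = addLayers(lg, additionLayer(3, 'Name','fuse'));            % sum fusion
lg = connectLayers(lg, 'concat', 'conv1x1');
lg = connectLayers(lg, 'concat', 'conv3x3');
lg = connectLayers(lg, 'concat', 'conv5x5');
lg = connectLayers(lg, 'conv1x1', 'fuse/in1');
lg = connectLayers(lg, 'conv3x3', 'fuse/in2');
lg = connectLayers(lg, 'conv5x5', 'fuse/in3');
```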

3. Experimental Results and Analyses

3.1. Experimental Setting

The training set images were randomly rotated for augmentation, and the augmented images were cropped into small blocks of 40 × 40 pixels. Gaussian white noise was then added to the image blocks to generate noisy images, allowing the effect of noise intensity on network performance to be tested. The test sets were the Set12, BSD68, Set5, CBSD68, SIDD, and DND image datasets.
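A minimal sketch of how one noisy/clean training pair can be synthesized under these settings (the crop location, file name, and intensity scaling are assumptions):

```matlab
% Synthesize one training pair: crop a 40x40 patch from an augmented
% clean image and add Gaussian white noise of standard deviation sigma
% (15, 25 or 50 on the 0-255 scale).
sigma  = 25;
clean  = im2single(imread('train_image.png'));   % clean image in [0,1]
patch  = clean(1:40, 1:40, :);                   % one illustrative crop
noisy  = patch + (sigma/255) * randn(size(patch), 'single');
target = noisy - patch;                          % residual (noise) label
```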
The Adam optimizer was employed for training, with an initial learning rate of 1 × 10⁻³, β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁸. A step learning-rate decay rule was adopted, with a decay period of 50 epochs and a decay factor gamma = 0.1; training was terminated after 50 epochs. The batch size was set to 128.
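These hyperparameters map directly onto a MATLAB trainingOptions call; the following sketch mirrors the reported settings (everything else is left at its default):

```matlab
% Optimizer configuration matching the reported training settings.
opts = trainingOptions('adam', ...
    'InitialLearnRate', 1e-3, ...
    'GradientDecayFactor', 0.9, ...          % beta1
    'SquaredGradientDecayFactor', 0.999, ... % beta2
    'Epsilon', 1e-8, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropPeriod', 50, ...           % decay period (epochs)
    'LearnRateDropFactor', 0.1, ...          % gamma
    'MaxEpochs', 50, ...
    'MiniBatchSize', 128);
```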

3.2. Loss Function and Evaluation Metrics

(1) Loss function
The models in this paper were trained with the mean squared error as the loss function; its mathematical expression is given in Equation (4).
$$\mathrm{Loss}(\theta) = \frac{1}{2N}\sum_{i=1}^{N} \left\lVert \mathrm{NAMFPDNet}(X_i;\theta) - (X_i - I_i) \right\rVert^{2} \quad (4)$$
where N denotes the number of training samples, θ the parameters learned by the NAMFPDNet, and X_i and I_i the corresponding noisy and clean images.
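If the network is trained in a custom loop, Equation (4) translates into a loss function such as the following sketch (to be invoked via dlfeval; names are illustrative):

```matlab
% Residual-learning MSE loss of Equation (4): the network output is
% compared against the true residual X - I over a mini-batch.
function [loss, gradients] = modelLoss(net, X, I)
    V = forward(net, X);                               % predicted residual
    loss = 0.5 * mean(sum((V - (X - I)).^2, [1 2 3])); % (1/2N) sum ||.||^2
    gradients = dlgradient(loss, net.Learnables);
end
```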
(2) Evaluation metrics
In this paper, PSNR and SSIM are used to evaluate the quality of denoised images, as defined in Equations (5) and (6) [41,42].
$$\mathrm{PSNR} = 10 \times \log_{10}\!\left(\frac{255^{2}}{\frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left[I(i,j) - \tilde{I}(i,j)\right]^{2}}\right) \quad (5)$$
$$\mathrm{SSIM} = \frac{(2\mu_1\mu_2 + C_1)(2\sigma_{1,2} + C_2)}{(\mu_1^{2} + \mu_2^{2} + C_1)(\sigma_1^{2} + \sigma_2^{2} + C_2)} \quad (6)$$
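In practice, both metrics can be computed with the built-in Image Processing Toolbox functions, which correspond closely to Equations (5) and (6) for 8-bit images (file names are placeholders):

```matlab
% Evaluate a denoised image against its ground truth.
ref = imread('clean.png');       % ground-truth image
den = imread('denoised.png');    % denoised output
fprintf('PSNR = %.2f dB, SSIM = %.4f\n', psnr(den, ref), ssim(den, ref));
```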

3.3. Comparison Experiment of Denoising Performance

3.3.1. Gray Image Denoising

To demonstrate the denoising performance of the proposed network on grayscale images, this paper compares seven mainstream image denoising methods, namely BM3D, WNNM, EPLL, MLP, DnCNN-S, DnCNN-B, and FFDNet, on the Set12 and BSD68 datasets.
Table 4 and Table 5, respectively, show the average PSNR and SSIM values of the different denoising methods on the Set12 and BSD68 test sets at three noise levels (15, 25, and 50), with the best value in bold. The average PSNR and SSIM of the proposed network were higher than those of the other methods. Taking Set12 at a noise level of 50 as an example, the average PSNR of the proposed method was 0.83 dB, 0.50 dB, and 1.21 dB higher than that of BM3D, WNNM, and EPLL, respectively, and 0.37 dB, 0.35 dB, and 0.22 dB higher than that of DnCNN-S, DnCNN-B, and FFDNet, respectively. The combined results in Table 4 and Table 5 indicate that the proposed method obtained better average PSNR and SSIM than the other methods overall and possessed better denoising performance while preserving the detailed features of the image.
To compare the proposed method with the other methods on the Set12 and BSD68 test sets more intuitively, we used the average PSNR. Figure 7 shows bar charts of the differences between the proposed method and BM3D, WNNM, EPLL, MLP, DnCNN-S, DnCNN-B, and FFDNet on Set12 and BSD68. As can be seen from Figure 7, as the noise intensity increased, the gap between the proposed method and the other methods also increased, indicating that the proposed method had better denoising ability for high-intensity noise.
To further compare the denoising effect of the different methods, Table 6, Table 7, Table 8, and Table 9 list the PSNR and SSIM values after denoising for eight images from Set12. The PSNR values of the proposed method were better than those of the other methods for most of the test images at Gaussian noise levels of 15 and 25, and its SSIM values ranked first or second at all noise intensities, indicating that the proposed method restored the image structure better. In summary, the proposed method achieved better results on both objective metrics, indicating better denoising performance.
To show the efficiency of the proposed algorithm more clearly, one image was randomly selected from each of the two test datasets for subjective evaluation of the denoising effect. The subjective results of the different algorithms are shown in Figure 8 and Figure 9, with magnified details of local regions shown in the bottom right corner of each image. The edges of the images denoised by most of the comparison algorithms, such as BM3D, WNNM, and MLP, were relatively blurred. The EPLL method not only lost many details but also left a small amount of residual noise, and its result contained artifacts. The visual denoising effect of the proposed algorithm was comparable to that of DnCNN and FFDNet, while it restored the texture details of the original image more faithfully; this contrast was more obvious for images more seriously polluted by noise.

3.3.2. Color Image Denoising

In this paper, Set5 and CBSD68 are used as color image datasets to verify the effectiveness of the proposed algorithm for color image denoising. The comparison algorithms were CBM3D [12], DnCNN-C [26], and FFDNet [29]. The experimental results are shown in Table 10; the proposed algorithm obtained better PSNR values than the other algorithms.
One image was selected from each of the Set5 and CBSD68 datasets for experiments, and the denoising results of the proposed algorithm and the other algorithms are presented. As can be seen from Figure 10 and Figure 11, the results of the proposed algorithm were clearer and preserved the image details better.

3.3.3. Real Image Denoising

For real image denoising, two real noisy image datasets, SIDD [43] and DND [44], were tested using CBM3D, DnCNN, FFDNet, CBDNet [45], and the algorithm in this paper. The average PSNR and SSIM values of different denoising methods are shown in Table 11. It can be seen that the average PSNR and SSIM of this paper’s algorithm were the best, showing better denoising performance.
An image was selected from the SIDD dataset for denoising experiments, and the results of the proposed algorithm and the other algorithms are shown. As can be seen in Figure 12, the proposed algorithm preserved the detailed features of the denoised image well compared with CBM3D, DnCNN, FFDNet, and CBDNet.

3.4. Ablation Experiments

To test the effectiveness of the upper and lower branch networks for Gaussian image denoising, each branch was applied separately to the Set12 grayscale dataset, and the resulting average PSNR trends (σ = 25) are shown in Figure 13. Although the separate upper-branch and lower-branch networks performed well, the denoising performance of the dual-branch network was greatly improved, proving that the two structures complement each other to extract image features better and achieve better denoising results.
The proposed network, DnCNN, and FFDNet were applied to Gaussian image denoising on Set12, and the average PSNR values (σ = 25) are shown in Figure 14. The highest PSNR of the proposed model was substantially higher than that of DnCNN and FFDNet, and its convergence was also much faster, demonstrating the advantage of the proposed network in Gaussian image denoising.
To verify the effectiveness of each module in the network, ablation experiments were conducted with Set12 as the test set. (1) The network without the nonparametric attention module (NAM) and the multiscale feature fusion module (MFM) served as the baseline network (BL). (2) Adding NAM to BL gives the baseline with NAM (BL + NAM). (3) The channel attention mechanism was introduced into the baseline network (BL + SE) for comparison with BL + NAM. (4) To verify the effectiveness of the multiscale feature fusion module, ordinary direct feature fusion (BL + NAM + Direct) and multiscale feature fusion (BL + NAM + MFM) were compared. The direct fusion method is shown in Figure 15, and the multiscale fusion method in Figure 6.
As can be seen from Table 12 and Table 13, the PSNR and SSIM of the baseline network improved after introducing NAM and MFM, indicating that both modules effectively improve the denoising performance of the network. The BL + NAM + MFM network achieved higher PSNR and SSIM than BL + NAM + Direct, confirming the superiority of the MFM module, and BL + NAM performed better than BL + SE. As the noise intensity increased, the contribution of each module to the baseline network grew, showing that the proposed network removes strong noise better.

4. Conclusions

With the improvement of computing power, deep learning has achieved many results in computer vision. To address the blurred edges and unclear textures produced by previous deep learning-based image denoising algorithms, a dual-branch image denoising network based on nonparametric attention and multiscale feature fusion was proposed. The method uses a two-branch structure for feature extraction to compensate for the shortcomings of single-branch networks. Meanwhile, nonparametric attention was introduced into the feature extraction module to focus on important features spatially and channel-wise and to learn and extract key information effectively, and a new multiscale feature fusion approach was proposed to better fuse local and global features. Objective and subjective evaluations of the experimental results show that the method improves both the metrics and the visual quality, yielding clearer denoised images that retain more edge detail. In the future, denoising networks for hyperspectral images should be studied, to further optimize the network structure.

Author Contributions

Conceptualization, J.M. and L.S.; methodology, J.M. and J.C.; writing—original draft preparation, J.M.; writing—review and editing, J.M., J.C. and S.Y.; funding acquisition, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant #: 12174004); and the Science and Technology Plan Project of Ankang City (grant #: AK202-GY03-2).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, X.; Hu, Y.; Jie, Y.; Zhao, C.; Zhang, Z. Dual-Frequency Lidar for Compressed Sensing 3D Imaging Based on All-Phase Fast Fourier Transform. J. Opt. Photonics Res. 2024, 1, 74–81. [Google Scholar] [CrossRef]
  2. Hu, K.; Chen, Z.; Kang, H.; Tang, Y. 3D vision technologies for a self-developed structural external crack damage recognition robot. Autom. Constr. 2024, 159, 105262. [Google Scholar] [CrossRef]
  3. Lindenbaum, M.; Fischer, M.; Bruckstein, A. On Gabor’s contribution to image enhancement. Pattern Recognit. 1994, 27, 1–8. [Google Scholar] [CrossRef]
  4. Guan, X.; Zhao, L.; Tang, Y. Mixed filter for image denoising. J. Image Graph. 2005, 10, 332–337. [Google Scholar]
  5. Chaudhury, K.N.; Sage, D.; Unser, M. Fast O(1) bilateral filtering using trigonometric range kernels. IEEE Trans. Image Process. 2011, 20, 3376–3382. [Google Scholar] [CrossRef] [PubMed]
  6. Gavaskar, R.G.; Chaudhury, K.N. Fast adaptive bilateral filtering. IEEE Trans. Image Process. 2018, 28, 779–790. [Google Scholar] [CrossRef] [PubMed]
  7. Gallagher, N.; Wise, G. A theoretical analysis of the properties of median filters. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1136–1141. [Google Scholar] [CrossRef]
  8. Li, Y.J.; Su, H.Q.; Yang, F.; Fan, G.L.; Lin, P. Improved algorithm study about removing image noise. Comput. Eng. Des. 2009, 30, 2995–2997. [Google Scholar]
  9. Vignesh, R.; Oh, B.T.; Kuo, C.C.J. Fast non-local means (NLM) computation with probabilistic early termination. IEEE Signal Process. Lett. 2009, 17, 277–280. [Google Scholar] [CrossRef]
  10. Qi, D.M. Research and application of improved non-local mean filtering algorithm based on improved non-local mean filtering algorithm in medical image processing. Comput. Appl. Softw. 2021, 38, 256–261. [Google Scholar]
  11. Guo, X.; Xu, L.; Huo, J.; Cheng, C. Research on image denoising algorithm based on non-local self-similarity. Comput. Simul. 2022, 39, 364–369. [Google Scholar]
  12. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef]
  13. Gu, S.; Zhang, L.; Zuo, W. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 2862–2869. [Google Scholar]
  14. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  15. Wei, S.; Tong, C. Improvement of maximum uniform smoothing method based on image denoising. Comput. Technol. Dev. 2020, 30, 100–103. [Google Scholar]
  16. Peng, T.; Jiang, J.; Ou, Y. Research on image adaptive fusion under improved sparse denoising algorithm. Comput. Simul. 2023, 40, 224–227. [Google Scholar]
  17. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486. [Google Scholar]
  18. Wang, Q.L.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 16–18 June 2020; pp. 11534–11542. [Google Scholar]
  19. Chen, Z.; Yang, X. Twin network based multilevel attention feature fusion target tracking algorithm. Comput. Technol. Dev. 2021, 31, 58–63. [Google Scholar]
  20. Yang, Y.; Wang, A.; He, L. Research progress on image restoration based on generative adversarial networks. Comput. Technol. Dev. 2022, 32, 75–81. [Google Scholar]
  21. Hendriksen, A.A.; Pelt, D.M.; Batenburg, K.J. Noise2inverse: Self-supervised deep convolutional denoising for tomography. IEEE Trans. Comput. Imaging 2020, 6, 1320–1335. [Google Scholar] [CrossRef]
  22. Liu, Y.; Qin, Z.; Anwar, S.; Ji, P.; Kim, D.; Caldwell, S.; Gedeon, T. Invertible denoising network: A light solution for real noise removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 19–25 June 2021; pp. 13365–13374. [Google Scholar]
  23. Schmidt, U.; Roth, S. Shrinkage fields for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 2774–2781. [Google Scholar]
  24. Chen, Y.; Pock, T. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1256–1272. [Google Scholar] [CrossRef] [PubMed]
  25. Burger, H.C.; Schuler, C.J.; Harmeling, S. Image denoising: Can plain neural networks compete with BM3D? In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2392–2399. [Google Scholar]
  26. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [PubMed]
  27. Ioffe, S. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26–30 June 2016; pp. 770–778. [Google Scholar]
  29. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef] [PubMed]
  30. Tian, C.; Xu, Y.; Fei, L.; Wang, J.; Wen, J.; Luo, N. Enhanced CNN for image denoising. CAAI Trans. Intell. Technol. 2019, 4, 17–23. [Google Scholar] [CrossRef]
  31. Zhang, M.; Lü, X.; Wu, L.; Yu, D. Multiplicative denoising method based on deep residual learning. Laser Optoelectron. Prog. 2018, 55, 031004. [Google Scholar] [CrossRef]
  32. Zhang, K.; Li, Y.; Zuo, W.; Zhang, L.; Van Gool, L.; Timofte, R. Plug and play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6360–6376. [Google Scholar] [CrossRef] [PubMed]
  33. Zhao, M.; Cao, G.; Huang, X.; Yang, L. Hybrid transformer-CNN for real image denoising. IEEE Signal Process. Lett. 2022, 29, 1252–1256. [Google Scholar] [CrossRef]
  34. Zhang, Q.; Xiao, J.; Tian, C.; Lin, J.C.; Zhang, S. A robust deformed convolutional neural network (CNN) for image denoising. CAAI Trans. Intell. Technol. 2023, 8, 331–342. [Google Scholar] [CrossRef]
  35. Tian, C.; Xu, Y.; Zuo, W.; Du, B.; Lin, C.-W.; Zhang, D. Designing and training of a dual CNN for image denoising. Knowl.-Based Syst. 2021, 226, 106949. [Google Scholar] [CrossRef]
  36. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
  37. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  38. Yang, L.; Zhang, R.-Y.; Li, L.; Xie, X. SimAM: A simple, parameter-free attention module for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 11863–11874. [Google Scholar]
  39. Tian, C.; Xu, Y.; Zuo, W. Image denoising using deep CNN with batch renormalization. Neural Netw. 2020, 121, 461–473. [Google Scholar] [CrossRef]
  40. Cai, Z.; Zhang, Y.; Manzi, M.; Öztireli, A.C.; Gross, M.H.; Aydin, T.O. Robust Image Denoising using Kernel Predicting Networks. Eurographics 2021, 1, 37–40. [Google Scholar]
  41. Garncarek, Ł.; Powalski, R.; Stanisławek, T. LAMBERT: Layout-aware language modeling for information extraction. In Proceedings of the International Conference on Document Analysis and Recognition, Lausanne, Switzerland, 5–10 September 2021; pp. 532–547. [Google Scholar]
  42. Setiadi, D.R.I.M. PSNR vs. SSIM: Imperceptibility quality assessment for image steganography. Multimed. Tools Appl. 2021, 80, 8423–8444. [Google Scholar] [CrossRef]
  43. Abdelhamed, A.; Lin, S.; Brown, M.S. A High-Quality Denoising Dataset for Smartphone Cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1692–1700. [Google Scholar]
  44. Plotz, T.; Roth, S. Benchmarking denoising algorithms with real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1586–1595. [Google Scholar]
  45. Guo, S.; Yan, Z.; Zhang, K. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 1712–1722. [Google Scholar]
Figure 1. The framework of the NAMFPDNet.
Figure 2. Structure of the PFEM.
Figure 3. TCB structure.
Figure 4. DCRB structure.
Figure 5. An implementation of simamAtten.
Figure 6. Multiscale feature fusion module.
Figure 7. Results of the average PSNR of the proposed method minus the average PSNR (dB) of the other algorithms (on Set12 and BSD68). (a) Set12; (b) BSD68.
Figure 8. Lena image denoising results (σ = 25). (a) Original image; (b) noisy image/20.26 dB; (c) BM3D/32.07 dB; (d) WNNM/32.24 dB; (e) MLP/32.25 dB; (f) EPLL/31.73 dB; (g) DnCNN-S/32.44 dB; (h) DnCNN-B/32.42 dB; (i) FFDNet/32.57 dB; (j) ours/32.74 dB.
Figure 9. Building image denoising results (σ = 50). (a) Original image; (b) noisy image/14.78 dB; (c) BM3D/26.21 dB; (d) WNNM/26.51 dB; (e) EPLL/26.36 dB; (f) MLP/26.54 dB; (g) DnCNN-S/26.89 dB; (h) DnCNN-B/26.92 dB; (i) FFDNet/26.93 dB; (j) ours/26.97 dB.
Figure 10. Denoising effect of different algorithms on an image from Set5 (σ = 25). (a) Original image; (b) noisy image/20.8186 dB; (c) CBM3D/33.1082 dB; (d) DnCNN-C/33.1754 dB; (e) FFDNet/33.3205 dB; (f) ours/33.4017 dB.
Figure 11. Denoising effect of different algorithms on an image from CBSD68 (σ = 50). (a) Original image; (b) noisy image/14.6714 dB; (c) CBM3D/26.4053 dB; (d) DnCNN-C/27.3557 dB; (e) FFDNet/27.2981 dB; (f) ours/27.3931 dB.
Figure 12. SIDD image denoising results with different algorithms. (a) Noisy image; (b) original image; (c) BM3D (35.64 dB); (d) DnCNN-C (40.67 dB); (e) FFDNet (40.96 dB); (f) CBDNet (38.75 dB); (g) ours (41.36 dB).
Figure 13. Average PSNR of each sub-network and of the dual-branch network.
Figure 14. Average PSNR of the proposed network, DnCNN, and FFDNet.
Figure 15. Direct feature fusion method.
Table 1. TCB parameter settings.

| Network Layer | Kernel Size | Stride | Padding | Output Channels |
|---------------|-------------|--------|---------|-----------------|
| Conv3×3       | 3 × 3       | 1      | 0       | 64              |
| Conv1×1       | 1 × 1       | 1      | 0       | 64              |
Table 2. DCRB parameter settings.

| Network Layer      | Kernel Size | Stride | Dilation Factor | Output Channels |
|--------------------|-------------|--------|-----------------|-----------------|
| The first Conv3×3  | 3 × 3       | 1      | 1               | 64              |
| The second Conv3×3 | 3 × 3       | 1      | 2               | 64              |
| The third Conv3×3  | 3 × 3       | 1      | 3               | 64              |
Table 3. Comparison of parameters of different attention mechanisms.

| Attention Model | Parameters   | Remarks                                                    |
|-----------------|--------------|------------------------------------------------------------|
| SE              | 2C²/r        | C: number of current feature channels; r: reduction ratio  |
| CBAM            | 2C²/r + 2k²  | k: number of convolutional layer filters                   |
| NAM             | 0            | —                                                          |
Table 4. Average PSNR (dB) values of different methods on Set12 and BSD68.

| Dataset | σ  | BM3D  | WNNM  | EPLL  | DnCNN-S | DnCNN-B | FFDNet | Ours      |
|---------|----|-------|-------|-------|---------|---------|--------|-----------|
| Set12   | 15 | 32.37 | 32.70 | 32.09 | 32.86   | 32.68   | 32.77  | **32.97** |
| Set12   | 25 | 29.97 | 30.26 | 29.62 | 30.39   | 30.36   | 30.44  | **30.58** |
| Set12   | 50 | 26.72 | 27.05 | 26.34 | 27.18   | 27.20   | 27.33  | **27.55** |
| BSD68   | 15 | 31.07 | 31.37 | 31.21 | 31.73   | 31.61   | 31.63  | **31.76** |
| BSD68   | 25 | 28.57 | 28.83 | 28.68 | 29.23   | 29.16   | 29.19  | **29.34** |
| BSD68   | 50 | 25.62 | 25.87 | 25.67 | 26.23   | 26.23   | 26.29  | **26.37** |

Bold indicates the best result.
Table 5. Average SSIM values of different methods on Set12 and BSD68.

| Dataset | σ  | WNNM       | BM3D   | DnCNN-S | DnCNN-B | FFDNet | Ours       |
|---------|----|------------|--------|---------|---------|--------|------------|
| Set12   | 15 | 0.8869     | 0.8989 | 0.9027  | 0.9001  | 0.9018 | **0.9057** |
| Set12   | 25 | 0.8323     | 0.8553 | 0.8618  | 0.8602  | 0.8628 | **0.8668** |
| Set12   | 50 | 0.7282     | 0.7679 | 0.7827  | 0.7828  | 0.7902 | **0.7953** |
| BSD68   | 15 | **0.9094** | 0.8741 | 0.8906  | 0.8866  | 0.8901 | 0.8933     |
| BSD68   | 25 | 0.8157     | 0.8025 | 0.8278  | 0.8242  | 0.8289 | **0.8341** |
| BSD68   | 50 | 0.7029     | 0.6744 | 0.7189  | 0.7164  | 0.7242 | **0.7301** |

Bold indicates the best result.
Table 6. PSNR (dB) values for different denoising methods on Set12 (σ = 15).

| Method  | C.man     | Pepper    | Monarch   | Airplane  | Parrot    | Lena      | Barbara   | Boat      |
|---------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| BM3D    | 31.91     | 32.69     | 31.85     | 31.07     | 31.37     | 34.26     | 33.10     | 32.13     |
| WNNM    | 32.17     | 32.97     | 32.71     | 31.39     | 31.62     | 34.27     | **33.60** | 32.27     |
| EPLL    | 31.85     | 32.64     | 32.10     | 31.19     | 31.42     | 33.92     | 31.38     | 31.93     |
| DnCNN-S | **32.61** | 33.30     | 33.09     | 31.70     | 31.83     | 34.62     | 32.64     | 32.42     |
| DnCNN-B | 32.10     | 33.15     | 32.94     | 31.56     | 31.63     | 34.56     | 32.09     | 32.35     |
| FFDNet  | 32.43     | 33.25     | 32.66     | 31.57     | 31.81     | 34.62     | 32.54     | 32.38     |
| Ours    | 32.57     | **33.42** | **33.27** | **31.92** | **31.96** | **34.72** | 32.89     | **33.31** |

Bold indicates the best result.
Table 7. PSNR (dB) values for different denoising methods on Set12 (σ = 25).

| Method  | C.man     | Pepper    | Monarch   | Airplane  | Parrot    | Lena      | Barbara   | Boat      |
|---------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| BM3D    | 29.45     | 30.16     | 29.25     | 28.42     | 28.93     | 32.07     | 30.71     | 29.90     |
| WNNM    | 29.64     | 30.42     | 29.84     | 28.69     | 29.15     | 32.24     | **31.24** | 30.03     |
| MLP     | 29.61     | 30.30     | 29.61     | 28.82     | 29.25     | 32.25     | 29.54     | 29.97     |
| EPLL    | 29.26     | 30.17     | 29.39     | 28.61     | 28.95     | 31.73     | 28.61     | 29.74     |
| DnCNN-S | 30.18     | 30.87     | **30.28** | 29.13     | 29.43     | 32.44     | 30.00     | 30.18     |
| DnCNN-B | 29.94     | 30.84     | 30.25     | 29.09     | 29.35     | 32.42     | 29.69     | 30.20     |
| FFDNet  | 30.10     | 30.93     | 30.08     | 29.04     | 29.44     | 32.57     | 30.01     | 30.25     |
| Ours    | **30.37** | **31.13** | 30.25     | **29.29** | **29.59** | **32.74** | 30.81     | **30.45** |

Bold indicates the best result.
Table 8. SSIM values for different denoising methods on Set12 (σ = 15).

| Method  | C.man      | Pepper     | Monarch    | Airplane   | Parrot     | Lena       | Barbara    | Boat       |
|---------|------------|------------|------------|------------|------------|------------|------------|------------|
| BM3D    | 0.8968     | 0.9087     | 0.9348     | 0.9020     | 0.8966     | 0.8951     | 0.9217     | 0.8530     |
| WNNM    | 0.9002     | 0.9115     | 0.9406     | 0.9049     | 0.8993     | 0.8969     | **0.9260** | 0.8551     |
| EPLL    | 0.9047     | 0.9084     | 0.9362     | 0.9065     | 0.9042     | 0.8894     | 0.9034     | 0.8511     |
| DnCNN-S | 0.9131     | 0.9120     | 0.9501     | 0.9077     | 0.9049     | 0.9003     | 0.9200     | 0.8612     |
| DnCNN-B | 0.9029     | 0.9130     | 0.9462     | 0.9056     | 0.8995     | 0.8999     | 0.9165     | 0.8586     |
| FFDNet  | 0.9118     | 0.9112     | 0.9491     | 0.9074     | 0.9045     | 0.9012     | 0.9198     | 0.8608     |
| Ours    | **0.9161** | **0.9148** | **0.9524** | **0.9120** | **0.9159** | **0.9038** | 0.9228     | **0.8695** |

Bold indicates the best result.
Table 9. SSIM values for different denoising methods on Set12 (σ = 25).

| Method  | C.man      | Pepper     | Monarch    | Airplane   | Parrot     | Lena       | Barbara    | Boat       |
|---------|------------|------------|------------|------------|------------|------------|------------|------------|
| BM3D    | 0.8474     | 0.8696     | 0.8979     | 0.8589     | 0.8528     | 0.8606     | 0.8856     | 0.7998     |
| WNNM    | 0.8542     | 0.8746     | 0.9067     | 0.8650     | 0.8553     | 0.8651     | **0.8955** | 0.8014     |
| MLP     | 0.8593     | 0.8756     | 0.9027     | 0.8682     | 0.8599     | 0.8655     | 0.8616     | 0.8030     |
| EPLL    | 0.8460     | 0.8683     | 0.8984     | 0.8636     | 0.8556     | 0.8504     | 0.8427     | 0.7964     |
| DnCNN-S | 0.8407     | 0.8750     | 0.9165     | 0.8540     | 0.8349     | 0.8550     | 0.8540     | 0.7763     |
| DnCNN-B | 0.8645     | 0.8816     | 0.9157     | **0.8704** | 0.8576     | 0.8690     | 0.8753     | 0.8102     |
| FFDNet  | 0.8755     | 0.8794     | **0.9205** | 0.8700     | **0.8624** | 0.8736     | 0.8802     | 0.8124     |
| Ours    | **0.8758** | **0.8862** | 0.9201     | 0.8702     | 0.8617     | **0.8746** | 0.8834     | **0.8179** |

Bold indicates the best result.
Table 10. Average PSNR (dB) of different algorithms on Set5 and CBSD68.

| Dataset | σ  | CBM3D | DnCNN-C | FFDNet | Ours      |
|---------|----|-------|---------|--------|-----------|
| Set5    | 15 | 33.41 | 34.04   | 34.30  | **34.42** |
| Set5    | 25 | 30.92 | 31.91   | 32.10  | **32.37** |
| Set5    | 50 | 28.16 | 28.96   | 29.25  | **29.49** |
| CBSD68  | 15 | 33.50 | 33.88   | 33.87  | **34.12** |
| CBSD68  | 25 | 30.69 | 31.23   | 31.21  | **31.43** |
| CBSD68  | 50 | 27.36 | 27.92   | 27.95  | **28.14** |

Bold indicates the best result.
Table 11. Average PSNR and SSIM of different methods on two real noisy image sets.

| Method  | SIDD PSNR (dB) | SIDD SSIM | DND PSNR (dB) | DND SSIM  |
|---------|----------------|-----------|---------------|-----------|
| CBM3D   | 25.65          | 0.685     | 34.51         | 0.851     |
| DnCNN-C | 35.59          | 0.861     | 37.90         | 0.943     |
| FFDNet  | 38.27          | 0.948     | 37.61         | 0.942     |
| CBDNet  | 33.28          | 0.868     | 38.06         | 0.942     |
| Ours    | **38.57**      | **0.951** | **38.69**     | **0.947** |

Bold indicates the best result.
Table 12. Comparison of ablation experiment results (σ = 25).

| Network | BL     | BL + NAM | BL + SE | BL + NAM + Direct | BL + NAM + MFM |
|---------|--------|----------|---------|-------------------|----------------|
| PSNR    | 30.45  | 30.52    | 30.49   | 30.54             | **30.58**      |
| SSIM    | 0.8641 | 0.8652   | 0.8646  | 0.8654            | **0.8668**     |

Bold indicates the best result.
Table 13. Comparison of ablation experiment results (σ = 50).

| Network | BL     | BL + NAM | BL + SE | BL + NAM + Direct | BL + NAM + MFM |
|---------|--------|----------|---------|-------------------|----------------|
| PSNR    | 27.35  | 27.46    | 27.41   | 27.48             | **27.55**      |
| SSIM    | 0.7911 | 0.7931   | 0.7919  | 0.7942            | **0.7953**     |

Bold indicates the best result.