Variable Step Normalized Least Mean Square Guided by Composite Desired Signal for Few-View Computed Tomography Denoising

Abstract

Background: Few-view reconstruction techniques allow low-dose CT to provide essential diagnostic information while minimizing radiation exposure. However, these techniques often introduce noise and artifacts that affect diagnostic accuracy. Although $L_0$-smoothing regularization methods partially address these issues, their fixed sparsity constraint cannot adapt to the complex characteristics of CT images, and they remain highly sensitive to regularization parameter selection. Objective: To propose a novel CT image denoising method named Variable Step Normalized Least Mean Square $L_0$-smoothing (VSNLMS-$L_0$) that achieves an optimal balance between noise reduction and structural preservation while reducing sensitivity to regularization parameter selection. Methods: The VSNLMS-$L_0$ method employs an adaptive framework that dynamically responds to local image characteristics. A variable step-size strategy enables precise calibration of processing intensity across regions with varying noise levels and detail complexity, and a composite desired signal combines the filtered back projection (FBP) reconstruction result with its $L_0$-smoothing denoised version. Conclusions: This approach offers an effective solution for enhancing low-dose CT image quality and improving diagnostic reliability.


1. Introduction

In the field of medical imaging, CT images are extensively utilized for clinical diagnosis and research. However, these images are inevitably affected by various types of noise during the acquisition process, which may stem from the physical limitations of imaging equipment, patient movement, few-view reconstruction, and other factors [1]. Moreover, in few-view sampling, CT images frequently face challenges such as stripe artifacts [2] and loss of critical features. Many approaches have been proposed to improve the quality of CT images. These techniques aim to reduce the artifacts, noise, or both present in CT images, and they can be roughly divided into two categories: sinogram domain reconstruction and image domain post-processing. Sinogram domain methods concentrate on processing the original projection data. These methods either apply filters [3] [4] to smooth the sinogram or utilize iterative reconstruction techniques guided by priors. By introducing prior information during the iterative optimization process, the reconstruction quality can be improved, noise and artifacts can be reduced, and image details can be enhanced. Common types include regularization methods [5]-[7], priors based on non-local information [8] [9], priors guided by deep learning [10], and physics-guided priors [11].

Image post-processing refers to a series of operations performed after image acquisition or preliminary processing, aiming to improve image quality, extract useful information, or achieve specific goals. Among them, denoising is a common post-processing task. Different algorithms, based on their respective mathematical models and principles, exhibit varying advantages and effects in different scenarios. Algorithms such as the Wavelet Transform denoising algorithm [12], the Total Variation (TV) denoising algorithm [13], Block-Matching and 3D Filtering (BM3D) [14], and the $L_0$-smoothing algorithm [15] all fall within the category of image post-processing techniques. The Wavelet Transform denoising algorithm effectively removes noise by performing wavelet decomposition on the image, dividing it into sub-bands of different frequencies, and conducting threshold processing on the wavelet coefficients in the high-frequency sub-bands. The image is then reconstructed through the inverse wavelet transform, restoring details while improving image quality. While the wavelet transform focuses on frequency-domain denoising, the TV denoising algorithm introduces a spatial-domain method that minimizes the total variation of the image, i.e., the sum of differences between adjacent pixels. This approach effectively suppresses noise while preserving sharp edges, although it may lead to some smoothing of fine textures. Building upon the need for better texture preservation, the BM3D algorithm refines the process by dividing the image into small two-dimensional blocks, grouping similar blocks in a three-dimensional space, and performing joint filtering. This technique excels in maintaining image details and textures while significantly reducing noise, making it a highly effective post-processing method. The $L_0$-smoothing algorithm further enhances edge preservation: by minimizing the $L_0$ norm of the image gradient, it effectively removes noise while retaining structural and edge information as much as possible. These algorithms aim to enhance image quality and visual effects, and provide a better foundation for subsequent image analysis and processing tasks. Recently, deep learning (DL) techniques, particularly convolutional neural networks (CNNs), have significantly enhanced image quality in CT reconstruction and post-processing applications. They are widely applied in different medical imaging tasks, including CT reconstruction [10] [16]-[19], image denoising [20] [21], and PET reconstruction and calibration [22] [23]. These methods have demonstrated remarkable performance, surpassing traditional algorithms by learning complex noise patterns and structural features directly from data. Unlike conventional techniques that rely on predefined mathematical models, DL-based approaches leverage large datasets to train models capable of adaptive and context-aware noise reduction.

The Normalized Least Mean Square (NLMS) algorithm is a widely used adaptive filtering technique that improves upon the Least Mean Square (LMS) algorithm [24] by normalizing the step size. This normalization enhances the stability and convergence speed of the algorithm, making it more effective in practical applications. However, NLMS relies on a fixed step size, which results in an inherent trade-off between convergence speed and steady-state error, limiting its performance in non-stationary environments [25]. To overcome this limitation, this paper investigates the Variable Step-Size Normalized Least Mean Square (VSNLMS) algorithm, an extension of NLMS that adaptively adjusts the step size based on the variance of the filtering region. This adaptive approach optimizes the convergence behavior by automatically selecting larger step sizes when rapid adaptation is needed and smaller step sizes when fine-tuning is required, thereby achieving both faster convergence and smaller steady-state error in varying imaging conditions.

In the field of CT image denoising, traditional methods such as the $L_0$-smoothing algorithm present significant limitations despite their effectiveness in preserving edges and structures. These algorithms employ a fixed sparsity constraint through the $L_0$ norm, which fails to adapt to the complex and diverse characteristics of CT images. They often struggle with excessive smoothing in texture-rich regions and may introduce artifacts in complex patterns. A critical challenge lies in selecting appropriate regularization parameters: parameters set too high lead to excessive smoothing and loss of essential structural details, while parameters set too low result in inadequate noise suppression, leaving residual artifacts that compromise diagnostic accuracy. Consequently, achieving an optimal balance between noise reduction and detail preservation remains elusive with traditional approaches. To address these fundamental limitations, we propose the VSNLMS-$L_0$ algorithm, an innovative approach that leverages the variable step-size normalized least mean square algorithm. Unlike deep learning methods requiring extensive training datasets, VSNLMS operates with a single desired signal as reference, making the selection of this desired signal critically important. In CT reconstruction, particularly few-view CT, which inherently suffers from information loss, conventional methods either produce noisy results or over-smooth important structural details. Our VSNLMS-$L_0$ algorithm overcomes these challenges by employing a composite desired signal constructed from two complementary components: the original FBP-reconstructed image (which preserves structural details but contains noise) and its $L_0$-smoothing denoised version (which reduces noise but potentially sacrifices fine details). This strategic combination creates a reference target that retains high-frequency structural information that would be lost when using $L_0$-smoothing with fixed regularization parameters alone. By using this composite signal, the VSNLMS-$L_0$ algorithm adaptively optimizes filter coefficients to balance noise suppression and detail preservation, then applies these optimized coefficients to the original FBP reconstruction. The result is an enhanced image that demonstrates both improved noise reduction and superior preservation of diagnostically important fine structures, effectively addressing the limitations of traditional denoising approaches.

The flowchart of the VSNLMS-$L_0$ algorithm is shown in Figure 1.

Figure 1. Framework of the VSNLMS-$L_0$ algorithm applied to image $f$.

Our approach has three novelties:

1) Variable step-size strategy: VSNLMS-$L_0$, as an extension of NLMS, introduces a core innovation in its ability to adaptively adjust the step size based on the variance of the filtering region. This step-size adjustment mechanism, which responds to local image characteristics, enables the algorithm to precisely calibrate processing intensity according to varying noise levels and detail complexity across different regions.

2) Composite desired signal construction: The VSNLMS-$L_0$ algorithm features a composite desired signal constructed from two complementary components: the original FBP-reconstructed image (which preserves structural details but contains noise) and its $L_0$-smoothing denoised version (which reduces noise but potentially loses fine details). This composite signal provides a more comprehensive reference baseline, enabling VSNLMS to effectively suppress noise while preserving critical structures, making it particularly suitable for processing complex details and varying noise levels in few-view CT images.

3) Addressing regularization parameter selection challenges in CT image processing: The VSNLMS-$L_0$ algorithm has been successfully applied to CT image denoising, effectively overcoming key challenges in regularization parameter selection inherent in traditional methods. By combining the adaptive characteristics of VSNLMS with guidance from the composite desired signal, the algorithm exhibits reduced sensitivity to regularization parameter selection, avoiding both the over-smoothing and structural detail loss caused by excessively high parameter settings and the insufficient noise suppression caused by parameters set too low.

The outline of this paper is as follows. A review of the NLMS and $L_0$-smoothing algorithms is given in Section 2. In Section 3, we propose an innovative denoising algorithm based on the VSNLMS-$L_0$ framework. In Section 4, the effectiveness of the proposed method is verified through simulation and real data experiments. Finally, we summarize the entire work in Section 5.

2. Related Work

2.1. NLMS Algorithm

The NLMS algorithm is an adaptive filtering technique that improves upon the LMS algorithm by normalizing the step size with the energy of the input signal [24]. This normalization enhances the stability and convergence speed of the algorithm, mitigating the trade-off between convergence rate and steady-state error typically observed in LMS. The NLMS algorithm updates the filter coefficients based on the error signal and dynamically adjusts the step size, making it more robust to variations in signal power [25].

The NLMS algorithm is defined by two main equations, which give the error signal and the filter update, respectively:

$e_k = d_k - \mathbf{w}_{k-1}^{T}\mathbf{f}_k$ (1)

$\mathbf{w}_k = \mathbf{w}_{k-1} + \dfrac{\mu\, e_k\, \mathbf{f}_k}{\varepsilon + \mathbf{f}_k^{T}\mathbf{f}_k}$ (2)

In Equation (1), $e_k$ is the a priori error signal at the discrete-time index $k$, and $d_k$ is the desired (or reference) signal; how the desired signal is obtained depends on the specific application scenario and mission goals. The input signal vector is $\mathbf{f}_k = [f_k, f_{k-1}, \ldots, f_{k-M+1}]^{T}$, where $f_k$ is the input signal at time $k$ and $M$ is the filter length. In Equation (2), $\mathbf{w}_k$ is the adaptive filter (of length $M$) at the discrete-time index $k$, $\mu$ is the step size, and $\varepsilon > 0$ is a small regularization parameter to avoid division by zero. The filter coefficients $\mathbf{w}_k$ are updated continuously until the maximum number of iterations is reached or the error $e_k$ is minimized.
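To make the update rule concrete, the following is a minimal NLMS sketch in NumPy. The toy system-identification setup (the signal x and the "unknown" filter h_true) and all parameter defaults are illustrative assumptions, not part of any specific application.

```python
# Minimal NLMS sketch following Equations (1)-(2).
import numpy as np

def nlms(x, d, M=8, mu=0.5, eps=1e-8):
    """Adapt an M-tap filter w so that w^T f_k tracks the desired signal d."""
    N = len(x)
    w = np.zeros(M)
    e = np.zeros(N)
    for k in range(M - 1, N):
        f_k = x[k - M + 1:k + 1][::-1]                # [x_k, x_{k-1}, ..., x_{k-M+1}]
        e[k] = d[k] - w @ f_k                         # a priori error, Equation (1)
        w = w + mu * e[k] * f_k / (eps + f_k @ f_k)   # normalized update, Equation (2)
    return w, e

# Toy usage: identify an unknown 3-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h_true = np.array([0.6, -0.3, 0.1])                   # assumed "unknown" system
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, _ = nlms(x, d, M=3)
print(np.round(w_hat, 3))                             # converges toward h_true
```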

In the field of adaptive filtering, the NLMS algorithm is widely used in noise cancellation, echo cancellation and other scenarios due to its simplicity and stability. However, the NLMS algorithm uses a fixed step factor, which cannot adapt to the non-stationary characteristics of the input signal, thus affecting the convergence speed and steady-state error performance of the algorithm.

2.2. $L_0$-Smoothing Algorithm

$L_0$-smoothing is an image smoothing algorithm that aims to eliminate low-amplitude details by minimizing the $L_0$ measure of the image gradient, while simultaneously preserving and enhancing significant image edges [15]. The objective of this algorithm is to identify a smooth image $S$ that eliminates unimportant details while preserving the primary structure of the original image $f$ as much as possible. The fundamental concept involves minimizing the $L_0$ gradient of the image and restricting the number of non-zero gradients, thereby ensuring that the smoothing result retains the main edges while discarding insignificant features.

This algorithm aims to achieve global image smoothing by optimizing the number of non-zero gradients in the image while preserving significant edges. The optimization objective is as follows [15]:

$\min_{S} \left\{ \sum_{p} \left( S_p - f_p \right)^2 + \lambda\, C(S) \right\}$ (3)

where $S$ is the smoothed (output) image; $f$ is the input image; $\lambda$ is a parameter that controls the degree of smoothing (the larger its value, the stronger the smoothing effect and the fewer details retained); $p$ is the pixel position index; and $C(S)$ is the counting function of non-zero gradients, i.e., the number of non-zero gradients in the image:

$C(S) = \#\left\{ p \;\middle|\; |\partial_x S_p| + |\partial_y S_p| \neq 0 \right\}$ (4)

where $\#$ denotes the count of pixels with non-zero gradient, and $\partial_x S_p$ and $\partial_y S_p$ are the gradients of the image in the $x$ and $y$ directions, respectively.

The problem is then solved with an alternating optimization strategy based on half-quadratic splitting: auxiliary variables are introduced to decouple the terms of the objective, and the subproblems are updated iteratively [15] until the smoothed image $S$ is produced.
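For reference, the following is a compact sketch of this half-quadratic splitting scheme for a grayscale image, following the FFT-based solver of Xu et al. [15]. The default values of lam, kappa, and beta_max are illustrative assumptions, not the values tuned in this paper.

```python
# Half-quadratic splitting solver for Equation (3), FFT-based S-subproblem.
import numpy as np

def l0_smoothing(f, lam=0.01, kappa=2.0, beta_max=1e5):
    """L0 gradient minimization for a 2D grayscale float image f."""
    S = f.astype(np.float64)
    H, W = S.shape
    F_f = np.fft.fft2(S)
    # Transfer functions of the difference operators (periodic boundaries).
    otf_x = np.fft.fft2(np.array([[1.0, -1.0]]), s=(H, W))
    otf_y = np.fft.fft2(np.array([[1.0], [-1.0]]), s=(H, W))
    grad_energy = np.abs(otf_x) ** 2 + np.abs(otf_y) ** 2
    beta = 2.0 * lam
    while beta < beta_max:
        # (h, v) subproblem: keep a gradient only where it "costs" less than lambda.
        hx = np.roll(S, -1, axis=1) - S
        vy = np.roll(S, -1, axis=0) - S
        mask = (hx ** 2 + vy ** 2) <= lam / beta
        hx[mask] = 0.0
        vy[mask] = 0.0
        # S subproblem: quadratic in S, solved exactly in the Fourier domain.
        div = (np.roll(hx, 1, axis=1) - hx) + (np.roll(vy, 1, axis=0) - vy)
        S = np.real(np.fft.ifft2((F_f + beta * np.fft.fft2(div))
                                 / (1.0 + beta * grad_energy)))
        beta *= kappa
    return S
```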

3. The Proposed Approach

In image denoising algorithms, the selection of regularization parameters is crucial, as demonstrated by the $L_0$-smoothing algorithm referenced in this study. When these parameters are poorly chosen, the effectiveness of image denoising can be significantly compromised. To address this issue, we propose an innovative CT image denoising approach based on VSNLMS, which is denoted as VSNLMS-$L_0$.

First, in the VSNLMS-$L_0$ algorithm, the desired signal is a weighted combination of the image $f$ reconstructed by the FBP algorithm and its $L_0$-smoothing denoised version $L_0[f]$.

The desired signal is given by:

$d(x,y) = (1-t)\, f(x,y) + t\, L_0[f(x,y)]$ (5)

where $t$ is the weight parameter. For each pixel $(x, y)$, a local filter region is then extracted:

$\text{filter\_region} = f(x-P : x+P,\; y-P : y+P)$ (6)

where $P$ is the padding size, i.e., the number of border pixels used when processing the edges of the image. Without adequate padding, edge pixels participate in relatively few convolution operations, which can result in the loss of edge information or improper processing. By employing appropriate padding, edge pixels can be processed more thoroughly, thereby maintaining the integrity of the image.

The VSNLMS-$L_0$ algorithm trains the optimal weights by minimizing the error signal:

$e = d(x,y) - \displaystyle\sum_{m=1}^{M}\sum_{n=1}^{M} \text{filter\_region}(m,n)\, w_{k-1}(m,n)$ (7)

Since NLMS adopts a fixed step-size factor, it is difficult to balance convergence speed and steady-state error in a non-stationary environment. Therefore, the VSNLMS-$L_0$ algorithm adopts a variable step-size technique, in which $\mu_k$ is updated based on the variance of the filter region and controls the convergence speed:

$\mu_k = \begin{cases} \mu_1, & \operatorname{var}[\text{filter\_region}] \le N_1 \\ \mu_2, & N_1 < \operatorname{var}[\text{filter\_region}] < N_2 \\ \mu_3, & \operatorname{var}[\text{filter\_region}] \ge N_2 \end{cases}$ (8)

where $\mu_1$, $\mu_2$, and $\mu_3$ are different fixed step sizes, and $N_1$, $N_2$ are variance thresholds.

The filter coefficients are then updated continuously:

$w_k = w_{k-1} + \dfrac{\mu_k}{\varepsilon + \|\text{filter\_region}\|^2}\; e \cdot \text{filter\_region}$ (9)

Finally, the trained filter coefficients are convolved with the input image to remove noise and obtain the output image $S$.

Algorithm. VSNLMS-$L_0$ algorithm.

Input:

Input images: $f$, $L_0[f]$ (of size $H \times W$)

Filter size: $M$

Padding size: $P = \lfloor M/2 \rfloor$

Step-size parameters: $\mu_1$, $\mu_2$, $\mu_3$

Variance thresholds: $N_1$, $N_2$

Small positive constant: $\varepsilon$

Execution:

1. Generate the desired signal $d = (1-t)f + t\, L_0[f]$

for $x = P+1, P+2, \ldots, H-P$
  for $y = P+1, P+2, \ldots, W-P$

2. $\text{filter\_region} = f(x-P : x+P,\; y-P : y+P)$

3. $e = d(x,y) - \sum_{m=1}^{M}\sum_{n=1}^{M} \text{filter\_region}(m,n)\, w_{k-1}(m,n)$

4. Update the step-size parameter and the weights:

$\mu_k = \begin{cases} \mu_1, & \operatorname{var}[\text{filter\_region}] \le N_1 \\ \mu_2, & N_1 < \operatorname{var}[\text{filter\_region}] < N_2 \\ \mu_3, & \operatorname{var}[\text{filter\_region}] \ge N_2 \end{cases}$

$w_k = w_{k-1} + \dfrac{\mu_k}{\varepsilon + \|\text{filter\_region}\|^2}\; e \cdot \text{filter\_region}$

Output image:

$S = f \ast w_k$ (where $\ast$ denotes the convolution operation)
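To make the procedure concrete, the following NumPy sketch implements the pseudocode above (Equations (5)-(9)). It reuses the l0_smoothing sketch from Section 2.2; the identity-kernel initialization of the weights and all default parameter values are illustrative assumptions, not the tuned settings reported in Tables 1-5.

```python
# Sketch of the VSNLMS-L0 loop (Equations (5)-(9)) for a 2D float image f.
import numpy as np

def vsnlms_l0(f, t=0.3, M=5, mus=(4.5e-4, 4.5e-5, 4.5e-6),
              thresholds=(0.06, 0.6), n_iter=1, eps=1e-8):
    H, W = f.shape
    P = M // 2
    d = (1.0 - t) * f + t * l0_smoothing(f)      # composite desired signal, Eq. (5)
    w = np.zeros((M, M))
    w[P, P] = 1.0                                # identity-kernel start (assumption)
    f_pad = np.pad(f, P, mode="edge")            # padding so border pixels are processed
    for _ in range(n_iter):
        for x in range(H):
            for y in range(W):
                region = f_pad[x:x + M, y:y + M]            # Eq. (6)
                e = d[x, y] - np.sum(region * w)            # Eq. (7)
                v = np.var(region)                          # variance-driven step, Eq. (8)
                if v <= thresholds[0]:
                    mu = mus[0]
                elif v < thresholds[1]:
                    mu = mus[1]
                else:
                    mu = mus[2]
                w += mu * e * region / (eps + np.sum(region ** 2))  # Eq. (9)
    # Apply the trained filter to the input image (output S = f * w_k),
    # using the same sliding-window (cross-correlation) form as the training step.
    S = np.zeros_like(f)
    for x in range(H):
        for y in range(W):
            S[x, y] = np.sum(f_pad[x:x + M, y:y + M] * w)
    return S
```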

4. Experiment

In this section, both the simulated and the real samples are images with texture details. The real sample was imaged at the BL13W1 beamline of the Shanghai Synchrotron Radiation Facility (SSRF) using a parallel beam. Similarly, the simulation data were generated using a parallel-beam scanning approach for projection and reconstruction. For the simulation and real experiments, we conduct two sets of experiments each to illustrate the superiority of our algorithm and to verify the robustness of the proposed VSNLMS-$L_0$ algorithm with respect to the regularization parameter. In the first group (Case 1), the regularization parameter of $L_0$-smoothing is set too large, resulting in the loss of image details. In the second group (Case 2), the regularization parameter of $L_0$-smoothing is set appropriately, chosen after multiple experiments as a compromise between minimizing noise and retaining essential details. To objectively evaluate the performance of noise suppression, we employed two widely recognized image quality metrics: Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) [26].

PSNR is a commonly used objective metric to evaluate the quality of image and video reconstruction, especially in the context of lossy compression. It measures the similarity between the original and distorted images by calculating the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the quality of its representation. PSNR is calculated using the Mean Squared Error (MSE) between the original image f and the reconstructed image S .

$\mathrm{PSNR} = 10 \log_{10}\!\left( \dfrac{\mathrm{MAX}_f^2}{\mathrm{MSE}} \right)$ (10)

where

$\mathrm{MSE} = \dfrac{1}{H \times W} \displaystyle\sum_{x=0}^{H-1}\sum_{y=0}^{W-1} \big( f(x,y) - S(x,y) \big)^2$ (11)

$H$ and $W$ are the dimensions of the images; $f(x,y)$ and $S(x,y)$ represent the pixel values of the original and reconstructed images, respectively; and $\mathrm{MAX}_f$ is the maximum possible pixel value of the image, typically 255 for an 8-bit image. PSNR is a logarithmic measure that provides a straightforward numerical representation of image quality. Higher PSNR values typically indicate better reconstruction quality.

SSIM is an image quality assessment metric that evaluates the structural similarity between two images. Unlike PSNR, which focuses on absolute errors, SSIM considers changes in structural information, luminance, and contrast, making it more consistent with human visual perception. The SSIM index between images f and S is defined as:

$\mathrm{SSIM}(f,S) = \dfrac{(2\mu_f \mu_S + C_1)(2\sigma_{fS} + C_2)}{(\mu_f^2 + \mu_S^2 + C_1)(\sigma_f^2 + \sigma_S^2 + C_2)}$ (12)

where $\mu_f$ and $\mu_S$ are the mean values of images $f$ and $S$; $\sigma_f^2$ and $\sigma_S^2$ are the variances of $f$ and $S$; $\sigma_{fS}$ is the covariance of $f$ and $S$; and $C_1$ and $C_2$ are small constants that stabilize the division when the denominators are near zero. SSIM evaluates image quality based on perceived changes in structure, contrast, and brightness, aligning well with how humans perceive visual quality. A higher SSIM value indicates better similarity between the original and distorted images.
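For reference, Equations (10)-(12) can be implemented in a few lines for float images. The sketch below uses a simplified global SSIM (means and variances over the whole image) with the conventional constants $C_1 = (0.01 \cdot \mathrm{MAX})^2$ and $C_2 = (0.03 \cdot \mathrm{MAX})^2$; these constants and the global form are assumptions, and practical evaluations usually compute SSIM over local windows [26].

```python
# Minimal PSNR and (global) SSIM sketches for float images in [0, max_val].
import numpy as np

def psnr(f, S, max_val=1.0):
    mse = np.mean((f - S) ** 2)                    # Equation (11)
    return 10.0 * np.log10(max_val ** 2 / mse)     # Equation (10)

def ssim_global(f, S, max_val=1.0):
    C1, C2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2   # conventional constants
    mu_f, mu_S = f.mean(), S.mean()
    var_f, var_S = f.var(), S.var()
    cov = np.mean((f - mu_f) * (S - mu_S))
    return ((2 * mu_f * mu_S + C1) * (2 * cov + C2)) / \
           ((mu_f ** 2 + mu_S ** 2 + C1) * (var_f + var_S + C2))  # Equation (12)
```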

4.1. Simulation

In the first simulation experiment, Figure 2 shows the visual results of VSNLMS-$L_0$ and the other comparison methods applied to FBP reconstruction images. To facilitate a comparison of image details and textures, we focus on specific areas highlighted by red rectangular boxes (regions of interest, ROI), as illustrated in the second row of Figure 2. Figure 2(a) is the ground truth, and Figure 2(b) shows the FBP reconstruction of projection data sampled at six angles in the range 0 to π. The optimal parameter values for each algorithm are presented in Table 1.

Table 1. Optimal parameter values of different methods for the simulation experiments.

Case 1:
  $L_0$-smoothing: $\lambda = 0.0079$, kappa = 1.3, $\beta = 2\lambda$
  VSNLMS-$L_0$: iteration = 1, filter size = 5, $t = 0.3$, $\mu_1 = 0.00045$, $\mu_2 = 0.000045$, $\mu_3 = 0.0000045$, $N_1 = 0.06$, $N_2 = 0.6$

Case 2:
  $L_0$-smoothing: $\lambda = 0.0051$, kappa = 1.3, $\beta = 2\lambda$
  VSNLMS-$L_0$: iteration = 1, filter size = 5, $t = 0.3$, $\mu_1 = 0.00076$, $\mu_2 = 0.000076$, $\mu_3 = 0.0000076$, $N_1 = 0.06$, $N_2 = 0.6$

In the simulation experiments, the denoised image in Figure 2(c) appears over-smoothed due to the large regularization parameter used in $L_0$-smoothing [Case 1], and some detailed information, such as edges and texture, is lost. In the locally magnified area of Figure 2(e), a significant reduction in noise is evident after denoising with $L_0$-smoothing [Case 2]; however, the details remain somewhat unclear, the ROI is still affected, and the arrow points to a texture area that is blurred. The VSNLMS-$L_0$ method, illustrated in Figure 2(d) and Figure 2(f), demonstrates the most effective denoising while successfully preserving texture details. In comparison to the other methods, VSNLMS-$L_0$ excels in denoising textured areas and edges, resulting in crisp details. The areas indicated by the arrows in Figure 2(f) show that the denoised image is of higher quality and closer to the ground truth (Figure 2(a)). Table 2 shows that the VSNLMS-$L_0$ method achieves the best overall performance in terms of PSNR and SSIM.

Figure 2. The first simulation experiment: (a) ground truth; (b) FBP; (c) $L_0$-smoothing [Case 1]; (d) VSNLMS-$L_0$ [Case 1]; (e) $L_0$-smoothing [Case 2]; (f) VSNLMS-$L_0$ [Case 2]. The first row shows the denoised images; the second row shows the zoomed regions in the red boxes.

Table 2. Results of different methods under different regularization parameter settings on simulated data. (↑) indicates that higher values are better.

Method                       PSNR (↑)    SSIM (↑)
FBP                          19.0657     0.8274
Case 1: $L_0$-smoothing      23.3266     0.9177
Case 1: VSNLMS-$L_0$         30.4016     0.9639
Case 2: $L_0$-smoothing      20.6731     0.9041
Case 2: VSNLMS-$L_0$         31.1033     0.9649

In the second simulation experiment, Figure 3 shows the visual results of VSNLMS-$L_0$ and the other comparison methods applied to the FBP reconstructed image. Figure 3(a) shows the ground truth, and Figure 3(b) presents the FBP reconstruction of projection data sampled at 12 angles within the range 0 to π. The optimal parameter values of each algorithm are shown in Table 3.

It can be seen from Figure 3 and Table 4 that the overall performance of the VSNLMS-$L_0$ method in terms of PSNR and SSIM is better than that of $L_0$-smoothing.

Table 3. Optimal parameter values of different methods for the simulation experiments.

Case 1:
  $L_0$-smoothing: $\lambda = 0.0099$, kappa = 1.3, $\beta = 2\lambda$
  VSNLMS-$L_0$: iteration = 300, filter size = 7, $t = 0.4$, $\mu_1 = 0.0045$, $\mu_2 = 0.00045$, $\mu_3 = 0.000045$, $N_1 = 0.09$, $N_2 = 0.9$

Case 2:
  $L_0$-smoothing: $\lambda = 0.0071$, kappa = 1.3, $\beta = 2\lambda$
  VSNLMS-$L_0$: iteration = 200, filter size = 7, $t = 0.7$, $\mu_1 = 0.0045$, $\mu_2 = 0.00045$, $\mu_3 = 0.000045$, $N_1 = 0.04$, $N_2 = 0.4$

Figure 3. The second simulation experiment: (a) ground truth; (b) FBP; (c) $L_0$-smoothing [Case 1]; (d) VSNLMS-$L_0$ [Case 1]; (e) $L_0$-smoothing [Case 2]; (f) VSNLMS-$L_0$ [Case 2]. The first row shows the denoised images; the second row shows the zoomed regions in the red boxes.

Table 4. Results of different methods under different regularization parameter settings on simulated data. (↑) indicates that higher values are better.

Method                       PSNR (↑)    SSIM (↑)
FBP                          17.9046     0.7562
Case 1: $L_0$-smoothing      21.0234     0.8568
Case 1: VSNLMS-$L_0$         23.9638     0.9006
Case 2: $L_0$-smoothing      18.5590     0.8290
Case 2: VSNLMS-$L_0$         22.5477     0.8840

4.2. Real Data Experiment

In our real experiments, the visual results of the VSNLMS-$L_0$ and $L_0$-smoothing algorithms applied to FBP-reconstructed images are presented in Figure 4. To facilitate the analysis of these experimental results, a region of interest (ROI) was selected, as illustrated in the second row of Figure 4.

Figure 4(a) is the ground truth, while Figure 4(b) shows the reconstruction results of the FBP image derived from projection data sampled at six angles, ranging from 0 to π. The optimal parameter values for each algorithm are presented in Table 5.

Table 5. Optimal parameter values of different methods for the real experiments.

Case 1:
  $L_0$-smoothing: $\lambda = 0.019$, kappa = 2, $\beta = 2\lambda$
  VSNLMS-$L_0$: iteration = 1, filter size = 5, $t = 0.4$, $\mu_1 = 0.00001$, $\mu_2 = 0.000001$, $\mu_3 = 0.0000001$, $N_1 = 0.000008$, $N_2 = 0.00008$

Case 2:
  $L_0$-smoothing: $\lambda = 0.01$, kappa = 2, $\beta = 2\lambda$
  VSNLMS-$L_0$: iteration = 1, filter size = 5, $t = 0.6$, $\mu_1 = 0.000005$, $\mu_2 = 0.0000005$, $\mu_3 = 0.00000005$, $N_1 = 0.000008$, $N_2 = 0.00008$

In the real experiment, the regularization parameters of the $L_0$-smoothing algorithm were selected through extensive experiments, aiming to balance denoising and detail preservation as much as possible for both algorithms.

In Figure 4(c), which uses the $L_0$-smoothing algorithm [Case 1], the strip artifacts in the image were removed relatively well, but the locally magnified image appears over-smoothed, with a significant loss of texture and fine details. Figure 4(d), denoised with the VSNLMS-$L_0$ algorithm [Case 1], preserves more details than Figure 4(c); some of the key structures and edges are more visible. In Figure 4(e), owing to the more appropriate regularization parameter of $L_0$-smoothing, more detail is recovered than in Figure 4(c), but the region indicated by the arrow is still somewhat blurred. As can further be seen from Figure 4(f), the VSNLMS-$L_0$ [Case 2] algorithm achieves excellent visual quality. According to the objective evaluation metrics in Table 6, VSNLMS-$L_0$ retains a clear advantage in terms of both PSNR and SSIM.

Figure 4. The real experiment: (a) the full-angle reconstructed image; (b) FBP; (c) $L_0$-smoothing [Case 1]; (d) VSNLMS-$L_0$ [Case 1]; (e) $L_0$-smoothing [Case 2]; (f) VSNLMS-$L_0$ [Case 2]. The first row shows the denoised images; the second row shows the zoomed regions in the red boxes.

In summary, the VSNLMS-$L_0$ results achieved excellent performance from both subjective and objective evaluation perspectives.

Table 6. Results of different methods under different regularization parameter settings on real data. (↑) indicates that higher values are better.

Method                       PSNR (↑)    SSIM (↑)
FBP                          21.7018     0.8416
Case 1: $L_0$-smoothing      21.8716     0.8799
Case 1: VSNLMS-$L_0$         27.1418     0.9072
Case 2: $L_0$-smoothing      26.2072     0.9294
Case 2: VSNLMS-$L_0$         27.7525     0.9336

4.3. Parameter Analysis

In this study, we conducted a sensitivity analysis, focusing primarily on Case 2 of the real data experiments for VSNLMS-$L_0$. Our aim was to evaluate the impact of various parameters on the performance metrics, particularly PSNR and SSIM. Through extensive experimentation, we selected several parameters for analysis: $\lambda$ (in Equation (3)) and four parameters of the VSNLMS-$L_0$ algorithm, namely the number of iterations, the filter size, the weight parameter $t$ (in Equation (5)), and the step size (in Equation (8)). We chose to analyze these parameters because our experiments revealed that they significantly influence the experimental results.

Figure 5. Impact of the regularization parameter $\lambda$ in the $L_0$-smoothing algorithm on PSNR and SSIM.

Figure 5 illustrates the impact of the regularization parameter $\lambda$ in the $L_0$-smoothing algorithm on PSNR and SSIM. The figure shows that PSNR and SSIM increase with increasing $\lambda$, indicating enhanced noise suppression and image clarity, and both metrics peak when $\lambda$ is set to 0.01. Considering both the subjective images and the objective evaluation metrics from the experimental results, we selected $\lambda = 0.01$ as the appropriate parameter value.

Figure 6. Impact of different parameters in the VSNLMS-$L_0$ algorithm on PSNR and SSIM.

Figure 6 shows the influence of different parameter values on PSNR and SSIM in VSNLMS-$L_0$. (a) Effect of the filter size on PSNR and SSIM. The curves show that PSNR and SSIM reach their maxima when the filter size is set to 5, while values that are too large or too small cause both metrics to decrease: an oversized filter over-smooths the image and blurs details and edges, whereas an undersized filter leaves residual noise and retains too many details, which may make the image locally unsmooth. (b) PSNR and SSIM values for different weight parameters; PSNR reaches its maximum when the weight parameter is 0.6. (c) SSIM and PSNR values for different variance ranges $R_i = \{N_1, N_2\}$, $i = 1, 2, \ldots, 8$ (in Equation (8)): $R_1 = \{0.0000008, 0.000008\}$, $R_2 = \{0.000001, 0.00001\}$, $R_3 = \{0.000005, 0.00005\}$, $R_4 = \{0.000008, 0.00008\}$, $R_5 = \{0.00001, 0.0001\}$, $R_6 = \{0.00008, 0.0008\}$, $R_7 = \{0.0008, 0.008\}$, $R_8 = \{0.08, 0.8\}$. (d) SSIM and PSNR values for different step-size ranges $M_i = \{\mu_1, \mu_2, \mu_3\}$, $i = 1, 2, \ldots, 6$ (in Equation (8)): $M_1 = \{0.000001, 0.0000001, 0.00000001\}$, $M_2 = \{0.000005, 0.0000005, 0.00000005\}$, $M_3 = \{0.00001, 0.000001, 0.0000001\}$, $M_4 = \{0.00005, 0.000005, 0.0000005\}$, $M_5 = \{0.0001, 0.00001, 0.000001\}$, $M_6 = \{0.001, 0.0001, 0.00001\}$. The learning rate of the filter is adjusted adaptively according to the variance of the filter region. The analysis shows that PSNR and SSIM reach their maxima when the variable step range is $M_2$. When the step size is too large, convergence is unstable and image details are destroyed; when the step size is too small, convergence is slow and the denoising effect is not obvious.
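Such a sensitivity analysis can be reproduced with a simple one-at-a-time sweep. The sketch below varies the weight parameter $t$ and reuses the vsnlms_l0, psnr, and ssim_global sketches from earlier sections; f_noisy and f_ref are hypothetical, assumed arrays standing in for the FBP reconstruction and the reference image.

```python
# Hypothetical one-at-a-time parameter sweep; f_noisy (FBP result) and
# f_ref (reference image) are assumed to be preloaded 2D float arrays.
for t in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7):
    S = vsnlms_l0(f_noisy, t=t, M=5)
    print(f"t={t}: PSNR={psnr(f_ref, S):.2f} dB, SSIM={ssim_global(f_ref, S):.4f}")
```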

5. Conclusion

In this study, we proposed a novel CT image denoising method, Variable Step Normalized Least Mean Square $L_0$-smoothing (VSNLMS-$L_0$), that successfully addresses the fundamental challenge of balancing noise suppression and detail preservation in few-view CT reconstruction. Our approach effectively overcomes the limitations of traditional methods by implementing an adaptive framework that dynamically responds to local image characteristics. The synergistic integration of variable step-size optimization and strategic signal composition enables VSNLMS-$L_0$ to achieve superior denoising performance without sacrificing critical anatomical structures. Notably, our method demonstrates remarkable stability across varying imaging conditions, substantially reducing the sensitivity to regularization parameter selection that has long plagued conventional techniques. The simulations and real data experiments confirm that VSNLMS-$L_0$ consistently outperforms standard $L_0$-smoothing denoising methods, particularly when processing diagnostically challenging images with complex textures. These results highlight the significant potential of our approach for clinical applications, where improved image quality directly translates to enhanced diagnostic capabilities. The VSNLMS-$L_0$ method offers a robust solution that maintains high image fidelity while effectively managing noise.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Tang, L., Hui, Y., Yang, H., Zhao, Y. and Tian, C. (2022) Medical Image Fusion Quality Assessment Based on Conditional Generative Adversarial Network. Frontiers in Neuroscience, 16, Article 986153.
https://doi.org/10.3389/fnins.2022.986153
[2] Wang, T., Chen, C., Shen, K., Liu, W. and Tian, C. (2023) Streak Artifact Suppressed Back Projection for Sparse-View Photoacoustic Computed Tomography. Applied Optics, 62, 3917-3925.
https://doi.org/10.1364/ao.487957
[3] Balda, M., Hornegger, J. and Heismann, B. (2012) Ray Contribution Masks for Structure Adaptive Sinogram Filtering. IEEE Transactions on Medical Imaging, 31, 1228-1239.
https://doi.org/10.1109/tmi.2012.2187213
[4] Manduca, A., Yu, L., Trzasko, J.D., Khaylova, N., Kofler, J.M., McCollough, C.M., et al. (2009) Projection Space Denoising with Bilateral Filtering and CT Noise Modeling for Dose Reduction in CT. Medical Physics, 36, 4911-4919.
https://doi.org/10.1118/1.3232004
[5] Wang, T., Kudo, H., Yamazaki, F. and Liu, H. (2019) A Fast Regularized Iterative Algorithm for Fan-Beam CT Reconstruction. Physics in Medicine & Biology, 64, Article 145006.
https://doi.org/10.1088/1361-6560/ab22ed
[6] Wang, S., Wu, W., Feng, J., Liu, F. and Yu, H. (2020) Low-Dose Spectral CT Reconstruction Based on Image-Gradient L0-Norm and Adaptive Spectral PICCS. Physics in Medicine & Biology, 65, Article 245005.
https://doi.org/10.1088/1361-6560/aba7cf
[7] Chen, Z., Jin, X., Li, L. and Wang, G. (2013) A Limited-Angle CT Reconstruction Method Based on Anisotropic TV Minimization. Physics in Medicine and Biology, 58, 2119-2141.
https://doi.org/10.1088/0031-9155/58/7/2119
[8] Xia, W., Yang, Z., Lu, Z., Wang, Z. and Zhang, Y. (2024) RegFormer: A Local-Nonlocal Regularization-Based Model for Sparse-View CT Reconstruction. IEEE Transactions on Radiation and Plasma Medical Sciences, 8, 184-194.
https://doi.org/10.1109/trpms.2023.3281148
[9] Yu, H., Wang, S., Wu, W., Gong, C., Wang, L., Pi, Z., et al. (2021) Weighted Adaptive Non-Local Dictionary for Low-Dose CT Reconstruction. Signal Processing, 180, Article 107871.
https://doi.org/10.1016/j.sigpro.2020.107871
[10] Wu, D., Kim, K., El Fakhri, G. and Li, Q. (2017) Iterative Low-Dose CT Reconstruction with Priors Trained by Artificial Neural Network. IEEE Transactions on Medical Imaging, 36, 2479-2486.
https://doi.org/10.1109/tmi.2017.2753138
[11] Yang, C., Wu, P., Gong, S., Wang, J., Lyu, Q., Tang, X., et al. (2017) Shading Correction Assisted Iterative Cone-Beam CT Reconstruction. Physics in Medicine & Biology, 62, 8495-8520.
https://doi.org/10.1088/1361-6560/aa8e62
[12] Mallat, S. and Hwang, W.L. (1992) Singularity Detection and Processing with Wavelets. IEEE Transactions on Information Theory, 38, 617-643.
https://doi.org/10.1109/18.119727
[13] Rudin, L.I., Osher, S. and Fatemi, E. (1992) Nonlinear Total Variation Based Noise Removal Algorithms. Physica D: Nonlinear Phenomena, 60, 259-268.
https://doi.org/10.1016/0167-2789(92)90242-f
[14] Dabov, K., Foi, A., Katkovnik, V. and Egiazarian, K. (2007) Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Transactions on Image Processing, 16, 2080-2095.
https://doi.org/10.1109/tip.2007.901238
[15] Xu, L., Lu, C., Xu, Y. and Jia, J. (2011) Image Smoothing via L0 Gradient Minimization. ACM Transactions on Graphics, 30, 1-12.
https://doi.org/10.1145/2070781.2024208
[16] Zheng, A., Gao, H., Zhang, L. and Xing, Y. (2020) A Dual-Domain Deep Learning-Based Reconstruction Method for Fully 3D Sparse Data Helical CT. Physics in Medicine & Biology, 65, 245030.
https://doi.org/10.1088/1361-6560/ab8fc1
[17] Kang, E., Min, J. and Ye, J.C. (2017) A Deep Convolutional Neural Network Using Directional Wavelets for Low‐Dose X‐Ray CT Reconstruction. Medical Physics, 44, e360-e375.
https://doi.org/10.1002/mp.12344
[18] Wu, D., Kim, K. and Li, Q. (2019) Computationally Efficient Deep Neural Network for Computed Tomography Image Reconstruction. Medical Physics, 46, 4763-4776.
https://doi.org/10.1002/mp.13627
[19] Bao, P., Sun, H., Wang, Z., Zhang, Y., Xia, W., Yang, K., et al. (2019) Convolutional Sparse Coding for Compressed Sensing CT Reconstruction. IEEE Transactions on Medical Imaging, 38, 2607-2619.
https://doi.org/10.1109/tmi.2019.2906853
[20] Lu, W., Onofrey, J.A., Lu, Y., Shi, L., Ma, T., Liu, Y., et al. (2019) An Investigation of Quantitative Accuracy for Deep Learning Based Denoising in Oncological PET. Physics in Medicine & Biology, 64, Article 165019.
https://doi.org/10.1088/1361-6560/ab3242
[21] Yang, Q., Yan, P., Zhang, Y., Yu, H., Shi, Y., Mou, X., et al. (2018) Low-Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss. IEEE Transactions on Medical Imaging, 37, 1348-1357.
https://doi.org/10.1109/tmi.2018.2827462
[22] Dong, X., Lei, Y., Wang, T., Higgins, K., Liu, T., Curran, W.J., et al. (2020) Deep Learning-Based Attenuation Correction in the Absence of Structural Information for Whole-Body Positron Emission Tomography Imaging. Physics in Medicine & Biology, 65, Article 055011.
https://doi.org/10.1088/1361-6560/ab652c
[23] Corda-D’Incan, G., Schnabel, J.A. and Reader, A.J. (2022) Memory-Efficient Training for Fully Unrolled Deep Learned PET Image Reconstruction with Iteration-Dependent Targets. IEEE Transactions on Radiation and Plasma Medical Sciences, 6, 552-563.
https://doi.org/10.1109/trpms.2021.3101947
[24] Slock, D.T.M. (1993) On the Convergence Behavior of the LMS and the Normalized LMS Algorithms. IEEE Transactions on Signal Processing, 41, 2811-2825.
https://doi.org/10.1109/78.236504
[25] Eweda, E., Bershad, N.J. and Bermudez, J.C.M. (2018) Stochastic Analysis of the LMS and NLMS Algorithms for Cyclostationary White Gaussian and Non-Gaussian Inputs. IEEE Transactions on Signal Processing, 66, 4753-4765.
https://doi.org/10.1109/tsp.2018.2860552
[26] Setiadi, D.R.I.M. (2020) PSNR vs SSIM: Imperceptibility Quality Assessment for Image Steganography. Multimedia Tools and Applications, 80, 8423-8444.
https://doi.org/10.1007/s11042-020-10035-z

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.