An Efficient Projected Gradient Method for Convex Constrained Monotone Equations with Applications in Compressive Sensing

Abstract

In this paper, a modified Polak-Ribière-Polyak conjugate gradient projection method is proposed for solving large-scale nonlinear convex constrained monotone equations, based on the projection method of Solodov and Svaiter. The obtained method has low per-iteration complexity and converges globally. Furthermore, the method is extended to solve the sparse signal reconstruction problem in compressive sensing. Numerical experiments illustrate the efficiency of the given method and show that it is suitable for large-scale problems.

1. Introduction

This paper is dedicated to solving the following nonlinear convex constrained monotone equations:

$$ F(x) = 0, \quad x \in \Omega, \tag{1} $$

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuous nonlinear mapping and the feasible region $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set, e.g. an $n$-dimensional box $\Omega = \{ x \in \mathbb{R}^n : l \le x \le u \}$. Monotonicity of $F$ means that

$$ \langle F(x) - F(y),\ x - y \rangle \ge 0, \quad \forall x, y \in \mathbb{R}^n, \tag{2} $$

where $\langle \cdot , \cdot \rangle$ denotes the inner product of vectors. Problem (1) emerges in many fields such as economic equilibrium problems [1], chemical equilibrium systems [2] and the power flow equations [3]. Based on the work of Solodov and Svaiter [4], Wang et al. [5] proposed a projection-type method to solve Equation (1). The method obtained in [5] possesses the global convergence property without any regularity assumptions. Nevertheless, it needs to solve a linear equation at each iteration. To avoid solving this linear equation and to improve effectiveness, several projected conjugate gradient methods [6] [7] [8] [9] have been studied based on the projection technique of Solodov and Svaiter [4]. The numerical results reported in [6] [7] [8] [9] indicate that projected conjugate gradient type methods for solving problem (1) are indeed efficient and promising. In this paper, by combining the well-known Polak-Ribière-Polyak (PRP) method [10] [11] with the projection technique of Solodov and Svaiter [4], a conjugate gradient projection method with fast convergence is proposed for nonlinear monotone equations with convex constraints. Under some mild conditions, global convergence results are established for the given method. The obtained method possesses the following three beneficial properties: 1) the search direction satisfies the sufficient descent condition; 2) the global convergence is independent of any merit function; and 3) it is a derivative-free method and is effective for large-scale nonlinear convex constrained monotone equations (with dimensions up to 100,000 in our tests). Furthermore, the obtained method is extended to solve the $\ell_1$-norm problem by reformulating it as a system of non-smooth monotone equations.

In Section 2, the modified PRP-type conjugate gradient projection method is proposed and some preliminary properties are studied. The global convergence results are established in Section 3. The numerical experiments, together with the application of the obtained method to $\ell_1$-norm regularized compressive sensing problems, are discussed in Section 4. Finally, conclusions are given in Section 5.

2. The Proposed Method and Corresponding Algorithm

We first introduce the projection operator $P_\Omega[\cdot]$, defined as the mapping from $\mathbb{R}^n$ onto $\Omega$:

$$ P_\Omega[x] = \arg\min \{ \| y - x \| \mid y \in \Omega \}, \quad \forall x \in \mathbb{R}^n, $$

where $\| \cdot \|$ denotes the Euclidean norm of vectors and $\Omega$ is a nonempty closed convex subset of $\mathbb{R}^n$.

The projection operator is non-expansive, namely, for any $x, y \in \mathbb{R}^n$,

$$ \| P_\Omega[y] - P_\Omega[x] \| \le \| y - x \|. \tag{3} $$
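When $\Omega$ is the $n$-dimensional box mentioned in the introduction, $P_\Omega$ reduces to componentwise clipping. The following MATLAB sketch (a minimal illustration under that assumption; the function name is ours, not from the paper) evaluates the projection; taking $l = 0$ and $u = +\infty$ recovers $\Omega = \mathbb{R}^n_+$.

```matlab
% Projection onto the box Omega = {x : l <= x <= u} by componentwise clipping.
% For Omega = R^n_+, take l = zeros(n,1) and u = inf(n,1).
function p = proj_box(x, l, u)
    p = min(max(x, l), u);
end
```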

Let us briefly review the Polak-Ribière-Polyak [10] [11] conjugate gradient method. The PRP method was originally designed for solving the unconstrained optimization problem

$$ \min \{ f(x) \mid x \in \mathbb{R}^n \}, \tag{4} $$

where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. It generates the iteration sequence $\{x_k\}$ in the form

$$ x_{k+1} = x_k + \alpha_k d_k, \tag{5} $$

where $x_k$ is the current iterate, $\alpha_k > 0$ is a step length, and $d_k$ is the search direction given by

$$ d_k = \begin{cases} -g_k + \beta_{k-1}^{PRP} d_{k-1}, & \text{if } k > 0, \\ -g_k, & \text{if } k = 0, \end{cases} \tag{6} $$

where $g_k = \nabla f(x_k)$, $\beta_{k-1}^{PRP} = \dfrac{g_k^T y_{k-1}}{\| g_{k-1} \|^2}$ and $y_{k-1} = g_k - g_{k-1}$.

Combining the projection technique of Solodov and Svaiter [4] with the PRP method given by Equations (5) and (6), the following modified PRP search direction is defined in this paper:

$$ d_k = \begin{cases} -g_k + \dfrac{g_k^T y_{k-1}\, d_{k-1} - d_{k-1}^T g_k\, y_{k-1}}{\max \{ 2 \gamma \| d_{k-1} \| \| y_{k-1} \|,\ d_{k-1}^T y_{k-1},\ \| g_{k-1} \|^2 \}}, & \text{if } k > 0, \\ -g_k, & \text{if } k = 0, \end{cases} \tag{7} $$

where $y_{k-1} = g_k - g_{k-1}$ and $\gamma > 0$ is a constant.

It should be noted that the direction formula Equation (7) reduces to the PRP formula if the exact line search is used. Furthermore, the sufficient descent condition holds automatically for all $k$, since $d_k^T g(x_k) = -\| g(x_k) \|^2$. Conjugate gradient methods based on ideas similar to Equation (7) have been studied in [12] - [19].
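As an illustration of Equation (7), the following MATLAB sketch computes the search direction with $g$ replaced by $F$, as used in Step 1 of Algorithm 1 below. It is our own minimal implementation, not code from the paper; Fk, Fkm1 and dkm1 stand for $F(x_k)$, $F(x_{k-1})$ and $d_{k-1}$.

```matlab
% Modified PRP direction of Equation (7), with g replaced by F.
function d = mprp_direction(Fk, Fkm1, dkm1, gamma)
    if isempty(dkm1)        % k = 0: d_0 = -F(x_0)
        d = -Fk;
        return;
    end
    yk  = Fk - Fkm1;
    den = max([2*gamma*norm(dkm1)*norm(yk), dkm1'*yk, norm(Fkm1)^2]);
    d   = -Fk + ((Fk'*yk)*dkm1 - (dkm1'*Fk)*yk) / den;
end
```

Dotting the second term's numerator with Fk gives zero, which is why the sufficient descent identity $d_k^T F_k = -\| F_k \|^2$ holds regardless of the line search.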

The modified PRP conjugate gradient projection algorithm for solving problem (1) is stated as follows.

Algorithm 1:

Step 0. Choose an initial point $x_0 \in \Omega$, select constants $\rho \in (0, 1)$, $\gamma > 0$, $\sigma > 0$, $\xi > 0$, $\epsilon \in (0, 1)$, and set $d_0 = -F(x_0)$. Let $k := 0$.

Step 1. If $\| F(x_k) \| \le \epsilon$, stop. Otherwise compute the search direction $d_k$ by Equation (7) with $g_k$ and $g_{k-1}$ replaced by $F_k = F(x_k)$ and $F_{k-1} = F(x_{k-1})$, respectively.

Step 2. Let $z_k = x_k + \alpha_k d_k$, where $\alpha_k = \max \{ \xi \rho^i \mid i = 0, 1, \ldots \}$ such that

$$ - \langle F(x_k + \alpha_k d_k), d_k \rangle \ge \sigma \alpha_k \| d_k \|^2. \tag{8} $$

Step 3. If $\| F(z_k) \| \le \epsilon$, stop and set $x_{k+1} = z_k$. Otherwise compute the next iterate by

$$ x_{k+1} = P_\Omega [ x_k - \beta_k F(z_k) ], \tag{9} $$

where

$$ \beta_k = \frac{\langle F(z_k), x_k - z_k \rangle}{\| F(z_k) \|^2}. \tag{10} $$

Step 4. Let $k := k + 1$ and go to Step 1.

Remark 1: In Algorithm 1, the step size $\alpha_k$ determined by Equation (8) ensures that

$$ \langle F(z_k), x_k - z_k \rangle > 0, $$

where $z_k = x_k + \alpha_k d_k$ and $d_k$ is the search direction. Moreover, for any $x^*$ such that $F(x^*) = 0$, the inequality

$$ \langle F(z_k), x^* - z_k \rangle \le 0 $$

follows from the monotonicity of $F$. This means that the hyperplane

$$ H_k = \{ x \in \mathbb{R}^n \mid \langle F(z_k), x - z_k \rangle = 0 \} $$

strictly separates the current point $x_k$ from the solution set of the problem. The above facts and Step 3 indicate that the next iterate $x_{k+1}$ is obtained by first projecting $x_k$ onto the hyperplane $H_k$ and then projecting the result onto the feasible set $\Omega$.
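To make the steps above concrete, the following MATLAB sketch assembles Algorithm 1. It is an illustration under our own assumptions rather than the authors' code: F is a function handle for the mapping, proj is a handle implementing $P_\Omega$, mprp_direction is the direction sketch given after Equation (7), and the field names of opts are hypothetical.

```matlab
function [x, k] = mprp_algorithm1(F, proj, x0, opts)
% Sketch of Algorithm 1 (MPRP). opts fields: rho, gamma, sigma, xi, eps, maxit.
    x = x0;  Fx = F(x);  Fprev = [];  dprev = [];  k = 0;
    while norm(Fx) > opts.eps && k < opts.maxit
        d = mprp_direction(Fx, Fprev, dprev, opts.gamma);   % Equation (7)
        alpha = opts.xi;                                    % line search (8)
        while -F(x + alpha*d)'*d < opts.sigma*alpha*norm(d)^2
            alpha = opts.rho*alpha;
        end
        z = x + alpha*d;  Fz = F(z);
        if norm(Fz) <= opts.eps                             % Step 3: early stop
            x = z;  Fx = Fz;  break;
        end
        beta = (Fz'*(x - z)) / norm(Fz)^2;                  % Equation (10)
        Fprev = Fx;  dprev = d;
        x = proj(x - beta*Fz);                              % Equation (9)
        Fx = F(x);  k = k + 1;
    end
end
```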

3. Convergence Analysis

In this section, we discuss the convergence properties of the given method. Before that, some basic assumptions on problem (1) need to be stated.

Assumption 1: The mapping $F$ is Lipschitz continuous with constant $L > 0$ on $\Omega$, i.e. for every $x, y \in \Omega$,

$$ \| F(x) - F(y) \| \le L \| x - y \|. \tag{11} $$

Assumption 2: The solution set of problem (1), denoted by $S$, is nonempty and convex.

For conjugate gradient methods, the sufficient descent property is essential in the convergence analysis. The following lemma shows that the search directions $\{d_k\}$ generated by Algorithm 1 satisfy the sufficient descent condition independently of the line search.

Lemma 1: Let the sequences $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm 1. Then, for all $k \ge 0$,

$$ F(x_k)^T d_k = -\| F(x_k) \|^2, \tag{12} $$

and

$$ \| d_k \| \le \left( 1 + \frac{1}{\gamma} \right) \| F(x_k) \|. \tag{13} $$

Proof: For $k = 0$, Equations (12) and (13) follow directly from $d_0 = -F(x_0)$. For $k \ge 0$, the definition of the search direction $d_{k+1}$ in Equation (7) gives

$$ d_{k+1}^T F_{k+1} = -\| F_{k+1} \|^2 + \left[ \frac{F_{k+1}^T y_k\, d_k - d_k^T F_{k+1}\, y_k}{\max \{ 2 \gamma \| d_k \| \| y_k \|,\ d_k^T y_k,\ \| F_k \|^2 \}} \right]^T F_{k+1} = -\| F_{k+1} \|^2,
$$

similarly,

$$ \| d_{k+1} \| = \left\| -F_{k+1} + \frac{F_{k+1}^T y_k\, d_k - d_k^T F_{k+1}\, y_k}{\max \{ 2 \gamma \| d_k \| \| y_k \|,\ d_k^T y_k,\ \| F_k \|^2 \}} \right\| \le \| F_{k+1} \| + \frac{\| F_{k+1} \| \| y_k \| \| d_k \| + \| d_k \| \| F_{k+1} \| \| y_k \|}{\max \{ 2 \gamma \| d_k \| \| y_k \|,\ d_k^T y_k,\ \| F_k \|^2 \}} \le \left( 1 + \frac{1}{\gamma} \right) \| F_{k+1} \|,
$$

where the last inequality follows from the fact that

$$ \max \{ 2 \gamma \| d_k \| \| y_k \|,\ d_k^T y_k,\ \| F_k \|^2 \} \ge 2 \gamma \| d_k \| \| y_k \|. $$

In the remainder of this paper, we assume that $F_k \ne 0$ for all $k \ge 0$; otherwise, a solution of problem (1) has already been found.

Lemma 2: Let the sequences $\{x_k\}$ and $\{z_k\}$ be generated by Algorithm 1 and suppose that Assumption 1 holds. Then for every $k \ge 0$ there exists a positive step size $\alpha_k$ satisfying Equation (8).

Proof: The line search ensures that if $\alpha_k \ne \xi$, then $\alpha_k' = \rho^{-1} \alpha_k$ does not satisfy Equation (8), namely,

$$ - \langle F(z_k'), d_k \rangle < \sigma \alpha_k' \| d_k \|^2, $$

where $z_k' = x_k + \alpha_k' d_k$. From Equation (12) and Assumption 1 we have

$$ \| F_k \|^2 = - \langle F_k, d_k \rangle = \langle F(z_k') - F(x_k), d_k \rangle - \langle F(z_k'), d_k \rangle \le L \alpha_k' \| d_k \|^2 + \sigma \alpha_k' \| d_k \|^2 = \rho^{-1} \alpha_k ( L + \sigma ) \| d_k \|^2, $$

which means that

$$ \alpha_k \ge \min \left\{ \xi,\ \frac{\rho}{L + \sigma} \frac{\| F_k \|^2}{\| d_k \|^2} \right\}. \tag{14} $$

The bound in Equation (14) shows that the line search procedure Equation (8) always terminates in a finite number of steps.

Lemma 3: Let the sequences $\{x_k\}$ and $\{z_k\}$ be generated by Algorithm 1. Suppose that Assumptions 1 and 2 hold. Then both $\{x_k\}$ and $\{z_k\}$ are bounded. Moreover,

$$ \lim_{k \to \infty} \| x_k - z_k \| = 0, \tag{15} $$

and

$$ \lim_{k \to \infty} \| x_{k+1} - x_k \| = 0. \tag{16} $$

In particular, Equation (15) implies that

$$ \lim_{k \to \infty} \alpha_k \| d_k \| = 0. \tag{17} $$

Proof: Let $x^* \in S$ denote an arbitrary solution of problem (1). The monotonicity of $F$ and the line search Equation (8) yield

$$ \langle F(z_k), x_k - x^* \rangle \ge \langle F(z_k), x_k - z_k \rangle \ge \sigma \alpha_k^2 \| d_k \|^2 \ge 0. \tag{18} $$

Equation (3), Equation (9) and Equation (18) imply

$$ \begin{aligned} \| x_{k+1} - x^* \|^2 &= \| P_\Omega [ x_k - \beta_k F(z_k) ] - x^* \|^2 \le \| x_k - \beta_k F(z_k) - x^* \|^2 \\ &= \| x_k - x^* \|^2 - 2 \beta_k \langle F(z_k), x_k - x^* \rangle + \beta_k^2 \| F(z_k) \|^2 \\ &\le \| x_k - x^* \|^2 - 2 \beta_k \langle F(z_k), x_k - z_k \rangle + \beta_k^2 \| F(z_k) \|^2 \\ &= \| x_k - x^* \|^2 - \frac{\langle F(z_k), x_k - z_k \rangle^2}{\| F(z_k) \|^2} \\ &\le \| x_k - x^* \|^2 - \sigma^2 \frac{\| x_k - z_k \|^4}{\| F(z_k) \|^2}. \end{aligned} \tag{19} $$

Equation (19) shows that the sequence $\{ \| x_k - x^* \| \}$ is decreasing and convergent, hence the sequence $\{x_k\}$ is bounded and $\| x_k - x^* \| \le \| x_0 - x^* \|$ for all $k$. Then, by Assumption 1, we have

$$ \| F(x_k) \| = \| F(x_k) - F(x^*) \| \le L \| x_k - x^* \| \le L \| x_0 - x^* \|. \tag{20} $$

Let $M_1 = L \| x_0 - x^* \|$; then

$$ \| F(x_k) \| \le M_1, \quad \forall k \ge 0. \tag{21} $$

From the Cauchy-Schwarz inequality, the line search Equation (8), the monotonicity of $F$ and Equation (18), it follows that

$$ 0 < \sigma \| x_k - z_k \|^2 \le \langle F(z_k), x_k - z_k \rangle \le \langle F(x_k), x_k - z_k \rangle \le \| F(x_k) \| \| x_k - z_k \|, $$

and hence

$$ \sigma \| x_k - z_k \| \le \| F(x_k) \| \le M_1, \tag{22} $$

which shows that the sequence $\{z_k\}$ is bounded. Consequently, the sequence $\{ \| z_k - x^* \| \}$ is also bounded; that is, there exist $M_2 > 0$ and $k_0 \ge 0$ such that

$$ \| z_k - x^* \| \le M_2, \quad \forall k \ge k_0. \tag{23} $$

Based on Equation (23) and Assumption 1, it follows that

$$ \| F(z_k) \| = \| F(z_k) - F(x^*) \| \le L \| z_k - x^* \| \le L M_2. \tag{24} $$

Substituting this bound into Equation (19) and summing over $k$, we obtain

$$ \frac{\sigma^2}{( L M_2 )^2} \sum_{k=0}^{\infty} \| x_k - z_k \|^4 \le \sum_{k=0}^{\infty} \left( \| x_k - x^* \|^2 - \| x_{k+1} - x^* \|^2 \right) < \infty, \tag{25} $$

which implies

$$ \lim_{k \to \infty} \| x_k - z_k \| = 0. $$

From the definition of $z_k$ and Equation (15), it holds that

$$ \lim_{k \to \infty} \alpha_k \| d_k \| = 0. $$

Combining the definition of $\beta_k$, Equation (3), and the Cauchy-Schwarz inequality, we have

$$ \| x_{k+1} - x_k \| = \| P_\Omega [ x_k - \beta_k F(z_k) ] - x_k \| \le \| x_k - \beta_k F(z_k) - x_k \| = \frac{\langle F(z_k), x_k - z_k \rangle}{\| F(z_k) \|} \le \| x_k - z_k \|, $$

which together with Equation (15), proves Equation (16).

Theorem 1: Let the sequences $\{x_k\}$ and $\{z_k\}$ be generated by Algorithm 1. Suppose that Assumptions 1 and 2 hold. Then

$$ \liminf_{k \to \infty} \| F_k \| = 0. \tag{26} $$

Proof: We prove the theorem by contradiction. Assume that Equation (26) does not hold; then there exists $\varepsilon > 0$ such that

$$ \| F_k \| \ge \varepsilon, \quad \forall k \ge 0. \tag{27} $$

From Equation (12) and Equation (27),

$$ \| d_k \|^2 = \| d_k + F_k - F_k \|^2 = \| d_k + F_k \|^2 - 2 \langle d_k + F_k, F_k \rangle + \| F_k \|^2 \ge -2 \langle d_k, F_k \rangle - \| F_k \|^2 = \| F_k \|^2, $$

which implies

$$ \| d_k \| \ge \varepsilon, \quad \forall k \ge 0. \tag{28} $$

On the other hand, Equation (13), Equation (21) and the definition of $d_k$ give

$$ \| d_k \| \le \left( 1 + \frac{1}{\gamma} \right) \| F_k \| \le \left( 1 + \frac{1}{\gamma} \right) M_1, \quad \forall k \ge 0. $$

Finally, from Equation (14), Equation (27) and Equation (28),

$$ \alpha_k \| d_k \| \ge \min \left\{ \xi,\ \frac{\rho}{L + \sigma} \frac{\| F_k \|^2}{\| d_k \|^2} \right\} \| d_k \| \ge \min \left\{ \xi \varepsilon,\ \frac{\rho \varepsilon^2}{( L + \sigma ) ( 1 + \gamma^{-1} ) M_1} \right\} > 0, $$

which contradicts Equation (17). Thus, Equation (26) holds.

4. Numerical Experiments

In this section, we study the numerical performance of the proposed Algorithm 1 on large-scale nonlinear convex constrained monotone equations with various dimensions and different initial points. Furthermore, Algorithm 1 is extended to solve the $\ell_1$-norm regularized problem of decoding a sparse signal in compressive sensing. The algorithms are coded in MATLAB R2015a and run on a PC with a Core i5 CPU and 4 GB of memory.

4.1. Experiments on Nonlinear Convex Constrained Monotone Equations

The testing problems are listed as follows.

Problem 1. (Wang et al. [5]) The elements of $F(x)$ are given by

$$ F_i(x) = e^{x_i} - 1, \quad i = 1, 2, 3, \ldots, n, $$

and $\Omega = \mathbb{R}^n_+$.

Problem 2. The example is taken from [7]. The elements of $F(x)$ are given by

$$ F_i(x) = 2 x_i - \sin(x_i), \quad i = 1, 2, 3, \ldots, n, $$

and $\Omega = \mathbb{R}^n_+$.

Problem 3. The example is taken from [9].

$$ \begin{aligned} g_1(x) &= x_1 - e^{\cos \left( \frac{x_1 + x_2}{n+1} \right)}, \\ g_i(x) &= x_i - e^{\cos \left( \frac{x_{i-1} + x_i + x_{i+1}}{n+1} \right)}, \quad i = 2, 3, \ldots, n-1, \\ g_n(x) &= x_n - e^{\cos \left( \frac{x_{n-1} + x_n}{n+1} \right)}, \end{aligned} $$

and $\Omega = \mathbb{R}^n_+$.

Problem 4. The example is taken from [20].

$$ F_i(x) = x_i - \sin ( | x_i - 1 | ), \quad i = 1, 2, 3, \ldots, n, $$

and $\Omega = \{ x \in \mathbb{R}^n \mid \sum_{i=1}^{n} x_i \le n,\ x_i \ge -1,\ i = 1, 2, \ldots, n \}$.

For convenience, MPRP denotes the proposed Algorithm 1. We compare the MPRP method with the CGD method [8] on Problems 1-4. For both methods, we set $\xi = 1$, $\rho = 0.4$, $\sigma = 10^{-4}$. To evaluate the efficiency and robustness of both methods, we test Problems 1-4 with dimensions $n = 10000, 50000, 100000$ and different initial points: $x_1 = (1, \frac{1}{2}, \ldots, \frac{1}{n})^T$, $x_2 = \frac{1}{n} \cdot \mathrm{ones}(n,1)$, $x_3 = \mathrm{ones}(n,1)$, $x_4 = 2 \cdot \mathrm{ones}(n,1)$, $x_5 = \mathrm{rand}(n,1)$, where ones(n,1) returns an n-by-1 array of ones and rand(n,1) returns an n-by-1 array of uniformly distributed random values in MATLAB. A sample setup is sketched below.
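As an illustration, using the mprp_algorithm1 sketch from Section 2 (an illustrative name of ours), Problem 1 with initial point $x_3$ could be set up as follows; the values of $\gamma$, the stopping tolerance and the iteration cap are chosen arbitrarily here, since the paper only reports $\xi$, $\rho$ and $\sigma$.

```matlab
n    = 10000;
F    = @(x) exp(x) - 1;                 % Problem 1: F_i(x) = e^{x_i} - 1
proj = @(x) max(x, 0);                  % projection onto Omega = R^n_+
x3   = ones(n, 1);                      % initial point x3
opts = struct('rho', 0.4, 'gamma', 1, 'sigma', 1e-4, 'xi', 1, ...
              'eps', 1e-6, 'maxit', 2000);
[xsol, iters] = mprp_algorithm1(F, proj, x3, opts);
fprintf('NI = %d,  ||F(x)|| = %.2e\n', iters, norm(F(xsol)));
```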

Numerical results are shown in Tables 1-4, in which Init (Dim), NI and NF denote the initial point (dimension), the number of iterations and the number of function evaluations, respectively; $\| F(x) \|$ is the final Euclidean norm of the function value, and CPU time is reported in seconds.

Tables 1-4 indicate that the dimension of the problem has little effect on the number of iterations of the algorithm, although the computing time grows in the high-dimensional cases. Moreover, Algorithm 1 is more competitive than the CGD algorithm, since it solves all the test instances with fewer iterations and less CPU time. The results of Tables 1-4 therefore show that our method is efficient.

The numerical performance of both methods is also evaluated using the performance profile tool of Dolan and Moré [21]. Figure 1 shows the performance profiles of the two methods; evidently, the proposed MPRP method is more efficient and robust than the CGD method.

Table 1. Numerical results for MPRP/CGD on problem 1.

Table 2. Numerical results for MPRP/CGD on problem 2.

Table 3. Numerical results for MPRP/CGD on problem 3.

Table 4. Numerical results for MPRP/CGD on problem 4.

Figure 1. Performance profiles of the two methods MPRP and CGD, where the left and right panels correspond to the number of function evaluations and the CPU time, respectively.

4.2. Experiments on the l1-Norm Regularization Problem

Cost functions combining the $\ell_2$ and $\ell_1$ norms often arise in signal reconstruction, i.e.:

$$ \min_x \ \frac{1}{2} \| y - A x \|_2^2 + \lambda \| x \|_1, \tag{28} $$

where $\| \cdot \|_2$ is the Euclidean norm, and

$$ \| x \|_1 = \sum_{j=1}^{n} | x_j | $$

is the $\ell_1$ norm, $A \in \mathbb{R}^{m \times n}$ is the system matrix, $y \in \mathbb{R}^m$ is the observed data, $x \in \mathbb{R}^n$ is the signal to be reconstructed, and $\lambda > 0$ is a regularization parameter.

The optimization problems of the form Equation (28) appear in several signal reconstruction problems, such as sparse signal de-blurring [22], medical image reconstruction [23], compressed sensing [24], and super-resolution [25]. Iterative line search methods or fixed-point iteration schemes are commonly used to solve problem (28). By using the technique proposed by Figueiredo et al. [26], we can reformulate problem (28) as a convex quadratic program. Let $x = u - v$ with $u \ge 0$, $v \ge 0$, where $u, v \in \mathbb{R}^n$, $u_i = \max(0, x_i)$ and $v_i = \max(0, -x_i)$ for all $i = 1, \ldots, n$. The $\ell_1$ norm can then be written as $\| x \|_1 = e_n^T u + e_n^T v$, where $e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Problem (28) is thus expressed as the bound-constrained quadratic program:

$$ \min_{u, v} \ \frac{1}{2} \| y - A ( u - v ) \|_2^2 + \lambda e_n^T u + \lambda e_n^T v, \quad \text{s.t. } u \ge 0,\ v \ge 0. \tag{29} $$

Furthermore, problem (29) can be rewritten as a standard convex quadratic program:

$$ \min_{z} \ \frac{1}{2} z^T B z + c^T z, \quad \text{s.t. } z \ge 0, \tag{30} $$

where

$$ z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad b = A^T y, \quad c = \lambda e_{2n} + \begin{pmatrix} -b \\ b \end{pmatrix}, \quad B = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}, $$

and $B$ is a positive semi-definite matrix. Recently, problem (30) was reformulated as a linear variational inequality (LVI) problem by Xiao et al. [8] [27]. They pointed out that this LVI problem is equivalent to a linear complementarity problem, and $z$ is a solution of the linear complementarity problem if and only if it is a solution of the following nonlinear monotone equation:

$$ F(z) = \min \{ z,\ B z + c \} = 0, \tag{31} $$

where the minimum is taken componentwise and $F(z)$ is Lipschitz continuous. This result indicates that problem (28) can be solved by the MPRP projection method.
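In practice, $B$ need not be formed explicitly, since $Bz + c$ only requires matrix-vector products with $A$ and $A^T$. The following MATLAB sketch (our own illustration under the notation above; the function name is hypothetical) evaluates $F(z)$ in this matrix-free way.

```matlab
% Evaluate F(z) = min{z, Bz + c} for z = [u; v] without forming B explicitly.
% Inputs: z (2n-by-1), A (m-by-n), y (m-by-1), lambda > 0.
function Fz = F_l1(z, A, y, lambda)
    n   = size(A, 2);
    u   = z(1:n);  v = z(n+1:end);
    b   = A' * y;
    w   = A' * (A * (u - v));                  % A^T A (u - v)
    Bzc = [w - b + lambda; -w + b + lambda];   % Bz + c, with c = lambda*e_{2n} + [-b; b]
    Fz  = min(z, Bzc);                         % componentwise minimum
end
```

A handle such as F = @(z) F_l1(z, A, y, lambda) can then be passed to the Algorithm 1 sketch, together with the projection onto the constraint z >= 0.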

In this part of the numerical experiments, a compressive sensing scenario is considered, which aims to reconstruct a length-$n$ sparse signal from $m$ observations with $m \ll n$. The quality of the restoration is measured by the mean squared error (MSE) with respect to the original signal $\bar{x}$, that is

$$ \mathrm{MSE} = \frac{1}{n} \| \bar{x} - x^* \|^2, $$

where $x^*$ is the restored signal. In the experiment, $n = 2^{12}$ and $m = 2^{10}$, and the original signal contains $2^6$ randomly placed non-zero elements. $A$ is a Gaussian matrix generated by the MATLAB command randn(m,n), and the measurement $y$ contains noise:

$$ y = A \bar{x} + \omega, $$

where $\omega$ is Gaussian noise distributed as $N(0, 10^{-4})$. The merit function is

$$ f(x) = \frac{1}{2} \| y - A x \|_2^2 + \tau \| x \|_1, $$

where $\tau$ is forced to decrease by a continuation technique as the iterations proceed. The experiment starts from the measurement image, i.e. $x_0 = A^T y$, and terminates when the relative change of the merit function satisfies

$$ \mathrm{Tol} = \frac{| f_k - f_{k-1} |}{| f_{k-1} |} < 10^{-5}, $$

where $f_k$ is the merit function value at $x_k$. A sketch of this setup is given below.
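The following is a minimal MATLAB sketch of the data generation described above. It reflects our reading of the setup; the values of the non-zero spikes, the absence of column normalization of $A$ and of a fixed random seed are our own assumptions.

```matlab
n = 2^12;  m = 2^10;  k = 2^6;                 % signal length, measurements, nonzeros
xbar = zeros(n, 1);
p = randperm(n);
xbar(p(1:k)) = randn(k, 1);                    % sparse original signal
A = randn(m, n);                               % Gaussian measurement matrix
omega = sqrt(1e-4) * randn(m, 1);              % Gaussian noise, N(0, 1e-4)
y = A * xbar + omega;
x0 = A' * y;                                   % starting point of the experiment
mse = @(x) norm(xbar - x)^2 / n;               % mean squared error
```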

We compare the proposed MPRP method with the CGD method on this problem. In both methods, the parameters are taken as $\xi = 10$, $\sigma = 10^{-4}$ and $\rho = 0.5$. The same initial point and the same continuation technique on the parameter $\tau$ are used in both methods.

Figure 2 shows simulation results of MPRP and CGD for a sparse signal reconstruction. As can be seen in Figure 2, the original sparse signal is restored almost exactly by both MPRP and CGD. Figure 3 compares the objective function values and the MSE as the number of iterations and the computing time increase. As can be seen in Figure 3, the MSE and the objective function values of the MPRP method decrease faster. The experiments are repeated for 15 different random noise samples in Table 5. We report the

Figure 2. From top to bottom: the original signal, the measurement, and the recovery signals by two methods MPRP and CGD, respectively.

Figure 3. Comparison of the MPRP and CGD methods. From left to right: the evolution of the MSE and of the objective function values against the number of iterations and the CPU time in seconds, respectively.

Table 5. Experimental results for MPRP/CGD on the $\ell_1$-norm regularization problem.

number of iterations (Niter) and the CPU time (in seconds) required for the whole testing process. From Table 5, we can see that the MPRP method performs better than the CGD method: its iteration counts and CPU times are much smaller. To summarize, these experimental results show that the proposed MPRP algorithm works efficiently.

5. Conclusion

In this paper, we proposed a conjugate gradient projection algorithm for solving large-scale nonlinear convex constrained monotone equations, based on the well-known Polak-Ribière-Polyak conjugate gradient method, which is one of the most effective conjugate gradient methods for unconstrained optimization. The algorithm combines the CG technique with a projection scheme and is derivative-free, so its low storage requirement makes it suitable for large-scale non-smooth equations. Under some technical conditions, we established global convergence. Another contribution of this paper is the application of the given method to the $\ell_1$-norm regularized problems arising in compressive sensing.

Acknowledgements

This work was supported by the Scientific Research Project of Tianjin Education Commission (No. 2019KJ232).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Dirkse, S.P. and Ferris, M.C. (1995) MCPLIB: A Collection of Nonlinear Mixed Complementarity Problems. Optimization Methods & Software, 5, 319-345.
https://doi.org/10.1080/10556789508805619
[2] Meintjes, K. and Morgan, A.P. (1990) Chemical Equilibrium Systems as Numerical Test Problems. ACM Transactions on Mathematical Software, 16, 143-151.
https://doi.org/10.1145/78928.78930
[3] Wood, A.J. and Wollenberg, B.F. (1996) Power Generations, Operations and Control. Wiley, New York.
[4] Solodov, M.V. and Svaiter, B.F. (1998) A Globally Convergent Inexact Newton Method for Systems of Monotone Equations. In: Fukushima, M. and Qi, L., Eds., Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Kluwer Academic, 355-369.
https://doi.org/10.1007/978-1-4757-6388-1_18
[5] Wang, C.W. and Wang, Y.J. (2009) A Superlinearly Convergent Projection Method for Constrained Systems of Nonlinear Equations. Journal of Global Optimization, 44, 283-296.
https://doi.org/10.1007/s10898-008-9324-8
[6] Hu, Y.P. and Wei, Z.X. (2015) Wei-Yao-Liu Conjugate Gradient Projection Algorithm for Nonlinear Monotone Equations with Convex Constraints. International Journal of Computer Mathematics, 92, 2261-2272.
https://doi.org/10.1080/00207160.2014.977879
[7] Liu, J.K. and Li, S.J. (2015) A Projection Method for Convex Constrained Monotone Nonlinear Equations with Applications. Computers and Mathematics with Applications, 70, 2442-2453.
https://doi.org/10.1016/j.camwa.2015.09.014
[8] Xiao, Y.H. and Zhu, H. (2013) A Conjugate Gradient Method to Solve Convex Constrained Monotone Equations with Applications in Compressive Sensing. Journal of Mathematical Analysis and Applications, 405, 310-319.
https://doi.org/10.1016/j.jmaa.2013.04.017
[9] Yu, G.H., Niu, S.Z. and Ma, J.H. (2013) Multivariate Spectral Gradient Projection Method for Nonlinear Monotone Equations with Convex Constraints. Journal of Industrial and Management Optimization, 9, 117-129.
https://doi.org/10.3934/jimo.2013.9.117
[10] Polyak, B.T. (1969) The Conjugate Gradient Method in Extremal Problems. USSR Computational Mathematics and Mathematical Physics, 9, 94-112.
https://doi.org/10.1016/0041-5553(69)90035-4
[11] Polak, E. and Ribière, G. (1969) Note sur la convergence de méthodes de directions conjuguées. Revue Française d'Informatique et de Recherche Opérationnelle, 3, 35-43.
https://doi.org/10.1051/m2an/196903R100351
[12] Zhang, L. and Li, J.L. (2011) A New Globalization Technique for Nonlinear Conjugate Gradient Methods for Nonconvex Minimization. Applied Mathematics and Computation, 217, 10295-10304.
https://doi.org/10.1016/j.amc.2011.05.032
[13] Hu, Y.P. and Wei, Z.X. (2014) A Modified Liu-Storey Conjugate Gradient Projection Algorithm for Nonlinear Monotone Equations. International Mathematical Forum, 9, 1767-1777.
https://doi.org/10.12988/imf.2014.411197
[14] Yuan, G.L. and Hu, W.J. (2018) A Conjugate Gradient Algorithm for Large-Scale Unconstrained Optimization Problems and Nonlinear Equations. Journal of Inequalities and Applications, 1, Article No.: 113.
https://doi.org/10.1186/s13660-018-1703-1
[15] Yuan, G.L., Meng, Z.H. and Li, Y. (2016) A Modified Hestenes and Stiefel Conjugate Gradient Algorithm for Large-Scale Nonsmooth Minimizations and Nonlinear Equations. Journal of Optimization Theory and Applications, 168, 129-152.
https://doi.org/10.1007/s10957-015-0781-1
[16] Yuan, G.L., Wei, Z.X. and Li, G.Y. (2014) A Modified Polak-Ribière-Polyak Conjugate Gradient Algorithm for Nonsmooth Convex Programs. Journal of Computational and Applied Mathematics, 255, 86-96.
https://doi.org/10.1016/j.cam.2013.04.032
[17] Yuan, G.L. and Zhang, M.J. (2015) A Three-Terms Polak-Ribière-Polyak Conjugate Gradient Algorithm for Large-Scale Nonlinear Equations. Journal of Computational and Applied Mathematics, 286, 186-195.
https://doi.org/10.1016/j.cam.2015.03.014
[18] Yuan, G.L. and Zhang, M.J. (2013) A Modified Hestenes-Stiefel Conjugate Gradient Algorithm for Large-Scale Optimization. Numerical Functional Analysis and Optimization, 34, 914-937.
https://doi.org/10.1080/01630563.2013.777350
[19] Yuan, G.L., Wei, Z.X. and Zhao, Q.M. (2014) A Modified Polak-Ribière-Polyak Conjugate Gradient Algorithm for Large-Scale Optimization Problems. IIE Transactions, 46, 397-413.
https://doi.org/10.1080/0740817X.2012.726757
[20] Yu, Z.S., Lin, J., Sun, J., Xiao, Y.H., Liu, L.Y. and Li, Z.H. (2009) Spectral Gradient Projection Method for Monotone Nonlinear Equations with Convex Constraints. Applied Numerical Mathematics, 59, 2416-2423.
https://doi.org/10.1016/j.apnum.2009.04.004
[21] Dolan, E.D. and Moré, J.J. (2002) Benchmarking Optimization Software with Performance Profiles. Mathematical Programming, 91, 201-213.
https://doi.org/10.1007/s101070100263
[22] Elad, M. (2010) Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer Science & Business Media, LCC, Berlin.
[23] Fessler, J.A. (2010) Model-Based Image Reconstruction for MRI. IEEE Signal Processing Magazine, 27, 81-89.
https://doi.org/10.1109/MSP.2010.936726
[24] Romberg, J.K. (2008) Imaging via Compressive Sampling. IEEE Signal Processing Magazine, 25, 14-20.
https://doi.org/10.1109/MSP.2007.914729
[25] Yang, J.C., Wright, J., Huang, T.S. and Ma, Y. (2010) Image Super-Resolution via Sparse Representation. IEEE Transactions on Image Processing, 19, 2861-2873.
https://doi.org/10.1109/TIP.2010.2050625
[26] Figueiredo, M., Nowak, R. and Wright, S.J. (2007) Gradient Projection for Sparse Reconstruction, Application to Compressed Sensing and Other Inverse Problems. IEEE Journal of Selected Topics in Signal Processing, 1, 586-597.
https://doi.org/10.1109/JSTSP.2007.910281
[27] Xiao, Y.H., Wang, Q.Y. and Hu, Q.J. (2011) Non-Smooth Equations Based Method for l1-Norm Problems with Applications to Compressed Sensing. Nonlinear Analysis: Theory, Methods & Applications, 74, 3570-3577.
https://doi.org/10.1016/j.na.2011.02.040
