A New Unified Path to Smoothing Nonsmooth Exact Penalty Function for the Constrained Optimization

Abstract

In this paper we propose a new unified path for approximately smoothing the nonsmooth exact penalty function. Based on the resulting smooth penalty functions, we give a penalty algorithm for solving the constrained optimization problem and discuss its convergence under mild conditions.


1. Introduction

We consider the following constrained optimization problem:

$$(\mathrm{P}) \qquad \min\ f(x) \quad \text{s.t.} \quad g_j(x) \le 0, \quad j = 1, 2, \ldots, m, \qquad (1)$$

where $f, g_j : \mathbb{R}^n \to \mathbb{R}$, $j = 1, 2, \ldots, m$, are continuously differentiable functions. This model has important applications in many fields, such as industry, engineering, and computational science. Among the many optimization methods for this kind of problem, the penalty function method is one of the most important. In the penalty function method, the constraints are incorporated into a new objective function through a penalty term, and the original constrained optimization problem is thereby transformed into a series of unconstrained optimization problems.

Among the many penalty functions that have been proposed, exact penalty functions are often discussed, such as the $l_1$ penalty function and the $l_p$ penalty function.

The classical $l_1$ penalty function (Zangwill [1]) is given as

$$L_1(x, \beta) = f(x) + \beta \sum_{j=1}^{m} \max\{g_j(x), 0\}, \qquad (2)$$

where $\beta > 0$ is a penalty parameter. The $l_p$ penalty function is given as

$$L_p(x, \beta) = f(x) + \beta \Big[ \sum_{j=1}^{m} \max\{g_j(x), 0\} \Big]^p, \qquad (3)$$

where $\beta > 0$ is a penalty parameter and $0 < p < 1$. These exact penalty functions, however, are often nonsmooth, which hampers the use of fast convergent algorithms such as the conjugate gradient method, Newton's method, and quasi-Newton methods. Many scholars have proposed smooth approximations to the classical exact penalty functions (see [2] - [14]), and different penalty algorithms have been given for different optimization problems. In [10] [11] and [14], smooth approximations to the $l_1$ penalty function were proposed for nonlinear inequality constrained optimization problems. Different smoothing penalty functions were also proposed in [13] for global optimization problems. To solve the problem (P), [7] proposed two smooth approximations to the exact penalty function

$$L_{1/2}(x, \beta) = f(x) + \beta \sum_{j=1}^{m} \sqrt{g_j^+(x)},$$

where $g_j^+(x) = \max\{g_j(x), 0\}$.

In [6] and [12], some smoothing techniques for the above exact penalty function were also given.
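For concreteness, here is a small Python sketch (our own illustration; the one-constraint toy problem and all names are ours, not taken from the cited works) that evaluates the penalties (2) and (3) and exhibits the kink that makes them nonsmooth on the constraint boundary:

```python
def l1_penalty(f, g_list, beta):
    """The l1 exact penalty (2): f(x) + beta * sum_j max{g_j(x), 0}."""
    return lambda x: f(x) + beta * sum(max(g(x), 0.0) for g in g_list)

def lp_penalty(f, g_list, beta, p):
    """The l_p exact penalty (3): f(x) + beta * (sum_j max{g_j(x), 0})**p."""
    return lambda x: f(x) + beta * sum(max(g(x), 0.0) for g in g_list) ** p

# Toy instance: min x^2 s.t. 1 - x <= 0; the constrained minimizer is x = 1.
f = lambda x: x ** 2
g = [lambda x: 1.0 - x]
L1 = l1_penalty(f, g, beta=4.0)

# One-sided difference quotients of L1 at the boundary x = 1 disagree,
# so L1 is not differentiable there (slope ~ 2 from the right, ~ -2 from the left).
h = 1e-6
print((L1(1.0 + h) - L1(1.0)) / h)
print((L1(1.0) - L1(1.0 - h)) / h)
```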

Smoothed penalty methods can also be applied to large-scale optimization problems, such as the network-structured problems and minimax problems in [3] and the traffic flow network models in [8].

In [5], a family of smoothing penalty functions for the $l_1$ penalty function was given and a simple penalty algorithm was established.

In this paper, a new unified smooth approximation path to the $l_p$ penalty function is proposed for the problem (P). On the basis of the proposed smoothing penalty functions, a new approximate algorithm is established, and its convergence is discussed under appropriate conditions.

Remark 1. We assume throughout this paper that

$$\inf_{x \in \mathbb{R}^n} f(x) > 0. \qquad (4)$$

This assumption is not restrictive: if it is not satisfied, we can replace $f(x)$ by $e^{f(x)} + 1$, which is positive and has the same minimizers as $f$ since $t \mapsto e^t + 1$ is strictly increasing.

2. Approximately Smoothing Exact Penalty Functions

For the $l_p$ penalty function (3), we give a new family of smooth approximations in this section as follows:

$$L_p(x, \beta, r) = f(x) + \Big[ r \sum_{j=1}^{m} \psi\Big( \frac{\beta^{1/p} g_j(x)}{r} \Big) \Big]^p, \qquad (5)$$

where $r > 0$ is a parameter and $\psi : \mathbb{R} \to \mathbb{R}$ is a continuously differentiable function with $\psi(t) \ge 0$ for all $t \in \mathbb{R}$.

Here we assume the function $\psi$ satisfies the following properties:

(a1) $\psi(\cdot)$ is monotonically increasing, and $\psi(0) > 0$;

(a2) $\lim_{t \to +\infty} \psi(t)/t = 1$.

It is easy to verify that the following functions are all examples of such a $\psi(t)$:

$$\psi_1(t) = \begin{cases} 2e^t, & \text{if } t < 0; \\ t + \log(1+t) + 2, & \text{if } t \ge 0; \end{cases}$$

$$\psi_2(t) = \begin{cases} 0, & \text{if } t < 0; \\ \tfrac{2}{3} t^2, & \text{if } 0 \le t \le 1; \\ t - \tfrac{1}{3} e^{1-t}, & \text{if } t > 1; \end{cases}$$

$$\psi_3(t) = \log(1 + e^t);$$

$$\psi_4(t) = \frac{\sqrt{t^2 + 4} + t}{2};$$

$$\psi_5(t) = \begin{cases} \tfrac{1}{2} e^t, & \text{if } t \le 0; \\ \tfrac{1}{2} e^{-t} + t, & \text{if } t > 0; \end{cases}$$

$$\psi_6(t) = \begin{cases} e^t + 1, & \text{if } t \le 0; \\ t + 2, & \text{if } t > 0; \end{cases}$$

$$\psi_7(t) = \begin{cases} 0, & \text{if } t < -1; \\ \frac{(t+1)^2}{4}, & \text{if } -1 \le t \le 1; \\ t, & \text{if } t > 1. \end{cases}$$
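As a quick sanity check, the following Python sketch (our own illustration) implements $\psi_3$ and $\psi_7$ and verifies numerically that both are nonnegative, monotonically increasing, and satisfy $\psi(t)/t \to 1$ as $t \to +\infty$:

```python
import math

def psi3(t):
    # psi_3(t) = log(1 + e^t), written in a form that is stable for large |t|.
    return max(t, 0.0) + math.log1p(math.exp(-abs(t)))

def psi7(t):
    # psi_7(t) = 0 for t < -1, (t + 1)^2 / 4 for -1 <= t <= 1, t for t > 1.
    if t < -1.0:
        return 0.0
    if t <= 1.0:
        return (t + 1.0) ** 2 / 4.0
    return t

for psi in (psi3, psi7):
    ts = [i / 10.0 for i in range(-50, 51)]
    vals = [psi(t) for t in ts]
    assert all(v >= 0.0 for v in vals)                          # psi(t) >= 0
    assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))  # (a1): increasing
    assert abs(psi(1e6) / 1e6 - 1.0) < 1e-3                     # (a2): psi(t)/t -> 1
```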

From (a1) and (a2), it is easy to see that

$$\lim_{r \to 0^+} r\, \psi\Big( \frac{t}{r} \Big) = t^+,$$

where $t^+ = \max\{0, t\}$.

It follows that

$$L_p(x, \beta, r) = f(x) + \Big[ r \sum_{j=1}^{m} \psi\Big( \frac{\beta^{1/p} g_j(x)}{r} \Big) \Big]^p \longrightarrow f(x) + \beta \Big[ \sum_{j=1}^{m} g_j^+(x) \Big]^p \quad (r \to 0^+).$$
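The following sketch (again our own illustration, reusing $\psi_3$ from above) evaluates the smoothed penalty (5) at an infeasible point for decreasing $r$ and shows the values approaching the $l_p$ penalty value $f(x) + \beta [\sum_j g_j^+(x)]^p$:

```python
import math

def psi3(t):
    # psi_3(t) = log(1 + e^t), written stably for large |t|.
    return max(t, 0.0) + math.log1p(math.exp(-abs(t)))

def smoothed_penalty(f, g_list, beta, r, p, psi=psi3):
    # L_p(x, beta, r) = f(x) + [ r * sum_j psi(beta^{1/p} g_j(x) / r) ]^p, as in (5).
    def L(x):
        s = sum(psi(beta ** (1.0 / p) * gj(x) / r) for gj in g_list)
        return f(x) + (r * s) ** p
    return L

f = lambda x: x ** 2
g = [lambda x: 1.0 - x]               # constraint 1 - x <= 0
beta, p, x = 4.0, 0.5, 0.5            # x = 0.5 is infeasible, so the penalty is active
exact = f(x) + beta * sum(max(gj(x), 0.0) for gj in g) ** p
for r in (1.0, 0.1, 0.01, 0.001):
    print(r, smoothed_penalty(f, g, beta, r, p)(x), "->", exact)
```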

We move on to the properties of the function $\psi(\cdot)$ and state the following proposition.

Proposition 2.1. If $\psi(\cdot)$ satisfies the properties (a1) and (a2), then for any $u \in \mathbb{R}^m$, $\sigma(u) = \sum_{j=1}^{m} \psi(u_j)$ satisfies the following properties.

(b1) For any real number $\varepsilon > 0$, there exists a positive real number $\eta_\varepsilon > 0$ such that

$$\liminf_{c_k \to +\infty} \inf_{\|u^+\| \ge \varepsilon} \frac{\sigma(c_k u)}{c_k} \ge \eta_\varepsilon,$$

where $u_j^+ = \max\{0, u_j\}$ and $u^+ = (u_1^+, u_2^+, \ldots, u_m^+)^T$.

(b2) For $c_k \to +\infty$ $(k \to \infty)$, there exist $\varepsilon_k \to 0^+$ $(k \to \infty)$ such that

$$\limsup_{k \to \infty} \sup_{\|u^+\| \le \varepsilon_k} \frac{\sigma(c_k u)}{c_k} = 0.$$

(b3) There exists a constant $\sigma_0$ such that

$$\sigma(u) \ge \sigma_0 \quad \text{for any } u \in \mathbb{R}^m.$$

(b4) There exists a constant $\sigma_1$ such that

$$\sigma(u) \le \sigma_1 \quad \text{for any } u \le 0.$$

Proof. We first show that $\sigma(u) = \sum_{j=1}^{m} \psi(u_j)$ satisfies the property (b1).

For $\|u^+\| \ge \varepsilon$, there exists a $j_0$ such that $u_{j_0} \ge \varepsilon / \sqrt{m}$. Otherwise, if $u_j < \varepsilon / \sqrt{m}$ for every $j$, then $\|u^+\| = \sqrt{ \sum_{j=1}^{m} (u_j^+)^2 } < \varepsilon$. Since $\psi(\cdot)$ is a monotonically increasing nonnegative function, we have

$$\inf_{\|u^+\| \ge \varepsilon} \frac{\sigma(c_k u)}{c_k} = \inf_{\|u^+\| \ge \varepsilon} \frac{1}{c_k} \sum_{j=1}^{m} \psi(c_k u_j) \ge \inf_{u_{j_0} \ge \varepsilon/\sqrt{m}} \frac{1}{c_k}\, \psi(c_k u_{j_0}) = \frac{1}{c_k}\, \psi\Big( \frac{c_k \varepsilon}{\sqrt{m}} \Big),$$

where the inequality follows from the nonnegativity of $\psi(\cdot)$, and the last equality from the monotonicity of $\psi(\cdot)$.

By the property (a2) of $\psi(\cdot)$, we obtain

$$\liminf_{c_k \to +\infty} \inf_{\|u^+\| \ge \varepsilon} \frac{\sigma(c_k u)}{c_k} \ge \liminf_{c_k \to +\infty} \frac{1}{c_k}\, \psi\Big( \frac{c_k \varepsilon}{\sqrt{m}} \Big) = \liminf_{c_k \to +\infty} \frac{\sqrt{m}}{c_k \varepsilon}\, \psi\Big( \frac{c_k \varepsilon}{\sqrt{m}} \Big) \cdot \frac{\varepsilon}{\sqrt{m}} = \frac{\varepsilon}{\sqrt{m}}.$$

Letting $\eta_\varepsilon = \varepsilon / \sqrt{m}$, we conclude that $\sigma(u)$ satisfies the property (b1).

We now show that $\sigma(u) = \sum_{j=1}^{m} \psi(u_j)$ satisfies the property (b2).

For $c_k \to +\infty$ $(k \to \infty)$, set $\varepsilon_k = 1/c_k$. Since $\psi(\cdot)$ is increasing, we have

$$\sup_{\|u^+\| \le \varepsilon_k} \sigma(c_k u) = \sup_{\|u^+\| \le \varepsilon_k} \sum_{j=1}^{m} \psi(c_k u_j) \le \sup_{u_j \le \varepsilon_k,\ j = 1, \ldots, m} \sum_{j=1}^{m} \psi(c_k u_j) = m\, \psi(c_k \varepsilon_k) = m\, \psi(1).$$

Since $\psi(\cdot) \ge 0$, it follows that $0 \le \sigma(c_k u)/c_k \le m\, \psi(1)/c_k \to 0$, so $\sigma(u)$ satisfies the property (b2).

Since $\psi(\cdot) \ge 0$ and $\psi(\cdot)$ is increasing, we easily obtain the properties (b3) and (b4), for instance with $\sigma_0 = 0$ and $\sigma_1 = m\, \psi(1)$.
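Property (b1) can also be probed numerically. The sketch below (our own illustration, with $\psi_7$ and $m = 3$) samples vectors $u$ with $\|u^+\| \ge \varepsilon$ and confirms that $\sigma(c_k u)/c_k$ stays bounded away from zero as $c_k$ grows:

```python
import random

def psi7(t):
    # psi_7 from Section 2.
    if t < -1.0:
        return 0.0
    if t <= 1.0:
        return (t + 1.0) ** 2 / 4.0
    return t

def sigma(u):
    return sum(psi7(uj) for uj in u)

m, eps = 3, 0.5
random.seed(0)
for c in (10.0, 100.0, 1000.0):
    worst = float("inf")
    for _ in range(20000):
        u = [random.uniform(-2.0, 2.0) for _ in range(m)]
        if sum(max(uj, 0.0) ** 2 for uj in u) ** 0.5 >= eps:  # ||u^+|| >= eps
            worst = min(worst, sigma([c * uj for uj in u]) / c)
    print(c, worst)  # stays >= eta_eps = eps / sqrt(m) ~ 0.289, consistent with (b1)
```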

3. Smooth Penalty Algorithm and Its Convergence

We propose an algorithm based on the penalty function $L_p(x, \beta, r)$ and discuss its global convergence.

Algorithm 3.1.

Step 0. Let $\beta_0 = 1$, $r_0 = 1$, $\omega_0 = 1$, and set $k := 0$.

Step 1. Find an

$$x_k \in \arg\min_{x \in \mathbb{R}^n} L_p(x, \beta_k, r_k), \qquad (6)$$

or an $x_k$ satisfying the inequality

$$L_p(x_k, \beta_k, r_k) \le \inf_{x \in \mathbb{R}^n} L_p(x, \beta_k, r_k) + \omega_k. \qquad (7)$$

Step 2. Let

$$r_{k+1} = \begin{cases} \tfrac{1}{2} r_k, & \text{if } 0 \le \|g^+(x_k)\| \le r_k; \\ r_k, & \text{otherwise}; \end{cases} \qquad \beta_{k+1} = \begin{cases} \beta_k, & \text{if } \|g^+(x_k)\| = 0; \\ 2 \beta_k, & \text{otherwise}, \end{cases}$$

where $g^+(x) = (g_1^+(x), \ldots, g_m^+(x))^T$ and $\|\cdot\|$ denotes the Euclidean norm.

Step 3. Set $\omega_{k+1} = \tfrac{1}{2} \omega_k$, $k := k + 1$, and return to Step 1.
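The sketch below is a minimal Python transcription of Algorithm 3.1 (our own illustration: the inner problem (6)-(7) is handed to scipy.optimize.minimize with BFGS, and the starting point, toy problem, iteration cap, and stopping rule are all assumptions, not part of the algorithm as stated):

```python
import math
import numpy as np
from scipy.optimize import minimize

def psi3(t):
    # psi_3(t) = log(1 + e^t), a smooth psi satisfying (a1)-(a2), written stably.
    return np.maximum(t, 0.0) + np.log1p(np.exp(-np.abs(t)))

def algorithm_3_1(f, g_list, p=0.5, max_iter=30, tol=1e-8):
    beta, r, omega = 1.0, 1.0, 1.0                  # Step 0
    x = np.zeros(2)                                 # starting point (an assumption)
    for _ in range(max_iter):
        def L(x):                                   # smoothed penalty (5)
            s = sum(psi3(beta ** (1.0 / p) * g(x) / r) for g in g_list)
            return f(x) + (r * s) ** p
        x = minimize(L, x, method="BFGS").x         # Step 1: approximate minimizer
        viol = math.sqrt(sum(max(g(x), 0.0) ** 2 for g in g_list))  # ||g^+(x_k)||
        if viol <= r:                               # Step 2: update r and beta
            r *= 0.5
        if viol > 0.0:
            beta *= 2.0
        omega *= 0.5                                # Step 3
        if viol <= tol and r <= tol:                # pragmatic stop (an assumption)
            break
    return x

# Toy problem: min (x1 - 2)^2 + (x2 - 2)^2  s.t.  x1 + x2 - 2 <= 0.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
g = [lambda x: x[0] + x[1] - 2.0]
print(algorithm_3_1(f, g))                          # expect a point near (1, 1)
```

Note that the tolerance $\omega_k$ in (7) is not enforced explicitly here; we simply trust the inner solver to return a good approximate minimizer, which is a pragmatic reading of Step 1.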

Now we study the global convergence of the algorithm. For $\varepsilon \ge 0$, we define the relaxed feasible set of the problem (P) by

$$\Omega_\varepsilon = \{ x \in \mathbb{R}^n \mid g_j(x) \le \varepsilon,\ j = 1, \ldots, m \}.$$

Thus $\Omega_0$ denotes the feasible set of (P). In this paper we always suppose that $\Omega_0 \neq \emptyset$. We denote the optimal solution set of (P) by $\Omega_0^*$.

The perturbation function of (P) is defined as

$$\theta_f(\varepsilon) = \inf_{x \in \Omega_\varepsilon} f(x).$$

Then the optimal value of (P) is

$$\theta_f(0) = \inf_{x \in \Omega_0} f(x).$$
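As a simple illustration (ours, not from the paper): for $\min x^2$ subject to $1 - x \le 0$, we have $\Omega_\varepsilon = \{x \mid x \ge 1 - \varepsilon\}$, hence $\theta_f(\varepsilon) = (1 - \varepsilon)^2$ for $0 \le \varepsilon \le 1$, and $\theta_f(\varepsilon) \to \theta_f(0) = 1$ as $\varepsilon \to 0^+$, so the perturbation function is continuous at $\varepsilon = 0$ in this example.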

It can easily be shown that $\theta_f(\varepsilon)$ is upper semicontinuous at $\varepsilon = 0$; thus the continuity of $\theta_f(\varepsilon)$ at $\varepsilon = 0$ is equivalent to the lower semicontinuity of $\theta_f(\varepsilon)$ at $\varepsilon = 0$. Set

$$F_\varepsilon = \{ x \in \mathbb{R}^n \mid f(x) \le \theta_f(0) + \varepsilon \}$$

and

$$S_k(\varepsilon) = \{ x \in \mathbb{R}^n \mid L_p(x, \beta_k, r_k) \le \inf_{z \in \mathbb{R}^n} L_p(z, \beta_k, r_k) + \varepsilon \}.$$

Now we give the following lemma.

Lemma 3.1. The sequence $\{r_k\}$ generated by Algorithm 3.1 converges to 0.

Proof. Assume to the contrary that $\{r_k\}$ does not converge to 0. Since $\{r_k\}$ is monotonically decreasing, there exists a $k_0$ such that $r_k = r_{k_0}$ for all $k \ge k_0$. It then follows from Step 2 of Algorithm 3.1 that $\|g^+(x_k)\| > r_{k_0}$ for all $k \ge k_0$ and that $\lim_{k \to \infty} \beta_k = +\infty$.

Let $\bar{x} \in \Omega_0$; then $g(\bar{x}) \le 0$. By Proposition 2.1, for all $k \ge k_0$,

$$L_p(x_k, \beta_k, r_{k_0}) \le \inf_{x \in \mathbb{R}^n} L_p(x, \beta_k, r_{k_0}) + \omega_k \le L_p(\bar{x}, \beta_k, r_{k_0}) + \omega_k = f(\bar{x}) + \Big[ r_{k_0}\, \sigma\Big( \frac{\beta_k^{1/p} g(\bar{x})}{r_{k_0}} \Big) \Big]^p + \omega_k \le f(\bar{x}) + [r_{k_0} \sigma_1]^p + \omega_k, \qquad (8)$$

where $\sigma_1$ is given by Proposition 2.1.

On the other hand, for sufficiently large $k \ge k_0$, by Proposition 2.1 and assumption (4) we have

$$L_p(x_k, \beta_k, r_{k_0}) = f(x_k) + \Big[ r_{k_0}\, \sigma\Big( \frac{\beta_k^{1/p} g(x_k)}{r_{k_0}} \Big) \Big]^p \ge \Big[ r_{k_0}\, \sigma\Big( \frac{\beta_k^{1/p} g(x_k)}{r_{k_0}} \Big) \Big]^p \ge \beta_k \inf_{\|g^+(x)\| > r_{k_0}} \Big[ \frac{r_{k_0}}{\beta_k^{1/p}}\, \sigma\Big( \frac{\beta_k^{1/p} g(x)}{r_{k_0}} \Big) \Big]^p \ge \frac{1}{2}\, \beta_k\, \eta_{r_{k_0}}^p.$$

Here the first inequality uses $f(x_k) > 0$ from (4), and the last inequality follows from the property (b1) of Proposition 2.1 with $c_k = \beta_k^{1/p} / r_{k_0}$ and $\varepsilon = r_{k_0}$. Since $\lim_{k \to \infty} \beta_k = +\infty$, the right-hand side tends to $+\infty$, which contradicts (8). Hence $\{r_k\}$ generated by Algorithm 3.1 converges to 0.

Lemma 3.2. For any $\varepsilon > 0$ and all sufficiently large $k$, it holds that $S_k(\varepsilon) \subseteq \Omega_\varepsilon$.

Proof. Assume to the contrary that there exist an $\varepsilon_0 > 0$ and a subsequence $K \subseteq \mathbb{N}$ such that for every $k \in K$ there is a $z_k \in S_k(\varepsilon_0)$ with $z_k \notin \Omega_{\varepsilon_0}$. Then there exist a subsequence $K_0 \subseteq K$ and an index $j_0 \in \{1, 2, \ldots, m\}$ such that for all $k \in K_0$,

$$g_{j_0}(z_k) > \varepsilon_0. \qquad (9)$$

By Lemma 3.1, for sufficiently large $k \in K_0$ we have $\|g^+(z_k)\| > \varepsilon_0 \ge r_k$. It then follows from Step 2 of Algorithm 3.1 that $\lim_{k \to \infty} \beta_k = +\infty$.

By (9) and the property (b1) of Proposition 2.1, for sufficiently large $k \in K_0$ we have

$$\inf_{x \in \mathbb{R}^n} L_p(x, \beta_k, r_k) + \varepsilon_0 \ge L_p(z_k, \beta_k, r_k) = f(z_k) + \Big[ r_k\, \sigma\Big( \frac{\beta_k^{1/p} g(z_k)}{r_k} \Big) \Big]^p \ge \beta_k \inf_{\|g^+(x)\| \ge \varepsilon_0} \Big[ \frac{r_k}{\beta_k^{1/p}}\, \sigma\Big( \frac{\beta_k^{1/p} g(x)}{r_k} \Big) \Big]^p \ge \frac{1}{2}\, \beta_k\, \eta_{\varepsilon_0}^p. \qquad (10)$$

Since $\lim_{k \to \infty} \beta_k = +\infty$, the right-hand side of the last inequality in (10) tends to $+\infty$.

On the other hand, let $\bar{x} \in \Omega_0$; then $g(\bar{x}) \le 0$. By Proposition 2.1, for all $k$,

$$\inf_{x \in \mathbb{R}^n} L_p(x, \beta_k, r_k) + \varepsilon_0 \le L_p(\bar{x}, \beta_k, r_k) + \varepsilon_0 = f(\bar{x}) + \Big[ r_k\, \sigma\Big( \frac{\beta_k^{1/p} g(\bar{x})}{r_k} \Big) \Big]^p + \varepsilon_0 \le f(\bar{x}) + [r_k \sigma_1]^p + \varepsilon_0, \qquad (11)$$

whose right-hand side is bounded in $k$; this contradicts (10).

Theorem 3.1 (Perturbation Theorem) Assume that $\{x_k\}$ is a sequence generated by Algorithm 3.1. Then it holds that:

1) $\lim_{k \to \infty} f(x_k) = \lim_{\varepsilon \to 0^+} \theta_f(\varepsilon)$;

2) $\lim_{k \to \infty} L_p(x_k, \beta_k, r_k) = \lim_{\varepsilon \to 0^+} \theta_f(\varepsilon)$;

3) $\lim_{k \to \infty} \Big[ r_k\, \sigma\Big( \frac{\beta_k^{1/p} g(x_k)}{r_k} \Big) \Big]^p = 0$.

Proof. The perturbation function $\theta_f(\varepsilon)$ is monotonically decreasing for $\varepsilon > 0$, satisfies $\theta_f(\varepsilon) \le \theta_f(0)$ for all $\varepsilon > 0$, and, by (4), is bounded below by $\inf_{x \in \mathbb{R}^n} f(x) > 0$; hence $\lim_{\varepsilon \to 0^+} \theta_f(\varepsilon)$ exists and is finite. By the property (b2) of Proposition 2.1, applied with $c_k = 1/r_k \to +\infty$ $(k \to \infty)$, there exist $\varepsilon_k \to 0^+$ such that

$$\limsup_{k \to \infty} \sup_{\|u^+\| \le \varepsilon_k} r_k\, \sigma\Big( \frac{u}{r_k} \Big) = 0. \qquad (12)$$

Choose $\varepsilon'_k > 0$ such that $\beta_k^{1/p} \varepsilon'_k \le \varepsilon_k$, and set $\bar{\varepsilon}_k = \varepsilon'_k / \sqrt{m}$; then $\bar{\varepsilon}_k \to 0^+$ and

$$\lim_{k \to \infty} \theta_f(\bar{\varepsilon}_k) = \lim_{\varepsilon \to 0^+} \theta_f(\varepsilon). \qquad (13)$$

Next choose $\delta_k > 0$ with $\delta_k \to 0$ $(k \to \infty)$. By the definition of the infimum, for each $k$ there exists a $z_k \in \Omega_{\bar{\varepsilon}_k}$ such that

$$f(z_k) \le \theta_f(\bar{\varepsilon}_k) + \delta_k.$$

Since $z_k \in \Omega_{\bar{\varepsilon}_k}$, we have $g_j(z_k) \le \bar{\varepsilon}_k = \varepsilon'_k / \sqrt{m}$ for $j = 1, \ldots, m$, and therefore

$$\| \beta_k^{1/p} g^+(z_k) \| \le \beta_k^{1/p} \varepsilon'_k \le \varepsilon_k. \qquad (14)$$

On the other hand, for any $\varepsilon > 0$, by the proof of Lemma 3.2 we have for all sufficiently large $k$ that

$$x_k \in \Omega_\varepsilon. \qquad (15)$$

Thus, for any $\varepsilon > 0$, by (14), (15), and the property (b3) of Proposition 2.1, we have

$$\begin{aligned} \theta_f(\varepsilon) \le f(x_k) &\le f(x_k) + \Big[ r_k\, \sigma\Big( \frac{\beta_k^{1/p} g(x_k)}{r_k} \Big) \Big]^p - [r_k \sigma_0]^p \\ &= L_p(x_k, \beta_k, r_k) - [r_k \sigma_0]^p \\ &\le \inf_{x \in \mathbb{R}^n} L_p(x, \beta_k, r_k) + \omega_k - [r_k \sigma_0]^p \\ &\le f(z_k) + \Big[ r_k\, \sigma\Big( \frac{\beta_k^{1/p} g(z_k)}{r_k} \Big) \Big]^p + \omega_k - [r_k \sigma_0]^p \\ &\le \theta_f(\bar{\varepsilon}_k) + \delta_k + \Big[ r_k \sup_{\|u^+\| \le \varepsilon_k} \sigma\Big( \frac{u}{r_k} \Big) \Big]^p + \omega_k - [r_k \sigma_0]^p. \end{aligned}$$

Letting $k \to \infty$ in this chain of inequalities (using (12), (13), $r_k \to 0$, and $\omega_k \to 0$) and then letting $\varepsilon \to 0^+$, we obtain that 1)-3) hold.

Theorem 3.2. Assume that $\{x_k\}$ is a sequence generated by Algorithm 3.1. Then every accumulation point of $\{x_k\}$ is an optimal solution of the problem (P).

Proof. Let $\varepsilon > 0$. By Lemma 3.2, for sufficiently large $k$ we have

$$x_k \in \Omega_\varepsilon. \qquad (16)$$

Suppose that $x^*$ is an accumulation point of $\{x_k\}$. By the continuity of the $g_j$ $(j = 1, \ldots, m)$ and (16), we know that $x^* \in \Omega_\varepsilon$. By the arbitrariness of $\varepsilon$, we then have $x^* \in \Omega_0$.

By the Perturbation Theorem, we obtain $f(x^*) = \lim_{k \to \infty} f(x_k) = \lim_{\varepsilon \to 0^+} \theta_f(\varepsilon) \le \theta_f(0)$. Since $x^* \in \Omega_0$ also implies $f(x^*) \ge \theta_f(0)$, it follows that $f(x^*) = \theta_f(0)$, i.e., $x^* \in \Omega_0^*$.

4. Conclusions

In this paper, we propose a unified path of smooth approximation for the classical nonsmooth penalty function; our model contains some of the existing models as special cases. In addition, we give a class of relaxed smooth penalty algorithms and prove the convergence of the algorithm under mild conditions.

In future work, we will use the model and algorithm of this paper to carry out numerical experiments and compare them with existing methods. We also plan to apply the model and algorithm of this paper to the study of power market equilibrium optimization problems.

Funding

This research is supported by the National Natural Science Foundation of China (11771255, 11801325), the Young Innovation Teams of Shandong Province (2019KJI013), and the Natural Science Foundation of Shandong Province (ZR2015AL011).

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Zangwill, W.I. (1967) Non-Linear Programming via Penalty Functions. Management Science, 13, 344-358.
https://doi.org/10.1287/mnsc.13.5.344
[2] Ben-Tal, A. and Teboulle, M. (1989) A Smoothing Technique for Non-Differentiable Optimization Problems. Lecture Notes in Mathematics, Vol. 1405, Springer-Verlag, Berlin, 1-11.
https://doi.org/10.1007/BFb0083582
[3] Pinar, M.C. and Zenios, S.A. (1994) On Smoothing Exact Penalty Functions for Convex Constrained Optimization. SIAM Journal on Optimization, 4, 486-511.
https://doi.org/10.1137/0804027
[4] Auslender, A., Cominetti, R. and Haddou, M. (1997) Asymptotic Analysis for Penalty and Barrier Methods in Convex and Linear Programming. Mathematics of Operations Research, 22, 43-62.
https://doi.org/10.1287/moor.22.1.43
[5] Gonzaga, C.C. and Castillo, R.A. (2003) A Nonlinear Programming Algorithm Based on Non-Coercive Penalty Functions. Mathematical Programming, 96, 87-101.
https://doi.org/10.1007/s10107-002-0332-z
[6] Wu, Z.Y., Bai, F.S., Yang, X.Q. and Zhang, L.S. (2004) An Exact Lower Order Penalty Function and Its Smoothing in Nonlinear Programming. Optimization, 53, 51-68.
https://doi.org/10.1080/02331930410001662199
[7] Meng, Z.Q., Dang, C.Y. and Yang, X.Q. (2006) On the Smoothing of the Square-Root Exact Penalty Function for Inequality Constrained Optimization. Computational Optimization and Applications, 35, 375-398.
https://doi.org/10.1007/s10589-006-8720-6
[8] Herty, M., Klar, A., Singh, A.K. and Spellucci, P. (2007) Smoothed Penalty Algorithms for Optimization of Nonlinear Models. Computational Optimization and Applications, 37, 157-176.
https://doi.org/10.1007/s10589-007-9011-6
[9] Di Pillo, G., Lucidi, S. and Rinaldi, F. (2012) An Approach to Constrained Global Optimization Based on Exact Penalty Functions. Journal of Global Optimization, 54, 251-260.
https://doi.org/10.1007/s10898-010-9582-0
[10] Lian, S.J. (2012) Smoothing Approximation to l1 Exact Penalty Function for Inequality Constrained Optimization. Applied Mathematics and Computation, 219, 3113-3121.
https://doi.org/10.1016/j.amc.2012.09.042
[11] Xu, X.S., Meng, Z.Q., Sun, J.W., Huang, L.G. and Shen, R. (2013) A Second-Order Smooth Penalty Function Algorithm for Constrained Optimization Problems. Computational Optimization and Applications, 55, 155-172.
https://doi.org/10.1007/s10589-012-9504-9
[12] Lian, S.J. and Duan, Y.Q. (2016) Smoothing of the Lower-Order Exact Penalty Function for Inequality Constrained Optimization. Journal of Inequalities and Applications, 185, 1-12.
https://doi.org/10.1186/s13660-016-1126-9
[13] Wu, Z.Y., Lee, H.W.J., Bai, F.S. and Zhang, L.S. (2005) Quadratic Smoothing Approximation to l1 Exact Penalty Function in Global Optimization. Journal of Industrial and Management Optimization, 1, 533-547.
https://doi.org/10.3934/jimo.2005.1.533
[14] Liu, B.Z. (2019) A Smoothing Penalty Function Method for the Constrained Optimization Problem. Open Journal of Optimization, 8, 113-126.
https://doi.org/10.4236/ojop.2019.84010
