A New Unified Path to Smoothing Nonsmooth Exact Penalty Function for the Constrained Optimization
1. Introduction
We consider the following constrained optimization problem:
$$\text{(P)}\qquad \min\ f(x)\quad \text{s.t.}\quad g_{i}(x)\le 0,\ i=1,\dots,m,\qquad(1)$$
where $f,g_{i}:\mathbb{R}^{n}\to\mathbb{R}\ (i=1,\dots,m)$ are continuously differentiable functions. This model has important applications in many fields such as industry, engineering, and computational science. Many optimization methods exist for this kind of problem, and the penalty function method is one of the most important. In the penalty function method, the original constraints are incorporated into a new objective function by constructing a penalty function, and the original constrained optimization problem is thereby transformed into a series of unconstrained optimization problems.
Among the many penalty functions that have been proposed, exact penalty functions are often discussed, such as the $l_{1}$ penalty function and the $l_{p}$ penalty function. The classical $l_{1}$ penalty function (Zangwill [1]) is given as
$$F_{1}(x,\rho)=f(x)+\rho\sum_{i=1}^{m}\max\{g_{i}(x),0\},\qquad(2)$$
where $\rho>0$ is a penalty parameter. The $l_{p}$ penalty function is given as
$$F_{p}(x,\rho)=f(x)+\rho\sum_{i=1}^{m}\bigl(\max\{g_{i}(x),0\}\bigr)^{p},\qquad(3)$$
where $\rho>0$ is a penalty parameter and $p\in(0,1]$. But these exact penalty functions are often nonsmooth, which hampers the use of fast convergent algorithms such as the conjugate gradient method, the Newton method, and the quasi-Newton method. Many scholars have proposed smooth approximations to the classical exact penalty functions ([2]-[14]), and different penalty algorithms have been given for different optimization problems. In [10], [11] and [14], smooth approximations to the exact penalty function were proposed for nonlinear inequality constrained optimization problems. Different smoothing penalty functions were also proposed in [13] to solve global optimization problems. To solve the problem (P), [7] proposed two smooth approximations to the exact penalty function.
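As a concrete illustration (the problem data here are a toy example of my own, not from the text), the $l_{1}$ exact penalty and a lower-order variant can be evaluated as follows; note the kink of the $\max\{\cdot,0\}$ term at the constraint boundary, which is the nonsmoothness discussed above:

```python
# Toy illustration of the l1 and lp exact penalty functions (example data, not from the text).
f = lambda x: (x - 2.0) ** 2          # objective
g = lambda x: x - 1.0                 # single constraint g(x) <= 0

def penalty_l1(x, rho):
    # F_1(x, rho) = f(x) + rho * max{g(x), 0}
    return f(x) + rho * max(g(x), 0.0)

def penalty_lp(x, rho, p):
    # F_p(x, rho) = f(x) + rho * max{g(x), 0}**p, with 0 < p <= 1
    return f(x) + rho * max(g(x), 0.0) ** p

# the max{., 0} term has a kink at the boundary g(x) = 0 (here x = 1):
h = 1e-6
left = (penalty_l1(1.0, 5.0) - penalty_l1(1.0 - h, 5.0)) / h   # one-sided slope from the left
right = (penalty_l1(1.0 + h, 5.0) - penalty_l1(1.0, 5.0)) / h  # one-sided slope from the right
# left ~ -2 and right ~ 3: the one-sided derivatives differ, so F_1 is nonsmooth at x = 1
```

The mismatch of the two one-sided slopes is exactly what prevents Newton-type methods from being applied directly, motivating the smooth approximations below.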
In [6] and [12], some smoothing techniques for the above exact penalty function were also given.
Smoothed penalty methods can also be applied to large-scale optimization problems such as the network-structured problems and minimax problems in [3], and the traffic flow network models in [8].
In [5], a family of smoothing functions for the exact penalty function was given and a simple penalty algorithm was established.
In this paper, a new unified smooth approximation path to the penalty function (3) is proposed for the problem (P). On the basis of the proposed smoothing penalty functions, a new approximate algorithm is established, and the convergence of the algorithm is discussed under appropriate conditions.
Remark 1. We assume in this paper that
$$f(x)>0,\quad\forall x\in\mathbb{R}^{n}.\qquad(4)$$
The above assumption is common since, if it is not satisfied, we can replace $f(x)$ by $e^{f(x)}$.
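This replacement device can be checked directly (the exponential transform is the usual trick and an assumption here, as is the toy objective below): composing with the strictly increasing map $t\mapsto e^{t}$ keeps every minimizer while making the objective positive.

```python
import math

# f may take nonpositive values; exp(f) is always positive and, because exp is
# strictly increasing, has exactly the same minimizers over any feasible set.
f = lambda x: (x - 1.0) ** 2 - 5.0            # minimum value -5 (not positive)
f_pos = lambda x: math.exp(f(x))              # always > 0, same minimizer x = 1

xs = [i / 100.0 for i in range(-300, 300)]    # evaluation grid containing x = 1
argmin_f = min(xs, key=f)
argmin_pos = min(xs, key=f_pos)
assert argmin_f == argmin_pos                 # identical minimizers on the grid
assert all(f_pos(x) > 0 for x in xs)          # the positivity assumption now holds
```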
2. Approximately Smoothing Exact Penalty Functions
For the penalty function (3), we give a new family of smooth approximations in this section as follows,
(5)
where $\epsilon>0$ is a smoothing parameter and $\phi$ is a continuously differentiable function with $\phi(t)>0$ for any $t\in\mathbb{R}$.
Here we assume the function $\phi$ satisfies the following properties:
(a1) $\phi$ is monotonically increasing, and $\lim_{t\to-\infty}\phi(t)=0$;
(a2) $\lim_{t\to+\infty}\bigl(\phi(t)-t\bigr)=0$.
It is easy to show that functions such as $\phi(t)=\ln(1+e^{t})$ and $\phi(t)=\frac{1}{2}\left(t+\sqrt{t^{2}+4}\right)$ are examples of the function $\phi$.
From (a1) and (a2), it is easy to know that, for any $t\in\mathbb{R}$,
$$\lim_{\epsilon\to 0^{+}}\epsilon\,\phi(t/\epsilon)=\max\{t,0\},$$
where $\epsilon>0$ is the smoothing parameter in (5). It follows that the smoothing function (5) converges to the penalty function (3) as $\epsilon\to 0^{+}$.
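To make the construction concrete, here is a numerical sketch under my own assumptions (that $\phi$ is increasing, positive, tends to 0 as $t\to-\infty$, and approaches $t$ as $t\to+\infty$), checking that two classical smoothing functions behave like $\max\{t,0\}$ after the scaling $\epsilon\phi(t/\epsilon)$:

```python
import math

# Two classical smoothing functions (assumed examples, in the Chen-Mangasarian
# style): both are continuously differentiable, positive, increasing, tend to 0
# as t -> -infinity, and approach t as t -> +infinity.
def softplus(t):
    # ln(1 + e^t), written stably for large |t|
    return max(t, 0.0) + math.log1p(math.exp(-abs(t)))

def chks(t):
    # (t + sqrt(t^2 + 4)) / 2, the Chen-Harker-Kanzow-Smale function
    return 0.5 * (t + math.sqrt(t * t + 4.0))

def smoothed_plus(phi, t, eps):
    # eps * phi(t / eps) approximates max(t, 0) as eps -> 0+
    return eps * phi(t / eps)

# numerical check of the approximation property on a few sample points
for phi in (softplus, chks):
    for t in (-2.0, -0.5, 0.0, 0.7, 3.0):
        assert abs(smoothed_plus(phi, t, 1e-4) - max(t, 0.0)) < 1e-2
```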
We move on to the properties of the smoothing function and look at the following proposition.
Proposition 2.1. If $\phi$ satisfies the properties (a1) and (a2), then for any $\epsilon>0$, the smoothing function satisfies the following properties.
(b1) For any real number
, there exists a positive real number
such that
where
and
.
(b2) For
, there exist
such that
(b3) There exists a constant
such that
(b4) There exists a constant
such that
Proof. We first show that the smoothing function satisfies the property (b1).
For , there exists a such that . Otherwise, if , , then . Since $\phi$ is a monotonically increasing positive function, we have that
where the inequality follows from the positivity of $\phi$, and the last equality from the fact that $\phi$ is increasing.
Again by the property (a2) of $\phi$, we obtain that
Let ; then we have proved that the smoothing function satisfies the property (b1).
We now show that the smoothing function satisfies the property (b2).
For , set , and . Since $\phi$ is increasing, we have
Since , the smoothing function satisfies the property (b2).
Since $\phi$ is positive and increasing, we can easily obtain the properties (b3) and (b4).
3. Smooth Penalty Algorithm and Its Convergence
We propose an algorithm based on the smoothing penalty function (5) and discuss its global convergence.
Algorithm 3.1.
Step 0. Let , , , and set .
Step 1. Find an such that
(6)
or such that the following inequality holds:
(7)
Step 2. Let
Step 3. Set , , and return to Step 1.
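The displayed formulas (6)-(7) and the parameter updates in Steps 2-3 are not fully shown above, so the following sketch fills them with standard choices that are my own assumptions: approximately minimize the smoothed penalty for fixed parameters with a gradient method (Step 1), then enlarge the penalty parameter and shrink the smoothing parameter. It uses the softplus smoothing with $p=1$ on a toy two-variable problem.

```python
import math

def sigmoid(t):
    # derivative of softplus(t) = ln(1 + e^t), computed stably
    return 1.0 / (1.0 + math.exp(-t)) if t >= 0 else math.exp(t) / (1.0 + math.exp(t))

def softplus(t):
    return max(t, 0.0) + math.log1p(math.exp(-abs(t)))

# toy problem (example data): min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 2 <= 0
f      = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
grad_f = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)]
g      = lambda x: x[0] + x[1] - 2.0
grad_g = [1.0, 1.0]

def F(x, rho, eps):
    # smoothed penalty: f(x) + rho * eps * phi(g(x)/eps), the p = 1 member of the family
    return f(x) + rho * eps * softplus(g(x) / eps)

def grad_F(x, rho, eps):
    s = rho * sigmoid(g(x) / eps)
    gf = grad_f(x)
    return [gf[i] + s * grad_g[i] for i in range(2)]

def minimize(x, rho, eps, steps=2000):
    # gradient descent with Armijo backtracking (a stand-in for any smooth solver)
    fx = F(x, rho, eps)
    for _ in range(steps):
        d = grad_F(x, rho, eps)
        n2 = sum(di * di for di in d)
        if n2 < 1e-16:
            break
        t = 1.0
        while t > 1e-14:
            xn = [x[i] - t * d[i] for i in range(2)]
            fn = F(xn, rho, eps)
            if fn <= fx - 0.25 * t * n2:   # sufficient decrease accepted
                x, fx = xn, fn
                break
            t *= 0.5
        else:
            break                          # no descent step found; stop
    return x

x, rho, eps = [0.0, 0.0], 1.0, 1.0
for _ in range(6):                 # outer loop of the penalty method
    x = minimize(x, rho, eps)
    rho *= 4.0                     # assumed update: enlarge the penalty parameter
    eps *= 0.2                     # assumed update: shrink the smoothing parameter
# x approaches the constrained minimizer (1.5, 0.5)
```

Warm-starting each inner minimization from the previous iterate is what keeps the inner solves cheap even as the smoothed penalty becomes stiff near the constraint boundary.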
Now we study the global convergence of the algorithm. For a $\delta\ge 0$, we define the relaxed feasible set of the problem (P) by
$$G_{\delta}=\left\{x\in\mathbb{R}^{n}: g_{i}(x)\le\delta,\ i=1,\dots,m\right\}.$$
Thus $G_{0}$ denotes the feasible set of (P). In this paper we always suppose that $G_{0}\neq\emptyset$. We denote the optimal solution set of (P) by $X^{*}$.
The perturbation function of (P) is defined as
$$v(\delta)=\inf\left\{f(x): g_{i}(x)\le\delta,\ i=1,\dots,m\right\},\quad\delta\ge 0.$$
Then the optimal value of (P) is $v(0)$. It can be easily shown that $v(\delta)$ is upper semi-continuous at $\delta=0$. Thus the continuity of $v(\delta)$ at $\delta=0$ is equivalent to the lower semi-continuity of $v(\delta)$ at $\delta=0$. Set
and
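For intuition (the displayed definition above is not recoverable here, so the standard perturbation function $v(\delta)=\inf\{f(x):g_{i}(x)\le\delta,\ i=1,\dots,m\}$ is assumed), the perturbation function of a toy instance can be tabulated on a grid and seen to be nonincreasing with optimal value $v(0)$:

```python
# toy instance (example data): min (x-2)^2  s.t.  x - 1 <= 0, relaxed to x - 1 <= delta
# closed form: v(delta) = (1 - delta)^2 for delta < 1, and 0 afterwards
def v(delta):
    xs = [i / 1000.0 for i in range(-2000, 4001)]          # evaluation grid
    return min((x - 2.0) ** 2 for x in xs if x - 1.0 <= delta)

deltas = [0.0, 0.1, 0.2, 0.5, 1.0]
vals = [v(d) for d in deltas]
assert all(vals[i] >= vals[i + 1] for i in range(len(vals) - 1))  # v is nonincreasing
assert abs(v(0.0) - 1.0) < 1e-9   # optimal value of the toy problem is v(0) = 1
```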
Now we give the following lemma.
Lemma 3.1. The sequence generated by Algorithm 3.1 converges to 0.
Proof. Assume to the contrary that the sequence does not converge to 0. Since it decreases monotonically, there exists a such that , . It follows from Step 2 of Algorithm 3.1 that , , , and .
Let , then . By Proposition 2.1, we know that ,
(8)
where is given by Proposition 2.1.
For sufficiently large , by Proposition 2.1, we have that
The last inequality follows from the property (b1) of Proposition 2.1, where . Since , the right side of the above last inequality tends to $+\infty$, which contradicts (8). Therefore the sequence generated by Algorithm 3.1 converges to 0.
Lemma 3.2. For any $\delta>0$, for all sufficiently large $k$, the iterate $x^{k}$ lies in the relaxed feasible set.
Proof. Assume to the contrary that there exists a $\delta_{0}>0$ and a subsequence such that , , but . Then there exists a subsequence and an index such that ,
(9)
Then by Lemma 3.1, we have for sufficiently large that . It then follows from Step 2 of Algorithm 3.1 that .
By (9) and the property (b1) of Proposition 2.1, for sufficiently large $k$, we have that
(10)
Then since , the right side of the last inequality of (10) tends to $+\infty$.
Let , then . By Proposition 2.1, we know that ,
(11)
which contradicts (10).
Theorem 3.1 (Perturbation Theorem). Assume that is a sequence generated by Algorithm 3.1. Then it holds that
1)
2)
3)
Proof. Since the perturbation function is monotonically decreasing on $[0,+\infty)$, and , , it holds that exists and is finite. By Proposition 2.1, for , , such that
(12)
Choose an such that , and set , then we have
(13)
We then choose and . By the definition of the infimum, for each $k$ there exists a such that
Since , we have that , and then we can obtain that
(14)
On the other hand, for any , by the proof of Lemma 3.2, we have for all sufficiently large $k$ that
(15)
Thus, for any , by the property (b3) of Proposition 2.1, we have that
Letting and taking limits on both sides of the above inequality, we obtain that 1)-3) hold.
Theorem 3.2. Assume that is a sequence generated by Algorithm 3.1. Then every accumulation point of the sequence is an optimal solution of the problem (P).
Proof. By Lemma 3.2, for sufficiently large $k$, we have
(16)
Suppose that $\bar{x}$ is an accumulation point of the sequence. By the continuity of the constraint functions and (16), we know that $\bar{x}$ lies in the relaxed feasible set. Then, by the arbitrariness of the relaxation parameter, $\bar{x}$ is feasible for (P).
By the Perturbation Theorem, we obtain that $\bar{x}$ is an optimal solution of (P).
4. Conclusions
In this paper, we propose a unified path of smooth approximations to the classical nonsmooth penalty function. Our model contains some of the existing models as special cases. In addition, we give a class of relaxed smooth penalty algorithms and prove the convergence of the algorithm under weak conditions.
In future work, we will use the model and algorithm of this paper to carry out numerical experiments and compare them with some existing methods. We also intend to apply the model and algorithm of this paper to the study of power market equilibrium optimization problems.
Funding
This research is supported by the National Natural Science Foundation of China (11771255, 11801325), the Young Innovation Teams of Shandong Province (2019KJI013) and the Natural Science Foundation of Shandong Province (ZR2015AL011).