A New Unified Path to Smoothing Nonsmooth Exact Penalty Function for the Constrained Optimization

Abstract

We propose a new unified path to approximately smoothing the nonsmooth exact penalty function. Based on the new smooth penalty functions, we give a penalty algorithm for solving the constrained optimization problem and discuss its convergence under mild conditions.

Citation:

Liu, B. (2021) A New Unified Path to Smoothing Nonsmooth Exact Penalty Function for the Constrained Optimization. Open Journal of Optimization, 10, 61-70. doi: 10.4236/ojop.2021.103005.

1. Introduction

We consider the following constrained optimization problem

(P) $\begin{array}{ll}\mathrm{min}\hfill & f\left(x\right)\hfill \\ \text{s}\text{.t}\text{.}\hfill & {g}_{j}\left(x\right)\le 0,j=1,2,\cdots ,m,\hfill \end{array}$ (1)

where $f,{g}_{j}:{\Re }^{n}\to \Re ,j=1,2,\cdots ,m$ are continuously differentiable functions. This model has important applications in many fields such as industry, engineering, and computational science. Many optimization methods exist for this kind of problem, and the penalty function method is one of the most important. In a penalty function method, the original constraints are incorporated into a new objective function through a penalty term, and the original constrained optimization problem is thereby transformed into a sequence of unconstrained optimization problems.

Among the many penalty functions that have been proposed, exact penalty functions, such as the ${l}_{1}$ penalty function and the ${l}_{p}$ penalty function, are discussed most often.

The classical ${l}_{1}$ penalty function (Zangwill [1] ) is given as

${L}_{1}\left(x,\beta \right)=f\left(x\right)+\beta \underset{j=1}{\overset{m}{\sum }}\mathrm{max}\left\{{g}_{j}\left(x\right),0\right\},$ (2)

where $\beta >0$ is a penalty parameter. The ${l}_{p}$ penalty function is given as

${L}_{p}\left(x,\beta \right)=f\left(x\right)+\beta {\left[\underset{j=1}{\overset{m}{\sum }}\mathrm{max}\left\{{g}_{j}\left(x\right),0\right\}\right]}^{p},$ (3)

where $\beta >0$ is a penalty parameter and $0<p\le 1$. But these exact penalty functions are often nonsmooth, which hampers the use of fast convergent algorithms such as the conjugate gradient method, the Newton method, and the quasi-Newton method. Many scholars have proposed smooth approximations to the classical exact penalty functions (see [2] - [14] ), and different penalty algorithms have been given for different optimization problems. In [10] [11] and [14], smooth approximations to the ${l}_{1}$ penalty function were proposed for nonlinear inequality constrained optimization problems. Different smoothing penalty functions were also proposed in [13] to solve global optimization problems. To solve the problem (P), [7] proposed two smooth approximations to the exact penalty function

${L}_{\frac{1}{2}}\left(x,\beta \right)=f\left(x\right)+\beta \underset{j=1}{\overset{m}{\sum }}\sqrt{{g}_{j}^{+}\left(x\right)}.$

In [6] and [12], some smoothing techniques for the above exact penalty function were also given.
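To fix ideas, the nonsmooth penalties (2) and (3) can be sketched directly. The toy objective and constraint below are our own illustrative choices, not taken from the paper.

```python
# Minimal sketch of the classical l1 penalty (2) and lp penalty (3).
# The instance f, gs below is a hypothetical example for illustration.

def l1_penalty(f, gs, x, beta):
    # L1(x, beta) = f(x) + beta * sum_j max(g_j(x), 0)
    return f(x) + beta * sum(max(g(x), 0.0) for g in gs)

def lp_penalty(f, gs, x, beta, p):
    # Lp(x, beta) = f(x) + beta * [sum_j max(g_j(x), 0)]**p, with 0 < p <= 1
    return f(x) + beta * sum(max(g(x), 0.0) for g in gs) ** p

f = lambda x: x * x + 2.0    # toy objective
gs = [lambda x: 1.0 - x]     # toy constraint g(x) = 1 - x <= 0, i.e. x >= 1

# at the feasible point x = 1 both penalties reduce to f(1) = 3
assert l1_penalty(f, gs, 1.0, 10.0) == 3.0
assert lp_penalty(f, gs, 1.0, 10.0, 0.5) == 3.0
# an infeasible point is penalized: max(g(0), 0) = 1
assert l1_penalty(f, gs, 0.0, 10.0) == 12.0
```

Both penalties agree with f at feasible points and grow with the constraint violation; the max(·, 0) term is what makes them nonsmooth on the boundary ${g}_{j}\left(x\right)=0$.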

Smoothed penalty methods can also be applied to large-scale optimization problems, such as the network-structured problems and minimax problems in [3] and the traffic flow network models in [8].

In [5], a family of smoothing functions for the ${l}_{1}$ penalty function was given and a simple penalty algorithm was established.

In this paper, a new unified smooth approximation path to the ${l}_{p}$ penalty function is proposed for the problem (P). On the basis of the proposed smoothing penalty functions, a new approximate algorithm is established, and the convergence of the algorithm is discussed under appropriate conditions.

Remark 1 We assume in this paper that

$\underset{x\in {\Re }^{n}}{\mathrm{inf}}f\left(x\right)>0.$ (4)

This assumption is mild: if it is not satisfied, we can replace $f\left(x\right)$ by ${\text{e}}^{f\left(x\right)}+1$.

2. Approximately Smoothing Exact Penalty Functions

For the ${l}_{p}$ penalty function (3), we give a new family of smooth approximations in this section as follows:

${L}_{p}\left(x,\beta ,r\right)=f\left(x\right)+{\left[r\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }\psi \left(\frac{{\beta }^{\frac{1}{p}}{g}_{j}\left(x\right)}{r}\right)\right]}^{p},$ (5)

where $r>0$ is a parameter and $\psi :\Re \to {\Re }_{+}$ is a continuously differentiable function with $\psi \left(t\right)\ge 0$ for any $t\in \Re$.

Here we assume the function $\psi$ satisfies the following properties:

(a1) $\psi \left(\cdot \right)$ is monotonically increasing, and ${\psi }^{\prime }\left(0\right)>0$ ;

(a2) $\underset{t\to +\infty }{\mathrm{lim}}\frac{\psi \left(t\right)}{t}=1$.

It is easy to show that the following functions are all examples of the function $\psi \left(t\right)$.

${\psi }_{1}\left(t\right)=\left\{\begin{array}{ll}2{\text{e}}^{t},\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t<0;\hfill \\ t+\mathrm{log}\left(1+t\right)+2,\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t\ge 0;\hfill \end{array}$

${\psi }_{2}\left(t\right)=\left\{\begin{array}{ll}0,\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t<0;\hfill \\ \frac{2}{3}{t}^{2},\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}0\le t\le 1;\hfill \\ t-\frac{1}{3}{\text{e}}^{1-t},\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t>1;\hfill \end{array}$

${\psi }_{3}\left(t\right)=\mathrm{log}\left(1+{\text{e}}^{t}\right);$

${\psi }_{4}\left(t\right)=\frac{\sqrt{{t}^{2}+4}+t}{2};$

${\psi }_{5}\left(t\right)=\left\{\begin{array}{ll}\frac{1}{2}{\text{e}}^{t},\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t\le 0;\hfill \\ \frac{1}{2}{\text{e}}^{-t}+t,\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t>0;\hfill \end{array}$

${\psi }_{6}\left(t\right)=\left\{\begin{array}{ll}{\text{e}}^{t}+1,\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t\le 0;\hfill \\ t+2,\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t>0;\hfill \end{array}$

${\psi }_{7}\left(t\right)=\left\{\begin{array}{ll}0,\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t<-1;\hfill \\ \frac{{\left(t+1\right)}^{2}}{4},\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}-1\le t\le 1;\hfill \\ t,\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}t>1.\hfill \end{array}$
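As a numerical sanity check (ours, not part of the paper), the following sketch verifies properties (a1) and (a2) on a grid for three of the listed functions, ${\psi }_{1}$, ${\psi }_{3}$, and ${\psi }_{7}$.

```python
import math

# Grid-based check of (a1) (monotone increasing, psi'(0) > 0) and
# (a2) (psi(t)/t -> 1 as t -> +infinity) for psi_1, psi_3, psi_7.

def psi1(t):
    return 2 * math.exp(t) if t < 0 else t + math.log(1 + t) + 2

def psi3(t):
    # softplus, written stably: log1p(exp(t)) for t <= 0, t + log1p(exp(-t)) for t > 0
    return math.log1p(math.exp(t)) if t <= 0 else t + math.log1p(math.exp(-t))

def psi7(t):
    if t < -1:
        return 0.0
    if t <= 1:
        return (t + 1) ** 2 / 4
    return t

for psi in (psi1, psi3, psi7):
    # (a1): nondecreasing on a grid, and psi'(0) > 0 via a central difference
    grid = [x / 10 for x in range(-50, 51)]
    assert all(psi(a) <= psi(b) + 1e-12 for a, b in zip(grid, grid[1:]))
    h = 1e-6
    assert (psi(h) - psi(-h)) / (2 * h) > 0
    # (a2): psi(t)/t close to 1 for large t
    assert abs(psi(1e8) / 1e8 - 1) < 1e-6

print("properties (a1), (a2) hold on the tested grid")
```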

From (a1) and (a2), it follows that

$\underset{r\to {0}^{+}}{\mathrm{lim}}r\psi \left(\frac{t}{r}\right)={t}^{+},$

where ${t}^{+}=\mathrm{max}\left\{0,\text{\hspace{0.17em}}t\right\}$.

It follows that

${L}_{p}\left(x,\beta ,r\right)=f\left(x\right)+{\left[r\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }\psi \left(\frac{{\beta }^{\frac{1}{p}}{g}_{j}\left(x\right)}{r}\right)\right]}^{p}\to f\left(x\right)+\beta {\left[\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }{g}_{j}^{+}\left(x\right)\right]}^{p},\left(r\to {0}^{+}\right).$
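A small sketch (our own) illustrating this limit with the softplus choice ${\psi }_{3}$: for this particular $\psi$, the smoothing error $r\psi \left(t/r\right)-{t}^{+}$ equals $r\mathrm{log}\left(1+{\text{e}}^{-|t|/r}\right)$ and is therefore bounded by $r\mathrm{log}2$.

```python
import math

# As r -> 0+, r * psi(t / r) approaches max(t, 0), so the smoothed penalty (5)
# approaches the lp penalty (3). psi here is the softplus psi_3 from the text.

def psi(t):
    return math.log1p(math.exp(t)) if t <= 0 else t + math.log1p(math.exp(-t))

def smoothed(t, r):
    return r * psi(t / r)

for t in (-0.5, 0.0, 0.7):
    errs = [abs(smoothed(t, r) - max(t, 0.0)) for r in (1.0, 0.1, 0.01)]
    assert errs[0] >= errs[1] >= errs[2]          # error shrinks with r
    assert errs[2] <= 0.01 * math.log(2) + 1e-12  # bounded by r * log 2 for softplus
```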

We now turn to the properties of the function $\sigma$ constructed from $\psi \left(\cdot \right)$, given in the following proposition.

Proposition 2.1 If $\psi \left(\cdot \right)$ satisfies the properties (a1) and (a2), then for any $u\in {\Re }^{m}$, $\sigma \left(u\right)=\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }\psi \left({u}_{j}\right)$ satisfies the following properties.

(b1) For any real number $\epsilon >0$, there exists a positive real number ${\eta }_{\epsilon }>0$ such that

$\underset{{c}_{k}\to +\infty }{\mathrm{lim}\mathrm{inf}}\underset{‖{u}^{+}‖\ge \epsilon }{\mathrm{inf}}\frac{\sigma \left({c}_{k}u\right)}{{c}_{k}}\ge {\eta }_{\epsilon },$

where ${u}_{j}^{+}=\mathrm{max}\left\{0,\text{\hspace{0.17em}}{u}_{j}\right\}$ and ${u}^{+}={\left({u}_{1}^{+},{u}_{2}^{+},\cdots ,{u}_{m}^{+}\right)}^{\text{T}}$.

(b2) For ${c}_{k}\to +\infty \left(k\to \infty \right)$, there exist ${\epsilon }_{k}\to {0}^{+}\left(k\to \infty \right)$ such that

$\underset{k\to \infty }{\mathrm{lim}\mathrm{sup}}\underset{‖{u}^{+}‖\le {\epsilon }_{k}}{\mathrm{sup}}\frac{\sigma \left({c}_{k}u\right)}{{c}_{k}}=0.$

(b3) There exists a constant ${\sigma }_{0}$ such that

$\sigma \left(u\right)\ge {\sigma }_{0}\text{ for any }u\in {\Re }^{m}.$

(b4) There exists a constant ${\sigma }_{1}$ such that

$\sigma \left(u\right)\le {\sigma }_{1}\text{ for any }u\le 0.$

Proof. We first show that $\sigma \left(u\right)=\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }\psi \left({u}_{j}\right)$ satisfies the property (b1).

For $‖{u}^{+}‖\ge \epsilon$, there exists a ${j}_{0}$ such that ${u}_{{j}_{0}}\ge \frac{\epsilon }{\sqrt{m}}$. Otherwise, if ${u}_{j}<\frac{\epsilon }{\sqrt{m}}$ for all j, then $‖{u}^{+}‖=\sqrt{{\sum }_{j=1}^{m}{\left({u}_{j}^{+}\right)}^{2}}<\epsilon$, a contradiction. Since $\psi \left(\cdot \right)$ is a monotonically increasing positive function, we have that

$\begin{array}{c}\underset{‖{u}^{+}‖\ge \epsilon }{\mathrm{inf}}\frac{\sigma \left({c}_{k}u\right)}{{c}_{k}}=\underset{‖{u}^{+}‖\ge \epsilon }{\mathrm{inf}}\frac{1}{{c}_{k}}\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }\psi \left({c}_{k}{u}_{j}\right)\\ \ge \underset{{u}_{{j}_{0}}\ge \frac{\epsilon }{\sqrt{m}}}{\mathrm{inf}}\frac{1}{{c}_{k}}\psi \left({c}_{k}{u}_{{j}_{0}}\right)\\ =\frac{1}{{c}_{k}}\psi \left({c}_{k}\frac{\epsilon }{\sqrt{m}}\right),\end{array}$

where the inequality follows from the positivity of $\psi \left(\cdot \right)$, and the last equality follows from the monotonicity of $\psi \left(\cdot \right)$.

Again by the property (a2) of $\psi \left(\cdot \right)$, we obtain that

$\begin{array}{c}\underset{{c}_{k}\to +\infty }{\mathrm{lim}\mathrm{inf}}\underset{‖{u}^{+}‖\ge \epsilon }{\mathrm{inf}}\frac{\sigma \left({c}_{k}u\right)}{{c}_{k}}\ge \underset{{c}_{k}\to +\infty }{\mathrm{lim}\mathrm{inf}}\frac{1}{{c}_{k}}\psi \left({c}_{k}\frac{\epsilon }{\sqrt{m}}\right)\\ =\underset{{c}_{k}\to +\infty }{\mathrm{lim}\mathrm{inf}}\frac{\sqrt{m}}{{c}_{k}\epsilon }\psi \left({c}_{k}\frac{\epsilon }{\sqrt{m}}\right)\frac{\epsilon }{\sqrt{m}}\\ =\frac{\epsilon }{\sqrt{m}}.\end{array}$

Letting ${\eta }_{\epsilon }=\frac{\epsilon }{\sqrt{m}}$, we conclude that $\sigma \left(u\right)$ satisfies property (b1).

We now show that $\sigma \left(u\right)=\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }\psi \left({u}_{j}\right)$ satisfies the property (b2).

For ${c}_{k}\to +\infty \left(k\to \infty \right)$, set ${\epsilon }_{k}=\frac{1}{{c}_{k}}$. Since $\psi \left(\cdot \right)$ is increasing, we have

$\begin{array}{c}\underset{‖{u}^{+}‖\le {\epsilon }_{k}}{\mathrm{sup}}\sigma \left({c}_{k}u\right)=\underset{‖{u}^{+}‖\le {\epsilon }_{k}}{\mathrm{sup}}\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }\text{ }\psi \left({c}_{k}{u}_{j}\right)\\ \le \underset{{u}_{j}\le {\epsilon }_{k},j=1,\cdots ,m}{\mathrm{sup}}\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }\text{ }\psi \left({c}_{k}{u}_{j}\right)\\ =m\psi \left({c}_{k}{\epsilon }_{k}\right)\\ =m\psi \left(1\right).\end{array}$

Since $\psi \left(\cdot \right)\ge 0$ and $\frac{m\psi \left(1\right)}{{c}_{k}}\to 0\left(k\to \infty \right)$, $\sigma \left(u\right)$ satisfies property (b2).

Since $\psi \left(\cdot \right)\ge 0$ and $\psi \left(\cdot \right)$ is increasing, we can easily get the properties (b3) and (b4). $\square$

3. Smooth Penalty Algorithm and Its Convergence

We propose an algorithm based on the penalty function ${L}_{p}\left(x,\beta ,r\right)$ and discuss its global convergence.

Algorithm 3.1 Step 0. Let ${\beta }_{0}=1$, ${r}_{0}=1$, ${\omega }_{0}=1$, and set $k:=0$.

Step 1. Find an

${x}^{k}\in \mathrm{arg}\underset{x\in {\Re }^{n}}{\mathrm{min}}{L}_{p}\left(x,{\beta }_{k},{r}_{k}\right),$ (6)

or ${x}^{k}$ satisfies the following inequality

${L}_{p}\left({x}^{k},{\beta }_{k},{r}_{k}\right)\le \underset{x\in {\Re }^{n}}{\mathrm{inf}}{L}_{p}\left(x,{\beta }_{k},{r}_{k}\right)+{\omega }_{k}.$ (7)

Step 2. Let

${r}_{k+1}=\left\{\begin{array}{ll}\frac{1}{2}{r}_{k},\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}0\le ‖{g}^{+}\left({x}^{k}\right)‖\le {r}_{k};\hfill \\ {r}_{k},\hfill & \text{ }\text{otherwise}.\text{ }\hfill \end{array}$

${\beta }_{k+1}=\left\{\begin{array}{ll}{\beta }_{k},\hfill & \text{ }\text{if}\text{ }\text{\hspace{0.17em}}‖{g}^{+}\left({x}^{k}\right)‖=0;\hfill \\ 2{\beta }_{k},\hfill & \text{ }\text{otherwise}.\text{ }\hfill \end{array}$

Step 3. Set ${\omega }_{k+1}=\frac{1}{2}{\omega }_{k}$, $k:=k+1$, and return to Step 1.
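The steps above can be sketched as a runnable program on a one-dimensional toy instance (our own choice, satisfying assumption (4)): $f\left(x\right)={x}^{2}+2$ and $g\left(x\right)=1-x\le 0$, whose optimal solution is ${x}^{*}=1$. We take $p=1$ and $\psi ={\psi }_{3}$ (softplus); the unconstrained minimization in Step 1 is done by ternary search, standing in for any smooth solver, which makes the tolerance ${\omega }_{k}$ effectively zero.

```python
import math

# Sketch of Algorithm 3.1 on a hypothetical 1-D instance: min x^2 + 2 s.t. 1 - x <= 0.

def softplus(t):
    # psi_3, written stably for very large |t|
    return math.log1p(math.exp(t)) if t <= 0 else t + math.log1p(math.exp(-t))

def f(x):
    return x * x + 2.0

def g(x):
    return 1.0 - x

def L(x, beta, r):
    # smoothed penalty (5) with p = 1: f(x) + r * psi(beta * g(x) / r)
    return f(x) + r * softplus(beta * g(x) / r)

def argmin(phi, lo=-10.0, hi=10.0, iters=200):
    # ternary search; phi is convex on [lo, hi] for this instance
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

beta, r = 1.0, 1.0                         # Step 0
x = 0.0
for _ in range(30):
    x = argmin(lambda z: L(z, beta, r))    # Step 1 (near-exact inner solve)
    viol = max(g(x), 0.0)                  # ||g^+(x^k)|| in dimension one
    if viol <= r:                          # Step 2: shrink the smoothing parameter
        r *= 0.5
    if viol > 0.0:                         # Step 2: increase the penalty parameter
        beta *= 2.0

print(x)  # close to the optimal solution x* = 1
```

In the run above, ${\beta }_{k}$ is doubled only while the iterate is infeasible and ${r}_{k}$ is halved whenever the violation drops below ${r}_{k}$, so the iterates, initially infeasible, are driven toward the feasible optimum ${x}^{*}=1$.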

Now we study the global convergence of the algorithm. For an $\epsilon \ge 0$, we define the relaxed feasible set of the problem (P) by

${\Omega }_{\epsilon }=\left\{x\in {\Re }^{n}|{g}_{j}\left(x\right)\le \epsilon ,j=1,\cdots ,m\right\}.$

Thus ${\Omega }_{0}$ is the feasible set of (P). In this paper, we always suppose that ${\Omega }_{0}\ne \varnothing$. We denote the optimal solution set of (P) by ${\Omega }_{0}^{*}$.

The perturbation function of (P) is defined as

${\theta }_{f}\left(\epsilon \right)=\underset{x\in {\Omega }_{\epsilon }}{\mathrm{inf}}f\left(x\right).$

Then the optimal value of (P) is

${\theta }_{f}\left(0\right)=\underset{x\in {\Omega }_{0}}{\mathrm{inf}}f\left(x\right).$

It can easily be shown that ${\theta }_{f}\left(\epsilon \right)$ is upper semicontinuous at $\epsilon =0$. Thus the continuity of ${\theta }_{f}\left(\epsilon \right)$ at $\epsilon =0$ is equivalent to its lower semicontinuity at $\epsilon =0$. Set

${F}_{\epsilon }=\left\{x\in {\Re }^{n}|f\left(x\right)\le {\theta }_{f}\left(0\right)+\epsilon \right\}$

and

${S}_{k}\left(\epsilon \right)=\left\{x\in {\Re }^{n}|{L}_{p}\left(x,{\beta }_{k},{r}_{k}\right)\le \underset{z\in {\Re }^{n}}{\mathrm{inf}}{L}_{p}\left(z,{\beta }_{k},{r}_{k}\right)+\epsilon \right\}.$

Now we give the following lemma.

Lemma 3.1 The sequence $\left\{{r}_{k}\right\}$ generated by Algorithm 3.1 converges to 0.

Proof. Assume to the contrary that $\left\{{r}_{k}\right\}$ does not converge to 0. Since $\left\{{r}_{k}\right\}$ is monotonically nonincreasing, there exists a ${k}_{0}$ such that ${r}_{k}={r}_{{k}_{0}}$ for all $k\ge {k}_{0}$. It then follows from Step 2 of Algorithm 3.1 that $‖{g}^{+}\left({x}^{k}\right)‖>{r}_{k}={r}_{{k}_{0}}$ for all $k\ge {k}_{0}$, and hence $\underset{k\to \infty }{\mathrm{lim}}{\beta }_{k}=+\infty$.

Let $\stackrel{¯}{x}\in {\Omega }_{0}$, then $g\left(\stackrel{¯}{x}\right)\le 0$. By (7) and property (b4) of Proposition 2.1, we know that $\forall k\ge {k}_{0}$,

$\begin{array}{c}{L}_{p}\left({x}^{k},{\beta }_{k},{r}_{{k}_{0}}\right)\le \underset{x\in {\Re }^{n}}{\mathrm{inf}}{L}_{p}\left(x,{\beta }_{k},{r}_{k}\right)+{\omega }_{k}\\ \le {L}_{p}\left(\stackrel{¯}{x},{\beta }_{k},{r}_{{k}_{0}}\right)+{\omega }_{k}\\ =f\left(\stackrel{¯}{x}\right)+{\left[{r}_{{k}_{0}}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left(\stackrel{¯}{x}\right)}{{r}_{{k}_{0}}}\right)\right]}^{p}+{\omega }_{k}\\ \le f\left(\stackrel{¯}{x}\right)+{\left[{r}_{{k}_{0}}{\sigma }_{1}\right]}^{p}+{\omega }_{k}.\end{array}$ (8)

where ${\sigma }_{1}$ is given by property (b4) of Proposition 2.1.

For sufficiently large $k\ge {k}_{0}$, by (4) and property (b1) of Proposition 2.1, we have that

$\begin{array}{c}{L}_{p}\left({x}^{k},{\beta }_{k},{r}_{{k}_{0}}\right)=f\left({x}^{k}\right)+{\left[{r}_{{k}_{0}}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left({x}^{k}\right)}{{r}_{{k}_{0}}}\right)\right]}^{p}\\ \ge {\left[{r}_{{k}_{0}}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left({x}^{k}\right)}{{r}_{{k}_{0}}}\right)\right]}^{p}\\ \ge {\beta }_{k}\underset{‖{g}^{+}\left(x\right)‖>{r}_{{k}_{0}}}{\mathrm{inf}}{\left[\frac{{r}_{{k}_{0}}}{{\beta }_{k}^{\frac{1}{p}}}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left(x\right)}{{r}_{{k}_{0}}}\right)\right]}^{p}\\ \ge \frac{1}{2}{\beta }_{k}{\eta }_{{r}_{{k}_{0}}}^{p}.\end{array}$

The last inequality follows from property (b1) of Proposition 2.1 with ${c}_{k}=\frac{{\beta }_{k}^{\frac{1}{p}}}{{r}_{{k}_{0}}}$. Since $\underset{k\to \infty }{\mathrm{lim}}{\beta }_{k}=+\infty$, the right-hand side of the last inequality tends to $\infty$, which contradicts (8). So the sequence $\left\{{r}_{k}\right\}$ generated by Algorithm 3.1 converges to 0. $\square$

Lemma 3.2 For any $\epsilon >0$ and all sufficiently large k, it holds that ${S}_{k}\left(\epsilon \right)\subseteq {\Omega }_{\epsilon }$.

Proof. Assume to the contrary that there exists an ${\epsilon }_{0}>0$ and a subsequence $K\subset N$ such that $\forall k\in K$, $\exists {z}^{k}\in {S}_{k}\left({\epsilon }_{0}\right)$, but ${z}^{k}\notin {\Omega }_{{\epsilon }_{0}}$. Then there exists a subsequence ${K}_{0}\subseteq K$ and an index ${j}_{0}\in \left\{1,2,\cdots ,m\right\}$ such that $\forall k\in {K}_{0}$,

${g}_{{j}_{0}}\left({z}^{k}\right)>{\epsilon }_{0}.$ (9)

Then by Lemma 3.1, we have for sufficiently large $k\in {K}_{0}$ that $‖{g}^{+}\left({z}^{k}\right)‖>{\epsilon }_{0}\ge {r}_{k}$. Then it follows from Step 2 of Algorithm 3.1 that $\underset{k\to \infty }{\mathrm{lim}}{\beta }_{k}=+\infty$.

By (9) and the property (b1) of Proposition 2.1, for sufficiently large k, we have that

$\begin{array}{c}\underset{x\in {\Re }^{n}}{\mathrm{inf}}{L}_{p}\left(x,{\beta }_{k},{r}_{k}\right)+{\epsilon }_{0}\ge {L}_{p}\left({z}^{k},{\beta }_{k},{r}_{k}\right)\\ =f\left({z}^{k}\right)+{\left[{r}_{k}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left({z}^{k}\right)}{{r}_{k}}\right)\right]}^{p}\\ \ge {\beta }_{k}\underset{‖{g}^{+}\left(x\right)‖\ge {\epsilon }_{0}}{\mathrm{inf}}{\left[\frac{{r}_{k}}{{\beta }_{k}^{\frac{1}{p}}}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left(x\right)}{{r}_{k}}\right)\right]}^{p}\\ \ge \frac{1}{2}{\beta }_{k}{\eta }_{{\epsilon }_{0}}^{p}.\end{array}$ (10)

Then by $\underset{k\to \infty }{\mathrm{lim}}{\beta }_{k}=+\infty$, the right side of the last inequality of (10) goes to $\infty$.

Let $\stackrel{¯}{x}\in {\Omega }_{0}$, then $g\left(\stackrel{¯}{x}\right)\le 0$. By property (b4) of Proposition 2.1, we know that for all k,

$\begin{array}{c}\underset{x\in {\Re }^{n}}{\mathrm{inf}}{L}_{p}\left(x,{\beta }_{k},{r}_{k}\right)+{\epsilon }_{0}\le {L}_{p}\left(\stackrel{¯}{x},{\beta }_{k},{r}_{k}\right)+{\epsilon }_{0}\\ =f\left(\stackrel{¯}{x}\right)+{\left[{r}_{k}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left(\stackrel{¯}{x}\right)}{{r}_{k}}\right)\right]}^{p}+{\epsilon }_{0}\\ \le f\left(\stackrel{¯}{x}\right)+{\left[{r}_{k}{\sigma }_{1}\right]}^{p}+{\epsilon }_{0},\end{array}$ (11)

which contradicts (10). $\square$

Theorem 3.1 (Perturbation Theorem) Assume that $\left\{{x}^{k}\right\}$ is a sequence generated by Algorithm 3.1, then it holds that

1) $\underset{k\to \infty }{\mathrm{lim}}f\left({x}^{k}\right)=\underset{\epsilon \to {0}^{+}}{\mathrm{lim}}{\theta }_{f}\left(\epsilon \right);$

2) $\underset{k\to \infty }{\mathrm{lim}}{L}_{p}\left({x}^{k},{\beta }_{k},{r}_{k}\right)=\underset{\epsilon \to {0}^{+}}{\mathrm{lim}}{\theta }_{f}\left(\epsilon \right);$

3) $\underset{k\to \infty }{\mathrm{lim}}{\left[{r}_{k}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left({x}^{k}\right)}{{r}_{k}}\right)\right]}^{p}=0.$

Proof. Since the perturbation function ${\theta }_{f}\left(\epsilon \right)$ is monotonically nonincreasing in $\epsilon$ and ${\theta }_{f}\left(\epsilon \right)\le {\theta }_{f}\left(0\right)$ for all $\epsilon >0$, the limit $\underset{\epsilon \to {0}^{+}}{\mathrm{lim}}{\theta }_{f}\left(\epsilon \right)$ exists and is finite. By property (b2) of Proposition 2.1, for $\frac{1}{{r}_{k}}\to +\infty \left(k\to \infty \right)$, there exist ${\epsilon }_{k}\to {0}^{+}$ such that

$\underset{k\to \infty }{\mathrm{lim}\mathrm{sup}}\underset{‖{u}^{+}‖\le {\epsilon }_{k}}{\mathrm{sup}}{r}_{k}\sigma \left(\frac{u}{{r}_{k}}\right)=0.$ (12)

Choose an ${{\epsilon }^{\prime }}_{k}>0$ such that ${\beta }_{k}^{\frac{1}{p}}{{\epsilon }^{\prime }}_{k}\le {\epsilon }_{k}$, and set ${\stackrel{¯}{\epsilon }}_{k}=\frac{{{\epsilon }^{\prime }}_{k}}{\sqrt{m}}$. Then, since ${\stackrel{¯}{\epsilon }}_{k}\to {0}^{+}\left(k\to \infty \right)$, we have

$\underset{k\to \infty }{\mathrm{lim}}{\theta }_{f}\left({\stackrel{¯}{\epsilon }}_{k}\right)=\underset{\epsilon \to {0}^{+}}{\mathrm{lim}}{\theta }_{f}\left(\epsilon \right).$ (13)

Next, choose ${\delta }_{k}>0$ with ${\delta }_{k}\to 0\left(k\to \infty \right)$. By the definition of the infimum, for each k there exists a ${z}^{k}\in {\Omega }_{{\stackrel{¯}{\epsilon }}_{k}}$ such that

$f\left({z}^{k}\right)\le {\theta }_{f}\left({\stackrel{¯}{\epsilon }}_{k}\right)+{\delta }_{k}.$

Since ${z}^{k}\in {\Omega }_{{\stackrel{¯}{\epsilon }}_{k}}$, we have ${g}_{j}\left({z}^{k}\right)\le {\stackrel{¯}{\epsilon }}_{k}=\frac{{{\epsilon }^{\prime }}_{k}}{\sqrt{m}},j=1,\cdots ,m$, and hence $‖{g}^{+}\left({z}^{k}\right)‖\le {{\epsilon }^{\prime }}_{k}$, so we can obtain that

${\beta }_{k}^{\frac{1}{p}}‖{g}^{+}\left({z}^{k}\right)‖\le {\beta }_{k}^{\frac{1}{p}}{{\epsilon }^{\prime }}_{k}\le {\epsilon }_{k}.$ (14)

On the other hand, for any $\epsilon >0$, by the proof of Lemma 3.2, we have for all sufficiently large k that

${x}^{k}\in {\Omega }_{\epsilon }.$ (15)

Thus, for any $\epsilon >0$, by property (b3) of Proposition 2.1, we have that

$\begin{array}{c}{\theta }_{f}\left(\epsilon \right)\le f\left({x}^{k}\right)\\ \le f\left({x}^{k}\right)+{\left[{r}_{k}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left({x}^{k}\right)}{{r}_{k}}\right)\right]}^{p}-{\left[{r}_{k}{\sigma }_{0}\right]}^{p}\\ ={L}_{p}\left({x}^{k},{\beta }_{k},{r}_{k}\right)-{\left[{r}_{k}{\sigma }_{0}\right]}^{p}\\ \le \underset{x\in {\Re }^{n}}{\mathrm{inf}}{L}_{p}\left(x,{\beta }_{k},{r}_{k}\right)+{\omega }_{k}-{\left[{r}_{k}{\sigma }_{0}\right]}^{p}\\ \le f\left({z}^{k}\right)+{\left[{r}_{k}\sigma \left(\frac{{\beta }_{k}^{\frac{1}{p}}g\left({z}^{k}\right)}{{r}_{k}}\right)\right]}^{p}+{\omega }_{k}-{\left[{r}_{k}{\sigma }_{0}\right]}^{p}\\ \le {\theta }_{f}\left({\stackrel{¯}{\epsilon }}_{k}\right)+{\delta }_{k}+{\left[{r}_{k}\underset{‖{u}^{+}‖\le {\epsilon }_{k}}{\mathrm{sup}}\sigma \left(\frac{u}{{r}_{k}}\right)\right]}^{p}+{\omega }_{k}-{\left[{r}_{k}{\sigma }_{0}\right]}^{p}.\end{array}$

Letting $k\to \infty$ and taking limits on both sides of the above chain of inequalities, we obtain that 1)-3) hold. $\square$

Theorem 3.2 Assume that $\left\{{x}^{k}\right\}$ is a sequence generated by Algorithm 3.1. Then every accumulation point of $\left\{{x}^{k}\right\}$ is an optimal solution of the problem (P).

Proof. By Lemma 3.2, for any $\epsilon >0$ and all sufficiently large k, we have

${x}^{k}\in {\Omega }_{\epsilon }.$ (16)

Suppose that ${x}^{*}$ is an accumulation point of $\left\{{x}^{k}\right\}$. By the continuity of ${g}_{j}\left(j=1,\cdots ,m\right)$ and (16), we know that ${x}^{*}\in {\Omega }_{\epsilon }$. By the arbitrariness of $\epsilon$, we have that ${x}^{*}\in {\Omega }_{0}$.

By the Perturbation Theorem, we obtain that $f\left({x}^{*}\right)=\underset{k\to \infty }{\mathrm{lim}}f\left({x}^{k}\right)=\underset{\epsilon \to {0}^{+}}{\mathrm{lim}}{\theta }_{f}\left(\epsilon \right)\le {\theta }_{f}\left(0\right)$. Since ${x}^{*}\in {\Omega }_{0}$, we also have $f\left({x}^{*}\right)\ge {\theta }_{f}\left(0\right)$, so $f\left({x}^{*}\right)={\theta }_{f}\left(0\right)$; that is, ${x}^{*}$ is an optimal solution of (P). $\square$

4. Conclusions

In this paper, we propose a unified path of smooth approximation for the classical nonsmooth penalty function; our model contains some existing models as special cases. In addition, we give a class of relaxed smooth penalty algorithms and prove the convergence of the algorithm under mild conditions.

In future work, we will carry out numerical experiments with the model and algorithm of this paper and compare them with some existing methods. We also plan to apply the model and algorithm to power market equilibrium optimization problems.

Fund

This research is supported by National Natural Science Foundation of China (11771255, 11801325), Young Innovation Teams of Shandong Province (2019KJI013) and the Natural Science Foundations of Shandong Province (ZR2015AL011).

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

 [1] Zangwill, W.I. (1967) Non-Linear Programming via Penalty Functions. Management Science, 13, 344-358. https://doi.org/10.1287/mnsc.13.5.344
 [2] Ben-Tal, A. and Teboulle, M. (1989) A Smoothing Technique for Nondifferentiable Optimization Problems. Lecture Notes in Mathematics, Vol. 1405, Springer-Verlag, Berlin, 1-11. https://doi.org/10.1007/BFb0083582
 [3] Pinar, M.C. and Zenios, S.A. (1994) On Smoothing Exact Penalty Functions for Convex Constrained Optimization. SIAM Journal on Optimization, 4, 486-511. https://doi.org/10.1137/0804027
 [4] Auslender, A., Cominetti, R. and Haddou, M. (1997) Asymptotic Analysis for Penalty and Barrier Methods in Convex and Linear Programming. Mathematics of Operations Research, 22, 43-62. https://doi.org/10.1287/moor.22.1.43
 [5] Gonzaga, C.C. and Castillo, R.A. (2003) A Nonlinear Programming Algorithm Based on Non-Coercive Penalty Functions. Mathematical Programming, 96, 87-101. https://doi.org/10.1007/s10107-002-0332-z
 [6] Wu, Z.Y., Bai, F.S., Yang, X.Q. and Zhang, L.S. (2004) An Exact Lower Order Penalty Function and Its Smoothing in Nonlinear Programming. Optimization, 53, 51-68. https://doi.org/10.1080/02331930410001662199
 [7] Meng, Z.Q., Dang, C.Y. and Yang, X.Q. (2006) On the Smoothing of the Square-Root Exact Penalty Function for Inequality Constrained Optimization. Computational Optimization and Applications, 35, 375-398. https://doi.org/10.1007/s10589-006-8720-6
 [8] Herty, M., Klar, A., Singh, A.K. and Spellucci, P. (2007) Smoothed Penalty Algorithms for Optimization of Nonlinear Models. Computational Optimization and Applications, 37, 157-176. https://doi.org/10.1007/s10589-007-9011-6
 [9] Di Pillo, G., Lucidi, S. and Rinaldi, F. (2012) An Approach to Constrained Global Optimization Based on Exact Penalty Functions. Journal of Global Optimization, 54, 251-260. https://doi.org/10.1007/s10898-010-9582-0
 [10] Lian, S.J. (2012) Smoothing Approximation to l1 Exact Penalty Function for Inequality Constrained Optimization. Applied Mathematics and Computation, 219, 3113-3121. https://doi.org/10.1016/j.amc.2012.09.042
 [11] Xu, X.S., Meng, Z.Q., Sun, J.W., Huang, L.G. and Shen, R. (2013) A Second-Order Smooth Penalty Function Algorithm for Constrained Optimization Problems. Computational Optimization and Applications, 55, 155-172. https://doi.org/10.1007/s10589-012-9504-9
 [12] Lian, S.J. and Duan, Y.Q. (2016) Smoothing of the Lower-Order Exact Penalty Function for Inequality Constrained Optimization. Journal of Inequalities and Applications, 185, 1-12. https://doi.org/10.1186/s13660-016-1126-9
 [13] Wu, Z.Y., Lee, H.W.J., Bai, F.S. and Zhang, L.S. (2005) Quadratic Smoothing Approximation to l1 Exact Penalty Function in Global Optimization. Journal of Industrial and Management Optimization, 1, 533-547. https://doi.org/10.3934/jimo.2005.1.533
 [14] Liu, B.Z. (2019) A Smoothing Penalty Function Method for the Constrained Optimization Problem. Open Journal of Optimization, 8, 113-126. https://doi.org/10.4236/ojop.2019.84010