The Second-Order Differential Equation System with the Feedback Controls for Solving Convex Programming

Abstract

In this paper, we establish a second-order differential equation system with feedback controls for solving the convex programming problem. Using the Lagrange function and the projection operator, equivalent operator equations for the convex programming problem are obtained under certain conditions. A second-order differential equation system with feedback controls is then constructed on the basis of the operator equations. We prove that any accumulation point of the trajectory of this system is a solution to the convex programming problem. In the end, two examples are solved using this differential equation system. The numerical results are reported to verify the effectiveness of the second-order differential equation system with the feedback controls for solving the convex programming problem.

Share and Cite:

Chen, X. , Wang, L. , Sun, J. and Yuan, Y. (2022) The Second-Order Differential Equation System with the Feedback Controls for Solving Convex Programming. Open Journal of Applied Sciences, 12, 977-989. doi: 10.4236/ojapps.2022.126067.

1. Introduction

We consider the problem of convex programming, which is to find a vector ${x}^{\ast }\in \Omega$ such that

${x}^{\ast }\in \underset{}{\mathrm{arg}\mathrm{min}}\left\{f\left(x\right):g\left(x\right)\le 0,x\in Q\right\},$ (1.1)

where $f:{\Re }^{n}\to \Re$ and $g:{\Re }^{n}\to {\Re }^{m}$ are two mappings, Q is a closed convex set, and “argmin” denotes the set of minimum points.

The Lagrange function of the problem (1.1) is $\mathcal{L}\left(x,p\right)=f\left(x\right)+〈p,g\left(x\right)〉$, where $x\in Q\subseteq {\Re }^{n}$, $p\in P\subseteq {\Re }^{m}$ and P is a convex closed set. Then we know that $\mathcal{L}\left(x,p\right)$ is a function convex in x and concave in p. In the general case, if $\left({x}^{\ast },{p}^{\ast }\right)$ is the solution to the problem (1.1), it satisfies the following inequalities

$\mathcal{L}\left({x}^{\ast },p\right)\le \mathcal{L}\left({x}^{\ast },{p}^{\ast }\right)\le \mathcal{L}\left(x,{p}^{\ast }\right).$ (1.2)

More generally, the function $\mathcal{L}\left(x,p\right)$ can be a saddle function.

Convex optimization problems have important applications in many fields. Recently, Wang, Hong and Kai [1] proposed a novel smoothing function method for convex quadratic programming problems with mixed constraints, which have important applications in mechanics and engineering science. The problem is reformulated as a system of non-smooth equations, a smoothing function for this system is proposed, and convergence conditions for the resulting iterative algorithm are given. Asadi, Mansouri and Zangiabadi [2] presented a neighborhood-following primal-dual interior-point algorithm for solving symmetric cone convex quadratic programming problems, where the objective function is a convex quadratic function and the feasible set is the intersection of an affine subspace and a symmetric cone attached to a Euclidean Jordan algebra. Yuan, Zhang and Huang [3] proposed an arc-search interior-point algorithm for convex quadratic programming with a wide neighborhood of the central path, which searches for optimizers along ellipses that approximate the entire central path.

Antipin [4] considered the synthesis of control laws for nonlinear objects whose set of equilibrium states is defined by convex programming problems or degenerate saddle functions. Based on the projection operator, a first-order differential equation system with composite controls was established, and the trajectory of this system was shown to converge monotonically in norm to one of the equilibrium points. It is worth mentioning that the differential equation methods for solving minimization problems and variational inequalities studied by Antipin [5] - [11] differ from the traditional differential equation method and from neural networks: without using a Lyapunov function, but only the properties of the projection operator and the related function, the stationarity of the equilibrium point of the differential equation can be proved, and thus the convergence of the solutions of the primal problems can be obtained. However, Antipin’s work on solving various optimization problems and variational inequality problems by differential equation methods is purely theoretical; no numerical results are given. Building on the above research, this paper continues to use the differential equation method to solve a class of convex optimization problems. In addition to convergence results for the solutions of the associated variational inequalities, numerical examples are given to illustrate the effectiveness of the differential equation method.

Recently, inspired by the ideas of the above research results, Wang et al. [12] - [16] constructed different differential equation systems for solving differential variational inequalities. For example, Wang, Li and Zhang [12] considered the differential equation method for solving box constrained variational inequality problems and proved that the equilibrium solution to the differential equation system is locally asymptotically stable by verifying the locally asymptotical stability of the equilibrium positions of the associated differential inclusion problems. Wang, Chen and Sun [15] established a system of differential equations based on the projection operator for the variational inequality problem with a cyclically monotone mapping. Using an important inequality for cyclically monotone mappings, any accumulation point of the trajectory of the differential equation system was proved to be a solution to the variational inequality problem. Wang, Chen and Sun [16] constructed a second-order differential equation system with a controlled process for solving the variational inequality with constraints and proved that any accumulation point of the trajectory of the second-order differential equation system is a solution to the variational inequality with constraints. Nazemi and Sabeghi [17] [18] applied neural network models to solve convex second-order cone constrained variational inequality problems. Kwelegano et al. [19] studied an approximate solution to the split equality variational inequality problem.

In the next section, based on the saddle-point inequalities (1.2) and the projection operator, the second-order differential equation system with the feedback controls will be established for solving the convex programming problem (1.1). In Section 3, we will prove that any accumulation point of the trajectory of this system is a solution to the convex programming problem. At last, two examples are solved by using this differential equation system, and the numerical results are reported to verify its effectiveness for solving the problem of convex programming (1.1).

2. Preliminaries

The projection operator to a convex set is quite useful for establishing the second-order differential equation system. Now we recall the following definitions.

Let C be a closed convex set. For every $x\in {\Re }^{n}$, there is a unique $\stackrel{^}{x}$ in C such that

$‖x-\stackrel{^}{x}‖=\mathrm{min}\left\{‖x-y‖|y\in C\right\}.$ (2.1)

The point $\stackrel{^}{x}$ is the projection of x onto C, denoted by ${\Pi }_{C}\left(x\right)$. The projection operator ${\Pi }_{C}:{\Re }^{n}\to C$ is well defined over ${\Re }^{n}$ and is a nonexpansive mapping.
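For the two feasible sets used below, a box and the nonnegative orthant, the projection (2.1) has a closed form (componentwise clipping). The following minimal Python sketch, ours rather than the paper's, illustrates this and the nonexpansiveness just mentioned.

```python
import numpy as np

def project_box(x, lo, hi):
    """Pi_C for the box C = [lo, hi]^n: componentwise clipping solves (2.1)."""
    return np.clip(x, lo, hi)

def project_nonneg(p):
    """Pi_+ for the nonnegative orthant: componentwise maximum with zero."""
    return np.maximum(p, 0.0)

# Nonexpansiveness: ||Pi_C(x) - Pi_C(y)|| <= ||x - y||
x, y = np.array([3.0, -5.0]), np.array([-1.0, 2.0])
d_proj = np.linalg.norm(project_box(x, -1.0, 1.0) - project_box(y, -1.0, 1.0))
assert d_proj <= np.linalg.norm(x - y)
print(project_box(x, -1.0, 1.0), project_nonneg(np.array([-2.0, 3.0])))
```

The closed forms are what make the right-hand sides of the differential equations below cheap to evaluate at every time step.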

Lemma 2.1. [20] Let H be a real Hilbert space and $C\subset H$ be a closed convex set. For a given $z\in H$ , $u\in C$ satisfies the inequality

$〈u-z,v-u〉\ge 0,\text{ }\forall v\in C,$ (2.2)

if and only if $u-{\Pi }_{C}\left(z\right)=0$ .

Assuming that the function $\mathcal{L}\left(x,p\right)$ is differentiable, it is easy to show by Lemma 2.1 that $\left({x}^{*},{p}^{*}\right)$ is a saddle point satisfying the inequalities (1.2) if and only if $\left({x}^{*},{p}^{*}\right)$ satisfies the following system.

${x}^{\ast }={\Pi }_{Q}\left({x}^{\ast }-\alpha \nabla {\mathcal{L}}_{x}\left({x}^{\ast },{p}^{\ast }\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{p}^{\ast }={\Pi }_{P}\left({p}^{\ast }+\alpha \nabla {\mathcal{L}}_{p}\left({x}^{\ast },{p}^{\ast }\right)\right),$ (2.3)

where ${\Pi }_{Q}\left(.\right)$ and ${\Pi }_{P}\left(.\right)$ are the projections of the vectors on the sets Q and P, and $\nabla {\mathcal{L}}_{x}\left(x,p\right)$ and $\nabla {\mathcal{L}}_{p}\left(x,p\right)$ are the gradients of the function $\mathcal{L}\left(x,p\right)$ in the variables x and p, respectively. Then, in view of the linearity of the function in the variable p, we have $\nabla {\mathcal{L}}_{p}\left(x,p\right)=g\left(x\right)$, and because the set P coincides with the nonnegative orthant, i.e. $P={\Re }_{+}^{m}$, we rewrite the system (2.3) as follows.

${x}^{\ast }={\Pi }_{Q}\left({x}^{\ast }-\alpha \nabla {\mathcal{L}}_{x}\left({x}^{\ast },{p}^{\ast }\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{p}^{\ast }={\Pi }_{+}\left({p}^{\ast }+\alpha g\left({x}^{\ast }\right)\right),$ (2.4)

where $\alpha >0$, and ${\Pi }_{+}\left(.\right)$ is the operator of projection on $P={\Re }_{+}^{m}$.
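As a sanity check of the fixed-point characterization (2.4), consider a hypothetical one-dimensional instance (ours, not from the paper): minimize x² subject to 1 − x ≤ 0 on Q = [−10, 10], whose saddle point is (x*, p*) = (1, 2). Both projection residuals in (2.4) vanish there:

```python
# Hypothetical instance (ours): min x^2  s.t.  1 - x <= 0,  Q = [-10, 10].
# Lagrangian L(x, p) = x^2 + p*(1 - x); KKT pair: x* = 1, p* = 2.
alpha = 0.5
x_star, p_star = 1.0, 2.0
grad_Lx = 2.0 * x_star - p_star      # grad_x L(x*, p*) = 2x - p = 0
g_val = 1.0 - x_star                 # g(x*) = 0 (constraint active)

lhs_x = min(max(x_star - alpha * grad_Lx, -10.0), 10.0)  # Pi_Q(x* - a grad_x L)
lhs_p = max(p_star + alpha * g_val, 0.0)                 # Pi_+(p* + a g(x*))
print(lhs_x - x_star, lhs_p - p_star)  # 0.0 0.0: both equations of (2.4) hold
```

The x-residual vanishes because stationarity makes the projection argument equal x* itself, and the p-residual vanishes because the constraint is active at x*.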

Similar to Antipin [4], we establish the following second-order differential equation system with the feedback controls for solving the problem of convex programming (1.1).

${\mu }_{1}\frac{{\text{d}}^{2}x}{\text{d}{t}^{2}}+{\beta }_{1}\frac{\text{d}x}{\text{d}t}+x={\Pi }_{Q}\left(x-\alpha \nabla {\mathcal{L}}_{x}\left(x,\stackrel{¯}{u}\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}x\left({t}_{0}\right)={x}_{0},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\stackrel{˙}{x}\left({t}_{0}\right)={\stackrel{˙}{x}}_{0},$ (2.5)

${\mu }_{2}\frac{{\text{d}}^{2}p}{\text{d}{t}^{2}}+{\beta }_{2}\frac{\text{d}p}{\text{d}t}+p={\Pi }_{+}\left(p+\alpha g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}p\left({t}_{0}\right)={p}_{0},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\stackrel{˙}{p}\left({t}_{0}\right)={\stackrel{˙}{p}}_{0},$ (2.6)

$\stackrel{¯}{u}={\Pi }_{+}\left(p+\alpha g\left(x\right)\right),$ (2.7)

where ${\mu }_{1}>0,{\beta }_{1}>0,{\mu }_{2}>0,{\beta }_{2}>0$ and $\alpha >0$ are parameters. It is easy to see that the system (2.5)-(2.7) can be changed to the system (23)-(25) in Antipin [4] when ${\mu }_{1}=0,{\beta }_{1}=1,{\mu }_{2}=0,{\beta }_{2}=1$.

For simplicity, we denote $\stackrel{¨}{x}=\frac{{\text{d}}^{2}x}{\text{d}{t}^{2}}$, $\stackrel{˙}{x}=\frac{\text{d}x}{\text{d}t}$, $\stackrel{¨}{p}=\frac{{\text{d}}^{2}p}{\text{d}{t}^{2}}$ and $\stackrel{˙}{p}=\frac{\text{d}p}{\text{d}t}$.
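To make the dynamics concrete, the system (2.5)-(2.7) can be reduced to first order and integrated numerically: solve (2.5) for $\stackrel{¨}{x}$, substitute it into the argument of (2.6), and step the state $\left(x,\stackrel{˙}{x},p,\stackrel{˙}{p}\right)$. The sketch below is ours (a crude explicit Euler scheme on the one-dimensional instance min x² s.t. 1 − x ≤ 0, Q = [−10, 10], not an example from the paper); the parameters are chosen to satisfy the conditions of Theorem 3.1 below for this instance (M = 2, |g| = 1, K = 0.61).

```python
# Pure-Python sketch (ours) of integrating (2.5)-(2.7) by explicit Euler.
# Instance: min x^2  s.t.  1 - x <= 0,  Q = [-10, 10]; saddle point (x*, p*) = (1, 2).
mu1, b1, mu2, b2, alpha = 2.2, 2.0, 2.5, 2.0, 0.3   # satisfy Theorem 3.1 here
PiQ = lambda v: min(max(v, -10.0), 10.0)             # projection onto Q
Pip = lambda v: max(v, 0.0)                          # projection onto R_+
g = lambda x: 1.0 - x                                # constraint g(x) <= 0
gradLx = lambda x, p: 2.0 * x - p                    # grad_x L for L = x^2 + p(1-x)

x, xd, p, pd = 5.0, 0.0, 0.0, 0.0                    # x(t0), x'(t0), p(t0), p'(t0)
h = 0.05
for _ in range(6000):                                # explicit Euler up to t = 300
    u_bar = Pip(p + alpha * g(x))                                    # control (2.7)
    xdd = (PiQ(x - alpha * gradLx(x, u_bar)) - x - b1 * xd) / mu1    # from (2.5)
    pdd = (Pip(p + alpha * g(x + mu1 * xdd + b1 * xd)) - p - b2 * pd) / mu2  # (2.6)
    x, xd = x + h * xd, xd + h * xdd
    p, pd = p + h * pd, pd + h * pdd
print(x, p)   # the trajectory settles near the saddle point (1, 2)
```

In practice an adaptive solver such as ode45 (Section 4) is preferable; Euler is used here only to keep the sketch dependency-free.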

Using Lemma 2.1, the above Equations (2.5)-(2.7) are transformed into the following variational inequalities (2.8)-(2.10), respectively.

$〈{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}+\alpha \nabla {\mathcal{L}}_{x}\left(x,\stackrel{¯}{u}\right),z-x-{\mu }_{1}\stackrel{¨}{x}-{\beta }_{1}\stackrel{˙}{x}〉\ge 0,\text{ }\forall z\in Q,$ (2.8)

$〈{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\alpha g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right),y-p-{\mu }_{2}\stackrel{¨}{p}-{\beta }_{2}\stackrel{˙}{p}〉\ge 0,\text{ }\forall y\in {\Re }_{+}^{m},$ (2.9)

$〈\stackrel{¯}{u}-p-\alpha g\left(x\right),u-\stackrel{¯}{u}〉\ge 0,\text{ }\forall u\in {\Re }_{+}^{m}.$ (2.10)

In order to prove the convergence of the solution to problem (1.1) by using the second-order differential equation system with the feedback controls (2.5)-(2.7), it is necessary that the gradients satisfy the Lipschitz condition.

Thus, suppose that

$\mathcal{L}\left(x+h,p\right)-\mathcal{L}\left(x,p\right)-〈\nabla {\mathcal{L}}_{x}\left(x,p\right),h〉\le \frac{1}{2}{L}_{1}{|h|}^{2}$ (2.11)

for all x and $x+h$ from Q and p from P, where ${L}_{1}$ is a constant and

$\mathcal{L}\left(x,p+h\right)-\mathcal{L}\left(x,p\right)-〈\nabla {\mathcal{L}}_{p}\left(x,p\right),h〉\ge \frac{1}{2}{L}_{2}{|h|}^{2}$ (2.12)

for all p and $p+h$ from P and x from Q, where ${L}_{2}$ is a constant.

3. The Second-Order Differential Equation System

The following theorem shows that the equilibrium points of the second-order differential equations with the feedback controls (2.5)-(2.7) are asymptotically stable.

Theorem 3.1. Assume that the set of solutions to problem (1.1) is not empty, the gradients $\nabla f\left(x\right)$ of the objective function and $\nabla g\left(x\right)$ of the functional constraints on the convex closed set Q satisfy the Lipschitz condition with the constant ${L}_{0}$ and the vector constant L, the map $g\left(x\right)$ satisfies the Lipschitz condition with the constant $|g|$ , the trajectory $\stackrel{¯}{u}={\Pi }_{+}\left(p+\alpha g\left(x\right)\right)$ for all $t\ge {t}_{0}$ is bounded by the vector constant C, i.e., $\stackrel{¯}{u}\le C$ , and the parameters are chosen from the conditions $0<\alpha <\frac{\sqrt{{M}^{2}+16{|g|}^{2}}-M}{4{|g|}^{2}}$ , $\frac{1}{2K}<{\beta }_{1}<{\mu }_{1}<K{\beta }_{1}^{2}$ and $\frac{2}{3}<{\beta }_{2}<{\mu }_{2}<\frac{3}{4}{\beta }_{2}^{2}$ , where $M={L}_{0}+〈L,C〉$ and $K=1-\frac{\alpha }{2}M-{\alpha }^{2}{|g|}^{2}$ . Then the trajectory of the second-order differential equations with the feedback controls (2.5)-(2.7) converges monotonically in norm to one of the equilibrium points, i.e., $x\left(t\right)\to {x}^{\ast }\in {X}^{\ast }$ and $p\left(t\right)\to {p}^{\ast }\in {P}^{\ast }$ , for all initial points ${x}^{0}$ and ${p}^{0}$ .

Proof. Let $z={x}^{\ast }$ in (2.8), which yields that

$〈{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}+\alpha \nabla {\mathcal{L}}_{x}\left(x,\stackrel{¯}{u}\right),{x}^{\ast }-x-{\mu }_{1}\stackrel{¨}{x}-{\beta }_{1}\stackrel{˙}{x}〉\ge 0.$ (3.1)

Using the convexity of the function $\mathcal{L}\left(x,p\right)$ in x in the form of the inequality

$〈\nabla {\mathcal{L}}_{x}\left(x,\stackrel{¯}{u}\right),{x}^{\ast }-x〉\le \mathcal{L}\left({x}^{\ast },\stackrel{¯}{u}\right)-\mathcal{L}\left(x,\stackrel{¯}{u}\right),$ (3.2)

and adding and subtracting the term $\alpha \mathcal{L}\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},\stackrel{¯}{u}\right)$ in (3.1), we have

$\begin{array}{l}{‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖}^{2}+〈{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},x-{x}^{\ast }〉+\alpha \mathcal{L}\left(x,\stackrel{¯}{u}\right)-\alpha \mathcal{L}\left({x}^{\ast },\stackrel{¯}{u}\right)\\ \text{ }+\alpha \mathcal{L}\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},\stackrel{¯}{u}\right)-\alpha \mathcal{L}\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},\stackrel{¯}{u}\right)+\alpha 〈\nabla {\mathcal{L}}_{x}\left(x,\stackrel{¯}{u}\right),{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}〉\le 0.\end{array}$ (3.3)

Since the gradients $\nabla f\left(x\right)$ of the objective function and $\nabla g\left(x\right)$ of the functional constraints on the convex closed set Q satisfy the Lipschitz condition with the constant ${L}_{0}$ and the vector constant L, and the trajectory $\stackrel{¯}{u}={\Pi }_{+}\left(p+\alpha g\left(x\right)\right)$ for all $t\ge {t}_{0}$ is bounded by the vector constant C, i.e., $\stackrel{¯}{u}\le C$, we can compute that

$\begin{array}{l}\mathcal{L}\left({\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}+x,\stackrel{¯}{u}\right)-\mathcal{L}\left(x,\stackrel{¯}{u}\right)-〈\nabla {\mathcal{L}}_{x}\left(x,\stackrel{¯}{u}\right),{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}〉\\ =f\left({\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}+x\right)+〈\stackrel{¯}{u},g\left({\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}+x\right)〉-f\left(x\right)-〈\stackrel{¯}{u},g\left(x\right)〉\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }-〈\nabla f\left(x\right),{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}〉-〈\nabla {g}^{\text{T}}\left(x\right)\stackrel{¯}{u},{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}〉\\ \le \frac{1}{2}\left({L}_{0}+〈L,C〉\right){‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖}^{2}.\end{array}$ (3.4)

It follows from the inequalities (1.2) that $-\mathcal{L}\left({x}^{*},\stackrel{¯}{u}\right)\ge -\mathcal{L}\left(x,{p}^{*}\right)$. Thus we have

$\begin{array}{l}\mathcal{L}\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},\stackrel{¯}{u}\right)-\mathcal{L}\left({x}^{\ast },\stackrel{¯}{u}\right)\\ \ge \mathcal{L}\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},\stackrel{¯}{u}\right)-\mathcal{L}\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},{p}^{*}\right)\\ =〈\stackrel{¯}{u},g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right)〉-〈{p}^{*},g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right)〉.\end{array}$ (3.5)

By using the above two inequalities, we can get the following inequality from the inequality (3.3).

$\begin{array}{l}\left(1-\frac{\alpha }{2}\left({L}_{0}+〈L,C〉\right)\right){‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖}^{2}+〈{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},x-{x}^{\ast }〉\\ \text{ }+\alpha \left(\mathcal{L}\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},\stackrel{¯}{u}\right)-\mathcal{L}\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},{p}^{\ast }\right)\right)\le 0,\end{array}$ (3.6)

which can be changed into that

$\begin{array}{l}\left(1-\frac{\alpha }{2}\left({L}_{0}+〈L,C〉\right)\right){‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖}^{2}+〈{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},x-{x}^{\ast }〉\\ \text{ }+\alpha 〈\stackrel{¯}{u}-{p}^{\ast },g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right)〉\le 0.\end{array}$ (3.7)

Letting $y={p}^{\ast }$ in (2.9), we get

$〈{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\alpha g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right),{p}^{\ast }-p-{\mu }_{2}\stackrel{¨}{p}-{\beta }_{2}\stackrel{˙}{p}〉\ge 0,$ (3.8)

and letting $u=p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}$ in (2.10) yields

$〈\stackrel{¯}{u}-p-\alpha g\left(x\right),p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\stackrel{¯}{u}〉\ge 0.$ (3.9)

From (3.9), it is easy to show that

$\begin{array}{l}〈\stackrel{¯}{u}-p,p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\stackrel{¯}{u}〉+\alpha 〈g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right)-g\left(x\right),p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\stackrel{¯}{u}〉\\ \text{ }-\alpha 〈g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right),p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\stackrel{¯}{u}〉\ge 0.\end{array}$ (3.10)

Now, we consider the following relation

$\begin{array}{c}‖p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\stackrel{¯}{u}‖=‖{\Pi }_{+}\left(p+\alpha g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right)\right)-{\Pi }_{+}\left(p+\alpha g\left(x\right)\right)‖\\ \le \alpha ‖g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right)-g\left(x\right)‖\\ \le \alpha |g|‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖.\end{array}$ (3.11)

Using the above relation and the Cauchy-Schwarz inequality, we can rewrite the inequality (3.10) as follows.

$\begin{array}{l}〈\stackrel{¯}{u}-p,p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\stackrel{¯}{u}〉+{\alpha }^{2}{|g|}^{2}{‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖}^{2}\\ \text{ }-\alpha 〈g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right),p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\stackrel{¯}{u}〉\ge 0.\end{array}$ (3.12)

Adding (3.8) and (3.12), we have

$\begin{array}{l}〈{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p},{p}^{\ast }-p-{\mu }_{2}\stackrel{¨}{p}-{\beta }_{2}\stackrel{˙}{p}〉-\alpha 〈g\left(x+{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}\right),{p}^{\ast }-\stackrel{¯}{u}〉\\ +〈\stackrel{¯}{u}-p,p+{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}-\stackrel{¯}{u}〉+{\alpha }^{2}{|g|}^{2}{‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖}^{2}\ge 0.\end{array}$ (3.13)

Using the relations

${‖{p}_{1}-{p}_{2}‖}^{2}={‖{p}_{1}-{p}_{3}‖}^{2}+2〈{p}_{1}-{p}_{3},{p}_{3}-{p}_{2}〉+{‖{p}_{3}-{p}_{2}‖}^{2}$ (3.14)

and

$\frac{1}{4}{‖{p}_{1}-{p}_{2}‖}^{2}\le \frac{1}{2}{‖{p}_{1}-{p}_{3}‖}^{2}+\frac{1}{2}{‖{p}_{3}-{p}_{2}‖}^{2},$ (3.15)

the above inequality (3.13) can be transformed into the following

$\begin{array}{l}\frac{3}{4}{‖{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}‖}^{2}+〈{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p},p-{p}^{\ast }〉-{\alpha }^{2}{|g|}^{2}{‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖}^{2}\\ \text{ }+\alpha 〈g\left({\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}+x\right),{p}^{\ast }-\stackrel{¯}{u}〉\le 0.\end{array}$ (3.16)

Summing (3.7) and (3.16), we get that

$\begin{array}{l}\frac{3}{4}{‖{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p}‖}^{2}+〈{\mu }_{2}\stackrel{¨}{p}+{\beta }_{2}\stackrel{˙}{p},p-{p}^{\ast }〉\\ +\left(1-\frac{\alpha }{2}\left({L}_{0}+〈L,C〉\right)-{\alpha }^{2}{|g|}^{2}\right){‖{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x}‖}^{2}+〈{\mu }_{1}\stackrel{¨}{x}+{\beta }_{1}\stackrel{˙}{x},x-{x}^{\ast }〉\le 0.\end{array}$ (3.17)

Expanding the squared norms, the inequality (3.17) can be written in the following form.

$\begin{array}{l}\frac{3}{4}{\mu }_{2}^{2}{‖\stackrel{¨}{p}‖}^{2}+\frac{3}{4}{\beta }_{2}^{2}{‖\stackrel{˙}{p}‖}^{2}+\frac{3}{2}{\mu }_{2}{\beta }_{2}〈\stackrel{¨}{p},\stackrel{˙}{p}〉+K{\mu }_{1}^{2}{‖\stackrel{¨}{x}‖}^{2}+K{\beta }_{1}^{2}{‖\stackrel{˙}{x}‖}^{2}+2{\mu }_{1}{\beta }_{1}K〈\stackrel{¨}{x},\stackrel{˙}{x}〉\\ \text{ }+{\mu }_{2}〈\stackrel{¨}{p},p-{p}^{\ast }〉+{\beta }_{2}〈\stackrel{˙}{p},p-{p}^{\ast }〉+{\mu }_{1}〈\stackrel{¨}{x},x-{x}^{\ast }〉+{\beta }_{1}〈\stackrel{˙}{x},x-{x}^{\ast }〉\le 0,\end{array}$ (3.18)

where $K=1-\frac{\alpha }{2}M-{\alpha }^{2}{|g|}^{2}$ and $M={L}_{0}+〈L,C〉$. We have $K>0$ since $\alpha$ is chosen from $0<\alpha <\frac{\sqrt{{M}^{2}+16{|g|}^{2}}-M}{4{|g|}^{2}}$.
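The upper bound on $\alpha$ is exactly the positive root of $K\left(\alpha \right)=1-\frac{\alpha }{2}M-{\alpha }^{2}{|g|}^{2}=0$, so any smaller positive $\alpha$ gives $K>0$. A quick numerical check with illustrative values (ours: M = 2, |g| = 1):

```python
import math

M, g2 = 2.0, 1.0          # illustrative values for M = L0 + <L,C> and |g|^2
alpha_max = (math.sqrt(M**2 + 16.0 * g2) - M) / (4.0 * g2)  # bound in Theorem 3.1
K = lambda a: 1.0 - 0.5 * a * M - a**2 * g2

assert abs(K(alpha_max)) < 1e-9   # the bound is the root of K(alpha) = 0
assert K(0.5 * alpha_max) > 0.0   # any alpha below the bound gives K > 0
print(alpha_max, K(0.5 * alpha_max))
```

This is just the quadratic formula applied to ${\alpha }^{2}{|g|}^{2}+\frac{M}{2}\alpha -1=0$, confirming the form of the stated bound.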

According to the following relations

$\begin{array}{l}\frac{1}{2}\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}{‖x-{x}^{\ast }‖}^{2}={‖\stackrel{˙}{x}‖}^{2}+〈x-{x}^{\ast },\stackrel{¨}{x}〉,\text{\hspace{0.17em}}\frac{1}{2}\frac{\text{d}}{\text{d}t}{‖\stackrel{˙}{x}‖}^{2}=〈\stackrel{˙}{x},\stackrel{¨}{x}〉,\\ \frac{1}{2}\frac{\text{d}}{\text{d}t}{‖x-{x}^{\ast }‖}^{2}=〈\stackrel{˙}{x},x-{x}^{\ast }〉,\end{array}$ (3.19)

the inequality (3.18) can be transformed into the following

$\begin{array}{l}\frac{3}{4}{\mu }_{2}^{2}{‖\stackrel{¨}{p}‖}^{2}+\left(\frac{3}{4}{\beta }_{2}^{2}-{\mu }_{2}\right){‖\stackrel{˙}{p}‖}^{2}+K{\mu }_{1}^{2}{‖\stackrel{¨}{x}‖}^{2}+\left(K{\beta }_{1}^{2}-{\mu }_{1}\right){‖\stackrel{˙}{x}‖}^{2}\\ +\frac{3}{4}\frac{\text{d}}{\text{d}t}{\mu }_{2}{\beta }_{2}{‖\stackrel{˙}{p}‖}^{2}+{\mu }_{1}{\beta }_{1}K\frac{\text{d}}{\text{d}t}{‖\stackrel{˙}{x}‖}^{2}+\frac{{\mu }_{2}}{2}\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}{‖p-{p}^{\ast }‖}^{2}\\ +\frac{{\beta }_{2}}{2}\frac{\text{d}}{\text{d}t}{‖p-{p}^{\ast }‖}^{2}+\frac{{\mu }_{1}}{2}\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}{‖x-{x}^{\ast }‖}^{2}+\frac{{\beta }_{1}}{2}\frac{\text{d}}{\text{d}t}{‖x-{x}^{\ast }‖}^{2}\le 0.\end{array}$ (3.20)

Let $\phi \left(x\right)=\frac{1}{2}{‖x-{x}^{\ast }‖}^{2}$ and $\varphi \left(p\right)=\frac{1}{2}{‖p-{p}^{\ast }‖}^{2}$, the inequality (3.20) means that

$\begin{array}{l}{\mu }_{2}\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}\varphi \left(p\right)+{\beta }_{2}\frac{\text{d}}{\text{d}t}\varphi \left(p\right)+{\mu }_{1}\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}\phi \left(x\right)+{\beta }_{1}\frac{\text{d}}{\text{d}t}\phi \left(x\right)+\frac{3}{4}{\mu }_{2}^{2}{‖\stackrel{¨}{p}‖}^{2}\\ +\left(\frac{3}{4}{\beta }_{2}^{2}-{\mu }_{2}\right){‖\stackrel{˙}{p}‖}^{2}+K{\mu }_{1}^{2}{‖\stackrel{¨}{x}‖}^{2}+\left(K{\beta }_{1}^{2}-{\mu }_{1}\right){‖\stackrel{˙}{x}‖}^{2}+\frac{3}{4}\frac{\text{d}}{\text{d}t}{\mu }_{2}{\beta }_{2}{‖\stackrel{˙}{p}‖}^{2}\\ \text{ }+{\mu }_{1}{\beta }_{1}K\frac{\text{d}}{\text{d}t}{‖\stackrel{˙}{x}‖}^{2}\le 0.\end{array}$ (3.21)

The inequality (3.21) can be integrated from t0 to t as follows.

$\begin{array}{l}{\mu }_{1}\frac{\text{d}}{\text{d}t}\phi \left(x\right)+{\beta }_{1}\phi \left(x\right)+K{\mu }_{1}^{2}{\int }_{{t}_{0}}^{t}{‖\stackrel{¨}{x}‖}^{2}+\left(K{\beta }_{1}^{2}-{\mu }_{1}\right){\int }_{{t}_{0}}^{t}{‖\stackrel{˙}{x}‖}^{2}+{\mu }_{1}{\beta }_{1}K{‖\stackrel{˙}{x}‖}^{2}\\ +{\mu }_{2}\frac{\text{d}}{\text{d}t}\varphi \left(p\right)+{\beta }_{2}\varphi \left(p\right)+\frac{3}{4}{\mu }_{2}^{2}{\int }_{{t}_{0}}^{t}{‖\stackrel{¨}{p}‖}^{2}+\left(\frac{3}{4}{\beta }_{2}^{2}-{\mu }_{2}\right){\int }_{{t}_{0}}^{t}{‖\stackrel{˙}{p}‖}^{2}+\frac{3}{4}{\mu }_{2}{\beta }_{2}{‖\stackrel{˙}{p}‖}^{2}\le {C}_{0},\end{array}$ (3.22)

where ${C}_{0}={\mu }_{2}\frac{\text{d}}{\text{d}t}\varphi \left({p}_{0}\right)+{\beta }_{2}\varphi \left({p}_{0}\right)+{\mu }_{1}\frac{\text{d}}{\text{d}t}\phi \left({x}_{0}\right)+{\beta }_{1}\phi \left({x}_{0}\right)+\frac{3}{4}{\mu }_{2}{\beta }_{2}{‖{\stackrel{˙}{p}}_{0}‖}^{2}+\frac{1}{2}{\mu }_{1}{\beta }_{1}K{‖{\stackrel{˙}{x}}_{0}‖}^{2}$. It follows from $\frac{1}{2K}<{\beta }_{1}<{\mu }_{1}<K{\beta }_{1}^{2}$ and $\frac{2}{3}<{\beta }_{2}<{\mu }_{2}<\frac{3}{4}{\beta }_{2}^{2}$ that $K{\beta }_{1}^{2}-{\mu }_{1}>0$ and $\frac{3}{4}{\beta }_{2}^{2}-{\mu }_{2}>0$. Thus there exists a constant ${C}_{1}$ such that

${\mu }_{1}\frac{\text{d}}{\text{d}t}\phi \left(x\right)+{\beta }_{1}\phi \left(x\right)\le {C}_{1},$ (3.23)

which is equivalent to

${\mu }_{1}\mathrm{exp}\left(-\frac{{\beta }_{1}}{{\mu }_{1}}t\right)\frac{\text{d}}{\text{d}t}\left(\mathrm{exp}\left(\frac{{\beta }_{1}}{{\mu }_{1}}t\right)\phi \left(x\right)\right)\le {C}_{1}.$ (3.24)

That is,

$\frac{\text{d}}{\text{d}t}\left(\mathrm{exp}\left(\frac{{\beta }_{1}}{{\mu }_{1}}t\right)\phi \left(x\right)\right)\le {C}_{1}\frac{1}{{\mu }_{1}}\mathrm{exp}\left(\frac{{\beta }_{1}}{{\mu }_{1}}t\right).$ (3.25)

By integrating (3.25), we have

$\mathrm{exp}\left(\frac{{\beta }_{1}}{{\mu }_{1}}t\right)\phi \left(x\right)\le \frac{{C}_{1}}{{\beta }_{1}}\mathrm{exp}\left(\frac{{\beta }_{1}}{{\mu }_{1}}t\right)+{C}_{2},$ (3.26)

where ${C}_{2}$ is a constant. We conclude that

$\phi \left(x\right)\le \frac{{C}_{1}}{{\beta }_{1}}+{C}_{2}\mathrm{exp}\left(-\frac{{\beta }_{1}}{{\mu }_{1}}t\right),$ (3.27)

which means that $\phi \left(x\right)$ is bounded as $t\to \infty$. Similarly, $\varphi \left(p\right)$ is bounded as $t\to \infty$.

The functions $\phi \left(x\right)$ and $\varphi \left(p\right)$ are strongly convex, and it is well known that each of their Lebesgue sets is bounded. Thus the trajectories $x\left(t\right)$ and $p\left(t\right)$ are bounded. That is, there exist constants ${C}_{3}$ and ${C}_{4}$ such that

${‖x\left(t\right)-{x}^{\ast }‖}^{2}\le {C}_{3},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{‖p\left(t\right)-{p}^{\ast }‖}^{2}\le {C}_{4}.$ (3.28)

Now we claim that ${\int }_{{t}_{0}}^{t}{‖\stackrel{¨}{x}‖}^{2}\text{d}\tau <\infty$, ${\int }_{{t}_{0}}^{t}{‖\stackrel{˙}{x}‖}^{2}\text{d}\tau <\infty$, ${\int }_{{t}_{0}}^{t}{‖\stackrel{¨}{p}‖}^{2}\text{d}\tau <\infty$ and ${\int }_{{t}_{0}}^{t}{‖\stackrel{˙}{p}‖}^{2}\text{d}\tau <\infty$. We first show that $‖\stackrel{˙}{x}‖$ and $‖\stackrel{˙}{p}‖$ are bounded. It follows from the inequality (3.22) that

$\frac{\text{d}}{\text{d}t}\phi \left(x\right)+\frac{{\beta }_{1}}{{\mu }_{1}}\phi \left(x\right)+{\beta }_{1}K{‖\stackrel{˙}{x}‖}^{2}\le {C}_{5},$ (3.29)

where ${C}_{5}$ is a constant. The above inequality means that

$〈\stackrel{˙}{x},x-{x}^{\ast }〉+\frac{{\beta }_{1}}{{\mu }_{1}}\phi \left(x\right)+{\beta }_{1}K{‖\stackrel{˙}{x}‖}^{2}\le {C}_{5}.$ (3.30)

Due to $〈\stackrel{˙}{x},x-{x}^{\ast }〉=-\frac{1}{2}{‖\stackrel{˙}{x}‖}^{2}-\frac{1}{2}{‖x-{x}^{\ast }‖}^{2}+\frac{1}{2}{‖\stackrel{˙}{x}+x-{x}^{\ast }‖}^{2}$, the above inequality implies that

$\left({\beta }_{1}K-\frac{1}{2}\right){‖\stackrel{˙}{x}‖}^{2}+\frac{1}{2}\left(\frac{{\beta }_{1}}{{\mu }_{1}}-1\right){‖x-{x}^{\ast }‖}^{2}\le {C}_{6},$ (3.31)

It follows from $\frac{1}{2K}<{\beta }_{1}<{\mu }_{1}<K{\beta }_{1}^{2}$ that ${\beta }_{1}K-\frac{1}{2}>0$ and $\frac{{\beta }_{1}}{{\mu }_{1}}-1<0$. We conclude that ${‖\stackrel{˙}{x}‖}^{2}$ is bounded as follows.

$\left({\beta }_{1}K-\frac{1}{2}\right){‖\stackrel{˙}{x}‖}^{2}\le \frac{1}{2}\left(1-\frac{{\beta }_{1}}{{\mu }_{1}}\right){‖x-{x}^{\ast }‖}^{2}+{C}_{6}\le \frac{1}{2}\left(1-\frac{{\beta }_{1}}{{\mu }_{1}}\right){C}_{3}+{C}_{6},$ (3.32)

that is, ${‖\stackrel{˙}{x}‖}^{2}$ is bounded. It follows from

$|\frac{\text{d}}{\text{d}t}\phi \left(x\right)|=|〈\stackrel{˙}{x},x-{x}^{\ast }〉|\le ‖\stackrel{˙}{x}‖‖x-{x}^{\ast }‖$ (3.33)

that $\frac{\text{d}}{\text{d}t}\phi \left(x\right)$ is also bounded. In the same way, ${‖\stackrel{˙}{p}‖}^{2}$ and $\frac{\text{d}}{\text{d}t}\varphi \left(p\right)$ are also bounded. Thus there exists a constant ${C}_{7}$ such that

$K{\mu }_{1}^{2}{\int }_{{t}_{0}}^{t}{‖\stackrel{¨}{x}‖}^{2}+\left(K{\beta }_{1}^{2}-{\mu }_{1}\right){\int }_{{t}_{0}}^{t}{‖\stackrel{˙}{x}‖}^{2}+\frac{3}{4}{\mu }_{2}^{2}{\int }_{{t}_{0}}^{t}{‖\stackrel{¨}{p}‖}^{2}+\left(\frac{3}{4}{\beta }_{2}^{2}-{\mu }_{2}\right){\int }_{{t}_{0}}^{t}{‖\stackrel{˙}{p}‖}^{2}\le {C}_{7},$ (3.34)

which yields that the integrals ${\int }_{{t}_{0}}^{t}{‖\stackrel{¨}{x}‖}^{2}\text{d}\tau <\infty$, ${\int }_{{t}_{0}}^{t}{‖\stackrel{˙}{x}‖}^{2}\text{d}\tau <\infty$, ${\int }_{{t}_{0}}^{t}{‖\stackrel{¨}{p}‖}^{2}\text{d}\tau <\infty$ and ${\int }_{{t}_{0}}^{t}{‖\stackrel{˙}{p}‖}^{2}\text{d}\tau <\infty$, converge as $t\to \infty$.

If there existed an $\epsilon >0$ such that $‖\stackrel{¨}{x}\left(t\right)‖\ge \epsilon$, $‖\stackrel{¨}{p}\left(t\right)‖\ge \epsilon$, $‖\stackrel{˙}{x}\left(t\right)‖\ge \epsilon$ and $‖\stackrel{˙}{p}\left(t\right)‖\ge \epsilon$ for all $t\ge {t}_{0}$, we would obtain a contradiction to the convergence of these integrals. Hence, there exists a sequence of time moments ${t}_{i}\to \infty$ such that $‖\stackrel{¨}{x}\left({t}_{i}\right)‖\to 0$, $‖\stackrel{¨}{p}\left({t}_{i}\right)‖\to 0$, $‖\stackrel{˙}{x}\left({t}_{i}\right)‖\to 0$ and $‖\stackrel{˙}{p}\left({t}_{i}\right)‖\to 0$. Since $x\left(t\right)$ and $p\left(t\right)$ are bounded, so are $x\left({t}_{i}\right)$ and $p\left({t}_{i}\right)$. Choosing convergent subsequences $x\left({t}_{{i}_{j}}\right)$ and $p\left({t}_{{i}_{j}}\right)$ of $x\left({t}_{i}\right)$ and $p\left({t}_{i}\right)$, there exist ${x}^{\prime }$ and ${p}^{\prime }$ such that $x\left({t}_{{i}_{j}}\right)\to {x}^{\prime }$, $p\left({t}_{{i}_{j}}\right)\to {p}^{\prime }$, $‖\stackrel{¨}{x}\left({t}_{{i}_{j}}\right)‖\to 0$, $‖\stackrel{¨}{p}\left({t}_{{i}_{j}}\right)‖\to 0$, $‖\stackrel{˙}{x}\left({t}_{{i}_{j}}\right)‖\to 0$ and $‖\stackrel{˙}{p}\left({t}_{{i}_{j}}\right)‖\to 0$ as $j\to \infty$.

Considering the second-order differential equation system with the feedback controls (2.5)-(2.7), or the variational inequalities (2.8)-(2.10), at the times ${t}_{{i}_{j}}$ and taking the limit as $j\to \infty$, we have

${x}^{\prime }={\Pi }_{Q}\left({x}^{\prime }-\alpha \nabla {\mathcal{L}}_{x}\left({x}^{\prime },{p}^{\prime }\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{p}^{\prime }={\Pi }_{+}\left({p}^{\prime }+\alpha g\left({x}^{\prime }\right)\right),$ (3.35)

which means that $\left({x}^{\prime },{p}^{\prime }\right)$ is a solution of problem (1.1) from (1.2) and (2.4). This completes the proof. $\square$

4. Numerical Results

In this section, we test two examples with the system (2.5)-(2.7). The transient behavior of the proposed second-order differential equation system with the feedback controls is demonstrated in each example. The numerical implementation is coded in Matlab R2019a on a PC with an Intel i7-7700HQ 2.8 GHz CPU, and the ordinary differential equation solver adopted is ode45, which uses a Runge-Kutta (4, 5) formula.
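For readers without Matlab, an analogous setup can be sketched with SciPy's solve_ivp, whose "RK45" method is, like ode45, a Runge-Kutta (4, 5) pair. The system integrated below is an assumed stand-in of the general shape $\stackrel{¨}{x}+\theta \stackrel{˙}{x}={\Pi }_{Q}\left(x-\alpha \nabla f\left(x\right)\right)-x$ with a toy objective; the actual right-hand side of (2.5)-(2.7) with the feedback controls differs and is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch only: a damped second-order projected system of the assumed form
#   x'' + theta * x' = Pi_Q(x - alpha * grad f(x)) - x,
# integrated with RK45 (the SciPy analogue of Matlab's ode45).
alpha, theta = 0.5, 2.0
grad_f = lambda x: 2.0 * (x - 3.0)            # toy objective f(x) = ||x - 3||^2
proj_Q = lambda x: np.clip(x, -10.0, 10.0)    # projection onto the box Q = [-10, 10]^n

def rhs(t, y):
    n = y.size // 2
    x, v = y[:n], y[n:]                       # first-order form: y = (x, x')
    accel = proj_Q(x - alpha * grad_f(x)) - x - theta * v
    return np.concatenate([v, accel])

y0 = np.concatenate([np.array([8.0]), np.zeros(1)])   # start at x = 8, at rest
sol = solve_ivp(rhs, (0.0, 30.0), y0, method="RK45", rtol=1e-8)
print(sol.y[0, -1])    # trajectory settles near the minimizer x* = 3
```

With these choices the dynamics reduce to a critically damped oscillator about $x^{\ast }=3$, so the trajectory converges to the minimizer, mirroring the transient behavior reported in the figures below.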

Example 4.1. Consider the nonlinear convex programming problem

$\begin{array}{l}\mathrm{min}f\left(x\right)\\ \text{s}\text{.t}.\text{\hspace{0.17em}}\text{ }-10\le {x}_{i}\le 10,\text{ }\left(i=1,2,3,4\right).\end{array}$ (4.1)

where $\begin{array}{c}f\left(x\right)=100{\left({x}_{2}-{x}_{1}^{2}\right)}^{2}+{\left(1-{x}_{1}\right)}^{2}+90{\left({x}_{4}-{x}_{3}^{2}\right)}^{2}+{\left(1-{x}_{3}\right)}^{2}\\ \text{\hspace{0.17em}}\text{ }+10.1\left[{\left({x}_{2}-1\right)}^{2}+{\left({x}_{4}-1\right)}^{2}\right]+19.8\left({x}_{2}-1\right)\left({x}_{4}-1\right)\end{array}$, which has been discussed in Xiao and Harker [21]. Its optimal solution is ${x}^{\ast }={\left(1,1,1,1\right)}^{\text{T}}$. For problem (4.1), $g\left(x\right):{\Re }^{4}\to {\Re }^{8}$ can be defined by

$g\left(x\right)=\left(\begin{array}{c}{x}_{1}-10\\ {x}_{2}-10\\ {x}_{3}-10\\ {x}_{4}-10\\ -{x}_{1}-10\\ -{x}_{2}-10\\ -{x}_{3}-10\\ -{x}_{4}-10\end{array}\right),$

and $g\left(x\right)\le 0$.
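A quick sanity check of this setup (a sketch, not the paper's Matlab code) confirms that the objective vanishes and all eight inequalities hold at the optimal solution:

```python
import numpy as np

# Objective of Example 4.1 (Wood's function, cf. Xiao and Harker [21])
# and the constraint map g encoding the box -10 <= x_i <= 10 as g(x) <= 0.
def f(x):
    return (100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
            + 90.0 * (x[3] - x[2]**2)**2 + (1.0 - x[2])**2
            + 10.1 * ((x[1] - 1.0)**2 + (x[3] - 1.0)**2)
            + 19.8 * (x[1] - 1.0) * (x[3] - 1.0))

def g(x):
    return np.concatenate([x - 10.0, -x - 10.0])    # eight inequality constraints

x_star = np.ones(4)                 # optimal solution (1, 1, 1, 1)^T
print(f(x_star))                    # 0.0: every term of f vanishes at x*
print(bool(np.all(g(x_star) <= 0.0)))   # True: x* is feasible
```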

Figure 1 describes the convergence behavior of the trajectory $x\left(t\right)$ of the second-order differential equation system with the feedback controls (2.5)-(2.7) from a random initial point, which shows that the trajectories of the system (2.5)-(2.7) for solving problem (4.1) converge to the solution ${x}^{\ast }={\left(1,1,1,1\right)}^{\text{T}}$.

Example 4.2. Consider the variational inequality with constraints problem

$〈F\left(x\right),y-x〉\ge 0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall y\in {\Re }_{+}^{5},$ (4.2)

where $F\left(x\right)=\left(\begin{array}{c}\mathrm{arctan}\left({x}_{1}-1\right)\\ \mathrm{arctan}\left({x}_{2}-2\right)\\ \mathrm{arctan}\left({x}_{3}-3\right)\\ \mathrm{arctan}\left({x}_{4}-4\right)\\ \mathrm{arctan}\left({x}_{5}-5\right)\end{array}\right)$, and its solution is ${x}^{\ast }={\left(1,2,3,4,5\right)}^{\text{T}}$.

The problem can be transformed into the following nonlinear convex programming problem

$\begin{array}{l}\mathrm{min}f\left(x\right)\\ \text{s}\text{.t}\text{.}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }x\in {\Re }_{+}^{5}.\end{array}$ (4.3)

where $F\left(x\right)$ is the gradient of $f\left(x\right)$, and $g\left(x\right):{\Re }^{5}\to {\Re }^{5}$ can be defined by

$g\left(x\right)=\left(\begin{array}{c}-{x}_{1}\\ -{x}_{2}\\ -{x}_{3}\\ -{x}_{4}\\ -{x}_{5}\end{array}\right),$

and $g\left(x\right)\le 0$.
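Since each component of $F$ is $\mathrm{arctan}\left({x}_{i}-i\right)$, the solution ${x}^{\ast }={\left(1,2,3,4,5\right)}^{\text{T}}$ can be verified directly: $F\left({x}^{\ast }\right)=0$ and ${x}^{\ast }\in {\Re }_{+}^{5}$, so $〈F\left({x}^{\ast }\right),y-{x}^{\ast }〉=0\ge 0$ for all $y\in {\Re }_{+}^{5}$. A small sketch of this check:

```python
import numpy as np

# Mapping F of Example 4.2: F_i(x) = arctan(x_i - i), i = 1..5.
# x* = (1, 2, 3, 4, 5)^T solves the VI on R^5_+ because F(x*) = 0 and x* >= 0.
def F(x):
    return np.arctan(x - np.arange(1.0, 6.0))

x_star = np.arange(1.0, 6.0)                  # (1, 2, 3, 4, 5)^T
print(bool(np.allclose(F(x_star), 0.0)))      # True: F vanishes at x*
print(bool(np.all(x_star >= 0.0)))            # True: x* lies in R^5_+
```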

For problem (4.2), Figure 2 describes the convergence behavior of the trajectory $x\left(t\right)$ of the second-order differential equation system with the feedback controls (2.5)-(2.7) from three random initial points, which means that the trajectories of the system (2.5)-(2.7) for solving problem (4.3) converge to the solution ${x}^{\ast }={\left(1,2,3,4,5\right)}^{\text{T}}$.

Figure 1. Transient behavior of $x\left(t\right)$ of the system (2.5)-(2.7) for solving problem (4.1).

Figure 2. Transient behavior of $x\left(t\right)$ of the system (2.5)-(2.7) for solving problem (4.3).

It can be seen from Figure 1 and Figure 2 that the trajectories of the second-order differential equation system with the feedback controls (2.5)-(2.7) converge to the solutions of the original problem, which further illustrates the effectiveness of the second-order differential equation system with the feedback controls for solving the convex programming problem.

5. Conclusion

In this paper, we establish a second-order differential equation system with the feedback controls, based on the projection operator, for solving the convex programming problem (1.1). First, we obtain the saddle point inequalities (1.2) by using the Lagrange function of problem (1.1). Inspired by Antipin [4], we investigate the properties of the saddle functions and prove that the accumulation points of the trajectory of the second-order differential equation system with the feedback controls are solutions to the convex programming problem (1.1). Finally, we solve two examples using the second-order differential equation system with the feedback controls, which shows the effectiveness of the system for solving the problem of convex programming.

Acknowledgements

Some of the results in this paper were presented in the Proceedings of the 11th World Congress on Intelligent Control and Automation, 2014, see https://ieeexplore.ieee.org/document/7052904. The research is supported by the National Natural Science Foundation of China under projects No. 11801381 and No. 11901422.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Wang, R., Hong, S., Kai, R., et al. (2014) Fixed-Point Iteration Method for Solving the Convex Quadratic Programming with Mixed Constraints. Applied Mathematics, 5, 256-262. https://doi.org/10.4236/am.2014.52027

[2] Asadi, S., Mansouri, H. and Zangiabadi, M. (2019) A Primal-Dual Interior-Point Algorithm for Symmetric Cone Convex Quadratic Programming Based on the Commutative Class Directions. Applied Mathematics, 35, 359-373. https://doi.org/10.1007/s10255-018-0789-z

[3] Yuan, B., Zhang, M. and Huang, Z. (2017) A Wide Neighborhood Arc-Search Interior-Point Algorithm for Convex Quadratic Programming. Journal of Natural Science of Wuhan University, 22, 465-471. https://doi.org/10.1007/s11859-017-1274-x

[4] Antipin, A.S. (2003) Feedback-Controlled Saddle Gradient Processes. Automation and Remote Control, 55, 311-320.

[5] Antipin, A.S. (2000) From Optima to Equilibria, Dynamics of Non-Homogeneous Systems. Proceedings of ISA RAS, 3, 35-64.

[6] Antipin, A.S. (2000) Solving Variational Inequalities with Coupling Constraints with the Use of Differential Equations. Differential Equations, 36, 1587-1596. https://doi.org/10.1007/BF02757358

[7] Antipin, A.S. (2001) Differential Equations for Equilibrium Problems with Coupled Constraints. Nonlinear Analysis, 47, 1833-1844. https://doi.org/10.1016/S0362-546X(01)00314-5

[8] Antipin, A.S. (2003) Minimization of Convex Functions on Convex Sets by Means of Differential Equations. Differential Equations, 30, 1365-1375.

[9] Antipin, A.S. (2003) Controlled Proximal Differential Systems for Saddle Problems. Differential Equations, 28, 1498-1510.

[10] Antipin, A.S. (1995) On Differential Prediction-Type Gradient Methods for Computing Fixed Points of Extremal Mappings. Differential Equations, 31, 1754-1763. https://doi.org/10.1007/978-3-642-79459-9_3

[11] Antipin, A.S. (2003) On Finite Convergence of Processes to a Sharp Minimum and to a Smooth Minimum with a Sharp Derivative. Differential Equations, 30, 1703-1713.

[12] Wang, L., Li, Y. and Zhang, L. (2011) A Differential Equation Method for Solving Box Constrained Variational Inequality Problems. Journal of Industrial Management Optimization, 7, 183-198. https://doi.org/10.3934/jimo.2011.7.183

[13] Wang, L. and Wang, S. (2014) A Second-Order Differential Equation Method for Equilibrium Programming with Constraints. Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, 29 June-4 July 2014, 1279-1284. https://doi.org/10.1109/WCICA.2014.7052904

[14] Wang, L. and Wang, S. (2015) The Differential Equation Method for Variational Inequality with Constraints. ICIC Express Letters, 9, 2728-2794.

[15] Wang, L., Chen, X. and Sun, J. (2020) A Differential Equation Method for the Variational Inequality Problem with the Cyclically Monotone Mapping. Linear and Nonlinear, 6, 287-296.

[16] Wang, L., Chen, X. and Sun, J. (2021) The Second-Order Differential Equation System with the Controlled Process for Variational Inequality with Constraints. Complexity, 2021, Article ID: 9936370. https://doi.org/10.1155/2021/9936370

[17] Nazemi, A. and Sabeghi, A. (2019) A Novel Gradient-Based Neural Network for Solving Convex Second-Order Cone Constrained Variational Inequality Problems. Journal of Computational and Applied Mathematics, 347, 343-356. https://doi.org/10.1016/j.cam.2018.08.030

[18] Nazemi, A. and Sabeghi, A. (2020) A New Neural Network Framework for Solving Convex Second-Order Cone Constrained Variational Inequality Problems with an Application in Multi-Finger Robot Hands. Journal of Experimental and Theoretical Artificial Intelligence, 20, 181-203. https://doi.org/10.1080/0952813X.2019.1647559

[19] Kwelegano, K., Zegeye, H. and Boikanyo, O.A. (2021) An Iterative Method for Split Equality Variational Inequality Problems for Non-Lipschitz Pseudomonotone Mappings. Rendiconti del Circolo Matematico di Palermo Series 2, 1-24. https://doi.org/10.1007/s12215-021-00608-8

[20] Mosco, U. (1976) Implicit Variational Problems and Quasi-Variational Inequalities. Lecture Notes in Mathematics, Vol. 543, Springer-Verlag, Berlin. https://doi.org/10.1007/BFb0079943

[21] Xiao, B. and Harker, P.T. (1994) A Nonsmooth Newton Method for Variational Inequalities, II: Numerical Results. Mathematical Programming, 65, 195-216. https://doi.org/10.1007/BF01581696