A Modified Newton Method Based on the Metric Projector for Solving SOCCVI Problem

Abstract

In this paper, based on the second-order sufficient condition, we prove that the Clarke generalized Jacobian of the Karush-Kuhn-Tucker (KKT) system of the second-order cone constrained variational inequality (SOCCVI) problem is nonsingular. A modified Newton method with Armijo line search is presented. Three illustrative examples are given to show how the modified Newton method works.

Share and Cite:

Liu, H., Sun, J. and Wang, L. (2022) A Modified Newton Method Based on the Metric Projector for Solving SOCCVI Problem. Journal of Applied Mathematics and Physics, 10, 1036-1054. doi: 10.4236/jamp.2022.104071.

1. Introduction

There have been many publications on computational approaches for optimization problems such as linear programming, nonlinear programming, variational inequalities, and complementarity problems; see [1] - [6] and the references therein. Complementarity functions, such as the natural residual (NR) function and the Fischer-Burmeister (FB) function, have been studied widely and deeply for dealing with nonlinear complementarity problems and variational inequality problems with polyhedral cone constraints; see the well-known book by Facchinei and Pang [7]. Many methods for complementarity problems, variational inequality problems and nonsmooth equations have been studied; see [7] - [16]. Building on this research, we used the Fischer-Burmeister operator over the second-order cone to deal with second-order cone constrained variational inequality (SOCCVI) problems, see [17]. However, in [17] we only studied the first-order necessary conditions for the SOCCVI problem, and no results about second-order sufficient conditions for SOCCVI had appeared.

In this paper, we define a second-order sufficient condition for SOCCVI based on the metric projector. Under this second-order sufficient condition and constraint nondegeneracy, we prove the nonsingularity of the Clarke generalized Jacobian of the Karush-Kuhn-Tucker (KKT) system constructed via the metric projector.

The second-order cone constrained variational inequality (SOCCVI) problem is to find a ∈ Q satisfying

⟨f(a), b − a⟩ ≥ 0, ∀ b ∈ Q, (1)

where the set Q is finitely representable and expressed as

Q = { a ∈ IR^n | g(a) = 0, h(a) ∈ K }.

Here ⟨·,·⟩ denotes the Euclidean inner product, f : IR^n → IR^n, g : IR^n → IR^m and h : IR^n → IR^l are continuously differentiable functions, and K is a Cartesian product of second-order cones (or Lorentz cones), expressed as

K = K^{l_1} × K^{l_2} × ⋯ × K^{l_p}, (2)

with l_1, l_2, …, l_p ≥ 1 and l_1 + l_2 + ⋯ + l_p = l. We denote h(a) = (h^1(a), …, h^p(a))^T and h^i = (h_0^i, h̄^i) : IR^n → IR^{l_i} for i ∈ {1, …, p}. So we have the following equivalence relations:

h(a) ∈ K ⟺ h^i(a) ∈ K^{l_i}, ∀ i ∈ {1, …, p} ⟺ h_0^i(a) ≥ ‖h̄^i(a)‖, ∀ i ∈ {1, …, p}.
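
The blockwise membership test above is easy to check numerically. A minimal sketch (the function names are ours, not from the paper) verifies h ∈ K = K^{l_1} × ⋯ × K^{l_p} block by block:

```python
import numpy as np

def in_soc(h):
    # h = (h0, h_bar) lies in K^l  iff  h0 >= ||h_bar||
    h = np.asarray(h, dtype=float)
    return h[0] >= np.linalg.norm(h[1:])

def in_product_cone(h, dims):
    # Check h in K^{l_1} x ... x K^{l_p}, where dims = [l_1, ..., l_p]
    h = np.asarray(h, dtype=float)
    blocks = np.split(h, np.cumsum(dims)[:-1])
    return all(in_soc(b) for b in blocks)
```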

The convex second-order cone program (CSOCP):

min F(a)  s.t.  g(a) = 0,  h(a) ∈ K, (3)

where F : IR^n → IR, g : IR^n → IR^m and h : IR^n → IR^l, is a special case of the SOCCVI problem. The SOCCVI can be solved by analyzing its KKT conditions:

{ L_f(a, μ, λ) = 0,  ⟨h(a), λ⟩ = 0,  h(a) ∈ K,  λ ∈ K,  g(a) = 0, (4)

where L_f(a, μ, λ) = f(a) + ∇g(a)μ − ∇h(a)λ is the variational inequality Lagrangian function, μ ∈ IR^m and λ ∈ IR^l. We point out that a neural network approach for SOCCVI was studied in [18].

As mentioned earlier, this paper investigates characterizations of the strong regularity of the Karush-Kuhn-Tucker (KKT) system for SOCCVI via the metric projector. We give a sufficient condition, consisting of the second-order sufficient condition and constraint nondegeneracy, under which the Clarke generalized Jacobian of the KKT system of (1) is nonsingular; this in turn yields the strong regularity of the KKT point. We then employ a modified Newton method for solving the SOCCVI problem and observe its numerical performance.

2. Preliminaries

In this section, we collect some basic results, most of which can be found in [19].

We briefly recall some concepts associated with the second-order cone (SOC), which are helpful for understanding the target problems and our analysis techniques. For any two vectors a = (a_0, ā) and b = (b_0, b̄) in IR × IR^{n−1}, we use the Euclidean inner product ⟨a, b⟩ := a^T b and the norm induced by this inner product, i.e., ‖a‖ = √(a^T a). We define the Jordan product as a ∘ b := (a^T b, b_0 ā + a_0 b̄). Then (IR × IR^{n−1}, ∘) together with the element e = (1, 0, …, 0)^T ∈ IR × IR^{n−1} gives rise to a Jordan algebra. Note that a² means a ∘ a, and a + b means the usual componentwise addition of vectors. It is known that a² ∈ K^n for all a ∈ IR^n. Moreover, if a ∈ K^n, then there exists a unique vector in K^n, denoted by a^{1/2}, such that (a^{1/2})² = a^{1/2} ∘ a^{1/2} = a. We also denote |a| := (a²)^{1/2}. Any a = (a_0, ā) ∈ IR × IR^{n−1} has the following spectral decomposition:
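
The Jordan product and the property a² = a ∘ a ∈ K^n can be illustrated with a small sketch (the names are ours):

```python
import numpy as np

def jordan_product(a, b):
    # a ∘ b := (<a, b>, b0 * a_bar + a0 * b_bar)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.concatenate(([a @ b], b[0] * a[1:] + a[0] * b[1:]))

e = np.array([1.0, 0.0, 0.0])      # identity element of the Jordan algebra
a = np.array([-1.0, 2.0, 0.0])
sq = jordan_product(a, a)          # a^2 = (||a||^2, 2*a0*a_bar)
```

Here a² = (5, −4, 0) and indeed 5 ≥ ‖(−4, 0)‖, so a² ∈ K³ even though a ∉ K³.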

a = λ 1 ( a ) v 1 ( a ) + λ 2 ( a ) v 2 ( a ) , (5)

where λ i ( a ) , v i ( a ) are the spectral values and the associated spectral vectors of a, given by

λ_i(a) = a_0 + (−1)^i ‖ā‖,  v_i(a) = { (1/2)(1, (−1)^i ā/‖ā‖), if ā ≠ 0; (1/2)(1, (−1)^i w), if ā = 0, (6)

for i = 1, 2, where w is any vector in IR^{n−1} satisfying ‖w‖ = 1.
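
The spectral factorization (5)-(6) can be sketched as follows (a minimal implementation under the conventions above; the names are ours):

```python
import numpy as np

def spectral(a):
    # Spectral values and vectors of a = (a0, a_bar) w.r.t. K^n, cf. (5)-(6)
    a = np.asarray(a, dtype=float)
    a0, abar = a[0], a[1:]
    na = np.linalg.norm(abar)
    if na > 0:
        w = abar / na
    else:
        w = np.zeros_like(abar)
        w[0] = 1.0                     # any unit vector works when a_bar = 0
    lam = np.array([a0 - na, a0 + na])                  # lambda_1, lambda_2
    v1 = 0.5 * np.concatenate(([1.0], -w))
    v2 = 0.5 * np.concatenate(([1.0], w))
    return lam, (v1, v2)
```

Summing λ_1 v_1 + λ_2 v_2 recovers a, which is the content of (5).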

Suppose a = (a_0, ā) ∈ IR × IR^{n−1} has the spectral decomposition (5). Then the metric projection of a onto K^n, denoted by Π_{K^n}(a), is

Π_{K^n}(a) = [λ_1(a)]_+ v_1(a) + [λ_2(a)]_+ v_2(a), (7)

where [λ_i(a)]_+ = max{0, λ_i(a)}, i = 1, 2. Explicitly, we have

Π_{K^n}(a) = { (1/2)(1 + a_0/‖ā‖)(‖ā‖, ā), if |a_0| < ‖ā‖; (a_0, ā), if ‖ā‖ ≤ a_0; 0, if ‖ā‖ ≤ −a_0. (8)
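
Formula (8) (equivalently (7)) translates directly into code; a sketch:

```python
import numpy as np

def proj_soc(a):
    # Metric projection of a = (a0, a_bar) onto K^n, formula (8)
    a = np.asarray(a, dtype=float)
    a0, abar = a[0], a[1:]
    na = np.linalg.norm(abar)
    if na <= a0:                       # a already in K^n
        return a.copy()
    if na <= -a0:                      # a in -K^n: projection is 0
        return np.zeros_like(a)
    coef = 0.5 * (1.0 + a0 / na)       # remaining case |a0| < ||a_bar||
    return coef * np.concatenate(([na], abar))
```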

Lemma 2.1. The metric projection operator Π_{K^n}(·) is directionally differentiable at every a, and for any t ∈ IR^n,

Π′_{K^n}(a; t) = { JΠ_{K^n}(a) t, if |a_0| < ‖ā‖ (i.e., a ∉ K^n ∪ (−K^n)); t, if a ∈ int K^n; t − 2[v_1(a)^T t]_− v_1(a), if a ∈ bd K^n \ {0}; 0, if a ∈ int(−K^n); 2[v_2(a)^T t]_+ v_2(a), if a ∈ bd(−K^n) \ {0}; Π_{K^n}(t), if a = 0,

where

JΠ_{K^n}(a) = (1/2) ( 1, ā^T/‖ā‖ ; ā/‖ā‖, I + (a_0/‖ā‖) I − (a_0/‖ā‖) ā ā^T/‖ā‖² ),

[ v 1 ( x ) t T ] : = min { 0 , v 1 ( a ) T t } , [ v 2 ( a ) t T ] + : = max { 0 , v 2 ( a ) T t } .

We recall from Lemma 2.5 in [20] the characterization of the tangent cone of a second-order cone at a point in it.

Lemma 2.2. Consider the second-order cone K n and let a K n . Then, the tangent cone and the second-order tangent cone of K n at a are

T_{K^n}(a) = { IR^n, if a ∈ int K^n; K^n, if a = 0; { t = (t_0, t̄) ∈ IR × IR^{n−1} | ⟨t̄, ā⟩ − a_0 t_0 ≤ 0 }, if a ∈ bd K^n \ {0},

and

T²_{K^n}(a, t) = { IR^n, if t ∈ int T_{K^n}(a); T_{K^n}(t), if a = 0; { c = (c_0, c̄) ∈ IR × IR^{n−1} | ⟨c̄, ā⟩ − c_0 a_0 ≤ t_0² − ‖t̄‖² }, otherwise.

For the convenience of the discussion, we need the definitions of the tangent cone and of the regular and limiting normal cones of a closed set at a point; all these concepts are taken from [21].

Definition 2.1. For a closed set K ⊂ IR^n and a point ā ∈ K, define the following sets: the tangent (Bouligand) cone

T K ( a ) : = l i m d 0 sup K a ¯ d ,

the regular (Fréchet) normal cone

N ^ K ( a ¯ ) : = { w n | w , a a ¯ o ( a a ¯ ) , a K } ,

and the limiting (in the sense of Mordukhovich) normal cone

N K ( a ¯ ) : = l i m K a a ¯ sup N ^ K ( a ¯ ) .

If K is a closed convex set, then N̂_K(ā) = N_K(ā) = (T_K(ā))° = { w | ⟨w, a⟩ ≤ 0, ∀ a ∈ T_K(ā) }.

Let A, B, Z and N be finite-dimensional real Hilbert spaces and let f be a mapping from A × B × Z to N. If f is Fréchet differentiable at (a, b, z) ∈ A × B × Z, then we use Jf(a, b, z) (respectively, J_a f(a, b, z)) to denote the Fréchet derivative of f at (a, b, z) (respectively, the partial Fréchet derivative of f at (a, b, z) with respect to a). Let ∇f(a, b, z) := Jf(a, b, z)^T be the adjoint of Jf(a, b, z) (respectively, ∇_a f(a, b, z) := J_a f(a, b, z)^T), where the adjoint operator (·)^T satisfies ⟨a, M b⟩ = ⟨M^T a, b⟩ whenever the operation is well defined. If f is twice Fréchet differentiable at (a, b, z) ∈ A × B × Z, we define

J 2 f ( a , b , z ) : = J ( J f ) ( a , b , z ) , J a a 2 F ( a , b , z ) : = J a ( J a f ) ( a , b , z ) ,

2 f ( a , b , z ) : = J ( f ) ( a , b , z ) , a a 2 F ( a , b , z ) : = J a ( a F ) ( a , b , z ) .

3. The Optimality Condition and Nonsingularity Theorem

Let L_f(a, μ, λ) = f(a) + Jg(a)^T μ − Jh(a)^T λ be the Lagrangian of (1), where (μ, λ) = (μ, λ^1, …, λ^p) ∈ IR^m × IR^{l_1} × ⋯ × IR^{l_p} = IR^m × IR^l. Let a* be a solution to (1). Then a* satisfies the following Karush-Kuhn-Tucker condition which, using the NR function and the definition of the normal cone, can be expressed as

H ( a , μ , λ ) = ( L f ( a * , μ , λ ) g ( a * ) h ( a ) Π K ( h ( a ) λ ) ) = 0

Mimicking the arguments described as in [17], we can verify that the KKT system (4) is equivalent to the following unconstrained smooth minimization problem:

min Φ ( z ) : = 1 2 H ( z ) 2 , (9)

where z = (a, μ, λ) ∈ IR^{n+m+l} and H(z) is given by

H(z) = [ L_f(a, μ, λ) ; g(a) ; ϕ_NR(h^{l_1}(a), λ^{l_1}) ; ⋮ ; ϕ_NR(h^{l_p}(a), λ^{l_p}) ],

with h^{l_i}(a), λ^{l_i} ∈ IR^{l_i}, where ϕ_NR(u, v) := u − Π_{K^{l_i}}(u − v) is the natural residual (NR) function. In other words, Φ(z) is a smooth merit function for the KKT system (4).
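
For a single block, the natural residual operator and the merit value (9) can be sketched as follows (the projection (8) is inlined; the names are ours):

```python
import numpy as np

def proj_soc(a):
    # Metric projection onto K^n, formula (8)
    a = np.asarray(a, dtype=float)
    a0, na = a[0], np.linalg.norm(a[1:])
    if na <= a0:
        return a.copy()
    if na <= -a0:
        return np.zeros_like(a)
    return 0.5 * (1.0 + a0 / na) * np.concatenate(([na], a[1:]))

def phi_nr(u, v):
    # Natural residual function: vanishes iff u in K, v in K, <u, v> = 0
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return u - proj_soc(u - v)

def merit(Hz):
    # Phi(z) = 0.5 * ||H(z)||^2, cf. (9)
    Hz = np.asarray(Hz, dtype=float)
    return 0.5 * float(Hz @ Hz)
```

For instance, h = (1, 1) and λ = (1, −1) form a complementary boundary pair of K², and ϕ_NR(h, λ) = 0.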

Following [20], we give the following definition and theorem.

Definition 3.1. Let a* be a feasible point of (1) such that the multiplier set Λ(a*) ≠ ∅. We say that the second-order sufficient condition holds at a* if

sup_{(μ,λ) ∈ Λ(a*)} { ⟨J_a L_f(a*, μ, λ) t, t⟩ + t^T H(a*, λ) t } > 0, ∀ t ∈ Q(a*) \ {0}, (10)

where Q(a*) is the critical cone of (1) at a* and t^T H(a*, λ) t is the sigma term associated with the second-order tangent set, cf. (30) below.

Theorem 3.1. Let (a*, μ*, λ*) be a KKT point of (1). Assume that

(i) Λ(a*) ≠ ∅;

(ii) the second-order sufficient condition (10) holds;

(iii) −λ* ∈ int N_K(h(a*)) holds;

(iv) the following constraint nondegeneracy holds:

( Jg(a*) ; Jh(a*) ) IR^n + lin T_{{0_m} × K}(g(a*), h(a*)) = IR^m × IR^l. (11)

then J H ( a * , μ * , λ * ) is nonsingular.

Proof. It follows from −λ* ∈ int N_K(h(a*)) that Π_K is differentiable at h(a*) − λ*. Let (t_a, t_μ, t_λ) ∈ IR^n × IR^m × IR^l. We have

JH(a*, μ*, λ*)(t_a ; t_μ ; t_λ) = ( J_a L_f(a*, μ*, λ*) t_a + Jg(a*)^T t_μ − Jh(a*)^T t_λ ; Jg(a*) t_a ; Jh(a*) t_a − Π′_K(h(a*) − λ*; Jh(a*) t_a − t_λ) ). (12)

Let JH(a*, μ*, λ*)(t_a ; t_μ ; t_λ) = 0 (we need to show t_a = 0, t_μ = 0, t_λ = 0). We get

{ Jg(a*) t_a = 0,  Jh(a*) t_a = Π′_K(h(a*) − λ*; Jh(a*) t_a − t_λ), (13)

which implies that t_a ∈ Q(a*). From the first line of (12), we get that

⟨J_a L_f(a*, μ*, λ*) t_a, t_a⟩ − ⟨Jh(a*) t_a, t_λ⟩ = 0. (14)

Define the index sets:

I* = { i : h^i(a*) ∈ int K^{l_i} };  B* = { i : h^i(a*) ∈ bd K^{l_i}, h^i(a*) ≠ 0 };  Z* = { i : h^i(a*) = 0 }.

For φ(b_0, b̄) = b_0 − ‖b̄‖ with (b_0, b̄) ∈ IR × IR^{n−1} and b̄ ≠ 0, we have

∇φ(b_0, b̄) = ( 1 ; −b̄/‖b̄‖ ).

Note that

Q_K(h(a*)) = { t ∈ IR^n : Jh(a*) t ∈ T_K(h(a*)) }

and

{ t : Jh(a*) t ∈ T_K(h(a*)) } = { t | ⟨h̄^i(a*), Jh̄^i(a*) t⟩ − h_0^i(a*) ∇h_0^i(a*)^T t ≤ 0, i ∈ B*;  ∇h_0^i(a*)^T t − ‖Jh̄^i(a*) t‖ ≥ 0, i ∈ Z* }.

From −λ* ∈ N_K(h(a*)), we can deduce that

λ* ∈ { λ | λ^i = 0, i ∈ I*;  λ^i = σ (h_0^i(a*), −h̄^i(a*)), σ > 0, i ∈ B*;  λ^i ∈ int K^{l_i}, i ∈ Z* }.

Hence

[h(a*) − λ*]^i = { h^i(a*) ∈ int K^{l_i}, i ∈ I*;  ( (1 − σ) h_0^i(a*), (1 + σ) h̄^i(a*) ), i ∈ B*;  −λ^i ∈ −int K^{l_i}, i ∈ Z*.

Condition (iii) of Theorem 3.1 means

Q(a*) = { t | Jg(a*) t = 0;  Jh^i(a*) t = 0, i ∈ Z*;  Jh^i(a*) t ∈ T_{K^{l_i}}(h^i(a*)), ⟨λ^i, Jh^i(a*) t⟩ = 0, i ∈ B* },

and Q ( a * ) is a linear space. Therefore,

δ*( λ* | T²_K(h(a*), Jh(a*) t) ) = − Σ_{i∈B*} (λ_0^i / h_0^i(a*)) [ (∇h_0^i(a*)^T t_a)² − ‖Jh̄^i(a*) t_a‖² ].

In addition, by Lemma 2.1 and (13), we can deduce that

Π′_{K^{l_i}}(h^i(a*) − λ*^i; Jh^i(a*) t_a − t_λ^i) = (1/2) ( 1, b̄^{iT}/‖b̄^i‖ ; b̄^i/‖b̄^i‖, I_{l_i} + (b_0^i/‖b̄^i‖) I_{l_i} − (b_0^i/‖b̄^i‖) b̄^i b̄^{iT}/‖b̄^i‖² ) ( ∇h_0^i(a*)^T t_a − t_{λ_0}^i ; Jh̄^i(a*) t_a − t̄_λ^i ) = M (Jh^i(a*) t_a − t_λ^i) = Jh^i(a*) t_a, (15)

where b^i = h^i(a*) − λ^i and M denotes the matrix above. From (15), we have

( I − M ) Jh^i(a*) t_a = −M t_λ^i,

that is

(1/2) ∇h_0^i(a*)^T t_a − (1/2) c^{iT} Jh̄^i(a*) t_a = −(1/2) t_{λ_0}^i − (1/2) c^{iT} t̄_λ^i,
−(1/2) c^i ∇h_0^i(a*)^T t_a + (1/2)(1 − β_i) Jh̄^i(a*) t_a + (β_i/2) c^i c^{iT} Jh̄^i(a*) t_a = −(1/2) c^i t_{λ_0}^i − (1/2)(1 + β_i) t̄_λ^i + (β_i/2) c^i c^{iT} t̄_λ^i,

where c^i = b̄^i/‖b̄^i‖ = h̄^i(a*)/‖h̄^i(a*)‖ and β_i = b_0^i/‖b̄^i‖.

Case (I). If i ∈ B*, we have

Π′_{K^{l_i}}(h^i(a*) − λ^i; Jh^i(a*) t_a − t_λ^i) = (1/2) ( 1, c^{iT} ; c^i, (2/(1+σ)) I − ((1−σ)/(1+σ)) c^i c^{iT} ) (Jh^i(a*) t_a − t_λ^i) = M_i (Jh^i(a*) t_a − t_λ^i) = Jh^i(a*) t_a. (16)

When i ∈ B*, λ^i = (σ h_0^i(a*), −σ h̄^i(a*)). Now we need to prove that t_a ∈ Q(a*) and

∇h_0^i(a*)^T t_a = h̄^i(a*)^T Jh̄^i(a*) t_a / ‖h̄^i(a*)‖. (17)

From h_0^i(a*) = ‖h̄^i(a*)‖, we have

λ^i = ( σ h_0^i(a*) ; −σ h̄^i(a*) ) = σ h_0^i(a*) ( 1 ; −c^i ),

where ‖c^i‖ = 1 and c^i = h̄^i(a*)/h_0^i(a*). Hence

λ^{iT} M_i = (σ h_0^i(a*)/2) ( 1 − ‖c^i‖², c^{iT} − (2/(1+σ)) c^{iT} + ((1−σ)/(1+σ)) ‖c^i‖² c^{iT} ) = (0, 0). (18)

(16) and (18) imply

⟨λ^i, Jh^i(a*) t_a⟩ = 0,

which means t_a ∈ Q(a*).

It follows from (16) that

M_i (Jh^i(a*) t_a − t_λ^i) = Jh^i(a*) t_a ⟹ (M_i − I) Jh^i(a*) t_a = M_i t_λ^i. Multiplying on the left by (1, −c^{iT}) gives

(1, −c^{iT}) ( −1/2, (1/2) c^{iT} ; (1/2) c^i, −(σ/(1+σ)) I − ((1−σ)/(2(1+σ))) c^i c^{iT} ) ( ∇h_0^i(a*)^T t_a ; Jh̄^i(a*) t_a ) = (1, −c^{iT}) ( 1/2, (1/2) c^{iT} ; (1/2) c^i, (1/(1+σ)) I − ((1−σ)/(2(1+σ))) c^i c^{iT} ) ( t_{λ_0}^i ; t̄_λ^i ). (19)

Therefore, we can deduce that

( −1, (1/2) c^{iT} + (σ/(1+σ)) c^{iT} + ((1−σ)/(2(1+σ))) c^{iT} ) ( ∇h_0^i(a*)^T t_a ; Jh̄^i(a*) t_a ) = 0,

which is

( −1, c^{iT} ) ( ∇h_0^i(a*)^T t_a ; Jh̄^i(a*) t_a ) = 0. (20)

From (20), we get

∇h_0^i(a*)^T t_a = h̄^i(a*)^T Jh̄^i(a*) t_a / ‖h̄^i(a*)‖.

Hence, (17) holds.

Case II. Let i ∈ Z*. From the second equation of (13), we can get that

Π′_{K^{l_i}}(0 − λ^i; Jh^i(a*) t_a − t_λ^i) = Jh^i(a*) t_a.

Since −λ^i ∈ int(−K^{l_i}), it follows from the directional derivative of the projection in Lemma 2.1 that

Jh^i(a*) t_a = 0.

Case III. Let i ∈ I*. From the second equation of (13), we can get that

Π′_{K^{l_i}}(h^i(a*) − λ^i; Jh^i(a*) t_a − t_λ^i) = Jh^i(a*) t_a − t_λ^i = Jh^i(a*) t_a,

since h^i(a*) − λ^i = h^i(a*) ∈ int K^{l_i}. Then t_λ^i = 0.

Summing up the above three cases, t_a ∈ Q(a*) implies

{ Jh^i(a*) t_a = 0, i ∈ Z*;  h_0^i(a*) ∇h_0^i(a*)^T t_a = h̄^i(a*)^T Jh̄^i(a*) t_a, i ∈ B*.

From (12) and (13), we have

J_a L_f(a*, μ*, λ*) t_a + Jg(a*)^T t_μ − Jh(a*)^T t_λ = 0, (21)

Jg(a*) t_a = 0, (22)

Jh(a*) t_a − Π′_K(h(a*) − λ*; Jh(a*) t_a − t_λ) = 0. (23)

By (21) and (22), we have

0 = ⟨t_a, J_a L_f(a*, μ*, λ*) t_a + Jg(a*)^T t_μ − Jh(a*)^T t_λ⟩ = ⟨t_a, J_a L_f(a*, μ*, λ*) t_a⟩ − Σ_{i∈B*} ⟨Jh^i(a*) t_a, t_λ^i⟩.

For i B * ,

⟨Jh^i(a*) t_a, t_λ^i⟩ = ∇h_0^i(a*)^T t_a · t_{λ_0}^i + ⟨Jh̄^i(a*) t_a, t̄_λ^i⟩ = −( h̄^i(a*)^T Jh̄^i(a*) t_a / ‖h̄^i(a*)‖² ) h̄^i(a*)^T t̄_λ^i + ⟨Jh̄^i(a*) t_a, t̄_λ^i⟩ = ⟨Jh̄^i(a*) t_a, ( I − h̄^i(a*) h̄^i(a*)^T / ‖h̄^i(a*)‖² ) t̄_λ^i⟩, (24)

using (17) and t_{λ_0}^i = −c^{iT} t̄_λ^i, which follows from the first row of (19) together with (20).

By (19), we get

( −(1/2) ∇h_0^i(a*)^T t_a + (1/2) c^{iT} Jh̄^i(a*) t_a ; (1/2) c^i ( ∇h_0^i(a*)^T t_a − ((1−σ)/(1+σ)) c^{iT} Jh̄^i(a*) t_a ) − (σ/(1+σ)) Jh̄^i(a*) t_a ) = ( (1/2) t_{λ_0}^i + (1/2) c^{iT} t̄_λ^i ; (1/2) c^i ( t_{λ_0}^i − ((1−σ)/(1+σ)) c^{iT} t̄_λ^i ) + (1/(1+σ)) t̄_λ^i ). (25)

Since

(1/2) c^i ( ∇h_0^i(a*)^T t_a − ((1−σ)/(1+σ)) c^{iT} Jh̄^i(a*) t_a ) − (σ/(1+σ)) Jh̄^i(a*) t_a = (1/2) c^i ( ∇h_0^i(a*)^T t_a − ((1−σ)/(1+σ)) ∇h_0^i(a*)^T t_a ) − (σ/(1+σ)) Jh̄^i(a*) t_a = (1/2) c^i ( (2σ/(1+σ)) ∇h_0^i(a*)^T t_a ) − (σ/(1+σ)) Jh̄^i(a*) t_a = (σ/(1+σ)) ( c^i ∇h_0^i(a*)^T t_a − Jh̄^i(a*) t_a ), (26)

where the second step uses (20),

and

(1/2) c^i ( t_{λ_0}^i − ((1−σ)/(1+σ)) c^{iT} t̄_λ^i ) + (1/(1+σ)) t̄_λ^i = (1/2) c^i ( t_{λ_0}^i + ((1−σ)/(1+σ)) t_{λ_0}^i ) + (1/(1+σ)) t̄_λ^i = (1/(1+σ)) c^i t_{λ_0}^i + (1/(1+σ)) t̄_λ^i = (1/(1+σ)) ( c^i t_{λ_0}^i + t̄_λ^i ). (27)

From (25), (26) and (27), we get that

(1/(1+σ)) ( c^i t_{λ_0}^i + t̄_λ^i ) = (σ/(1+σ)) ( c^i ∇h_0^i(a*)^T t_a − Jh̄^i(a*) t_a ),

that is

c^i t_{λ_0}^i + t̄_λ^i = σ ( c^i ∇h_0^i(a*)^T t_a − Jh̄^i(a*) t_a ). (28)

Note that

c^i t_{λ_0}^i + t̄_λ^i = ( I − c^i c^{iT} ) t̄_λ^i = ( I − h̄^i(a*) h̄^i(a*)^T / ‖h̄^i(a*)‖² ) t̄_λ^i. (29)

It follows from (24), (28) and (29) that

⟨Jh^i(a*) t_a, t_λ^i⟩ = ⟨Jh̄^i(a*) t_a, ( I − h̄^i(a*) h̄^i(a*)^T / ‖h̄^i(a*)‖² ) t̄_λ^i⟩ = σ ⟨Jh̄^i(a*) t_a, c^i ∇h_0^i(a*)^T t_a − Jh̄^i(a*) t_a⟩ = σ ( (∇h_0^i(a*)^T t_a)² − ‖Jh̄^i(a*) t_a‖² ), and hence, since λ_0^i = σ h_0^i(a*),

Σ_{i∈B*} ⟨Jh^i(a*) t_a, t_λ^i⟩ = Σ_{i∈B*} (λ_0^i / h_0^i(a*)) [ (∇h_0^i(a*)^T t_a)² − ‖Jh̄^i(a*) t_a‖² ] = (t_a)^T ( Σ_{j=1}^p (λ_0^j / h_0^j(a*)) ∇h^j(a*) ( 1, 0^T ; 0, −I ) (∇h^j(a*))^T ) t_a = −(t_a)^T H(a*, λ*) t_a. (30)

From (30) and (14), we have

⟨J_a L_f(a*, μ*, λ*) t_a, t_a⟩ + (t_a)^T H(a*, λ*) t_a = 0.

From the second-order sufficient condition, we have t_a = 0. Hence, from (21), we deduce

Jg(a*)^T t_μ − Jh(a*)^T t_λ = 0.

(23) and condition (iv) of Theorem 3.1 imply t_μ = 0, t_λ = 0. Therefore, JH(a*, μ*, λ*) is nonsingular. □

4. A Modified Newton Method and Numerical Experiments

In this section, we use a modified Newton algorithm to solve the unconstrained smooth minimization problem (9).

The presented algorithm is a counterpart, for second-order cone constrained VI problems, of ( [7], Algorithm 9.1.10), which is used to solve polyhedral cone constrained VI problems. Note that although H(z) is nonsmooth, the merit function Φ(z) is continuously differentiable if f is, see ( [7], Proposition 1.5.3). In the following proposition we give the relationship between the merit function and the KKT condition.

Proposition 4.1. Suppose that f , h and g are continuously differentiable. If J H ( a , μ , λ ) is nonsingular, then every stationary point of the merit function Φ ( a , μ , λ ) is a KKT triple of the SOCCVI.

The proof of Proposition 4.1 is similar to that of Proposition 4.1 in [17], so we omit it here.

Algorithm 4.1

Data Given z^0 = (a^0, μ^0, λ^0) ∈ IR^n × IR^m × IR^l, σ > 0, p > 1 and γ ∈ (0, 1).

Step 1 Set k = 0 .

Step 2 If z^k = (a^k, μ^k, λ^k) is a stationary point of Φ, stop.

Step 3 Find a solution t k of the system

H(z^k) + JH(z^k) t = 0. (31)

If the system (31) is not solvable or if the condition

∇Φ(z^k)^T t^k ≤ −σ ‖t^k‖^p (32)

is not satisfied, (re)set t^k ← −∇Φ(z^k).

Step 4 Find the smallest nonnegative integer i k such that, with i = i k ,

Φ(z^k + 2^{−i} t^k) ≤ Φ(z^k) + γ 2^{−i} ∇Φ(z^k)^T t^k; (33)

set τ_k ← 2^{−i_k}.

Step 5 Set z^{k+1} ← z^k + τ_k t^k and k ← k + 1; go to Step 2.
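
Steps 1-5 can be sketched as follows for a smooth residual H with Jacobian JH (a minimal sketch, not the authors' implementation; the quadratic test system below is our own illustration):

```python
import numpy as np

def modified_newton(H, JH, z0, sigma=1e-2, p=2.1, gamma=0.45, tol=1e-6, max_iter=100):
    # Algorithm 4.1: Newton step on H(z) = 0, with a steepest-descent fallback
    # and an Armijo line search on the merit function Phi(z) = 0.5*||H(z)||^2.
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        Hz = H(z)
        J = JH(z)
        grad = J.T @ Hz                      # grad Phi(z) = JH(z)^T H(z)
        if np.linalg.norm(grad) <= tol:      # Step 2: stationary point of Phi
            break
        try:                                 # Step 3: Newton direction (31)
            t = np.linalg.solve(J, -Hz)
            if grad @ t > -sigma * np.linalg.norm(t) ** p:
                t = -grad                    # descent test (32) failed
        except np.linalg.LinAlgError:
            t = -grad                        # (31) not solvable
        phi = 0.5 * (Hz @ Hz)
        tau = 1.0                            # Step 4: Armijo rule (33), tau = 2^{-i}
        for _ in range(50):
            Hn = H(z + tau * t)
            if 0.5 * (Hn @ Hn) <= phi + gamma * tau * (grad @ t):
                break
            tau *= 0.5
        z = z + tau * t                      # Step 5
    return z
```

When (31) has no solution or the Newton direction fails the descent test (32), the iteration falls back to the steepest-descent direction of Φ, so every accepted step decreases the merit function.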

The superlinear convergence of Algorithm 4.1 can be obtained by arguments analogous to those for Algorithm 4.1 in [17]. To demonstrate the effectiveness of the modified Newton method, some illustrative SOCCVI problems are tested. All trajectories successfully converge to the SOCCVI solutions.

5. Numerical Experiments

Example 5.1. Consider the SOCCVI problem (1) where

f(x) = ( x_1 − 5 ; 2x_2 − 1 ; x_3 − 2 ; 2x_4 − 2 ; x_5 + 1 )

and

Q = { x ∈ IR^5 : h(x) = x ∈ K^5 }.

This problem has a solution x* = (5, 0.5, 2, 1, −1)^T. It can be verified that the Lagrangian function for this example is

L(x, μ, λ) = f(x) − λ.

The Jacobian of the Lagrangian function is

∇L(x, μ, λ) = [ ∇f(x), −I_{5×5} ],

where I_{5×5} is the identity matrix and ∇f(x) is the Jacobian of f(x). For Example 5.1, Table 1 compares the results obtained with different complementarity functions.

In our simulations, the initial points are x^0 = (1, 0, 0, 0, 0)^T, λ^0 = (0, 0, 0, 0, 0)^T, and the stopping criterion is Φ(z) ≤ 1 × 10^{−6}. The trajectories of Algorithm 4.1 for this problem are shown in Figure 1 and Figure 2.
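
Since f(x*) = 0 and x* lies in the interior of K^5 (5 > ‖(0.5, 2, 1, −1)‖ = 2.5), the multiplier λ* = 0 solves the KKT system of Example 5.1. A quick residual check (a sketch, with the projection (8) inlined):

```python
import numpy as np

def f(x):
    # f from Example 5.1
    return np.array([x[0] - 5, 2 * x[1] - 1, x[2] - 2, 2 * x[3] - 2, x[4] + 1])

def proj_soc(a):
    # Metric projection onto K^n, formula (8)
    a0, na = a[0], np.linalg.norm(a[1:])
    if na <= a0:
        return np.asarray(a, dtype=float)
    if na <= -a0:
        return np.zeros(len(a))
    return 0.5 * (1.0 + a0 / na) * np.concatenate(([na], a[1:]))

x_star = np.array([5.0, 0.5, 2.0, 1.0, -1.0])
lam_star = np.zeros(5)                        # x* interior to K^5, so lambda* = 0
res_L = f(x_star) - lam_star                  # L(x, mu, lambda) = f(x) - lambda
res_C = x_star - proj_soc(x_star - lam_star)  # NR residual for h(x) = x
```

Both residual blocks vanish, confirming that (x*, λ*) is a KKT point.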

Example 5.2. Consider the SOCCVI problem (1) where

Table 1. The results of Example 5.1.

Figure 1. Transient behavior of x in Example 5.1 with σ = 0.01 and γ = 0.45 .

Figure 2. Convergence behavior of the error ‖x − x*‖ for Example 5.1.

f(x) = ( x_1 − 4 ; e^{x_2} − 2.718 ; 3x_3 + 4 ; tan x_4 + 1 ; x_5 + 2 ; 2x_6 − 1 )

and

Q = { x ∈ IR^6 : h(x) = x ∈ K^6 }.

This problem has an approximate solution x* = (4, 0.9999, −1.3333, −0.7854, −2, 0.5)^T. It can be verified that the Lagrangian function for this example is

L ( x , μ , λ ) = f ( x ) λ .

The Jacobian of the Lagrangian function is

∇L(x, μ, λ) = [ ∇f(x), −I_{6×6} ],

where I_{6×6} is the identity matrix and ∇f(x) is the Jacobian of f(x). For Example 5.2, Table 2 compares the results obtained with different complementarity functions.

In our simulations, the initial points are x^0 = (1, 0, 0, 0, 0, 0)^T, λ^0 = (0, 0, 0, 0, 0, 0)^T, and the stopping criterion is Φ(z) ≤ 1 × 10^{−6}. The trajectories of Algorithm 4.1 for this problem are shown in Figure 3 and Figure 4.

Table 2. The results of Example 5.2.

Figure 3. Transient behavior of x in Example 5.2 with σ = 0.01 and γ = 0.45 .

Figure 4. Convergence behavior of the error ‖x − x*‖ for Example 5.2.

Table 3. The results of Example 5.3.

Example 5.3. Consider the SOCCVI problem (1) where

f(x) = ( x_1 − 6 ; 4x_2 − 3 ; e^{x_3} − 2.718 ; tan x_4 − 1 ; x_5 − 2 ; 3x_6 + 1 )

and

Q = { x ∈ IR^6 : h(x) = x ∈ K^6 }.

For Example 5.3, Table 3 compares the results obtained with different complementarity functions.

This problem has an approximate solution x* = (6, 0.75, 0.9999, 0.7854, 2, −0.3333)^T. In our simulations, the initial points are x^0 = (1, 0, 0, 0, 0, 0)^T, λ^0 = (0, 0, 0, 0, 0, 0)^T, and the stopping criterion is Φ(z) ≤ 1 × 10^{−6}. The trajectories of Algorithm 4.1 for this problem are shown in Figure 5 and Figure 6.

Example 5.4. Consider the SOCCVI problem (1) where

Figure 5. Transient behavior of x in Example 5.3 with σ = 0.01 and γ = 0.45 .

Figure 6. Convergence behavior of the error ‖x − x*‖ for Example 5.3.

f(x) = ( 2x_1 − 4 ; e^{x_2} − 1 ; 3x_3 − 4 ; sin x_4 ; x_5 )

and

Q = { x ∈ IR^5 : h(x) = x ∈ K^5 }.

This problem has a solution x* = (2, 0, 1.3333, 0, 0)^T. In our simulations, the initial points are x^0 = (1, 0, 0, 0, 0)^T, λ^0 = (0, 0, 0, 0, 0)^T, and the stopping criterion is Φ(z) ≤ 1 × 10^{−6}. A comparison of the decay rates of ‖x(t) − x*‖ for the two complementarity functions (the FB function and the NR function) is shown in Figure 7. The error plots reveal that the NR function and the FB function have almost the same convergence rate.

Figure 7. Convergence behavior of the error ‖x − x*‖ for Example 5.4.

6. Conclusion

In this paper, we target the second-order cone constrained variational inequality (SOCCVI) problem using the modified Newton algorithm applied in [17], based on the metric projector. We have established that, at a solution of a SOCCVI problem, the second-order sufficient condition together with constraint nondegeneracy implies the nonsingularity of the Clarke generalized Jacobian of the KKT system constructed by the metric projector. Numerical simulations show the efficiency of the Newton algorithm. This paper improves our previous work [17].

Funding

This work is supported by National Natural Science Foundation of China (Grant No.11301348).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Chen, J.-S. (2006) The Semismooth-Related Properties of a Merit Function and a Descent Method for the Nonlinear Complementarity Problem. Journal of Global Optimization, 36, 565-580.
https://doi.org/10.1007/s10898-006-9027-y
[2] Chen, J.-S., Gao, H.-T. and Pan, S.-H. (2009) A Derivative-Free R-Linear Convergent Algorithm Based on the Generalized Fischer-Burmeister Merit Function. Journal of Computational and Applied Mathematics, 232, 455-471.
https://doi.org/10.1016/j.cam.2009.06.022
[3] Chen, J.-S. and Pan, S.-H. (2008) A Family of NCP Functions and a Descent Method for the Nonlinear Complementarity Problem. Computational Optimization and Applications, 40, 389-404.
https://doi.org/10.1007/s10589-007-9086-0
[4] Chen, X., Qi, L. and Sun, D. (1998) Global and Superlinear Convergence of the Smoothing Newton Method and Its Application to General Box Constrained Variational Inequalities. Mathematics of Computation, 67, 519-540.
https://doi.org/10.1090/S0025-5718-98-00932-6
[5] Facchinei, F. and Pang, J. (2003) Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York.
https://doi.org/10.1007/b97544
[6] Wright, S.J. (1994) An Infeasible-Interior-Point Algorithm for Linear Complementarity Problems. Mathematical Programming, 67, 29-51.
https://doi.org/10.1007/BF01582211
[7] Facchinei, F. and Pang, J.-S. (2003) Finite-Dimensional Variational Inequalities and Complementarity Problems, Volume II. Springer-Verlag, New York.
https://doi.org/10.1007/b97543
[8] Chen, X.D., Sun, D. and Sun, J. (2003) Complementarity Functions and Numerical Experiments on Some Smoothing Newton Methods for Second-Order-Cone Complementarity Problems. Computational Optimization and Applications, 25, 39-56.
https://doi.org/10.1023/A:1022996819381
[9] Fischer, A. (1992) A Special Newton-Type Optimization Method. Optimization, 24, 269-284.
https://doi.org/10.1080/02331939208843795
[10] Ferris, M.C. and Pang, J.-S., Eds. (1997) Complementarity and Variational Problems: State of the Art. SIAM, Philadelphia.
[11] Pan, S. and Chen, J.S. (2008) A Semismooth Newton Method for the COCCP Based on a One-Parametric Class of SOC Complementarity Functions. Computational Optimization and Applications, 45, 59-88.
https://doi.org/10.1007/s10589-008-9166-9
[12] Pang, J.-S. and Qi, L. (1993) Nonsmooth Equations: Motivation and Algorithms. SIAM Journal on Optimization, 3, 443-465.
https://doi.org/10.1137/0803021
[13] Qi, L. and Sun, J. (1993) A Nonsmooth Version of Newton’s Method. Mathematical Programming, 58, 353-367.
https://doi.org/10.1007/BF01581275
[14] Smale, S. and Gleason, A.M. (1987) Algorithms for Solving Equations. In: Proceedings of the International Congress of Mathematicians, American Mathematical Society, Providence, 172-195.
[15] Sun, D. and Sun, J. (2005) Strong Semismoothness of the Fischer-Burmeister SDC and SOC Complementarity Functions. Mathematical Programming, 103, 575-581.
https://doi.org/10.1007/s10107-005-0577-4
[16] Tseng, P. (1998) Merit Functions for Semidefinite Complementarity Problems. Mathematical Programming, 83, 159-185.
https://doi.org/10.1007/BF02680556
[17] Sun, J.-H. and Zhang, L.-W. (2009) A Globally Convergent Method Based on Fischer-Burmeister Operators for Solving Second-Order-Cone Constrained Variational Inequality Problems. Computers and Mathematics with Applications, 58, 1936-1946.
https://doi.org/10.1016/j.camwa.2009.07.084
[18] Sun, J.-H., Chen, J.-S. and Ko, C.-H. (2012) Neural Networks for Solving Second-Order Cone Constrained Variational Inequality Problem. Computational Optimization and Applications, 51, 623-648.
https://doi.org/10.1007/s10589-010-9359-x
[19] Bonnans, J.F. and Shapiro, A. (2000) Perturbation Analysis of Optimization Problems. Spring-Verlag, New York.
https://doi.org/10.1007/978-1-4612-1394-9
[20] Bonnans, J.F. and Héctor, R.C. (2005) Perturbation Analysis of Second-Order Cone Programming Problems. Mathematical Programming, 104, 205-227.
https://doi.org/10.1007/s10107-005-0613-4
[21] Rockafellar, R.T. and Wets, R.J.-B. (1998) Variational Analysis. Springer-Verlag, New York.
https://doi.org/10.1007/978-3-642-02431-3

Copyright © 2023 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.