A Projection and Contraction Method for P-Order Cone Constraint Stochastic Variational Inequality Problem

Abstract

In this paper, we study the stochastic variational inequality problem with a p-order cone constraint. We first apply the sample average approximation (SAA) method to handle the expectation and obtain an approximation problem, whose rationality is then established. When the underlying function is Lipschitz continuous, we develop a projection and contraction algorithm to solve the approximation problem. Finally, the method is applied to several numerical experiments, which verify the effectiveness of the algorithm.

Share and Cite:

Zheng, M., Xu, X. and Sun, J. (2022) A Projection and Contraction Method for P-Order Cone Constraint Stochastic Variational Inequality Problem. Journal of Applied Mathematics and Physics, 10, 1113-1125. doi: 10.4236/jamp.2022.104078.

1. Introduction

Variational inequalities have important applications in many areas such as physics, economic equilibrium theory, cybernetics, engineering, and optimization [1] [2] [3]. Since many random factors in real life cannot be ignored, such as weather, demand, and price, the stochastic variational inequality has become a research hotspot in recent decades. In this paper, we consider the stochastic variational inequality problem with a p-order cone constraint (abbreviated POSVI), which is to find $x \in C$ such that

$$\langle E[f(x,\xi)],\, y - x \rangle \ge 0, \quad \forall y \in C \qquad (1)$$

where $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product, $\xi$ is a random variable defined on a probability space $\Omega$, $E[f(x,\xi)] = \int_{\Omega} f(x,\xi(\omega))\, dP(\omega)$ is the expectation of $f(x,\xi)$ with respect to $\xi$, $f(x,\xi): \mathbb{R}^n \times \Omega \to \mathbb{R}^n$ is a given mapping, $C = \{x \mid x \in K_p\}$, $K_p$ is a p-order cone, and $K_p^*$ is the dual cone of $K_p$, which can be expressed as [4]:

$$K_p := \left\{ x \in \mathbb{R}^n \,\middle|\, x_0 \ge \Big( \sum_{i=2}^{n} |x_i|^p \Big)^{1/p} \right\} = \left\{ x = (x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n-1} \,\middle|\, x_0 \ge \|\bar{x}\|_p \right\} \quad (p > 1)$$

$$K_p^* := \left\{ y \in \mathbb{R}^n \,\middle|\, y_0 \ge \Big( \sum_{i=2}^{n} |y_i|^q \Big)^{1/q} \right\} = \left\{ y = (y_0, \bar{y}) \in \mathbb{R} \times \mathbb{R}^{n-1} \,\middle|\, y_0 \ge \|\bar{y}\|_q \right\} = K_q$$

where $q > 1$ satisfies $\frac{1}{p} + \frac{1}{q} = 1$. Both $K_p$ and $K_p^*$ are closed convex cones. $K_p$ is not a self-dual cone when $p \ne q$; in other words, $K_p$ is not a symmetric cone for $p \ne 2$. Clearly, the p-order cone reduces to the n-dimensional second-order cone when $p = 2$:

$$K^n := \{ (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1} \mid x_1 \ge \|x_2\| \}$$

Thus, the POSVI studied in this paper can be regarded as an extension of the second-order cone constrained stochastic variational inequality problem. Research on its theory and algorithms has made great progress in recent decades. Our work focuses on transforming the POSVI into the problem of finding the zero point of an equation and constructing an optimization algorithm under certain conditions. Hence, the following groundwork is done first.

Assume that $C$ is a closed convex subset of $\mathbb{R}^n$ and $x \in \mathbb{R}^n$; then the projection of $x$ onto $C$ is [5]:

$$P_C(x) = \arg\min \{ \|y - x\| \mid y \in C \} \qquad (2)$$

By reference [6], $x^*$ is a solution of problem (1) if and only if, for any $\alpha > 0$, the following equation holds:

$$x^* = P_C\big( x^* - \alpha E[f(x^*, \xi)] \big) \qquad (3)$$

Define the residual of this equation as $G(x, \alpha)$:

$$G(x, \alpha) := x - P_C\big( x - \alpha E[f(x, \xi)] \big) \qquad (4)$$

Then solving the POSVI reduces to finding a zero point of $G(x, \alpha)$.
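As a quick numerical illustration of the fixed-point characterization (3)-(4) (this example is ours, not from the paper): we use the nonnegative orthant as a simple stand-in for the closed convex set $C$, and an affine map of our own choosing in place of $E[f(x,\xi)]$, to check that a solution of the variational inequality is exactly a zero of the residual.

```python
import numpy as np

def residual(x, alpha, F, proj):
    """Natural residual G(x, alpha) = x - P_C(x - alpha * F(x))."""
    return x - proj(x - alpha * F(x))

# Illustration with C = R^n_+ (projection = componentwise positive part)
# and the affine map F(x) = A x + b, chosen only for demonstration.
proj_orthant = lambda z: np.maximum(z, 0.0)
A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([-2.0, 3.0])
F = lambda x: A @ x + b

x_star = np.array([1.0, 0.0])            # solves the VI on R^2_+
r = residual(x_star, 0.5, F, proj_orthant)
print(np.linalg.norm(r))                 # prints 0.0: x* is a fixed point
```

For this toy problem one checks directly that $\langle F(x^*), y - x^* \rangle = 3 y_2 \ge 0$ for all $y \ge 0$, so $x^* = (1, 0)$ indeed solves the variational inequality, and its residual vanishes for any $\alpha > 0$.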

In the following, we concentrate on constructing an algorithm to solve the problem. Several optimization algorithms have been applied to stochastic optimization problems. For example, Korpelevich and Antipin studied the extragradient projection algorithm, in which every iteration computes the projection twice; see [7] [8]. Building on this work, Tseng proposed a gradient projection algorithm in which every iteration computes the projection only once; see [9]. Unlike the first two algorithms, Duong and Yekini obtain the step size through an Armijo-type search, which avoids the difficulty in [9] of estimating the Lipschitz constant; see [10] [11]. For a monotone and Lipschitz continuous mapping, Yang and Liu gave projection algorithms whose advantage is that the computation of the step size does not depend on an Armijo-like search; see [12] [13] [14]. Combining the above algorithms, we propose a projection and contraction method.

The organizational framework of this paper is as follows: In Section 2, we introduce some definitions and conclusions about the p-order cone. In Section 3, the basic idea of the sample average approximation method is presented and the relevant conclusions are given. In Section 4, a projection and contraction method is proposed and the sequence generated by the method is proved to converge to the true solution. In Section 5, numerical examples are given and solved by the method; the numerical results show that the method is effective. In Section 6, the work of this paper is summarized and future research topics are given.

2. Preliminaries

In this section, we give some basic concepts and conclusions related to p-order cone in order to facilitate the following research.

Definition 2.1 ( [15] ). A mapping $h: \mathbb{R}^n \to \mathbb{R}^n$ is said to be Lipschitz continuous if there exists a constant $L > 0$ such that for any $x, y \in \mathbb{R}^n$,

$$\| h(x) - h(y) \| \le L \| x - y \|$$

Definition 2.2 ( [15] ). Let $h: \mathbb{R}^n \to \mathbb{R}^n$. The mapping $h$ is said to be

(a) monotone if, for any $x, y \in \mathbb{R}^n$,

$$\langle x - y,\, h(x) - h(y) \rangle \ge 0$$

(b) strictly monotone if, for any $x \ne y$,

$$\langle x - y,\, h(x) - h(y) \rangle > 0$$

(c) strongly monotone with constant $\beta > 0$ if, for any $x, y \in \mathbb{R}^n$,

$$\langle x - y,\, h(x) - h(y) \rangle \ge \beta \| x - y \|^2$$

Definition 2.3 ( [4] ). For any $x = (x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n-1}$ and $y = (y_0, \bar{y}) \in \mathbb{R} \times \mathbb{R}^{n-1}$, the Jordan product associated with the p-order cone is

$$x \bullet y := \begin{bmatrix} \langle x, y \rangle \\ w \end{bmatrix}$$

where $w := (w_2, \ldots, w_n)^T$ and $w_i = |x_0|^{p/q} |y_i| - |y_0| |x_i|^{p/q}$, $i = 2, 3, \ldots, n$.

Definition 2.4 ( [16] ). Let $x = (x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n-1}$. Then the projection of $x$ onto $K_p$ is given by

$$\Pi_{K_p}(x) = \begin{cases} x, & x \in K_p \\ 0, & x \in -K_p^* = -K_q \\ \nu, & -\|\bar{x}\|_q < x_0 < \|\bar{x}\|_p \end{cases} \qquad (5)$$

where $\nu = (\nu_0, \bar{\nu})$, $\bar{\nu} = (\nu_2, \nu_3, \ldots, \nu_n)^T \in \mathbb{R}^{n-1}$, with

$$\nu_0 = \|\bar{\nu}\|_p = \left( |\nu_2|^p + |\nu_3|^p + \cdots + |\nu_n|^p \right)^{1/p}$$

$$\nu_i - x_i + (\nu_0 - x_0)\, \nu_0^{1-p} |\nu_i|^{p-2} \nu_i = 0, \quad i = 2, \ldots, n.$$

Definition 2.5 ( [16] ). Let $x = (x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n-1}$. Then $x$ can be decomposed as

$$x = \rho_1(x)\, v^{(1)}(x) + \rho_2(x)\, v^{(2)}(x)$$

where

$$\rho_1(x) = \frac{x_0 + \|\bar{x}\|_p}{2}, \qquad \rho_2(x) = \frac{x_0 - \|\bar{x}\|_p}{2}$$

$$v^{(1)}(x) = \begin{bmatrix} 1 \\ w \end{bmatrix}, \qquad v^{(2)}(x) = \begin{bmatrix} 1 \\ -w \end{bmatrix}$$

and $w = \bar{x} / \|\bar{x}\|_p$ if $\bar{x} \ne 0$; if $\bar{x} = 0$, then $w$ is any vector in $\mathbb{R}^{n-1}$ satisfying $\|w\| = 1$.
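For the special case $p = 2$, the decomposition of Definition 2.5 yields the projection of Definition 2.4 in closed form: clamp the two spectral values $\rho_1, \rho_2$ at zero. The sketch below is our own illustration of this fact (Euclidean norm, $p = 2$ only); for general $p$ the nonlinear system of Definition 2.4 must be solved numerically.

```python
import numpy as np

def proj_soc(x):
    """Projection onto the second-order cone (p = 2) via the spectral
    decomposition of Definition 2.5: keep the nonnegative parts of
    rho_1 and rho_2."""
    x0, xbar = x[0], x[1:]
    nrm = np.linalg.norm(xbar)
    w = xbar / nrm if nrm > 0 else np.zeros_like(xbar)
    rho1 = (x0 + nrm) / 2.0
    rho2 = (x0 - nrm) / 2.0
    v1 = np.concatenate(([1.0], w))
    v2 = np.concatenate(([1.0], -w))
    return max(rho1, 0.0) * v1 + max(rho2, 0.0) * v2

print(proj_soc(np.array([0.0, 3.0, 4.0])))  # projection of (0, 3, 4): [2.5, 1.5, 2.0]
```

One can check the three cases of (5) directly: a point already in the cone is fixed, a point in $-K^*$ maps to the origin, and a boundary-region point such as $(0, 3, 4)$ maps to the nearest boundary point $(2.5, 1.5, 2.0)$.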

Lemma 2.1 ( [5] ). If the projection is defined as in (2), then the following properties hold:

(i) $\{ x - P_C(x) \}^T \{ \kappa - P_C(x) \} \le 0, \quad \forall x \in \mathbb{R}^n, \ \kappa \in C$.

(ii) $\{ x - y \}^T \{ P_C(x) - P_C(y) \} \ge \| P_C(x) - P_C(y) \|^2, \quad \forall x, y \in \mathbb{R}^n$.

(iii) $\| P_C(x) - P_C(y) \| \le \| x - y \|, \quad \forall x, y \in \mathbb{R}^n$.

(iv) $\| P_C(x) - \lambda \|^2 \le \| x - \lambda \|^2 - \| x - P_C(x) \|^2, \quad \forall \lambda \in C$.
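The four properties can be sanity-checked numerically for any concrete closed convex set. The sketch below (our illustration, not from the paper) uses $C = \mathbb{R}^n_+$, whose projection is the componentwise positive part:

```python
import numpy as np

rng = np.random.default_rng(0)
P = lambda z: np.maximum(z, 0.0)      # P_C for C = R^n_+, a simple test case

x, y = rng.normal(size=5), rng.normal(size=5)
kappa = np.abs(rng.normal(size=5))    # an arbitrary point of C

# (i) the residual x - P_C(x) makes an obtuse angle with C - P_C(x)
assert (x - P(x)) @ (kappa - P(x)) <= 1e-12
# (ii) co-coercivity of the projection
assert (x - y) @ (P(x) - P(y)) >= np.sum((P(x) - P(y))**2) - 1e-12
# (iii) non-expansiveness
assert np.linalg.norm(P(x) - P(y)) <= np.linalg.norm(x - y) + 1e-12
# (iv) the projection strictly decreases distance to any point of C
assert np.sum((P(x) - kappa)**2) <= np.sum((x - kappa)**2) - np.sum((x - P(x))**2) + 1e-12
print("all four properties hold")
```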

3. Sample Average Approximation

If the integral involved in the POSVI can be evaluated, then it can be solved as a deterministic variational inequality problem. However, $E[f(x,\xi)]$ usually cannot be evaluated accurately, because the distribution of $\xi$ is unknown and information about $\xi$ can only be acquired from past data samples, which inspires us to seek a function approximating $E[f(x,\xi)]$. Many scholars have explored approximation methods [17] [18] [19]: sample-path optimization (SPO), sample average approximation (SAA), and stochastic approximation (SA). In this paper, we select the SAA method, whose main idea is to generate $N$ independent and identically distributed (i.i.d.) samples $\xi^1, \xi^2, \ldots, \xi^N$ and use the sample average function

$$F^N(x) = \frac{1}{N} \sum_{i=1}^{N} f(x, \xi^i)$$

to approximate $E[f(x,\xi)]$. Then we obtain the approximation problem: find $x \in C$ such that

$$\langle F^N(x),\, y - x \rangle \ge 0, \quad \forall y \in C \qquad (6)$$

where $C = \{x \mid x \in K_p\}$. We call (1) the true problem and (6) the SAA problem.
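The sample average construction is straightforward to implement. In the sketch below (our illustration), the map $f$ and the distribution of $\xi$ are toy choices made so that the exact expectation is known and the quality of $F^N$ can be observed directly:

```python
import numpy as np

rng = np.random.default_rng(0)

def F_N(x, samples, f):
    """Sample average F_N(x) = (1/N) sum_i f(x, xi_i), approximating E[f(x, xi)]."""
    return np.mean([f(x, xi) for xi in samples], axis=0)

# Toy model (our choice, not from the paper): f(x, xi) = x - xi with
# xi ~ N(mu, I), so E[f(x, xi)] = x - mu exactly.
mu = np.array([1.0, -2.0])
f = lambda x, xi: x - xi
samples = rng.normal(mu, 1.0, size=(100000, 2))
x = np.array([3.0, 3.0])
print(F_N(x, samples, f))   # close to x - mu = [2.0, 5.0]
```

By the law of large numbers $F^N(x) \to E[f(x,\xi)]$ w.p.1 as $N \to \infty$; Lemma 3.1 below strengthens this to a uniform exponential rate on compact sets.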

Definition 3.1 ( [20] ). Let $f_j(x,\xi)$ denote the $j$th component of $f(x,\xi)$, $j = 1, 2, \ldots, n$. Define the moment generating function of $f_j(x,\xi)$ as

$$M_{f_j}(t) := E\big[ e^{t f_j(x,\xi)} \big]$$

and the moment generating function of $f_j(x,\xi) - E[f_j(x,\xi)]$ as

$$M_x(t) := E\big[ e^{t ( f_j(x,\xi) - E[f_j(x,\xi)] )} \big]$$

thus,

$$M_{f_j}^N(t) := E\big[ e^{t ( F_j^N(x) - E[f_j(x,\xi)] )} \big]$$

Lemma 3.1 ( [20] ). Let $X$ be a compact subset of $C$, $j = 1, 2, \ldots, n$. Suppose the following conditions hold:

1) For every $x \in X$, the moment generating function $M_x(t)$ is finite for $t$ in a certain neighborhood of zero;

2) There is a measurable function $k(\xi): \Omega \to \mathbb{R}_+$ such that for any $\xi \in \Omega$ and $x, x' \in X$,

$$| f_j(x,\xi) - f_j(x',\xi) | \le k_j(\xi) \| x - x' \|$$

3) The moment generating function $M_k(t) = E[e^{t k(\xi)}]$ of $k(\xi)$ is finite for $t$ in a certain neighborhood of zero.

Then for any $\varepsilon > 0$, there exist $c(\varepsilon) > 0$ and $\beta(\varepsilon) > 0$, independent of $N$, such that

$$\mathrm{Prob} \left\{ \sup_{x \in X} \big\| E[f(x,\xi)] - F^N(x) \big\| \ge \varepsilon \right\} \le c(\varepsilon)\, e^{-N \beta(\varepsilon)}$$

Lemma 3.1 guarantees that $F^N$ converges to $E[f(x,\xi)]$ uniformly on $X$ at an exponential rate, so the approximation by the SAA method is reasonable.

Lemma 3.2 ( [21] ). Let $x^N$ be a solution of SAA problem (6) for each $N$, and let $x^*$ denote the set of solutions of true problem (1). Suppose that:

a) the conditions of Lemma 3.1 hold;

b) $M_{f_j}(t) := \lim_{N \to \infty} M_{f_j}^N(t)$ ($j = 1, 2, \ldots, n$) exists for every $x \in X$ and $t \in \mathbb{R}$.

Then for every $\varepsilon > 0$, there exist $c(\varepsilon) > 0$ and $\beta(\varepsilon) > 0$, independent of $N$, such that

$$\mathrm{Prob} \{ d(x^N, x^*) \ge \varepsilon \} \le c(\varepsilon)\, e^{-N \beta(\varepsilon)} \qquad (7)$$

for $N$ sufficiently large, where $d(x, D) := \inf_{x' \in D} \| x - x' \|$ denotes the distance from the point $x$ to the set $D$.

Lemma 3.2 shows that the solutions of SAA problem (6) converge to the optimal solution set of true problem (1) with probability one (w.p.1). Let

$$G(x, \alpha) := x - P_C\big( x - \alpha F^N(x) \big)$$

again; by (3) and (4), solving SAA problem (6) is equivalent to finding a zero of $G(x, \alpha)$.

4. Projection and Contraction Method and Convergence Analysis

In this part, we propose the projection and contraction method based on the preceding analysis and prove that the algorithm is convergent. Finally, the algorithm is applied to numerical examples. To facilitate our exposition, we denote the projection of $x$ onto $K_p$ by $x_+$ and the projection of $x$ onto the polar cone $-K_p^*$ by $x_-$. By the Moreau decomposition, it is clear that $\langle x_+, x_- \rangle = 0$ for any $x \in \mathbb{R}^n$.

Algorithm 3.1.

Step 0. Given $\varepsilon > 0$, let $0 < u < w < 1$, $\tau \in (0, 2)$, $s \in (0, 1)$, $\alpha_0 = 1$, and let $x^0$ be an arbitrary initial iteration point. Set $k := 0$.

Step 1. Compute $G(x^k, \alpha_k)$. If $\| G(x^k, \alpha_k) \| < \varepsilon$, stop; otherwise, go to Step 2.

Step 2. Let $l_k$ be the smallest nonnegative integer such that $\alpha_k = \alpha s^{l_k}$ satisfies

$$\alpha_k \big\| F^N(x^k) - F^N\big( x^k - G(x^k, \alpha_k) \big) \big\| \le w\, \| G(x^k, \alpha_k) \| \qquad (8)$$

If

$$\alpha_k \big\| F^N(x^k) - F^N\big( x^k - G(x^k, \alpha_k) \big) \big\| \le u\, \| G(x^k, \alpha_k) \| \qquad (9)$$

then set $\alpha = \frac{3}{2} \alpha_k$.

Step 3. Calculate

$$\rho(x^k, \alpha_k) = \frac{G(x^k, \alpha_k)^T d(x^k, \alpha_k)}{\| d(x^k, \alpha_k) \|^2} \qquad (10)$$

with

$$d(x^k, \alpha_k) = G(x^k, \alpha_k) - \alpha_k \big[ F^N(x^k) - F^N\big( x^k - G(x^k, \alpha_k) \big) \big] \qquad (11)$$

Step 4. Compute

$$x^{k+1} = \big( x^k - \tau \rho(x^k, \alpha_k)\, d(x^k, \alpha_k) \big)_+ \qquad (12)$$

Set $k := k + 1$ and go to Step 1.
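The steps above can be sketched in code as follows. This is our own simplified reading of Algorithm 3.1, not the authors' implementation: `F` plays the role of the SAA mapping $F^N$, `proj` of $P_C$, and the demo uses the nonnegative orthant with an affine monotone map so that the result can be verified by hand.

```python
import numpy as np

def pc_method(F, proj, x0, eps=1e-8, u=0.3, w=0.7, tau=1.9, s=0.5,
              alpha=1.0, max_iter=5000):
    """Sketch of the projection and contraction method (Steps 0-4)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        G = x - proj(x - alpha * F(x))                 # residual G(x_k, alpha_k)
        if np.linalg.norm(G) < eps:                    # Step 1: stopping test
            break
        # Step 2: shrink alpha by factor s until condition (8) holds
        while alpha * np.linalg.norm(F(x) - F(x - G)) > w * np.linalg.norm(G):
            alpha *= s
            G = x - proj(x - alpha * F(x))
        if np.linalg.norm(G) < eps:
            break
        enlarge = alpha * np.linalg.norm(F(x) - F(x - G)) <= u * np.linalg.norm(G)
        d = G - alpha * (F(x) - F(x - G))              # direction (11)
        rho = (G @ d) / (d @ d)                        # step length (10)
        x = proj(x - tau * rho * d)                    # update (12)
        if enlarge:                                    # condition (9): larger trial step
            alpha *= 1.5
    return x

# Demo: monotone affine VI on the nonnegative orthant (our test problem)
A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([-2.0, 3.0])
F = lambda x: A @ x + b
sol = pc_method(F, lambda z: np.maximum(z, 0.0), np.array([5.0, 5.0]))
print(sol)   # close to the unique solution (1, 0)
```

Note that condition (8) guarantees $\|d\| \ge (1 - w)\|G\| > 0$, so the division in Step 3 is safe whenever the stopping test has not fired.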

Next, we analyze the convergence of Algorithm 3.1.

Lemma 4.1. Suppose that $x^*$ is a solution of SAA problem (6), and $\rho(x, \alpha)$ and $d(x, \alpha)$ are defined by (10) and (11), respectively. Then the following inequality holds:

$$\langle x - x^*,\, d(x, \alpha) \rangle \ge \langle G(x, \alpha),\, d(x, \alpha) \rangle$$

Proof. From the definition of the projection in (2) we have $G(x, \alpha) := x - P_C(x - \alpha F^N(x))$, and the lemma follows directly.

Lemma 4.2. Let $x \in \mathbb{R}^n$ and $\tilde{\alpha} \ge \alpha > 0$. Then

$$\| G(x, \tilde{\alpha}) \| \ge \| G(x, \alpha) \|$$

Proof. Proving the above inequality is equivalent to proving

$$G(x, \alpha)^T \big( G(x, \tilde{\alpha}) - G(x, \alpha) \big) \ge 0, \quad \forall \tilde{\alpha} \ge \alpha > 0 \qquad (13)$$

From (2) we get

$$P_C\big( x - \alpha F^N(x) \big) - P_C\big( x - \tilde{\alpha} F^N(x) \big) = G(x, \tilde{\alpha}) - G(x, \alpha) \qquad (14)$$

Then, from Lemma 2.1 (i) and (14), we get

$$\big\{ ( x - \alpha F^N(x) ) - P_C( x - \alpha F^N(x) ) \big\}^T \big\{ G(x, \tilde{\alpha}) - G(x, \alpha) \big\} \ge 0 \qquad (15)$$

thus we have

$$G(x, \alpha)^T \big\{ G(x, \tilde{\alpha}) - G(x, \alpha) \big\} \ge \alpha F^N(x)^T \big\{ G(x, \tilde{\alpha}) - G(x, \alpha) \big\} \qquad (16)$$

Let $\mu := x - \tilde{\alpha} F^N(x)$ and $\eta := x - \alpha F^N(x)$. Applying Lemma 2.1 (i) at $\mu$ in the same way and adding the resulting inequality to (15) yields

$$( \tilde{\alpha} - \alpha )\, F^N(x)^T \big\{ G(x, \tilde{\alpha}) - G(x, \alpha) \big\} \ge \| G(x, \tilde{\alpha}) - G(x, \alpha) \|^2 \qquad (17)$$

Since $\tilde{\alpha} > \alpha$, (17) gives $F^N(x)^T \{ G(x, \tilde{\alpha}) - G(x, \alpha) \} \ge 0$, and then (16) yields (13). The proof is complete.

Theorem 4.1. Assume that $f(\cdot, \xi)$ is monotone and Lipschitz continuous w.r.t. $x$ with constant $0 < L < 1$, and that the solution set of SAA problem (6) is nonempty. Then the sequence $\{x^k\}$ generated by Algorithm 3.1 converges to a solution of (6).

Proof. From (10) and (11), we get

$$\begin{aligned} \langle G(x^k, \alpha_k),\, d(x^k, \alpha_k) \rangle &= \big\langle G(x^k, \alpha_k),\, G(x^k, \alpha_k) - \alpha_k \big[ F^N(x^k) - F^N( x^k - G(x^k, \alpha_k) ) \big] \big\rangle \\ &= \| G(x^k, \alpha_k) \|^2 - \big\langle G(x^k, \alpha_k),\, \alpha_k \big[ F^N(x^k) - F^N( x^k - G(x^k, \alpha_k) ) \big] \big\rangle \\ &\ge (1 - L) \| G(x^k, \alpha_k) \|^2 \end{aligned} \qquad (18)$$

and

$$\begin{aligned} 2 \langle G(x^k, \alpha_k),\, d(x^k, \alpha_k) \rangle - \| d(x^k, \alpha_k) \|^2 &= \langle 2 G(x^k, \alpha_k) - d(x^k, \alpha_k),\, d(x^k, \alpha_k) \rangle \\ &= \big\langle G(x^k, \alpha_k) + \alpha_k \big[ F^N(x^k) - F^N( x^k - G(x^k, \alpha_k) ) \big],\; G(x^k, \alpha_k) - \alpha_k \big[ F^N(x^k) - F^N( x^k - G(x^k, \alpha_k) ) \big] \big\rangle \\ &= \| G(x^k, \alpha_k) \|^2 - \alpha_k^2 \big\| F^N(x^k) - F^N( x^k - G(x^k, \alpha_k) ) \big\|^2 \\ &\ge (1 - L) \| G(x^k, \alpha_k) \|^2 \ge 0 \end{aligned} \qquad (19)$$

which is equivalent to

$$\frac{G(x^k, \alpha_k)^T d(x^k, \alpha_k)}{\| d(x^k, \alpha_k) \|^2} \ge \frac{1}{2}$$

that is,

$$\rho(x^k, \alpha_k) \ge \frac{1}{2}$$

The above inequalities hold since the function $f(\cdot, \xi)$ is Lipschitz continuous. Let $x^*$ be a solution of SAA problem (6); then

$$\begin{aligned} \| x^{k+1} - x^* \|^2 &= \big\| \big( x^k - \tau \rho(x^k, \alpha_k)\, d(x^k, \alpha_k) \big)_+ - x^* \big\|^2 \\ &\le \| x^k - x^* - \tau \rho(x^k, \alpha_k)\, d(x^k, \alpha_k) \|^2 \\ &= \| x^k - x^* \|^2 + \tau^2 \rho^2(x^k, \alpha_k) \| d(x^k, \alpha_k) \|^2 - 2 \tau \rho(x^k, \alpha_k) \langle x^k - x^*,\, d(x^k, \alpha_k) \rangle \\ &\le \| x^k - x^* \|^2 - \tau (2 - \tau) \rho(x^k, \alpha_k) \langle G(x^k, \alpha_k),\, d(x^k, \alpha_k) \rangle \\ &\le \| x^k - x^* \|^2 - \frac{\tau (2 - \tau)(1 - L)}{2} \| G(x^k, \alpha_k) \|^2 \end{aligned}$$

The first inequality holds because the projection operator is non-expansive, the second follows from Lemma 4.1 together with the definition of $\rho(x^k, \alpha_k)$, and the last follows from (18) and (19). Hence $\{ \| x^k - x^* \| \}$ is non-increasing, so the sequence $\{x^k\}$ is bounded and $\| G(x^k, \alpha_k) \| \to 0$.

Let $\hat{x}^*$ be a cluster point of $\{x^k\}$ and $\{x^{k_j}\}$ a subsequence of $\{x^k\}$ converging to $\hat{x}^*$. Denote $\alpha_{\min} := \inf \{ \alpha_k \}$. Since $G(\cdot, \alpha_{\min})$ is continuous,

$$G(\hat{x}^*, \alpha_{\min}) = \lim_{j \to \infty} G(x^{k_j}, \alpha_{\min}) = 0$$

Hence, $\hat{x}^*$ is a solution of (6).

Next, we prove that the sequence $\{x^k\}$ has only one cluster point. Suppose to the contrary that there is another cluster point $\hat{x} \in C$ with $\hat{x} \ne \hat{x}^*$, and denote

$$\delta := \| \hat{x} - \hat{x}^* \| > 0$$

Since $\hat{x}^*$ is a cluster point of the sequence $\{x^k\}$, there is a $k_i > 0$ such that

$$\| x^{k_i} - \hat{x}^* \| \le \frac{\delta}{2}$$

Since $\hat{x}^*$ solves (6), the sequence $\{ \| x^k - \hat{x}^* \| \}$ is non-increasing, so it follows that

$$\| x^k - \hat{x}^* \| \le \| x^{k_i} - \hat{x}^* \| \le \frac{\delta}{2}, \quad \forall k \ge k_i$$

and

$$\| x^k - \hat{x} \| \ge \| \hat{x} - \hat{x}^* \| - \| x^k - \hat{x}^* \| \ge \delta - \frac{\delta}{2} = \frac{\delta}{2}$$

which contradicts $\hat{x}$ being a cluster point of $\{x^k\}$. The proof is complete, and the sequence $\{x^k\}$ converges to a solution of SAA problem (6).

5. Numerical Experiment

In this section, we verify the effectiveness of Algorithm 3.1 through some numerical experiments. The experiments are carried out in Matlab 2018b on a computer with a 3.3 GHz CPU and 4.0 GB of memory. We set the parameters $u = 0.75$ and $\tau = 1.95$, and $\varepsilon$ denotes the error defined by $\| x(t) - x^* \|$; furthermore, ITER represents the average number of iterations and ACPU the average run time.

Example 5.1. Consider the following POSVI: find $x$ such that

$$\left\langle \frac{1}{2} D x - \xi,\ y - x \right\rangle \ge 0, \quad \forall y \in C$$

where $x \in \mathbb{R}^n$, $C = \{x \mid x \in K_p\}$, and $D$ is an $n \times n$ real symmetric matrix whose entries are selected randomly by Matlab from the interval $[0, 1]$. The corresponding SAA problem is: find $x \in C$ such that

$$\left\langle \frac{1}{N} \sum_{i=1}^{N} \left( \frac{1}{2} D x - \xi^i \right),\ y - x \right\rangle \ge 0, \quad \forall y \in C$$
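The SAA data for this example can be assembled as in the sketch below (our illustration: the paper does not specify the sampling distribution of $\xi$, so the standard normal distribution here is our assumption). Since the samples enter only through their mean, the SAA mapping is affine:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 5, 1000
M = rng.uniform(0.0, 1.0, size=(n, n))
D = (M + M.T) / 2.0                  # random symmetric D with entries in [0, 1]
xi = rng.normal(size=(N, n))         # i.i.d. samples of xi (distribution assumed)

def F_N(x):
    """SAA mapping of Example 5.1: (1/N) sum_i (0.5*D @ x - xi_i)."""
    return 0.5 * D @ x - xi.mean(axis=0)
```

Because $F^N$ is affine with a symmetric positive part $\frac{1}{2}D$, it is monotone whenever $D$ is positive semidefinite, which is the setting in which Theorem 4.1 applies.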

We choose p = 2 , 3 , 5 , 10 respectively, and the corresponding numerical results are exhibited in Tables 1-4.

Table 1. Solving numerical results when p = 2 .

Table 2. Solving numerical results when p = 3 .

Table 3. Solving numerical results when p = 5 .

The error ε corresponding to every p we choose changes with time as shown in Figures 1-4.

Table 4. Solving numerical results when p = 10 .

Figure 1. When p = 2 .

Figure 2. When p = 3 .

Figure 3. When p = 5 .

Figure 4. When p = 10 .

From Tables 1-4 and Figures 1-4, we can see that for each choice of p, as the dimension n increases, ITER and ACPU both become larger while remaining relatively stable, and the error $\varepsilon$ becomes smaller and tends to zero. Therefore, generally speaking, the proposed algorithm is effective.

In particular, p = 2 is a special case: the p-order cone then degenerates into the second-order cone. The numerical results for p = 2 and p = 3 are better when n is larger, and 3 is the smallest integer greater than 2. After many numerical experiments, we found that as p increases, large-scale problems take longer and longer to solve, until the results stabilize around p = 10; that is, for p greater than 10 the results change little. We therefore also chose an intermediate value, p = 5.

6. Conclusion

In this paper, the sample average approximation method is applied to approximate the stochastic variational inequality problem with a p-order cone constraint. To solve this problem, under moderate conditions, we propose a projection and contraction method and prove that the iteration sequence produced by the method converges to the solution of the SAA problem. Finally, we present several numerical examples based on the projection and contraction method, and the experimental results verify the effectiveness of the method. In future work, we expect to investigate the numerical solution of the circular cone constrained stochastic variational inequality problem, to find other suitable methods for the p-order cone constrained stochastic variational inequality problem studied here, and to carry out comparative numerical simulations.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Shapiro, A., Dentcheva, D. and Ruszczyński, A. (2009) Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia.
https://doi.org/10.1137/1.9780898718751
[2] Gürkan, G., Özge, A.Y. and Robinson, S.M. (1996) Sample-Path Solution of Stochastic Variational Inequalities with Applications to Option Pricing. Proceedings Winter Simulation Conference, Coronado, 8-11 December 1996, 337-344.
https://doi.org/10.1145/256562.256646
[3] Agdeppa, R.P., Yamashita, N. and Fukushima, M. (2007) Convex Expected Residual Models for Stochastic Affine Variational Inequality Problems and Its Application to the Traffic Equilibrium Problem. Pacific Journal of Optimization, 49, 211.
[4] Miao, X.H., Qi, Y.L. and Chen, J.S. (2017) On Merit Functions for p-Order Cone Complementarity Problem. Computational Optimization and Applications, 67, 155-173.
[5] He, B.S. and Liao, L. (2002) Improvements of Some Projection Methods for Monotone Nonlinear Variational Inequalities. Journal of Optimization Theory and Applications, 112, 111-128.
[6] Facchinei, F. and Pang, J.S. (2003) Finite-Dimensional Variational Inequalities and Complementarity Problem. Springer-Verlag, New York.
https://doi.org/10.1007/b97544
[7] Korpelevich, G.M. (1976) The Extragradient Method for Finding Saddle Points and Other Problems. Ekonomika i Matematicheskie Metody, 12, 747-756.
[8] Antipin, A.S. (1976) On a Method for Convex Programs Using a Symmetrical Modification of the Lagrange Function. Ekonomika i Matematicheskie Metody, 12, 1164-1173.
[9] Tseng, P. (2000) A Modified Forward-Backward Splitting Method for Maximal Monotone Mapping. SIAM Journal on Control and Optimization, 38, 431-446.
https://doi.org/10.1137/S0363012998338806
[10] Yekini, S. and Olaniyi, S.I. (2017) Strong Convergence Result for Monotone Variational Inequalities. Numerical Algorithms, 76, 259-282.
https://doi.org/10.1007/s11075-016-0253-1
[11] Duong, V.T. and Dang, V.H. (2018) Weak and Strong Convergence Theorems for Variational Inequality Problems. Numerical Algorithms, 78, 1045-1060.
https://doi.org/10.1007/s11075-017-0412-z
[12] Yang, J., Liu, H.W. and Liu, Z.X. (2018) Modified Subgradient Extragradient Algorithms for Solving Monotone Variational Inequalities. Optimization, 67, 2247-2258.
https://doi.org/10.1080/02331934.2018.1523404
[13] Yang, J. and Liu, H.W. (2019) Strong Convergence Result for Solving Monotone Variational Inequalities in Hilbert Space. Numerical Algorithms, 80, 741-752.
https://doi.org/10.1007/s11075-018-0504-4
[14] Yang, J. and Liu, H.W. (2018) A Modified Projected Gradient Method for Monotone Variational Inequalities. Journal of Optimization Theory and Applications, 179, 197-211.
https://doi.org/10.1007/s10957-018-1351-0
[15] Fang, Y.P. and Huang, N.J. (2003) H-Monotone Operator and Resolvent Operator Technique for Variational Inclusions. Applied Mathematics and Computation, 145, 795-803.
https://doi.org/10.1016/S0096-3003(03)00275-3
[16] Miao, X.H., Qi, N. and Chen, J.S. (2017) Projection Formula and One Type of Spectral Factorization Associated with p-Order Cone. Journal of Nonlinear and Convex Analysis, 18, 1699-1705.
[17] Robbins, H. and Monro, S. (1951) On a Stochastic Approximation Method. Annals of Mathematical Statistics, 22, 400-407.
https://doi.org/10.1214/aoms/1177729586
[18] Jiang, H.Y. and Xu, H.F. (2008) Stochastic Approximation Approaches to the Stochastic Variational Inequality Problems. IEEE Transactions on Automatic Control, 53, 1462-1475.
https://doi.org/10.1109/TAC.2008.925853
[19] Huseyin, T. (2008) A Stochastic Approximation Method to Compute Bid Prices in Network Revenue Management Problems. INFORMS Journal on Computing, 20, 596-610.
https://doi.org/10.1287/ijoc.1080.0269
[20] Xu, H.F. (2010) Sample Average Approximation Methods for a Class of Stochastic Variational Inequality Problems. Asia-Pacific Journal of Operational Research, 27, 103-119.
https://doi.org/10.1142/S0217595910002569
[21] Xu, H.F. (2010) Uniform Exponential Convergence of Sample Average Random Functions under General Sampling with Applications in Stochastic Programming. Journal of Mathematical Analysis and Applications, 368, 692-710.
https://doi.org/10.1016/j.jmaa.2010.03.021

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.