A Class of Smoothing Modulus-Based Iterative Method for Solving Implicit Complementarity Problems
1. Introduction
Complementarity problems arise widely in many applications of science and engineering, such as elastic contact, economic transportation, boundary problems in fluid dynamics, convex quadratic programming and inverse problems [1]. In this paper, we consider the implicit complementarity problem (ICP) [2]: find vectors z, w ∈ R^n such that
w = Az + q, z − m(z) ≥ 0, w ≥ 0, (z − m(z))^T w = 0,
where A ∈ R^{n×n} is a given matrix, q ∈ R^n is a given constant vector, and m is a mapping from R^n into itself. If the mapping m is the zero mapping, the ICP reduces to the linear complementarity problem (LCP).
The ICP was introduced into complementarity theory as a mathematical tool in the study of some stochastic optimal control problems; it was first proposed by Bensoussan and Lions in [3]. After decades of research and development, existence theorems for solutions of the implicit complementarity problem have been obtained [4], and many effective methods for solving it have been proposed, such as the fixed-point method [5] [6], the projection iteration method [7] [8], and the Schwarz method [9]. Apart from these, Zheng and Qu [10] established a hybrid method with superconvergence for solving implicit complementarity problems; numerical examples show that this method has higher accuracy and faster convergence than some existing methods. Tian et al. [11] proposed an unconstrained and differentiable penalty method for solving implicit complementarity problems. Xia and Li [12] first constructed a modulus-based matrix splitting iterative method for solving nonlinear complementarity problems. Then Hong and Li [13] presented the modulus-based matrix splitting iterative method (MMS) for solving implicit complementarity problems and analyzed its convergence. Wang et al. [14] gave new convergence conditions for MMS when the system matrix is a positive-definite matrix or an H₊-matrix. On this basis, Yin et al. [15] proposed a class of accelerated modulus-based matrix splitting iteration methods for solving implicit complementarity problems. Zheng and Vong [16] proposed a modified modulus-based matrix splitting iterative method for implicit complementarity problems. Wang and Cao [17] constructed a two-step modulus-based matrix splitting iterative method (TMMS) for solving implicit complementarity problems; numerical experiments show that it is more effective than the MMS method.
Dong and Jiang proposed the modulus iteration method in [18]. Its main idea is to transform the linear complementarity problem into an equivalent fixed-point equation system and then solve the latter. Foutayeni et al. [19] constructed a smoothing function to approximate the linear complementarity problem, and proposed a class of modified Newton methods and m-step iteration methods for solving it, obtaining an effective smoothing numerical algorithm. In this paper, we extend this approach to implicit complementarity problems. First, we transform the implicit complementarity problem into an equivalent implicit fixed-point equation system. Since this is a non-differentiable system of absolute value equations, we introduce a smooth function to approximate the original problem. Then we construct a class of smoothing modulus-based iteration methods for solving the approximate system of equations. Finally, we analyze the convergence of the new methods and verify their effectiveness by numerical experiments.
The organization of the paper is as follows. In Section 2, we establish the smoothing modulus-based iteration methods for solving implicit complementarity problems. Their convergence is analyzed in Section 3, and the numerical results for the new methods are shown and discussed in Section 4. Finally, some conclusions are given in Section 5.
2. The Smoothing Modulus-Based Iterative Method
In this section, we consider the following implicit complementarity problem (ICP): given a matrix A ∈ R^{n×n}, a constant vector q ∈ R^n, and a mapping m from R^n into itself, find vectors z, w ∈ R^n such that
w = Az + q, z − m(z) ≥ 0, w ≥ 0, (z − m(z))^T w = 0. (1)
Let
,
; then the implicit complementarity problem is transformed into a fixed-point equation:
(2)
where
are two positive constants and
is the identity matrix.
Let
. Since
is an invertible function, then
. The implicit fixed-point equation (2) can then be rewritten as follows:
(3)
where
,
.
Lemma 1. (a) If the solution of the implicit complementarity problems (1) is
, then
is the solution of (3).
(b) If the vector x satisfies (3), then
,
is the solution of (1).
Proof. The result can be proved in the same way as Theorem 2.1 in [13].
For the implicit fixed-point equation (3), let
be a vector function:
(4)
where
,
.
Due to the absolute value term,
is non-differentiable. We introduce a smoothing vector function from
[19]
where c is a large positive integer.
Substituting it into the function (4), we obtain a smoothing function:
where
,
.
Lemma 2. When
,
uniformly converges to
.
Proof. The result follows as in Corollary 2.1 in [19]. When
,
converges uniformly to
. Therefore,
uniformly converges to
.
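As an illustration of such a smoothing function, one common choice with the uniform-convergence property of Lemma 2 is φ_c(x) = sqrt(x² + 1/c²), which satisfies 0 ≤ φ_c(x) − |x| ≤ 1/c for all x; the function used in [19] may differ in form. A minimal numerical check in Python (the function name and the values of c are our choices):

```python
import numpy as np

def smooth_abs(x, c):
    """Smooth approximation of |x|: sqrt(x^2 + 1/c^2).
    For every x, 0 <= smooth_abs(x, c) - |x| <= 1/c,
    so it converges uniformly to |x| as c -> infinity."""
    return np.sqrt(x * x + 1.0 / c**2)

x = np.linspace(-2.0, 2.0, 1001)
for c in (10, 100, 10_000):
    err = np.max(smooth_abs(x, c) - np.abs(x))
    print(f"c = {c:6d}, max error = {err:.2e}")  # error shrinks like 1/c
```

The maximum error is attained at x = 0 and equals exactly 1/c, so letting the large positive integer c grow gives uniform convergence to |x|.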
From Lemma 2, we know that if
is the solution of
, then
converges to the solution of
. And
is a smooth nonlinear system of equations, so we can use the classical Newton method to solve it. In order to obtain a good initial value, we first use the following modulus-based iteration method.
Algorithm 1. (Modulus-Based Iteration Method)
Step 1: Given
,
, set
.
Step 2: 1) Calculate the initial vector
, set
,
2) Compute
by solving the equations
(5)
3)
Step 3: If
, then stop.
Otherwise, set
and return to Step 2.
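To illustrate the idea behind Algorithm 1, the following Python sketch implements the classical modulus fixed-point iteration for the special case m ≡ 0 (i.e., the LCP): solve (I + A)x = (I − A)|x| − q by iteration, after which z = |x| + x and w = |x| − x satisfy z ≥ 0, w = Az + q ≥ 0 and z^T w = 0. The update (5) of Algorithm 1 for general m may differ; all names and the test data below are ours.

```python
import numpy as np

def modulus_iteration(A, q, tol=1e-10, max_it=500):
    """Classical modulus iteration for the LCP special case m(z) = 0:
    solve (I + A) x = (I - A)|x| - q by fixed-point iteration, then
    recover z = |x| + x >= 0 and w = |x| - x = Az + q >= 0 with z^T w = 0."""
    n = len(q)
    I = np.eye(n)
    x = np.zeros(n)
    for k in range(max_it):
        x_new = np.linalg.solve(I + A, (I - A) @ np.abs(x) - q)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return np.abs(x) + x, k + 1

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, 1.0])
z, its = modulus_iteration(A, q)
# complementarity residual min(z, Az + q) should vanish at the solution
print(its, np.linalg.norm(np.minimum(z, A @ z + q)))
```

For this small symmetric positive-definite example the iteration map is a contraction, so the sequence converges linearly to the unique solution.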
According to Remark 1 in [13],
, when
. Therefore, there exists some
,
, where
is a constant. This means that
is a good approximation vector, so we can construct the following new algorithm.
Algorithm 2. (Smoothing Modulus-Based Newton Method)
Step 1: Given
,
, set
.
Step 2: Get the initial vector
through Algorithm 1.
Step 3: Compute
,
,
.
Step 4: Compute
,
.
Step 5: If
, then stop and output
,
.
Otherwise, let
,
,
, return to Step 2.
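The next sketch mimics the pattern of Algorithm 2 for the special case m ≡ 0: a few modulus iterations supply the initial vector (Step 2), and Newton's method is then applied to the smoothed equation (I + A)x − (I − A)·s(x) + q = 0 with s(x) = sqrt(x² + 1/c²). The smoothing function and all names are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def smoothing_modulus_newton(A, q, c=1e8, warm=5, newton_it=20, tol=1e-12):
    """Newton's method on the smoothed modulus equation
    F(x) = (I + A)x - (I - A) s(x) + q = 0,  s(x) = sqrt(x^2 + 1/c^2),
    with a few plain modulus iterations as warm start (LCP case m = 0)."""
    n = len(q)
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(warm):                       # Step 2: initial vector via Algorithm 1
        x = np.linalg.solve(I + A, (I - A) @ np.abs(x) - q)
    for _ in range(newton_it):                  # Steps 3-4: Newton corrections
        s = np.sqrt(x * x + 1.0 / c**2)
        F = (I + A) @ x - (I - A) @ s + q
        if np.linalg.norm(F) < tol:
            break
        J = (I + A) - (I - A) @ np.diag(x / s)  # Jacobian of the smoothed system
        x -= np.linalg.solve(J, F)
    return np.abs(x) + x                        # z = |x| + x approximates the solution

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, 1.0])
z = smoothing_modulus_newton(A, q)
print(z, np.linalg.norm(np.minimum(z, A @ z + q)))
```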
The classical Newton method needs a good initial vector to achieve fast convergence. We introduce the parameter
, and use the following iterative sequence to improve the classical Newton method.
(6)
We take
, and use the modified Newton method to solve
.
Algorithm 3. (Modified Smoothing Modulus-based Newton method)
Step 1: Given
,
, set
.
Step 2: Get the initial vector
through Algorithm 1.
Step 3: Compute
,
.
Step 4: Use (6) to compute
, and
.
Step 5: If
, then stop and output
,
.
Otherwise, let
,
,
, return to Step 2.
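As an illustration of how a two-step correction with suitably chosen constants lifts Newton's method from second to third order, the classical Potra–Pták scheme y = x − F(x)/F′(x), x⁺ = x − (F(x) + F(y))/F′(x) is also third-order; the particular sequence (6) and the constants of Algorithm 3 may differ. A scalar Python demonstration (the equation x² = 2 is our example, not the paper's):

```python
# Potra-Ptak two-step scheme: one extra function evaluation per step
# upgrades Newton's quadratic convergence to cubic convergence.
def potra_ptak(f, fp, x, steps):
    for _ in range(steps):
        d = fp(x)
        y = x - f(x) / d           # ordinary Newton predictor
        x = x - (f(x) + f(y)) / d  # corrector reusing the same derivative
    return x

f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
root = potra_ptak(f, fp, 2.0, 4)
print(abs(root - 2.0**0.5))  # the error reaches machine-precision level
```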
On the basis of the modified Newton method, we do
-step iterations in the iterative process, and then construct a smoothing modulus-based
-step iteration method to solve the implicit complementarity problem, and the iterative sequence is as follows:
(7)
We use the
-step iterative method to solve
.
Algorithm 4. (Smoothing Modulus-Based
-step Iterative Method)
Step 1: Given
,
, set
.
Step 2: Get the initial vector
through Algorithm 1.
Step 3: Compute
,
.
Step 4: Use (7) to compute
, and
.
Step 5: If
, then stop and output
,
.
Otherwise, let
,
,
, return to Step 2.
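A common way to realize such an m-step scheme is to factor the Jacobian once per outer step and reuse it for m inner corrections, which raises the convergence order at little extra cost per step; iteration (7) may differ in detail. A hedged Python sketch on a smoothed LCP system (the smoothing function sqrt(x² + 1/c²), the test data and all names are ours):

```python
import numpy as np

def m_step_smoothing_newton(A, q, m=3, outer=10, c=1e8, tol=1e-12):
    """Each outer step builds the smoothed Jacobian once and reuses it
    for m inner corrections (frozen-Jacobian multi-step iteration)."""
    n = len(q)
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(5):                          # warm start via modulus iteration
        x = np.linalg.solve(I + A, (I - A) @ np.abs(x) - q)
    F = lambda v: (I + A) @ v - (I - A) @ np.sqrt(v * v + 1.0 / c**2) + q
    for _ in range(outer):
        s = np.sqrt(x * x + 1.0 / c**2)
        J = (I + A) - (I - A) @ np.diag(x / s)  # frozen Jacobian for this outer step
        for _ in range(m):                      # m inner corrections
            x = x - np.linalg.solve(J, F(x))
        if np.linalg.norm(F(x)) < tol:
            break
    return np.abs(x) + x

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, 1.0])
z = m_step_smoothing_newton(A, q)
print(np.linalg.norm(np.minimum(z, A @ z + q)))
```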
3. Convergence Theorem
In this section, we give the convergence of the above algorithms.
Theorem 1. If
is the solution of the system of equations
, the iterative sequence
is second-order convergent in a neighborhood of
.
Proof. By the convergence theorem for Newton's method, the iterative sequence generated by Algorithm 2 converges to the solution
of the equation
with the order two in a neighborhood of
.
Theorem 2. For nonlinear vector functions
, define the sequence:
if
is the solution of the system of equations
, when
,
, Algorithm 3 converges to
with order three.
Proof. The Taylor expansion of the function
at
:
(8)
For (8), let
, we have
(9)
(10)
Then
(11)
Let
. According to (11), we have
(12)
which leads to
(13)
From (8),
(14)
On the other hand
(15)
which is equivalent to
(16)
Applying formulas (11), (13) and (14), we have
(17)
(18)
In order to get
, the constants
and
must satisfy:
(19)
By solving the above equations, we obtain
. The theorem is proved.
Theorem 3. If
is the solution of the system of equations
, Sequence
(20)
for any positive integer m, the sequence produced by Algorithm 4 converges to
with order
.
Proof. We prove the theorem by mathematical induction. Obviously, when
, the sequence is third-order convergent by Theorem 2, and the theorem holds. Next, assume that
holds; then we have
. (21)
We now prove that the statement holds for m, namely
. (22)
Combining formula (20) and assumption (21), we get
, (23)
. (24)
Using the expansion of
and
at
, we obtain
. (25)
From (24), we then obtain
. The theorem is proved.
4. Numerical Results
In this section, we use numerical examples to examine the numerical effectiveness of smoothing modulus-based iterative methods from aspects of number of iteration steps (denoted by “IT”), elapsed CPU time in seconds (denoted by “CPU”), and norm of absolute residual vectors (denoted by “RES”). RES is defined as:
,
where
is the k-th approximate solution to the ICP.
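In this literature the residual is commonly taken as the norm of the minimum-based complementarity residual, e.g. RES(z^k) = ‖min(z^k − m(z^k), Az^k + q)‖₂, which vanishes exactly at a solution of (1); the exact definition used here may differ. A small Python helper under that assumption (the function name is ours):

```python
import numpy as np

def res(z, A, q, m):
    """Complementarity residual: zero iff z - m(z) >= 0, Az + q >= 0
    and the two vectors are componentwise complementary."""
    return np.linalg.norm(np.minimum(z - m(z), A @ z + q))

# sanity check on the LCP case m(z) = 0 with known solution z = (0.25, 0)
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, 1.0])
print(res(np.array([0.25, 0.0]), A, q, lambda v: np.zeros_like(v)))  # -> 0.0
```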
In this paper, all calculations are run on a machine with a 1.8 GHz CPU and 8 GB of memory, and the programming language is MATLAB R2018b. We choose the
, large positive integer
,
,
,
.
We compare our smoothing modulus-based iterative methods with the accelerated modulus-based matrix splitting iteration method (AMSOR), whose iteration parameter is set to 1.6 [15]. The abbreviations of the methods are listed in Table 1.
Example 4.1. Let p be a positive integer and
; consider the implicit complementarity problem (1), where
is a block tridiagonal matrix and q is the vector
,
where
is a tridiagonal matrix,
is an identity matrix, and the point-to-point mapping is
. The numerical results are listed in Table 2.
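A typical construction of such test matrices in this literature is n = p² with A = Tridiag(−I, S, −I), where S = tridiag(−1, 4, −1) ∈ R^{p×p}; the exact matrices of this example may differ. A Python sketch under that assumption:

```python
import numpy as np

def block_tridiag(p):
    """A = Tridiag(-I, S, -I) with S = tridiag(-1, 4, -1): a common
    n x n (n = p^2) test matrix in the modulus-method literature."""
    S = 4.0 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)
    T = np.eye(p, k=1) + np.eye(p, k=-1)   # block off-diagonal pattern
    return np.kron(np.eye(p), S) - np.kron(T, np.eye(p))

A = block_tridiag(4)
print(A.shape, np.allclose(A, A.T))  # symmetric, n = p^2 = 16
```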
Example 4.2. Let p be a positive integer and
; consider the implicit complementarity problem (1), where
is a block tridiagonal matrix and q is the vector
,
where
is a tridiagonal matrix,
is an identity matrix, and the point-to-point mapping is
. The numerical results are listed in Table 3.
From Table 2 and Table 3, for different problem sizes n, we list the iteration steps, the CPU time and the residual norms of the AMSOR, SMN, MSMN and SM(m + 1) methods. It can be seen that all methods converge quickly. Among these methods, SMN, MSMN and SM(m + 1) require
Table 2. Numerical results of Example 4.1.
Table 3. Numerical results of Example 4.2.
fewer iteration steps and less computing time than the AMSOR method, and SM(m + 1) performs best.
Example 4.3. Let p be a positive integer and
; consider the implicit complementarity problem (1), where
is a block tridiagonal matrix and q is the vector
,
where
is a tridiagonal matrix,
is an identity matrix, and the point-to-point mapping is
. The numerical results are listed in Table 4.
Example 4.4. Let p be a positive integer and
; consider the implicit complementarity problem (1), where
is a block tridiagonal matrix and q is the vector
Table 4. Numerical results of Example 4.3.
Table 5. Numerical results of Example 4.4.
,
where
is a tridiagonal matrix,
is an identity matrix, and the point-to-point mapping is
. The numerical results are listed in Table 5.
From Table 4 and Table 5, for the symmetric and the asymmetric problem, we list the iteration steps, the CPU time and the residual norms of the SMN, MSMN and SM(m + 1) methods, respectively. Among the three algorithms, SM(m + 1) requires the fewest iteration steps, costs the least CPU time, and achieves the best accuracy.
5. Conclusion
In this paper, a class of smoothing modulus-based iteration methods for solving implicit complementarity problems is proposed. The convergence of the algorithms is analyzed, and numerical experiments demonstrate the effectiveness of the methods.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (11661027), the Guangxi Natural Science Foundation (2020GXNSFAA159143), and the Innovation Project of GUET Graduate Education (2021YCXS114).