The Pivot Adaptive Method for Solving Linear Programming Problems

Abstract

A new variant of the Adaptive Method (AM) of Gabasov is presented, designed to reduce the computation time. Unlike the original method and some of its variants, we need not compute the inverse of the basic matrix at each iteration, nor solve linear systems with the basic matrix. In fact, to compute the new support feasible solution, the simplex pivoting rule is used, by introducing a matrix that we will define. This variant is called “the Pivot Adaptive Method” (PAM); it allows the resolution of a given problem to be presented in the form of successive tables, as we will see in an example. The proofs that are not given by Gabasov will also be presented here, namely the proofs of the theorem of the optimality criterion and of the theorem of existence of an optimal support; at the end, a brief comparison between our method and the Simplex Method is given.


Belahcene, S. , Marthon, P. and Aidene, M. (2018) The Pivot Adaptive Method for Solving Linear Programming Problems. American Journal of Operations Research, 8, 92-111. doi: 10.4236/ajor.2018.82008.

1. Introduction

As a branch of mathematics, linear programming is the area that has seen the greatest success in optimization [1] [2] [3] [4] [5] . Since its formulation in the 1930s and 1940s and the development of the Simplex method by Dantzig in 1949 [1] [6] , researchers in various fields have been led to formulate and solve linear problems.

Although the Simplex algorithm is often effective in practice, the fact that it is not a polynomial algorithm, as shown by Klee and Minty [7] , has prompted researchers to propose other algorithms and led to the birth of interior point algorithms.

In 1979, L. Khachiyan proposed the first polynomial algorithm for linear programming [8] ; it is based on the ellipsoid method studied by Arkadi Nemirovski and David B. Yudin, a preliminary version of which had been introduced by Naum Z. Shor. Unfortunately, the ellipsoid method performs poorly in practice.

In 1984, N. K. Karmarkar published an interior point algorithm with polynomial convergence [9], and this caused a renewed interest in interior point methods, in linear programming as well as in nonlinear programming [10] [11] [12] [13] [14] .

Gabasov and Kirillova generalized the Simplex method in 1995 [15] [16] [17] and developed the Adaptive Method (AM), a primal-dual method for linear programming with bounded variables. Like the Simplex method, the Adaptive Method is a support method, but it can start from any support (basis) and any feasible solution and can move to the optimal solution through interior or boundary points; the method was later extended to general linear and convex problems [18] [19] [20], and also to optimal control problems [21] .

In linear programming, several variants of the AM have been proposed. They generally address the problem of initialization of the method and the choice of the direction [22] [23] .

In this work, a new variant of the adaptive method called the “Pivot Adaptive Method” (PAM) is presented. Unlike the original method and its variants, we need not compute the inverse of the basic matrix at each iteration [16] , nor solve linear systems with the basic matrix [23] . In fact, to compute the new feasible solution and the new support, we only need the decomposition of the columns of the matrix of the problem in the current basis. For this computation we use the simplex pivoting rule [4] , by introducing a new matrix Γ which reduces the computation time. This new variant of the adaptive method allows us to present the resolution of a given problem in the form of successive tables (as is done with the Simplex method), as we will see in an example. The proofs of the theorem of the optimality criterion and of the theorem of existence of an optimal support, which are not given by Gabasov [16] , are presented here, and a brief comparison between our method and the Simplex method is given at the end of this article.

The paper is organized as follows. In Section 2, after the statement of the problem, some important definitions are given. In Sections 3, 4 and 5, we describe step by step the “Pivot Adaptive Method”, and in parallel the proofs of the corresponding theorems are given. In Section 6, an example is solved to illustrate the PAM, with details explaining how to present the resolution in the form of successive tables. In Section 7, we give a brief comparison between the PAM and the Simplex Method by solving the Klee-Minty problem. Section 8 concludes the paper.

2. Statement of the Problem and Definitions

In this article we consider the primal linear programming problem with bounded variables presented in the following standard form:

$$ (P)\;\begin{cases} \max\; F(x_1, x_2, \ldots, x_n) = \sum_{j=1}^{n} c_j x_j \\ \sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i \in I \\ d_j^- \le x_j \le d_j^+, \quad j \in J \end{cases} \qquad (1) $$

where J = {1, …, n} is the index set of the variables x_1, x_2, …, x_n, and I = {1, …, m} is the index set of the parameters b_1, b_2, …, b_m (corresponding to the constraints). Put J = J_B ∪ J_N, J_B ∩ J_N = ∅, |J_B| = m, and introduce the vectors:

x = x(J) = (x_j, j ∈ J) = (x_1, x_2, …, x_n) = (x_B, x_N), with x_B = x(J_B) = (x_j, j ∈ J_B), x_N = x(J_N) = (x_j, j ∈ J_N);

c = c(J) = (c_j, j ∈ J) = (c_1, c_2, …, c_n) = (c_B, c_N), with c_B = c(J_B) = (c_j, j ∈ J_B), c_N = c(J_N) = (c_j, j ∈ J_N);

b = b(I) = (b_i, i ∈ I) = (b_1, b_2, …, b_m);

d^- = d^-(J) = (d_j^-, j ∈ J) = (d_1^-, d_2^-, …, d_n^-), with |d^-| < +∞;

d^+ = d^+(J) = (d_j^+, j ∈ J) = (d_1^+, d_2^+, …, d_n^+), with |d^+| < +∞;

and the (m × n) matrix

$$ A = A(I, J) = (a_{ij},\ i \in I,\ j \in J) = (a_1, a_2, \ldots, a_n) = \begin{pmatrix} A_1^T \\ \vdots \\ A_m^T \end{pmatrix}, \qquad a_j = \begin{pmatrix} a_{1j} \\ \vdots \\ a_{mj} \end{pmatrix}, $$

where, for j ∈ J, a_j is the jth column of the matrix A, and A_i^T = (a_{i1}, a_{i2}, …, a_{in}), i ∈ I, is the ith row of the matrix A; A = (A_B | A_N), A_B = A(I, J_B), A_N = A(I, J_N).

We assume that rank(A) = m < n. Then the problem (1) takes the following form:

$$ (P)\;\begin{cases} \max\; F(x) = c^T x \\ A x = b \\ d^- \le x \le d^+ \end{cases} \qquad (2) $$

The equations Ax = b are called the general constraints of (P).

Denote the feasible region of (P) by H = { x ∈ R^n : Ax = b, d^- ≤ x ≤ d^+ }.
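To fix ideas for the code sketches used later in this article, here is a minimal NumPy layout of an instance of (P); the particular numbers, variable names and the helper functions built on them in later sections are illustrative assumptions of ours, not material from the original implementation.

```python
import numpy as np

# A small instance of (P):  max c^T x  s.t.  A x = b,  d_minus <= x <= d_plus.
c       = np.array([3.0, 2.0, 0.0, 0.0])
A       = np.array([[1.0, 1.0, 1.0, 0.0],
                    [1.0, 2.0, 0.0, 1.0]])      # m x n, rank(A) = m < n
b       = np.array([4.0, 6.0])
d_minus = np.zeros(4)                           # lower bounds d^-
d_plus  = np.array([3.0, 3.0, 4.0, 6.0])        # upper bounds d^+

m, n = A.shape
J_B = [2, 3]                                    # support indices, |J_B| = m
J_N = [0, 1]                                    # non-support indices
assert np.linalg.matrix_rank(A[:, J_B]) == m    # A_B must be non-singular
```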

2.1. Definition 1

Each vector of the set H is called a feasible solution of (P).

2.2. Definition 2

Any vector x ∈ R^n that satisfies the general constraints Ax = b of the problem (P) is called a pseudo-feasible solution.

2.3. Definition 3

A feasible solution x^0 is called optimal if F(x^0) = c^T x^0 = max_{x ∈ H} c^T x.

2.4. Definition 4

For a given value ε ≥ 0, a feasible solution x^ε is called ε-optimal (or suboptimal) if F(x^0) − F(x^ε) ≤ ε, where x^0 is an optimal solution of the problem (P).

2.5. Definition 5

A set of m indices J_B ⊂ J (|J_B| = m) is called a support of (P) if the submatrix A_B = A(I, J_B) is non-singular (det A_B ≠ 0); the set J_N = J \ J_B of (n − m) indices is then called the non-support, A_B is the support matrix, and A_N = A(I, J_N) is the non-support matrix.

2.6. Definition 6

The pair { x , J B } formed by a feasible solution x and the support J B is called a support feasible solution (SF-solution).

2.7. Definition 7

The SF-solution {x, J_B} is called non-degenerate if d_j^- < x_j < d_j^+, ∀ j ∈ J_B.

Recall that, unlike the Simplex Method, in which the feasible solution x and the basic indices J_B are intimately related, in the adaptive method they are independent.

3. Optimality Criterion

For the smooth running of the calculations, in the rest of the article we assume that the sets J_B and J_N are vectors of indices (i.e., the order of the indices in these sets is significant and must be respected).

3.1. Formula of the Objective Value Increment

Let {x, J_B} be the initial SF-solution of the problem (P), and let x̄ = x̄(J) = (x̄_B, x̄_N) be an arbitrary pseudo-feasible solution of (P). We set Δx = x̄ − x, with Δx_B = x̄_B − x_B and Δx_N = x̄_N − x_N; then the increment of the objective function value is given by:

$$ \Delta F(x) = F(\bar{x}) - F(x) = c^T \bar{x} - c^T x = c^T \Delta x = c_B^T \Delta x_B + c_N^T \Delta x_N \qquad (3) $$

and since

$$ A\,\Delta x = A_B \Delta x_B + A_N \Delta x_N = A \bar{x} - A x = 0 \qquad (4) $$

then

$$ \Delta x_B = -A_B^{-1} A_N\, \Delta x_N \qquad (5) $$

Substituting the vector Δx_B in (3) we get:

$$ \Delta F(x) = (c_N^T - c_B^T A_B^{-1} A_N)\, \Delta x_N \qquad (6) $$

Here, we define the m-vector of multipliers y as follows:

$$ y^T = c_B^T A_B^{-1} \qquad (7) $$

In [16] , Gabasov defines the support gradient, denoted Δ, as Δ^T = y^T A − c^T, but he also notes that, for every k ∈ J_N, the kth support derivative equals −Δ_k = c_k − y^T a_k; indeed, c_k − y^T a_k is the rate of change of the objective function when the kth non-support component of the feasible solution x is increased while all the other non-support components are kept fixed and, at the same time, the support components are adjusted so as to satisfy the constraints Ax = b.

Then, in the pivot adaptive method, we define the n-vector of reduced gains (or support gradient) δ as follows:

$$ \delta^T = -\Delta^T = (\delta_j,\ j \in J) = c^T - y^T A = (\delta_B^T, \delta_N^T), \quad \text{where } \delta_B^T = 0,\ \delta_N^T = c_N^T - y^T A_N \qquad (8) $$

To reduce the computation time of the adaptive method of Gabasov [16] , we define the (m × n) matrix Γ as follows:

$$ \Gamma = A_B^{-1} A = (\Gamma_B, \Gamma_N), \quad \text{where } \Gamma_B = I_m,\ \Gamma_N = A_B^{-1} A_N \qquad (9) $$

which is the decomposition of the columns of the matrix A on the support J B .

We have Γ_N = Γ(I, J_N) = (Γ_{ij}, i ∈ I, j ∈ J_N), where Γ_{ij} is the product of the ith row of the matrix A_B^{-1} and the jth column of the matrix A_N. Then the n-vector of the support gradient given in (8) can be computed as follows:

$$ \delta^T = (\delta_j,\ j \in J) = c^T - c_B^T A_B^{-1} A = c^T - c_B^T \Gamma = (\delta_B^T, \delta_N^T), \quad \text{where } \delta_B^T = 0,\ \delta_N^T = c_N^T - c_B^T \Gamma_N \qquad (10) $$

We see that, to compute the quantity c_B^T A_B^{-1} A, we first compute Γ = A_B^{-1} A rather than y^T = c_B^T A_B^{-1}; since Γ is subsequently updated by the pivoting rule (see (51)), the inverse of the basic matrix A_B never needs to be computed explicitly.

From (6) and (10) we obtain

$$ \Delta F(x) = \delta_N^T \Delta x_N = \sum_{j \in J_N} \delta_j \Delta x_j \qquad (11) $$
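As a small illustration (a sketch built on the layout assumed in Section 2, not the authors' code), Γ of (9) and the reduced gains δ of (10) can be obtained with a single linear solve instead of an explicit inverse; in the PAM itself, Γ is afterwards maintained by the pivoting rule (51), so this solve is done only once.

```python
import numpy as np

def support_data(A, c, J_B):
    """Gamma = A_B^{-1} A and delta = c - Gamma^T c_B, i.e. Eqs. (9)-(10)."""
    Gamma = np.linalg.solve(A[:, J_B], A)   # solve A_B Gamma = A, no explicit inverse
    delta = c - Gamma.T @ c[J_B]            # delta_B = 0 up to round-off
    return Gamma, delta

Gamma, delta = support_data(A, c, J_B)      # A, c, J_B as in the Section 2 sketch
```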

3.2. Definition 8

For a given support J_B, any vector χ = χ(J) = (χ_B, χ_N) of R^n that satisfies

$$ \begin{cases} \chi_j = d_j^-, & \text{if } \delta_j < 0 \\ \chi_j = d_j^+, & \text{if } \delta_j > 0 \\ \chi_j = d_j^- \text{ or } d_j^+, & \text{if } \delta_j = 0 \end{cases} \quad (j \in J_N), \qquad \chi_B = A_B^{-1}(b - A_N \chi_N) \qquad (12) $$

is called a primal pseudo-feasible solution accompanying the support J B .

3.3. Proposition 1

A primal pseudo-feasible solution χ accompanying a given support J_B satisfies only the general constraints of the problem (P), i.e., Aχ = b. If, in addition, the support components χ_B of the vector χ satisfy d_B^- ≤ χ_B ≤ d_B^+, then χ is a feasible and optimal solution of (P).
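A sketch of Definition 8 with the same assumed layout; when δ_j = 0 the lower bound is taken, which (12) permits.

```python
import numpy as np

def accompanying_chi(A, b, d_minus, d_plus, J_B, J_N, Gamma, delta):
    """Primal pseudo-feasible solution accompanying J_B, Eq. (12)."""
    chi = np.empty(A.shape[1])
    for j in J_N:
        chi[j] = d_plus[j] if delta[j] > 0 else d_minus[j]
    # chi_B = A_B^{-1} (b - A_N chi_N); Gamma[:, J_N] already equals A_B^{-1} A_N
    chi[J_B] = np.linalg.solve(A[:, J_B], b) - Gamma[:, J_N] @ chi[J_N]
    return chi
```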

3.4. Theorem 1 (The Optimality Criterion)

Let {x, J_B} be an SF-solution of the problem (P), and let δ be the support gradient computed by (10); then the relations

$$ \begin{cases} \delta_j < 0, & \text{for } x_j = d_j^- \\ \delta_j > 0, & \text{for } x_j = d_j^+ \\ \delta_j = 0, & \text{for } d_j^- \le x_j \le d_j^+ \end{cases} \quad (j \in J_N) \qquad (13) $$

are sufficient, and in the case of non-degeneracy of the SF-solution {x, J_B} also necessary, for the optimality of the feasible solution x.

Proof

1) The sufficient condition:

For the SF-solution {x, J_B}, suppose that the relations (13) are satisfied; we must prove that x is optimal, which amounts to showing that:

$$ \forall \bar{x} \in H, \quad \Delta F(x) = F(x) - F(\bar{x}) \ge 0 \qquad (14) $$

Let x̄ ∈ H and Δx = x − x̄; then:

$$ \begin{aligned} \Delta F(x) &= F(x) - F(\bar{x}) = \sum_{j \in J_N} \delta_j \Delta x_j \\ &= \sum_{j \in J_N,\ \delta_j < 0} \delta_j (x_j - \bar{x}_j) + \sum_{j \in J_N,\ \delta_j > 0} \delta_j (x_j - \bar{x}_j) + \sum_{j \in J_N,\ \delta_j = 0} \delta_j (x_j - \bar{x}_j) \\ &= \sum_{j \in J_N,\ \delta_j < 0} \delta_j (d_j^- - \bar{x}_j) + \sum_{j \in J_N,\ \delta_j > 0} \delta_j (d_j^+ - \bar{x}_j) \;\ge\; 0, \end{aligned} $$

so x is optimal.

2) The necessary condition:

Suppose that {x, J_B} is a non-degenerate SF-solution, i.e.:

$$ \forall j \in J_B, \quad d_j^- < x_j < d_j^+ \qquad (15) $$

and that x is optimal, i.e.:

$$ \forall \bar{x} \in H, \quad \Delta F(x) = F(x) - F(\bar{x}) = \sum_{j \in J_N} \delta_j \Delta x_j \ge 0. \qquad (16) $$

We reason by contradiction.

Suppose that the relations (13) are not satisfied; then at least one of the following three cases holds:

1) ∃ k ∈ J_N such that δ_k < 0 and x_k > d_k^-.

2) ∃ k ∈ J_N such that δ_k > 0 and x_k < d_k^+.

3) ∃ k ∈ J_N such that δ_k = 0 and x_k = d_k^- or x_k = d_k^+.

Suppose that we are in the first case, i.e.

$$ \exists\, k \in J_N \ \text{such that}\ \delta_k < 0 \ \text{and}\ x_k > d_k^- \qquad (17) $$

(the proof uses the same reasoning for the second and the third cases). Construct the n-vector x̄(θ) = ((x̄(θ))_1, (x̄(θ))_2, …, (x̄(θ))_n) as follows:

$$ \begin{cases} (\bar{x}(\theta))_j = x_j, & j \in J_N \setminus \{k\} \\ (\bar{x}(\theta))_k = x_k - \theta, & \theta \in [0,\, x_k - d_k^-] \\ (\bar{x}(\theta))_j = x_j + \theta\, \Gamma_{jk}, & j \in J_B \end{cases} \qquad (18) $$

where Γ_{jk} is the product of the jth row of the matrix A_B^{-1} and the kth column of the matrix A_N. Let θ_1 = x_k − d_k^- > 0.

We have

$$ A\,\bar{x}(\theta) = A_B (\bar{x}(\theta))_B + A_N (\bar{x}(\theta))_N = b \qquad (19) $$

and

$$ \forall j \in J_N, \quad d_j^- \le (\bar{x}(\theta))_j \le d_j^+ \qquad (20) $$

For j ∈ J_B, (x̄(θ))_j = x_j + θ Γ_{jk}; then, in order to have

$$ \forall j \in J_B, \quad d_j^- \le (\bar{x}(\theta))_j \le d_j^+ \qquad (21) $$

a value of θ is chosen such that

$$ \forall j \in J_B, \quad d_j^- \le x_j + \theta\, \Gamma_{jk} \le d_j^+ \qquad (22) $$

i.e.

$$ \begin{cases} 0 < \theta \le \dfrac{d_j^+ - x_j}{\Gamma_{jk}}, & \text{for } \Gamma_{jk} > 0 \\[2mm] 0 < \theta \le \dfrac{d_j^- - x_j}{\Gamma_{jk}}, & \text{for } \Gamma_{jk} < 0 \end{cases} \quad (j \in J_B) \qquad (23) $$

Put θ_2 = min { (d_j^+ − x_j)/Γ_{jk} : j ∈ J_B, Γ_{jk} > 0 } and θ_3 = min { (d_j^- − x_j)/Γ_{jk} : j ∈ J_B, Γ_{jk} < 0 }.

We have θ_2 > 0 and θ_3 > 0, because {x, J_B} is non-degenerate.

Let θ* = min(θ_1, θ_2, θ_3); then

$$ \bar{x}(\theta^*) \in H \qquad (24) $$

and moreover

$$ F(x) - F(\bar{x}(\theta^*)) = \sum_{j \in J_N} \delta_j \Delta x_j = \delta_k \Delta x_k = \delta_k\, \theta^* < 0 \qquad (25) $$

so that

$$ F(\bar{x}(\theta^*)) > F(x) \qquad (26) $$

which contradicts (16).

3.5. Definition 9

A pair {x^0, J_B^0} satisfying the relations (13) will be called an optimal SF-solution of the problem (P).

3.6. The Suboptimality Criterion

Let x^0 be an optimal solution of (P), and x a feasible solution of (P).

From (11) we obtain:

$$ F(x^0) - F(x) = \max_{\bar{x} \in H} \Delta F(x) \le \max_{\bar{x} \in H} \sum_{j \in J_N,\ \delta_j > 0} \delta_j (\bar{x}_j - x_j) + \max_{\bar{x} \in H} \sum_{j \in J_N,\ \delta_j < 0} \delta_j (\bar{x}_j - x_j) \le \sum_{j \in J_N,\ \delta_j > 0} \delta_j (d_j^+ - x_j) + \sum_{j \in J_N,\ \delta_j < 0} \delta_j (d_j^- - x_j) $$

Let

$$ \beta(x, J_B) = \sum_{j \in J_N,\ \delta_j > 0} \delta_j (d_j^+ - x_j) + \sum_{j \in J_N,\ \delta_j < 0} \delta_j (d_j^- - x_j) \qquad (27) $$

β(x, J_B) is called the suboptimality estimate of the SF-solution {x, J_B}, since

$$ F(x^0) - F(x) \le \beta(x, J_B) \qquad (28) $$

We have the following theorem of suboptimality.

3.7. Theorem 2 (Sufficient Condition of Suboptimality)

For a feasible solution x to be ε-optimal for a given positive number ε, it is sufficient that there exists a support J_B such that β(x, J_B) ≤ ε.

3.8. Proposition 2

From (12) and (27) we deduce:

$$ \beta(x, J_B) = \delta_N^T (\chi_N - x_N) = \delta^T (\chi - x) \qquad (29) $$
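With χ and δ at hand, (27) and (29) reduce the suboptimality estimate to a single inner product over the non-support indices; a sketch under the same assumptions as before:

```python
import numpy as np

def suboptimality(x, chi, delta, J_N):
    """beta(x, J_B) = delta_N^T (chi_N - x_N), Eqs. (27) and (29)."""
    idx = np.asarray(J_N)
    return float(delta[idx] @ (chi[idx] - x[idx]))
```

By Theorem 2, if the returned value is at most ε, the feasible solution x is ε-optimal.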

3.9. The Dual Problem

The dual of the primal problem (P) is given by the following linear problem

$$ (D)\;\begin{cases} \min\; \phi(y, v, w) = b^T y - (d^-)^T v + (d^+)^T w \\ A^T y - v + w = c \\ y \in R^m,\; v \ge 0,\; w \ge 0. \end{cases} \qquad (30) $$

where the rows of A^T are the transposed columns a_j^T, j ∈ J, of the matrix A. The problem (D) has n general constraints and (m + 2n) variables.

Like (P), the problem (D) has an optimal solution λ^0 = (y^0, v^0, w^0), and F(x^0) = φ(λ^0), where x^0 is the optimal solution of (P).

3.10. Definition 10

Any (m + 2n)-vector λ = (y, v, w) satisfying all the constraints of the problem (D) is called a dual feasible solution.

3.11. Definition 11

Let J_B be a support of the problem (P) and δ the corresponding support gradient defined in (10); the (m + 2n)-vector λ = (y, v, w) which satisfies

$$ \begin{cases} y^T = c_B^T A_B^{-1} & \\ v_j = -\delta_j,\ w_j = 0 & \text{if } \delta_j \le 0 \\ v_j = 0,\ w_j = \delta_j & \text{if } \delta_j > 0 \end{cases} \quad (j \in J) \qquad (31) $$

is called the dual feasible solution accompanying the support J_B.

3.12. Proposition 3

・ The dual solution λ = (y, v, w) accompanying a given support J_B is dual feasible, i.e., A^T y − v + w = c, v ≥ 0, w ≥ 0, y ∈ R^m.

・ For any support J_B we have

$$ c^T \chi = \phi(\lambda) \qquad (32) $$

where λ and χ are, respectively, the dual feasible solution and the primal pseudo-feasible solution accompanying the support J_B.

3.13. Definition 12

Let y be an m-vector of a dual feasible solution, A^T y − v + w = c, v ≥ 0, w ≥ 0. The (m + 2n)-vector λ* = (y, v*, w*) such that

$$ \begin{cases} v_j^* = 0,\ w_j^* = c_j - a_j^T y & \text{if } c_j - a_j^T y \ge 0 \\ v_j^* = -(c_j - a_j^T y),\ w_j^* = 0 & \text{if } c_j - a_j^T y \le 0 \end{cases} \quad (j \in J) \qquad (33) $$

is called the coordinated dual feasible point to the m-vector y.

3.14. Proposition 4

The coordinated dual feasible point λ* = (y, v*, w*) to y is dual feasible, i.e., A^T y − v* + w* = c, v* ≥ 0, w* ≥ 0.

3.15. Decomposition of the Suboptimality Estimate of an SF-Solution { x , J B }

Let {x, J_B} be an SF-solution of the problem (P), β(x, J_B) its suboptimality estimate computed by (27), λ = (y, v, w) the dual feasible solution accompanying the support J_B, χ the primal pseudo-feasible solution accompanying the support J_B, x^0 the optimal solution of the problem (P), and λ^0 the optimal solution of the dual problem (D).

From (9), (10), (29), (32) and Proposition 1 we have:

$$ \begin{aligned} \beta(x, J_B) &= \delta_N^T (\chi_N - x_N) = \delta^T (\chi - x) = (c^T - c_B^T \Gamma)(\chi - x) \\ &= c^T \chi - c^T x - c_B^T A_B^{-1} A \chi + c_B^T A_B^{-1} A x = c^T \chi - c^T x \\ &= \phi(\lambda) - F(x) = \phi(\lambda) - \phi(\lambda^0) + F(x^0) - F(x). \end{aligned} $$

Then

$$ \beta(x, J_B) = \beta(x) + \beta(J_B) \qquad (34) $$

where β(J_B) = φ(λ) − φ(λ^0) is the degree of non-optimality of the support J_B, and β(x) = F(x^0) − F(x) is the degree of non-optimality of the feasible solution x.

3.16. Proposition 5

・ The feasible solution x is optimal if β(x) = 0, and the support J_B is optimal if β(J_B) = 0, while the pair {x, J_B} is an optimal SF-solution if β(x) = β(J_B) = 0. Hence the optimality of the feasible solution x may not be detected if it is examined with an unsuitable support, because in that case β(J_B) ≠ 0 and therefore β(x, J_B) ≠ 0.

・ Since β(J_B) = φ(λ) − φ(λ^0), the support J_B is optimal if its accompanying dual feasible solution λ is an optimal solution of the dual problem (D).

3.17. Theorem 3 (Existence of an Optimal Support)

For each problem (P), there is always an optimal support.

Proof

Since (P) has an optimal solution, its dual (D) given by (30) also has an optimal solution; let λ̄ = (ȳ, v̄, w̄) be this solution, and define the sets

$$ I^+ = \{ j \in J : c_j - a_j^T \bar{y} \ge 0 \}, \qquad I^- = \{ j \in J : c_j - a_j^T \bar{y} \le 0 \}, $$

and

$$ \bar{C} = \{ y \in R^m : c_j - a_j^T y \ge 0 \ \text{for}\ j \in I^+, \ \text{and}\ c_j - a_j^T y \le 0 \ \text{for}\ j \in I^- \}; $$

we have ȳ ∈ C̄.

Let y ∈ C̄, and define the following linear problem:

$$ (\bar{D})\;\begin{cases} \min\; \psi(y) = b^T y + \sum_{j \in I^+} d_j^+ (c_j - a_j^T y) + \sum_{j \in I^-} d_j^- (c_j - a_j^T y) \\ y \in \bar{C} \subset R^m \end{cases} \qquad (35) $$

We have ψ(y) = φ(y, v, w), where (y, v, w) is the coordinated dual feasible point to the vector y, and ȳ is an optimal solution of the problem (D̄).

On the other hand, there exists a “vertex” y* of the set C̄ that is an optimal solution of (D̄), and this vertex is the intersection of at least m hyperplanes a_j^T y* = c_j, j ∈ (I^+ ∪ I^-).

Put J_B = { j ∈ (I^+ ∪ I^-) : a_j^T y* = c_j }, where |J_B| = m.

Then A_B^T y* = c_B, and so (y*)^T = c_B^T A_B^{-1}; furthermore, the coordinated dual feasible point to y* is the dual feasible solution accompanying the support J_B, and it is an optimal solution of the problem (D). We then conclude, using Proposition 5, that J_B is an optimal support for (P).

4. The Description of the Pivot Adaptive Method “PAM”

Like the Simplex Method, to solve the problem (P) the adaptive method (and therefore also the pivot adaptive method) proceeds in two phases: the initialization phase and the second phase; each of them is detailed in the following.

1) The initialization phase:

In this phase an initial support feasible solution (SF-solution) {x, J_B} is determined, together with the matrix Γ defined in (9).

We assume that b_i ≥ 0, ∀ i ∈ I, and d_j^- = 0, ∀ j ∈ J.

Remark 1

Notice that each problem (P) can be reduced to this case, i.e., b_i ≥ 0, ∀ i ∈ I, and d_j^- = 0, ∀ j ∈ J; in fact:

a) if ∃ i_0 ∈ I such that b_{i_0} < 0, then both sides of the corresponding constraint are multiplied by (−1);

b) if ∃ k ∈ J such that d_k^- ≠ 0, then we make the change of variable x'_k = x_k − d_k^-, so that:

$$ 0 \le x'_k \le d_k^+ - d_k^- \qquad (36) $$

and the general constraints Ax = b of (P) can be written as:

$$ a_1 x_1 + \cdots + a_{k-1} x_{k-1} + a_k (x'_k + d_k^-) + a_{k+1} x_{k+1} + \cdots + a_n x_n = b \qquad (37) $$

where a_j, j ∈ J, is the jth column of the matrix A; from this relation we obtain:

$$ a_1 x_1 + \cdots + a_{k-1} x_{k-1} + a_k x'_k + a_{k+1} x_{k+1} + \cdots + a_n x_n = b - a_k d_k^- \qquad (38) $$

and in (38), if a component of (b − a_k d_k^-) is negative, the corresponding row is multiplied by (−1).

Then, to determine an initial SF-solution of the problem (P), we define the artificial problem associated with (P) as follows:

$$ (\tilde{P})\;\begin{cases} \max\; \tilde{F}(\tilde{x}) = -\sum_{i=1}^{m} \tilde{x}_i \\ A x + \tilde{x} = b \\ d^- \le x \le d^+, \quad 0 \le \tilde{x} \le b \end{cases} \qquad (39) $$

where b_i ≥ 0, i ∈ I, and x̃ = x̃(I) = (x̃_i, i ∈ I) = (x̃_1, x̃_2, …, x̃_m) are the m artificial variables added to the problem (P).

We solve the artificial problem (P̃) with the pivot adaptive method, starting from the obvious initial feasible solution (x, x̃) with x = 0 ∈ R^n and x̃ = b, and from the canonical support (determined by the indices of the artificial variables); since the support matrix is then the identity, the non-support part of the matrix Γ of (P̃) is simply A. (A small sketch of this construction follows.)

If H is not empty, then at the optimum of (P̃) we have x̃ = 0, Ax = b and d^- ≤ x ≤ d^+, so that x is a feasible solution of (P), and the optimal SF-solution of the problem (P̃) can be taken as the initial SF-solution for the problem (P).
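The construction of the artificial problem (39) is mechanical; here is a hedged NumPy sketch (the function name is ours, and it assumes that the data have already been normalized as in Remark 1, i.e. b ≥ 0 and d^- = 0):

```python
import numpy as np

def artificial_problem(A, b, d_plus):
    """Data of (39): maximize -sum(x_tilde) s.t. [A | I](x, x_tilde) = b,
    0 <= x <= d_plus, 0 <= x_tilde <= b.  Assumes b >= 0 and d_minus = 0."""
    m, n = A.shape
    A_art  = np.hstack([A, np.eye(m)])
    c_art  = np.concatenate([np.zeros(n), -np.ones(m)])
    lo, hi = np.zeros(n + m), np.concatenate([d_plus, b])
    x0     = np.concatenate([np.zeros(n), b])   # obvious initial feasible point
    J_B0   = list(range(n, n + m))              # canonical (artificial) support
    Gamma0 = A_art.copy()                       # A_B = I_m, hence Gamma = (A, I_m)
    return A_art, c_art, lo, hi, x0, J_B0, Gamma0
```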

2) The second phase:

Assume that an initial SF-solution {x, J_B} of the problem (P) has been found in the initialization phase, and let Γ be the corresponding matrix computed by (9) and ε ≥ 0 an arbitrary number; the problem is then solved with the pivot adaptive method described in the following.

At each iteration of the pivot adaptive method, the transfer {x, J_B} → {x̄, J̄_B} from one SF-solution to a new one is carried out in such a way that:

$$ \beta(\bar{x}, \bar{J}_B) \le \beta(x, J_B) \qquad (40) $$

The transfer is realized by the two procedures making up the iteration:

i) The procedure of the feasible solution change x → x̄:

During this procedure we decrease the degree of non-optimality of the feasible solution, β(x̄) ≤ β(x), and thus improve the primal criterion, F(x̄) ≥ F(x) (a sketch in NumPy of this procedure is given after step i.9):

i.1) Compute the support gradient δ by (10) and the suboptimality estimate by (27).

i.2) If β(x, J_B) ≤ ε, then stop the resolution: x is an ε-optimal feasible solution of (P).

i.3) If β(x, J_B) > ε, compute the non-support components χ_N of the primal pseudo-feasible solution accompanying the support J_B by (12), and the search direction l = (l_B, l_N) with:

$$ \begin{cases} l_N = \chi_N - x_N \\ l_B = -\Gamma_N l_N \end{cases} \qquad (41) $$

i.4) Find the primal step length θ^0 with:

$$ \theta^0 = \min\{1,\ \theta_{j_0}\}, \quad \text{where } \theta_{j_0} = \min_{j \in J_B} \theta_j, \qquad \theta_j = \begin{cases} \dfrac{d_j^+ - x_j}{l_j}, & \text{if } l_j > 0; \\[2mm] \dfrac{d_j^- - x_j}{l_j}, & \text{if } l_j < 0; \\[2mm] +\infty, & \text{if } l_j = 0 \end{cases} \quad (j \in J_B) \qquad (42) $$

Let j_{00} be such that J_B(j_{00}) = j_0, i.e., j_{00} is the position of j_0 in the vector of indices J_B.

i.5) Compute the new feasible solution x̄ = x + θ^0 l, and F(x̄) = F(x) + θ^0 β(x, J_B).

i.6) If θ^0 = 1, then stop the resolution: {x̄, J_B} is an optimal SF-solution of (P).

i.7) If θ^0 < 1, then compute:

$$ \beta(\bar{x}, J_B) = (1 - \theta^0)\, \beta(x, J_B) \qquad (43) $$

i.8) If β(x̄, J_B) ≤ ε, then stop the resolution: {x̄, J_B} is an ε-optimal SF-solution of (P).

i.9) If β(x̄, J_B) > ε, then go to ii) to change the support J_B into J̄_B.
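A NumPy sketch of steps i.3)-i.5) (the helper name is ours and the data layout is the one assumed earlier); it returns the new point x̄, the step θ^0, the index j_0 that would leave the support when θ^0 < 1, and the direction l.

```python
import numpy as np

def change_solution(x, chi, Gamma, d_minus, d_plus, J_B, J_N):
    """Feasible-solution change x -> x_bar: direction l (41), step theta^0 (42)."""
    l = np.zeros(len(x))
    l[J_N] = chi[J_N] - x[J_N]                 # l_N = chi_N - x_N
    l[J_B] = -Gamma[:, J_N] @ l[J_N]           # l_B = -Gamma_N l_N

    theta0, j0 = 1.0, None
    for j in J_B:                              # theta_j of Eq. (42)
        if l[j] > 0:
            theta_j = (d_plus[j]  - x[j]) / l[j]
        elif l[j] < 0:
            theta_j = (d_minus[j] - x[j]) / l[j]
        else:
            continue                           # theta_j = +infinity
        if theta_j < theta0:
            theta0, j0 = theta_j, j
    return x + theta0 * l, theta0, j0, l
```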

ii) The procedure of the support change J_B → J̄_B:

During this procedure we decrease the degree of non-optimality of the support, β(J̄_B) ≤ β(J_B), and thus improve the dual criterion, φ(λ̄) ≤ φ(λ), where λ̄ is the dual feasible solution accompanying the support J̄_B and λ is the dual feasible solution accompanying the support J_B (a sketch of this procedure is given after step ii.5).

In this procedure there are two rules for changing the support: the rule of “the short step” and the rule of “the long step”; both rules are presented.

ii.1) For the index j_0 found in (i.4), compute χ_{j_0} = x_{j_0} + l_{j_0} and α_0 = χ_{j_0} − x̄_{j_0}.

ii.2) Compute the dual direction t with:

$$ \begin{cases} t_{j_0} = -\operatorname{sign}(\alpha_0); \\ t_j = 0, \quad j \in J_B \setminus \{j_0\}; \\ t_N^T = t_B^T\, \Gamma_N. \end{cases} \qquad (44) $$

ii.3) Compute:

$$ \sigma_j = \begin{cases} \delta_j / t_j, & \text{if } \delta_j t_j > 0; \\ 0, & \text{if } \delta_j = 0,\ t_j < 0,\ \xi_j = d_j^-; \\ 0, & \text{if } \delta_j = 0,\ t_j > 0,\ \xi_j = d_j^+; \\ +\infty, & \text{in the other cases} \end{cases} \quad (j \in J_N) \qquad (45) $$

and arrange the finite values obtained in increasing order:

$$ \sigma_{j_1} \le \sigma_{j_2} \le \cdots \le \sigma_{j_p}, \qquad j_k \in J_N,\ \sigma_{j_k} < +\infty,\ k \in \{1, \ldots, p\}. \qquad (46) $$

As said before, the change of support can be done with two different rules: to use the “short step rule” go to a), and to use the “long step rule” go to b).

a) Change of the support with the “short step rule”:

a.1) From (46) put:

$$ \sigma^0 = \sigma_{j_1} = \sigma_{j_*} = \min_{j \in J_N} \sigma_j. \qquad (47) $$

Let j_{**} be such that J_N(j_{**}) = j_*, i.e., j_{**} is the position of j_* in the vector of indices J_N.

Put J̄_B = (J_B \ {j_0}) ∪ {j_*}, J̄_N = (J_N \ {j_*}) ∪ {j_0}; then J̄_B(j_{00}) = j_*, J̄_N(j_{**}) = j_0.

a.2) Compute:

$$ \bar{\delta} = \delta - \sigma^0 t, \qquad \beta(\bar{x}, \bar{J}_B) = \beta(\bar{x}, J_B) + \sigma^0 \alpha_0 \qquad (48) $$

and go to ii.4).

b) Change of the support with the “long step rule”:

b.1) For every j_k, k ∈ {1, …, p}, of (46) compute

$$ \Delta \alpha_{j_k} = |\Gamma_{j_{00} j_k}|\,(d_{j_k}^+ - d_{j_k}^-) \qquad (49) $$

b.2) As j_* we choose the index j_q such that

$$ \alpha_{j_{q-1}} = \alpha_0 + \sum_{k=1}^{q-1} \Delta \alpha_{j_k} < 0, \qquad \alpha_{j_q} = \alpha_0 + \sum_{k=1}^{q} \Delta \alpha_{j_k} \ge 0 \qquad (50) $$

Let j_{**} be such that J_N(j_{**}) = j_*, i.e., j_{**} is the position of j_* in the vector of indices J_N.

b.3) Put σ^0 = σ_{j_*}, and J̄_B = (J_B \ {j_0}) ∪ {j_*}, J̄_N = (J_N \ {j_*}) ∪ {j_0}; then J̄_B(j_{00}) = j_*, J̄_N(j_{**}) = j_0.

b.4) Compute:

$$ \bar{\delta} = \delta - \sigma^0 t, $$

$$ \beta(\bar{x}, \bar{J}_B) = \beta(\bar{x}, J_B) + \sum_{k=1}^{q} \alpha_{j_{k-1}} (\sigma_{j_k} - \sigma_{j_{k-1}}), \quad \text{where } \alpha_{j_0} = \alpha_0,\ \sigma_{j_0} = 0. $$

Go to ii.4).

ii.4) Compute Γ̄ as follows:

$$ \begin{cases} \bar{\Gamma}_{j_{00}\, j} = \dfrac{\Gamma_{j_{00}\, j}}{\Gamma_{j_{00}\, j_*}}, & j \in J; \\[2mm] \bar{\Gamma}_{i j} = \Gamma_{i j} - \Gamma_{i j_*}\, \bar{\Gamma}_{j_{00}\, j}, & i \in I \setminus \{j_{00}\},\ j \in J. \end{cases} \qquad (51) $$

Γ_{j_{00} j_*} is “the pivot”. Go to ii.5).

ii.5) Set x = x̄, J_B = J̄_B, β(x, J_B) = β(x̄, J̄_B), δ = δ̄, and Γ = Γ̄. Go to i.2).
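A sketch of the support change with the short step rule (steps ii.1)-ii.5)), written under the sign conventions reconstructed above; the degenerate σ_j = 0 cases of (45) and the case where no finite σ_j exists are deliberately omitted, so this is an illustration rather than a complete implementation.

```python
import numpy as np

def change_support_short_step(x, x_bar, delta, Gamma, J_B, J_N, j0, l):
    """Short step support change: new J_B, J_N, delta, Gamma, plus sigma^0, alpha_0."""
    j00 = J_B.index(j0)                             # position of j_0 in J_B
    alpha0 = (x[j0] + l[j0]) - x_bar[j0]            # ii.1: chi_{j0} - x_bar_{j0}

    t = np.zeros(len(delta))                        # dual direction, Eq. (44)
    t[j0] = -np.sign(alpha0)
    t[J_N] = t[J_B] @ Gamma[:, J_N]

    # sigma_j, Eq. (45): only the generic case delta_j * t_j > 0 is handled here
    sigma = {j: delta[j] / t[j] for j in J_N if delta[j] * t[j] > 0}
    j_star = min(sigma, key=sigma.get)              # Eq. (47)
    sigma0 = sigma[j_star]

    # exchange j_0 and j_star, keeping their positions (a.1)
    J_B_new, J_N_new = list(J_B), list(J_N)
    J_B_new[j00] = j_star
    J_N_new[J_N.index(j_star)] = j0

    delta_new = delta - sigma0 * t                  # Eq. (48)

    # pivoting rule (51); the pivot is Gamma[j00, j_star]
    G = Gamma.copy()
    G[j00, :] = Gamma[j00, :] / Gamma[j00, j_star]
    rest = np.arange(Gamma.shape[0]) != j00
    G[rest, :] = Gamma[rest, :] - np.outer(Gamma[rest, j_star], G[j00, :])
    return J_B_new, J_N_new, delta_new, G, sigma0, alpha0
```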

5. The Pivot Adaptive Method

Let {x, J_B} be an initial SF-solution for the problem (P); Γ is the matrix computed by (9), and ε ≥ 0 is given.

The “Pivot Adaptive Method” (both the rule of the “short step” and the rule of “the long step” are presented) is summarized in the following steps; a Python sketch of the main loop with the short step rule is given after the algorithm.

Algorithm 1 (The Pivot Adaptive Method)

1) Compute the support gradient δ by (10), and β(x, J_B) by (27).

2) If β(x, J_B) ≤ ε, then STOP: x is an ε-optimal feasible solution of (P).

3) If β(x, J_B) > ε, then compute the non-support components χ_N of the primal pseudo-feasible solution accompanying the support J_B by (12), and the search direction l with (41).

4) Find the primal step length θ^0 with (42). Let j_{00} be such that J_B(j_{00}) = j_0, i.e., j_{00} is the position of j_0 in the vector of indices J_B.

5) Compute the new feasible solution x̄ = x + θ^0 l, and F(x̄) = F(x) + θ^0 β(x, J_B).

6) If θ^0 = 1, then STOP: {x̄, J_B} is an optimal SF-solution of (P).

7) If θ^0 < 1, then compute β(x̄, J_B) = (1 − θ^0) β(x, J_B).

8) If β(x̄, J_B) ≤ ε, then STOP: {x̄, J_B} is an ε-optimal SF-solution of (P).

9) If β(x̄, J_B) > ε, then change the support J_B into J̄_B.

10) Compute χ_{j_0} = x_{j_0} + l_{j_0} and α_0 = χ_{j_0} − x̄_{j_0}.

11) Compute the dual direction t with (44).

12) Compute σ_j for j ∈ J_N with (45), and arrange the obtained finite values in increasing order as in (46).

13) Do the change of support by one of the two rules:

a) Change of the support with the “short step rule”:

a.1) Compute σ^0 with (47), and set J̄_B = (J_B \ {j_0}) ∪ {j_*}, J̄_N = (J_N \ {j_*}) ∪ {j_0}.

a.2) Compute β(x̄, J̄_B) = β(x̄, J_B) + σ^0 α_0. Go to (14).

b) Change of the support with the “long step rule”:

b.1) For every j_k, k ∈ {1, …, p}, of (46) compute Δα_{j_k} = |Γ_{j_{00} j_k}| (d_{j_k}^+ − d_{j_k}^-).

b.2) As j_* choose the index j_q such that (50) is satisfied.

b.3) Set σ^0 = σ_{j_*}, and J̄_B = (J_B \ {j_0}) ∪ {j_*}, J̄_N = (J_N \ {j_*}) ∪ {j_0}.

b.4) Compute β(x̄, J̄_B) = β(x̄, J_B) + Σ_{k=1}^{q} α_{j_{k−1}} (σ_{j_k} − σ_{j_{k−1}}), where α_{j_0} = α_0, σ_{j_0} = 0. Go to (14).

14) Compute Γ̄ with (51) and δ̄ = δ − σ^0 t.

15) Set x = x̄, J_B = J̄_B, β(x, J_B) = β(x̄, J̄_B), δ = δ̄, and Γ = Γ̄. Go to (2).
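Putting the pieces together, a compact driver for Algorithm 1 with the short step rule could look as follows; it reuses the hypothetical helpers sketched in Sections 3 and 4, recomputes β directly from (27)/(29) at each pass, and is only a sketch, not the authors' reference code. Here x is a float array and J_B a Python list of column indices forming an initial SF-solution.

```python
def pam_short_step(A, b, c, d_minus, d_plus, x, J_B, eps=1e-9, max_iter=1000):
    """Pivot Adaptive Method, short step rule (Algorithm 1) -- illustrative sketch."""
    n = A.shape[1]
    J_N = [j for j in range(n) if j not in J_B]
    Gamma, delta = support_data(A, c, J_B)                      # step (1)
    for _ in range(max_iter):
        chi  = accompanying_chi(A, b, d_minus, d_plus, J_B, J_N, Gamma, delta)
        beta = suboptimality(x, chi, delta, J_N)
        if beta <= eps:                                         # step (2)
            return x, J_B
        x_bar, theta0, j0, l = change_solution(x, chi, Gamma, d_minus, d_plus,
                                               J_B, J_N)        # steps (3)-(5)
        if theta0 >= 1.0 or (1.0 - theta0) * beta <= eps:       # steps (6)-(8)
            return x_bar, J_B
        J_B, J_N, delta, Gamma, _, _ = change_support_short_step(
            x, x_bar, delta, Gamma, J_B, J_N, j0, l)            # steps (10)-(14)
        x = x_bar                                               # step (15)
    return x, J_B
```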

6. Example

In this section, a linear problem is solved with the Pivot Adaptive Method using the “short step rule”, and in parallel we explain how to carry out these calculations in the form of successive tables, as given in Figure 1.

Consider the following linear problem with bounded variables:

$$ (P)\;\begin{cases} \max\; F(x) = c^T x \\ A x = b \\ d^- \le x \le d^+ \end{cases} \qquad (52) $$

where c^T = (65, 115, 0, 0, 0), x = (x_1, x_2, x_3, x_4, x_5)^T,

$$ A = \begin{pmatrix} 5/2 & 15/2 & 1 & 0 & 0 \\ 1/8 & 1/8 & 0 & 1 & 0 \\ 35/2 & 10 & 0 & 0 & 1 \end{pmatrix}, \qquad b = (240, 5, 595)^T, \qquad d^- = 0 \in R^5, \qquad d^+ = (34, 34, 240, 5, 595)^T. $$

Put J = {1, 2, 3, 4, 5}, ε = 10^{-3}.

First phase (Initialization):

Let {x^1, J_B^1} be the initial SF-solution of the problem (P), where x^1 = (11, 27, 10, 1/4, 265/2)^T, J_B^1 = {3, 4, 5}, so J_N^1 = {1, 2}, A_B = (a_3, a_4, a_5) = I_3, A_N = (a_1, a_2). The matrix Γ^1 is then given by Γ^1 = (Γ_B^1, Γ_N^1), where

$$ \Gamma_B^1 = I_3, \qquad \Gamma_N^1 = \begin{pmatrix} 5/2 & 15/2 \\ 1/8 & 1/8 \\ 35/2 & 10 \end{pmatrix}. $$

Note that for the initial support we have chosen the canonical support, and for the initial feasible solution we took an interior point of the feasible region of the problem (P).

・ In the preamble part of the tables (Figure 1), which has four rows, we put sequentially c^T, (d^-)^T, (d^+)^T, x^1. The tables have 8 columns.

Second phase:

Starting from the SF-solution {x^1, J_B^1} and the matrix Γ^1, we solve the problem (P) with the PAM using the short step rule.

・ The first table of the PAM (Figure 1), which represents the first iteration, has 8 rows: in the first 3 rows we have the matrix Γ (the first column of the table indicating the vectors that form A_B), and in the remaining rows we have sequentially δ, l, θ, x^2, σ.

The support gradient is δ^1 = ((δ^1)_B, (δ^1)_N), where (δ^1)_B = 0 ∈ R^3, (δ^1)_N = (65, 115), and β(x^1, J_B^1) = 2300 > ε.

1) Iteration 1:

・ Change of solution: we have (χ^1)_N = (34, 34), (l^1)_N = (23, 7), (l^1)_B = (−110, −15/4, −945/2), θ^0 = θ_4 = 1/15, (J_B^1)(2) = 4 (4 is the second component of the vector of indices J_B^1).

・ The case of θ^0 is marked in yellow in the tables (Figure 1); it corresponds to the second index of J_B^1, that is, to the vector a_4 (which will leave the basis).

Then θ^0 < 1, the new feasible solution is x^2 = (188/15, 412/15, 40/15, 0, 101), and β(x^2, J_B^1) = 6440/3 > ε.

・ Change of support (with the short step rule):

χ_4 = −7/2 and α_0 = −7/2, the dual direction is t^T = (1/8, 1/8, 0, 1, 0), σ^0 = σ_1 = 520, and (J_N^1)(1) = 1 (1 is the first component of the vector of indices J_N^1).

・ The case of σ^0 is marked in green in the tables (Figure 1); its row corresponds to the vector a_1, which will enter the basis.

So J_B^2 = (J_B^1 \ {4}) ∪ {1} = {3, 1, 5}, J_N^2 = (J_N^1 \ {1}) ∪ {4} = {4, 2}.

Figure 1. Tables of the pivot adaptive method.

・ To obtain the 2nd table (Figure 1), we applied the pivoting rule, where the pivot is marked in red. The new support gradient is δ^2 = δ^1 − σ^0 t = ((δ^2)_B, (δ^2)_N), where (δ^2)_B = 0 ∈ R^3, (δ^2)_N = (−520, 50), and β(x^2, J_B^2) = β(x^2, J_B^1) + σ^0 α_0 = 980/3.

As β(x^2, J_B^2) > ε, we go to another iteration; we then have Γ^2 = (Γ_B^2, Γ_N^2), where

$$ \Gamma_B^2 = I_3, \qquad \Gamma_N^2 = \begin{pmatrix} -20 & 5 \\ 8 & 1 \\ -140 & -15/2 \end{pmatrix}. $$

2) Iteration 2:

・ Change of solution: we have χ_N = (0, 34), (l^2)_N = (0, 98/15), (l^2)_B = (−98/3, −98/15, 49), θ^0 = θ_3 = 4/49, (J_B^2)(1) = 3 (3 is the first component of the vector of indices J_B^2).

Then θ^0 < 1, the new feasible solution is x^3 = (12, 28, 0, 0, 105), and β(x^3, J_B^2) = 300 > ε.

・ Change of support (with the short step rule): χ_3 = −30 and α_0 = −30, the dual direction is t^T = (0, 5, 1, −20, 0), σ^0 = σ_2 = 10, and (J_N^2)(2) = 2 (2 is the second component of the vector of indices J_N^2).

So J_B^3 = (J_B^2 \ {3}) ∪ {2} = {2, 1, 5}, J_N^3 = (J_N^2 \ {2}) ∪ {3} = {4, 3}.

The new support gradient is δ^3 = δ^2 − σ^0 t = ((δ^3)_B, (δ^3)_N), where (δ^3)_B = 0 ∈ R^3, (δ^3)_N = (−320, −10), and β(x^3, J_B^3) = β(x^3, J_B^2) + σ^0 α_0 = 0.

Then {x^3, J_B^3} is an optimal SF-solution of the problem (P), and F(x^3) = 4000.
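The reported optimum can be cross-checked with an off-the-shelf solver; here is a small sketch using scipy.optimize.linprog (which minimizes, hence the negated objective):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([65.0, 115.0, 0.0, 0.0, 0.0])
A = np.array([[ 5/2, 15/2, 1.0, 0.0, 0.0],
              [ 1/8,  1/8, 0.0, 1.0, 0.0],
              [35/2, 10.0, 0.0, 0.0, 1.0]])
b = np.array([240.0, 5.0, 595.0])
bounds = [(0, 34), (0, 34), (0, 240), (0, 5), (0, 595)]

res = linprog(-c, A_eq=A, b_eq=b, bounds=bounds)
print(res.x, -res.fun)   # the PAM result above: x = (12, 28, 0, 0, 105), F = 4000
```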

7. Brief Numerical Comparison between the PAM and the Simplex Method

To carry out a numerical comparison between the Simplex algorithm implemented in the function “linprog” of MATLAB version 7.14.0.739 (R2012a) and our algorithm (PAM), an implementation of the latter with the short step rule has been developed.

The comparison is carried out on the resolution of the Klee-Minty problem, which has the following form [7] :

$$ (P3)\;\begin{cases} \max\; 2^{n-1} x_1 + 2^{n-2} x_2 + \cdots + 2 x_{n-1} + x_n \\ x_1 \le 5 \\ 4 x_1 + x_2 \le 25 \\ 8 x_1 + 4 x_2 + x_3 \le 125 \\ \;\vdots \\ 2^{n} x_1 + 2^{n-1} x_2 + \cdots + 4 x_{n-1} + x_n \le 5^{n} \\ x \ge 0 \end{cases} \qquad (53) $$

where x = (x_1, x_2, …, x_n). (P3) has n variables, n constraints and 2^n vertices.

For a fixed n, to write (P3) in the form of the problem (P) given in (1), we add to each of the n constraints of (P3) a slack variable, and we find, for each of the 2n variables of the resulting problem, a lower bound and an upper bound. The problem (P3) can then be written as a linear problem with bounded variables as follows:

$$ (P4)\;\begin{cases} \max\; 2^{n-1} x_1 + 2^{n-2} x_2 + \cdots + 2 x_{n-1} + x_n \\ x_1 + x_{n+1} = 5 \\ 4 x_1 + x_2 + x_{n+2} = 25 \\ 8 x_1 + 4 x_2 + x_3 + x_{n+3} = 125 \\ \;\vdots \\ 2^{n} x_1 + 2^{n-1} x_2 + \cdots + 4 x_{n-1} + x_n + x_{2n} = 5^{n} \\ 0 \le x_1 \le 5;\; 0 \le x_2 \le 25;\; \ldots;\; 0 \le x_n \le 5^{n}; \\ 0 \le x_{n+1} \le 5;\; 0 \le x_{n+2} \le 25;\; \ldots;\; 0 \le x_{2n} \le 5^{n}. \end{cases} \qquad (54) $$
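The data of (P4) can be generated for any n and cross-checked with an off-the-shelf solver; the following sketch is our own illustration, not the MATLAB code used for the experiments reported below.

```python
import numpy as np
from scipy.optimize import linprog

def klee_minty_p4(n):
    """Data of (P4): the n Klee-Minty constraints with slack variables and bounds."""
    A_km = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            A_km[i, j] = 2.0 ** (i - j + 1)   # coefficients 2^n, ..., 8, 4
        A_km[i, i] = 1.0
    c = np.array([2.0 ** (n - 1 - j) for j in range(n)])
    b = np.array([5.0 ** (i + 1) for i in range(n)])
    A_eq = np.hstack([A_km, np.eye(n)])       # one slack variable per constraint
    c_eq = np.concatenate([c, np.zeros(n)])
    bounds = [(0, 5.0 ** (i + 1)) for i in range(n)] * 2
    return A_eq, b, c_eq, bounds

A_eq, b, c_eq, bounds = klee_minty_p4(7)
res = linprog(-c_eq, A_eq=A_eq, b_eq=b, bounds=bounds)
print(-res.fun, 5.0 ** 7)                     # the optimal value of (P3)/(P4) is 5^n
```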

The Simplex algorithm chosen here takes the origin as its starting solution, so in our implementation of the PAM we impose the same initial feasible solution. For the initial support we took the canonical one, J_B = {n+1, n+2, …, 2n}, so that Γ = A.

We considered problems with a constraint matrix of size n × n, where n ∈ {3, 5, 7, 10, 12, 15, 17, 20, 23, 25, 27, 30, 33, 35, 37, 40, 42, 45, 47, 50}; for each size we recorded the number of iterations and the resolution time.

These results are reported in Figure 2 and Figure 3, where “Simplex(it)”, “PAM(it)”, “Simplex(tm)” and “PAM(tm)” denote, respectively, the number of iterations of the Simplex algorithm, the number of iterations of the Pivot Adaptive Method, the time of the Simplex algorithm, and the time of the Pivot Adaptive Method. The time is given in milliseconds.

Figure 2. Number of iterations and time for Linprog-Simplex and PAM for Klee-Minty problem.

Figure 3. Time comparison between Linprog-Simplex and PAM for the Klee-Minty problem.

We observe that the PAM performs better than the Simplex algorithm both in computation time and in the number of iterations needed to solve the problems. Recall that the optimal solution of the problem (P3) is x = (0, 0, …, 0, 5^n).

8. Conclusions

The main contribution of this article is a new variant of the Adaptive Method that we have called the “Pivot Adaptive Method” (PAM). In this variant, we use the simplex pivoting rule in order to avoid computing the inverse of the basic matrix at each iteration. As shown in this work, the algorithm saves computation time compared to the original algorithm (AM), and it allows us to present the resolution of a given problem in the form of successive tables, as seen in the example.

We have implemented our algorithm (PAM), with both the short step rule and the long step rule, using MATLAB, and we have carried out a brief comparison with the primal simplex algorithm (using linprog-simplex in MATLAB) on the Klee-Minty problem. Indeed, the PAM is more efficient both in the number of iterations and in computation time.

In a subsequent work, we shall carry out a more thorough comparison between the Simplex algorithm of Dantzig and the PAM, and we shall extend the PAM to large-scale problems using the constraint selection technique.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Dantzig, G.B. (1951) Minimization of a Linear Function of Variables Subject to Linear Inequalities. John Wiley, New York.
[2] Minoux, M. (1983) Programmation mathématique, Théorie et algorithmes, Tome 1, Dunod.
[3] Culioli, J.C. (1994) Introduction à l’optimisation, Ellipses.
[4] Dantzig, G.B. (1963) Linear Programming and Extensions. Princeton University Press, Princeton. https://doi.org/10.1515/9781400884179
[5] Gabasov, R. and Kirillova, F.M. (1977, 1978, 1980) Method of Linear Programming. Vol. 1, 2 and 3, BGU Press, Minsk. (In Russian)
[6] Bland, R.G. (1977) New Finite Pivoting Rules for the Simplex Method. Mathematics of Operations Research, 2, 103-107.
[7] Klee, V. and Minty, G.J. (1972) How Good Is the Simplex Algorithm? In: Shisha III, O., Ed., Inequalities, Academic Press, New York, 159-175.
[8] Khachiyan, L.G. (1979) A Polynomial Algorithm in Linear Programming. Soviet Mathematics Doklady, 20, 191-194.
[9] Karmarkar, N.K. (1984) A New Polynomial-Time Algorithm for Linear Programming. Combinatorica, 4, 373-395.
[10] Kojima, M., Megiddo, N., Noma, T. and Yoshise, A. (1989) A Unified Approach to Interior Point Algorithms for Linear Programming. In: Megiddo, N., Ed., Progress in Mathematical Programming: Interior Point and Related Methods, Springer Verlag, New York, 29-47.
[11] Ye, Y. (1997) Interior Point Algorithms, Theory and Analysis. John Wiley and Sons, Chichester.
[12] Vanderbei, R.J. and Shanno, D.F. (1999) An Interior-Point Algorithm for Nonconvex Nonlinear Programming. Computational Optimization and Applications, 13, 231-252.
[13] Peng, J., Roos, C. and Terlaky, T. (2001) A New and Efficient Large-Update Interior-Point Method for Linear Optimization. Journal of Computational Technologies, 6, 61A-80.
[14] Cafieri, S., Dapuzzo, M., Marino, M., Mucherino, A. and Toraldo, G. (2006) Interior Point Solver for Large-Scale Quadratic Programming Problems with Bound Constraints. Journal of Optimization Theory and Applications, 129, 55-75.
[15] Gabasov, R.F. (1994) Adaptive Method for Solving Linear Programming Problems. University of Bruxelles, Bruxelles.
[16] Gabasov, R.F. (1994) Adaptive Method for Solving Linear Programming Problems. University of Karlsruhe, Institute of Statistics and Mathematics, Karlsruhe.
[17] Gabasov, R., Kirillova, F.M. and Prischepova, S.V. (1995) Optimal Feedback Control. Springer-Verlag, London.
[18] Gabasov, R., Kirillova, F.M. and Kostyukova, O.I. (1979) A Method of Solving General Linear Programming Problems. Doklady AN BSSR, 23, 197-200. (In Russian)
[19] Kostina, E. (2002) The Long Step Rule in the Bounded-Variable Dual Simplex Method: Numerical Experiments. Mathematical Methods of Operation Research, 55, 413-429.
[20] Radjef, S. and Bibi, M.O. (2011) An Effective Generalization of the Direct Support Method. Mathematical Problems in Engineering, 2011, Article ID: 374390.
[21] Balashevich, N.V., Gabasov, R. and Kirillova, F.M. (2000) Numerical Methods of Program and Positional Optimization of the Linear Control Systems. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 40, 838-859.
[22] Bentobache, M. and Bibi, M.O. (2012) A Two-Phase Support Method for Solving Linear Programs: Numerical Experiments. Mathematical Problems in Engineering, 2012, Article ID: 482193.
[23] Bibi, M.O. and Bentobache, M. (2015) A Hybrid Direction Algorithm for Solving Linear Programs. International Journal of Computer Mathematics, 92, 201-216.
