The Pivot Adaptive Method for Solving Linear Programming Problems

A new variant of the Adaptive Method (AM) of Gabasov is presented, designed to reduce computation time. Unlike the original method and some of its variants, it requires neither computing the inverse of the basic matrix at each iteration nor solving linear systems with the basic matrix. Instead, to compute the new support feasible solution, the simplex pivoting rule is applied to a matrix that we will define. This variant is called the “Pivot Adaptive Method” (PAM); it allows the resolution of a given problem to be presented as a sequence of tables, as we will see in an example. The proofs not given by Gabasov are also presented here, namely the proofs of the theorem on the optimality criterion and of the theorem on the existence of an optimal support; finally, a brief comparison between our method and the Simplex Method is given.


Belahcene, S. , Marthon, P. and Aidene, M. (2018) The Pivot Adaptive Method for Solving Linear Programming Problems. American Journal of Operations Research, 8, 92-111. doi: 10.4236/ajor.2018.82008.

1. Introduction

As a branch of mathematics, linear programming is the area of optimization that has seen the most success. Since its formulation in the 1930s and 1940s and the development of Dantzig's Simplex method in 1949, researchers in various fields have been led to formulate and solve linear problems.

Although the Simplex algorithm is often effective in practice, the fact that it is not a polynomial algorithm, as shown by Klee and Minty, has prompted researchers to propose other algorithms and led to the birth of interior point algorithms.

In 1979, L. Khachiyan proposed the first polynomial algorithm for linear programming; it is based on the ellipsoid method, studied by Arkadi Nemirovski and David B. Yudin, a preliminary version of which had been introduced by Naum Z. Shor. Unfortunately, the ellipsoid method performs poorly in practice.

In 1984, N. K. Karmarkar published an interior point algorithm with polynomial convergence, which renewed interest in interior point methods, in linear as well as nonlinear programming.

Gabasov and Kirillova generalized the Simplex method in 1995 and developed the Adaptive Method (AM), a primal-dual method for linear programming with bounded variables. Like the Simplex method, the Adaptive Method is a support method, but it can start from any support (basis) and any feasible solution, and it can move to the optimal solution through interior or boundary points. The method was later extended to solve general linear and convex problems, as well as optimal control problems.

In linear programming, several variants of the AM have been proposed. They generally address the initialization of the method and the choice of the search direction.

In this work, a new variant of the adaptive method called the “Pivot Adaptive Method” (PAM) is presented. Unlike the original method and its variants, it requires neither computing the inverse of the basic matrix at each iteration nor solving linear systems with the basic matrix. Indeed, to compute the new feasible solution and the new support, we only need the decomposition of the column vectors of the problem matrix in the current basis. For this computation we use the simplex pivoting rule, through a new matrix Γ which reduces the computation time. This new variant of the adaptive method also allows the resolution of a given problem to be presented as a sequence of tables (as is done with the Simplex method), as we will see in an example. The proofs of the theorem on the optimality criterion and of the theorem on the existence of an optimal support, which are not given by Gabasov, are presented here; at the end of the article, a brief comparison between our method and the Simplex method is given.

The paper is organized as follows. In Section 2, after the statement of the problem, some important definitions are given. In Section 3, we describe the “Pivot Adaptive Method” step by step; the proofs of the corresponding theorems are given in parallel. In Section 4, an example is solved to illustrate the PAM, with more details on how to present the resolution as a sequence of tables. In Section 5, we give a brief comparison between the PAM and the Simplex Method by solving the Klee-Minty problem. Section 6 concludes the paper.

2. Statement of the Problem and Definitions

In this article we consider the primal linear programming problem with bounded variables presented in the following standard form:

$\left(P\right)\text{ }\left\{\begin{array}{l}\mathrm{max}F\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)=\underset{j=1}{\overset{j=n}{\sum }}\text{ }{c}_{j}{x}_{j}\hfill \\ \underset{j=1}{\overset{j=n}{\sum }}\text{ }{a}_{ij}{x}_{j}={b}_{i},\text{\hspace{0.17em}}i\in I\hfill \\ {d}_{j}^{-}\le {x}_{j}\le {d}_{j}^{+},\text{\hspace{0.17em}}j\in J.\hfill \end{array}$ (1)

where $J=\left\{1,\cdots ,n\right\}$ is the index set of the variables ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ , and $I=\left\{1,\cdots ,m\right\}$ is the index set of the parameters ${b}_{1},{b}_{2},\cdots ,{b}_{m}$ (corresponding to the constraints). We put $J={J}_{B}\cup {J}_{N}$ , ${J}_{B}\cap {J}_{N}=\varnothing$ , $|{J}_{B}|=m$ , and introduce the vectors:

$x=x\left(J\right)=\left({x}_{j},j\in J\right)=\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)=\left(\begin{array}{c}{x}_{B}\\ {x}_{N}\end{array}\right)$ , with: ${x}_{B}=x\left({J}_{B}\right)=\left({x}_{j},j\in {J}_{B}\right)$ , ${x}_{N}=x\left({J}_{N}\right)=\left({x}_{j},j\in {J}_{N}\right)$ .

$c=c\left(J\right)=\left({c}_{j},j\in J\right)=\left({c}_{1},{c}_{2},\cdots ,{c}_{n}\right)=\left(\begin{array}{c}{c}_{B}\\ {c}_{N}\end{array}\right)$ , with: ${c}_{B}=c\left({J}_{B}\right)=\left({c}_{j},j\in {J}_{B}\right)$ , ${c}_{N}=c\left({J}_{N}\right)=\left({c}_{j},j\in {J}_{N}\right)$ .

${b}^{\text{T}}={b}^{\text{T}}\left(I\right)=\left({b}_{i},i\in I\right)=\left({b}_{1},{b}_{2},\cdots ,{b}_{m}\right)$ .

${d}^{-}={d}^{-}\left(J\right)=\left({d}_{j}^{-},j\in J\right)=\left({d}_{1}^{-},{d}_{2}^{-},\cdots ,{d}_{n}^{-}\right);\text{\hspace{0.17em}}‖{d}^{-}‖<+\infty$ .

${d}^{+}={d}^{+}\left(J\right)=\left({d}_{j}^{+},j\in J\right)=\left({d}_{1}^{+},{d}_{2}^{+},\cdots ,{d}_{n}^{+}\right);\text{\hspace{0.17em}}‖{d}^{+}‖<+\infty$ .

and the $\left(m×n\right)$ matrix

$A=A\left(I,J\right)=\left({a}_{ij},i\in I,j\in J\right)=\left({a}_{1},{a}_{2},\cdots ,{a}_{n}\right)=\left(\begin{array}{c}{A}_{1}^{\text{T}}\\ ⋮\\ {A}_{m}^{\text{T}}\end{array}\right)$ , with: ${a}_{j}=\left(\begin{array}{c}{a}_{1j}\\ ⋮\\ {a}_{mj}\end{array}\right)$ ,

$j\in J$ ( ${a}_{j}$ : the jth column of the matrix A), and ${A}_{i}^{\text{T}}=\left({a}_{i1},{a}_{i2},\cdots ,{a}_{in}\right),i\in I$ ( ${A}_{i}^{\text{T}}$ : the ith row of the matrix A), $A=\left({A}_{B}/{A}_{N}\right)$ , ${A}_{B}=A\left(I,{J}_{B}\right)$ , ${A}_{N}=A\left(I,{J}_{N}\right)$ .

We assume that $rank\left(A\right)=m<n$ . Then the problem (1) takes the following form:

$\left(P\right)\text{ }\left\{\begin{array}{l}maxF\left(x\right)={c}^{\text{T}}x\hfill \\ Ax=b\hfill \\ {d}^{-}\le x\le {d}^{+}\hfill \end{array}$ (2)

$Ax=b$ are called the general constraints of (P).

Denote the feasible region of (P) as: $H=\left\{x\in {R}^{n},Ax=b,{d}^{-}\le x\le {d}^{+}\right\}$ .
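As an illustration of the feasible region H, the following minimal membership test uses hypothetical data (the matrix, right-hand side, and bounds below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical instance with n = 3 variables and m = 1 constraint
# (illustrative data only, not taken from the paper).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([6.0])
d_minus = np.zeros(3)
d_plus = np.full(3, 5.0)

def is_feasible(x, tol=1e-9):
    """Membership test for H = {x : Ax = b, d^- <= x <= d^+}."""
    return bool(np.allclose(A @ x, b, atol=tol)
                and np.all(x >= d_minus - tol)
                and np.all(x <= d_plus + tol))

x_in = np.array([1.0, 2.0, 3.0])    # sums to 6 and respects the bounds
x_out = np.array([6.0, 0.0, 0.0])   # sums to 6 but violates x1 <= 5
```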

2.1. Definition 1

Each vector of the set H is called a feasible solution of (P).

2.2. Definition 2

Any vector $x\in {R}^{n}$ that satisfies the general constraints $Ax=b$ of the problem (P) is called a pseudo-feasible solution.

2.3. Definition 3

A feasible solution ${x}^{0}$ is called optimal if: $F\left({x}^{0}\right)={c}^{\text{T}}{x}^{0}=\underset{x\in H}{\mathrm{max}}{c}^{\text{T}}x$ .

2.4. Definition 4

For a given value $ϵ\ge 0$ , the solution ${x}^{ϵ}$ is called ϵ-optimal (or suboptimal) if $\left(F\left({x}^{0}\right)-F\left({x}^{ϵ}\right)\right)\le ϵ$ , where ${x}^{0}$ is an optimal solution of the problem (P).

2.5. Definition 5

The set of m indices ${J}_{B}\subset J$ ( $|{J}_{B}|=m$ ) is called a support of (P) if the submatrix ${A}_{B}=A\left(I,{J}_{B}\right)$ is non-singular ( $det{A}_{B}\ne 0$ ). The set ${J}_{N}=J\setminus {J}_{B}$ of ( $n-m$ ) indices is then called the non-support, ${A}_{B}$ is the support matrix, and ${A}_{N}=A\left(I,{J}_{N}\right)$ is the non-support matrix.

2.6. Definition 6

The pair $\left\{x,{J}_{B}\right\}$ formed by a feasible solution x and the support ${J}_{B}$ is called a support feasible solution (SF-solution).

2.7. Definition 7

The SF-solution $\left\{x,{J}_{B}\right\}$ is called non-degenerate if: ${d}_{j}^{-}<{x}_{j}<{d}_{j}^{+},\forall j\in {J}_{B}$ .

Recall that, unlike the Simplex Method, in which the feasible solution x and the basic indices ${J}_{B}$ are intimately related, in the adaptive method they are independent.

3. Optimality Criterion

For the smooth running of the calculations, in the rest of the article we assume that the sets ${J}_{B}$ and ${J}_{N}$ are vectors of indices (i.e., the order of the indices in these sets matters and must be respected).

3.1. Formula of the Objective Value Increment

Let $\left\{x,{J}_{B}\right\}$ be the initial SF-solution of the problem (P), and $\stackrel{¯}{x}=\stackrel{¯}{x}\left(J\right)=\left(\begin{array}{c}{\stackrel{¯}{x}}_{B}\\ {\stackrel{¯}{x}}_{N}\end{array}\right)$ an arbitrary pseudo-feasible solution of (P). We set $\Delta x=\Delta {x}_{B}+\Delta {x}_{N}=\stackrel{¯}{x}-x$ , then the increment of the objective function value is given by:

$\Delta F\left(x\right)=F\left(\stackrel{¯}{x}\right)-F\left(x\right)={c}^{\text{T}}\stackrel{¯}{x}-{c}^{\text{T}}x={c}^{\text{T}}\Delta x={c}_{B}^{\text{T}}\Delta {x}_{B}+{c}_{N}^{\text{T}}\Delta {x}_{N}$ (3)

and since:

$A\Delta x={A}_{B}\Delta {x}_{B}+{A}_{N}\Delta {x}_{N}=A\stackrel{¯}{x}-Ax=0$ (4)

then

$\Delta {x}_{B}=-{A}_{B}^{-1}{A}_{N}\Delta {x}_{N}$ (5)

Substituting the vector $\Delta {x}_{B}$ in (3) we get:

$\Delta F\left(x\right)=\left({c}_{N}^{\text{T}}-{c}_{B}^{\text{T}}{A}_{B}^{-1}{A}_{N}\right)\Delta {x}_{N}$ (6)

Here, we define the m-vector of multipliers y as follows:

${y}^{\text{T}}={c}_{B}^{\text{T}}{A}_{B}^{-1}$ (7)

Gabasov defines the support gradient, denoted Δ, as ${\Delta }^{\text{T}}={y}^{\text{T}}A-{c}^{\text{T}}$ , but he also notes that for every $k\in {J}_{N}$ , the kth support derivative equals $-{\Delta }_{k}={c}_{k}-{y}^{\text{T}}{a}_{k}$ . Indeed, $-{\Delta }_{k}$ is the rate of change of the objective function when the kth non-support component of the feasible solution x is increased while all the other non-support components are held fixed, the support components being adjusted so as to keep the constraints $Ax=b$ satisfied.

Then, in the pivot adaptive method, we define the n-vector of reduced gains (or support gradient) δ as follows:

${\delta }^{\text{T}}=-{\Delta }^{\text{T}}=\left({\delta }_{j},j\in J\right)={c}^{\text{T}}-{y}^{\text{T}}A=\left({\delta }_{B}^{\text{T}},{\delta }_{N}^{\text{T}}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{where}:\text{\hspace{0.17em}}{\delta }_{B}^{\text{T}}=0,{\delta }_{N}^{\text{T}}={c}_{N}^{\text{T}}-{y}^{\text{T}}{A}_{N}$ (8)

To reduce the computation time of the adaptive method of Gabasov, we define the ( $m×n$ ) matrix Γ as follows:

$\Gamma ={A}_{B}^{-1}A=\left({\Gamma }_{B},{\Gamma }_{N}\right),\text{ }\text{where:}\text{\hspace{0.17em}}{\Gamma }_{B}={I}_{m},{\Gamma }_{N}={A}_{B}^{-1}{A}_{N}$ (9)

which is the decomposition of the columns of the matrix A on the support ${J}_{B}$ .

We have: ${\Gamma }_{N}=\Gamma \left(I,{J}_{N}\right)=\left({\Gamma }_{ij},i\in I,j\in {J}_{N}\right)$ , ${\Gamma }_{ij}$ is the product of the ith row of the matrix ${A}_{B}^{-1}$ and the jth column of the matrix ${A}_{N}$ . Then the n-vector of support gradient given in (8) can be computed as follows:

$\begin{array}{l}{\delta }^{\text{T}}=\left({\delta }_{j},j\in J\right)={c}^{\text{T}}-{c}_{B}^{\text{T}}{A}_{B}^{-1}A={c}^{\text{T}}-{c}_{B}^{\text{T}}\Gamma =\left({\delta }_{B}^{\text{T}},{\delta }_{N}^{\text{T}}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{where:}\text{\hspace{0.17em}}{\delta }_{B}^{\text{T}}=0,{\delta }_{N}^{\text{T}}={c}_{N}^{\text{T}}-{c}_{B}^{\text{T}}{\Gamma }_{N}\end{array}$ (10)

Note that, to compute the quantity ${c}_{B}^{\text{T}}{A}_{B}^{-1}A$ , we first compute $\Gamma ={A}_{B}^{-1}A$ rather than ${y}^{\text{T}}={c}_{B}^{\text{T}}{A}_{B}^{-1}$ ; since Γ is kept up to date by pivoting, we therefore never need to compute the inverse of the basic matrix ${A}_{B}$ explicitly.

From (6) and (10) we obtain

$\Delta F\left(x\right)={\delta }_{N}^{\text{T}}\Delta {x}_{N}=\underset{j\in {J}_{N}}{\sum }{\delta }_{j}\Delta {x}_{j}$ (11)
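As a numerical illustration of (9)–(11), here is a minimal numpy sketch on hypothetical data (not the paper's example). The initial Γ is obtained by one linear solve ${A}_{B}\Gamma =A$ , with no explicit inverse; in PAM, Γ is then updated by the simplex pivoting rule instead of being recomputed:

```python
import numpy as np

# Hypothetical instance (not the paper's example): max c^T x, Ax = b,
# with the last two columns forming the current support (0-based indices).
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([3.0, 2.0, 0.0, 0.0])
JB = [2, 3]   # support indices

# Gamma = A_B^{-1} A from (9), obtained by one linear solve A_B Gamma = A
# rather than an explicit inverse; PAM keeps Gamma current by pivoting
# instead of re-solving at every iteration.
Gamma = np.linalg.solve(A[:, JB], A)

# Support gradient (10): delta^T = c^T - c_B^T Gamma, with delta_B = 0.
delta = c - c[JB] @ Gamma
```

By (11), the increment of the objective along a change of the non-support components is then the dot product of `delta` with that change.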

3.2. Definition 8

For a given support ${J}_{B}$ , any vector $\chi =\chi \left(J\right)=\left(\begin{array}{c}{\chi }_{B}\\ {\chi }_{N}\end{array}\right)$ of ${R}^{n}$ that satisfies

$\left\{\begin{array}{l}{\chi }_{j}={d}_{j}^{-},\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{if}\text{\hspace{0.17em}}{\delta }_{j}<0\hfill \\ {\chi }_{j}={d}_{j}^{+},\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{if}\text{\hspace{0.17em}}{\delta }_{j}>0\hfill \\ {\chi }_{j}={d}_{j}^{-}\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}{d}_{j}^{+},\text{ }\text{if}\text{\hspace{0.17em}}{\delta }_{j}=0\hfill \\ {\chi }_{B}={A}_{B}^{-1}\left(b-{A}_{N}{\chi }_{N}\right)\hfill \end{array}\left(j\in {J}_{N}\right)$ (12)

is called a primal pseudo-feasible solution accompanying the support ${J}_{B}$ .
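Definition (12) is constructive: the non-support components of χ are fixed by the signs of δ, and the support components are then forced by the general constraints. A minimal sketch on hypothetical data (the instance below is an illustrative assumption; ties ${\delta }_{j}=0$ , where either bound is allowed, are sent to ${d}_{j}^{-}$ here):

```python
import numpy as np

def accompanying_pseudo_solution(A, b, d_minus, d_plus, JB, JN, delta):
    """chi from (12): chi_N is set by the sign of delta, then chi_B is
    forced by the general constraints A chi = b.  When delta_j = 0 either
    bound is allowed; this sketch sends ties to d_j^-."""
    chi = np.empty(A.shape[1])
    for j in JN:
        chi[j] = d_plus[j] if delta[j] > 0 else d_minus[j]
    chi[JB] = np.linalg.solve(A[:, JB], b - A[:, JN] @ chi[JN])
    return chi

# Hypothetical data (same shapes as the earlier illustration).
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
d_minus, d_plus = np.zeros(4), np.full(4, 10.0)
JB, JN = [2, 3], [0, 1]
delta = np.array([3.0, 2.0, 0.0, 0.0])
chi = accompanying_pseudo_solution(A, b, d_minus, d_plus, JB, JN, delta)
```

By construction $A\chi =b$ always holds, while the bounds on ${\chi }_{B}$ may fail, which is exactly Proposition 1.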

3.3. Proposition 1

A primal pseudo-feasible solution χ accompanying a given support ${J}_{B}$ satisfies only the general constraints of the problem (P), i.e., $A\chi =b$ . If, moreover, the support components ${\chi }_{B}$ of the vector χ satisfy ${d}_{B}^{-}\le {\chi }_{B}\le {d}_{B}^{+}$ , then χ is a feasible and optimal solution of (P).

3.4. Theorem 1 (The Optimality Criterion)

Let $\left\{x,{J}_{B}\right\}$ be an SF-solution of the problem (P), and ${\delta }^{\text{T}}$ be the support gradient computed by (10), then the relations

$\left\{\begin{array}{l}{\delta }_{j}\le 0,\text{ }\text{for}\text{\hspace{0.17em}}{x}_{j}={d}_{j}^{-}\hfill \\ {\delta }_{j}\ge 0,\text{ }\text{for}\text{\hspace{0.17em}}{x}_{j}={d}_{j}^{+}\hfill \\ {\delta }_{j}=0,\text{ }\text{for}\text{\hspace{0.17em}}{d}_{j}^{-}<{x}_{j}<{d}_{j}^{+}\hfill \end{array}\left(j\in {J}_{N}\right)$ (13)

are sufficient, and in the case of non-degeneracy of the SF-solution $\left\{x,{J}_{B}\right\}$ also necessary for the optimality of the feasible solution x.

Proof

1) The sufficient condition:

For the SF-solution $\left\{x,{J}_{B}\right\}$ , suppose that the relations (13) are satisfied; we must prove that x is optimal, which amounts to showing that:

$\forall \stackrel{¯}{x}\in H,\Delta F\left(x\right)=F\left(x\right)-F\left(\stackrel{¯}{x}\right)\ge 0$ (14)

Let $\stackrel{¯}{x}\in H$ , and $\Delta x=x-\stackrel{¯}{x}$ , then:

$\begin{array}{c}\Delta F\left(x\right)=F\left(x\right)-F\left(\stackrel{¯}{x}\right)=\underset{j\in {J}_{N}}{\sum }{\delta }_{j}\Delta {x}_{j}\\ =\underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}<0\end{array}}{\sum }{\delta }_{j}\left({x}_{j}-{\stackrel{¯}{x}}_{j}\right)+\underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}>0\end{array}}{\sum }{\delta }_{j}\left({x}_{j}-{\stackrel{¯}{x}}_{j}\right)+\underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}=0\end{array}}{\sum }{\delta }_{j}\left({x}_{j}-{\stackrel{¯}{x}}_{j}\right)\\ =\underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}<0\end{array}}{\sum }{\delta }_{j}\left({d}_{j}^{-}-{\stackrel{¯}{x}}_{j}\right)+\underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}>0\end{array}}{\sum }{\delta }_{j}\left({d}_{j}^{+}-{\stackrel{¯}{x}}_{j}\right)\ge 0\end{array}$

since, by (13), ${x}_{j}={d}_{j}^{-}$ when ${\delta }_{j}<0$ and ${x}_{j}={d}_{j}^{+}$ when ${\delta }_{j}>0$ , while ${d}_{j}^{-}-{\stackrel{¯}{x}}_{j}\le 0$ and ${d}_{j}^{+}-{\stackrel{¯}{x}}_{j}\ge 0$ for any $\stackrel{¯}{x}\in H$ ;

then x is optimal.

2) The necessary condition:

Suppose that $\left\{x,{J}_{B}\right\}$ is a non-degenerate SF-solution, i.e.:

$\forall j\in {J}_{B},{d}_{j}^{-}<{x}_{j}<{d}_{j}^{+}$ (15)

and that x is optimal, i.e.:

$\forall \stackrel{¯}{x}\in H,\Delta F\left(x\right)=F\left(x\right)-F\left(\stackrel{¯}{x}\right)=\underset{j\in {J}_{N}}{\sum }{\delta }_{j}\Delta {x}_{j}\ge 0.$ (16)

Reasoning by absurdity:

Suppose that the relations (13) are not satisfied; then there is at least one index $k\in {J}_{N}$ in one of the following three cases:

1) ${\delta }_{k}>0$ and ${x}_{k}={d}_{k}^{-}$ .

2) ${\delta }_{k}<0$ and ${x}_{k}={d}_{k}^{+}$ .

3) ${\delta }_{k}\ne 0$ and ${d}_{k}^{-}<{x}_{k}<{d}_{k}^{+}$ .

In each case we have either ${\delta }_{k}<0$ with ${x}_{k}>{d}_{k}^{-}$ , or ${\delta }_{k}>0$ with ${x}_{k}<{d}_{k}^{+}$ . Suppose that we are in the first situation, i.e.

$\exists k\in {J}_{N}/{\delta }_{k}<0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{k}>{d}_{k}^{-}$ (17)

(the second situation is treated by the same reasoning). Construct the n-vector $\stackrel{¯}{x}\left(\theta \right)=\left({\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{1},{\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{2},\cdots ,{\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{n}\right)$ as follows:

$\left\{\begin{array}{l}{\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{j}={x}_{j},\forall j\in {J}_{N}\setminus \left\{k\right\}\hfill \\ {\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{k}={x}_{k}-\theta ,\theta \in \left[0,{x}_{k}-{d}_{k}^{-}\right]\hfill \\ {\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{j}={x}_{j}+\theta {\Gamma }_{jk},\forall j\in {J}_{B}\hfill \end{array}$ (18)

where ${\Gamma }_{jk}$ is the product of the jth row of the matrix ${A}_{B}^{-1}$ and the kth column of the matrix ${A}_{N}$ . Let ${\theta }_{1}={x}_{k}-{d}_{k}^{-}>0$ .

we have

$A\stackrel{¯}{x}\left(\theta \right)={A}_{B}{\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{B}+{A}_{N}{\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{N}=Ax=b$ (19)

and

$\forall j\in {J}_{N},{d}_{j}^{-}\le {\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{j}\le {d}_{j}^{+}$ (20)

For $j\in {J}_{B}$ , ${\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{j}={x}_{j}+\theta {\Gamma }_{jk}$ ; then, in order to have:

$\forall j\in {J}_{B},{d}_{j}^{-}\le {\left(\stackrel{¯}{x}\left(\theta \right)\right)}_{j}\le {d}_{j}^{+}$ (21)

a value of θ is chosen such that:

$\forall j\in {J}_{B},{d}_{j}^{-}\le {x}_{j}+\theta {\Gamma }_{jk}\le {d}_{j}^{+}$ (22)

i.e.

$\left\{\begin{array}{l}0<\theta \le \frac{{d}_{j}^{+}-{x}_{j}}{{\Gamma }_{jk}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{ }{\Gamma }_{jk}>0\hfill \\ 0<\theta \le \frac{{d}_{j}^{-}-{x}_{j}}{{\Gamma }_{jk}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{ }{\Gamma }_{jk}<0\hfill \end{array}\left(j\in {J}_{B}\right)$ (23)

Put: ${\theta }_{2}=\underset{\begin{array}{l}{\Gamma }_{jk}>0\\ \text{\hspace{0.17em}}j\in {J}_{B}\end{array}}{\mathrm{min}}\frac{{d}_{j}^{+}-{x}_{j}}{{\Gamma }_{jk}}$ , ${\theta }_{3}=\underset{\begin{array}{l}{\Gamma }_{jk}<0\\ \text{\hspace{0.17em}}j\in {J}_{B}\end{array}}{\mathrm{min}}\frac{{d}_{j}^{-}-{x}_{j}}{{\Gamma }_{jk}}$ .

We have ${\theta }_{2}>0$ and ${\theta }_{3}>0$ , because $\left\{x,{J}_{B}\right\}$ is non-degenerate.

Let ${\theta }^{*}=min\left({\theta }_{1},{\theta }_{2},{\theta }_{3}\right)$ , then

$\stackrel{¯}{x}\left({\theta }^{*}\right)\in H$ (24)

and moreover

$F\left(x\right)-F\left(\stackrel{¯}{x}\left({\theta }^{*}\right)\right)=\underset{j\in {J}_{N}}{\sum }{\delta }_{j}\Delta {x}_{j}={\delta }_{k}\Delta {x}_{k}={\delta }_{k}{\theta }^{*}<0$ (25)

then

$F\left(\stackrel{¯}{x}\left({\theta }^{*}\right)\right)>F\left(x\right)$ (26)

which contradicts (16).
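The step length ${\theta }^{*}=min\left({\theta }_{1},{\theta }_{2},{\theta }_{3}\right)$ built in the proof is a bounded-variable ratio test over the entries of the column ${\Gamma }_{k}$ . A minimal sketch with hypothetical data, assuming the rows of the column `Gamma_k` are ordered like ${J}_{B}$ (0-based indices):

```python
import numpy as np

def ratio_test(x, Gamma_k, JB, d_minus, d_plus, theta1):
    """theta* = min(theta1, theta2, theta3): theta2 ranges over rows with
    Gamma_jk > 0 (upper bounds), theta3 over rows with Gamma_jk < 0
    (lower bounds), as in (23)."""
    theta = theta1
    for row, j in enumerate(JB):
        g = Gamma_k[row]
        if g > 0:
            theta = min(theta, (d_plus[j] - x[j]) / g)
        elif g < 0:
            theta = min(theta, (d_minus[j] - x[j]) / g)
    return theta

# Hypothetical values: support variables x3 = 2 and x4 = 3 with bounds [0, 5].
x = np.array([0.0, 0.0, 2.0, 3.0])
theta_star = ratio_test(x, np.array([1.0, -1.0]), [2, 3],
                        np.zeros(4), np.full(4, 5.0), theta1=4.0)
```

Here both support rows limit the step to 3, which beats ${\theta }_{1}=4$ , so ${\theta }^{*}=3$ .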

3.5. Definition 9

The pair $\left\{{x}^{0},{J}_{B}^{0}\right\}$ satisfying the relations (13) will be called an optimal SF-solution of the problem (P).

3.6. The Suboptimality Criterion

Let ${x}^{0}$ be an optimal solution of (P), and x a feasible solution of (P).

From (11) we obtain:

$\begin{array}{c}\underset{\stackrel{¯}{x}\in H}{\mathrm{max}}\Delta F\left(x\right)=F\left({x}^{0}\right)-F\left(x\right)=\underset{\stackrel{¯}{x}\in H}{\mathrm{max}}\underset{j\in {J}_{N}}{\sum }{\delta }_{j}\left({\stackrel{¯}{x}}_{j}-{x}_{j}\right)\\ \le \underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}>0\end{array}}{\sum }{\delta }_{j}\left({d}_{j}^{+}-{x}_{j}\right)+\underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}<0\end{array}}{\sum }{\delta }_{j}\left({d}_{j}^{-}-{x}_{j}\right)\end{array}$

Let

$\beta \left(x,{J}_{B}\right)=\underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}>0\end{array}}{\sum }{\delta }_{j}\left({d}_{j}^{+}-{x}_{j}\right)+\underset{\begin{array}{c}j\in {J}_{N}\\ {\delta }_{j}<0\end{array}}{\sum }{\delta }_{j}\left({d}_{j}^{-}-{x}_{j}\right)$ (27)

$\beta \left(x,{J}_{B}\right)$ is called the suboptimality estimate of the SF-solution $\left\{x,{J}_{B}\right\}$ since

$\left(F\left({x}^{0}\right)-F\left(x\right)\right)\le \beta \left(x,{J}_{B}\right)$ (28)

We have the following theorem of suboptimality.

3.7. Theorem 2 (Sufficient Condition of Suboptimality)

For a feasible solution x to be ϵ-optimal for a given positive number ϵ, it is sufficient that there exist a support ${J}_{B}$ such that $\beta \left(x,{J}_{B}\right)\le ϵ$ .
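The estimate $\beta \left(x,{J}_{B}\right)$ of (27), and hence the stopping test above, is directly computable from δ and the bounds. A minimal sketch with hypothetical data (illustrative values only):

```python
import numpy as np

def suboptimality_estimate(x, delta, JN, d_minus, d_plus):
    """beta(x, JB) from (27): a computable upper bound on F(x0) - F(x)."""
    beta = 0.0
    for j in JN:
        if delta[j] > 0:
            beta += delta[j] * (d_plus[j] - x[j])
        elif delta[j] < 0:
            beta += delta[j] * (d_minus[j] - x[j])
    return beta

# Hypothetical values: two non-support variables at their lower bound 0.
delta = np.array([3.0, 2.0, 0.0, 0.0])
x = np.zeros(4)
beta = suboptimality_estimate(x, delta, [0, 1], np.zeros(4), np.full(4, 10.0))
# beta <= epsilon would certify x as epsilon-optimal
```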

3.8. Proposition 2

From (12) and (27) we deduce:

$\beta \left(x,{J}_{B}\right)={\delta }_{N}^{\text{T}}\left({\chi }_{N}-{x}_{N}\right)={\delta }^{\text{T}}\left(\chi -x\right)$ (29)

3.9. The Dual Problem

The dual of the primal problem (P) is given by the following linear problem

$\left(D\right)\text{ }\left\{\begin{array}{l}\mathrm{min}\varphi \left(y,v,w\right)={b}^{\text{T}}y-{\left({d}^{-}\right)}^{\text{T}}v+{\left({d}^{+}\right)}^{\text{T}}w\hfill \\ {A}^{\text{T}}y-v+w=c\hfill \\ y\in {R}^{m},v\ge 0,w\ge 0.\hfill \end{array}$ (30)

where ${A}^{\text{T}}=\left(\begin{array}{c}{a}_{1}^{\text{T}}\\ ⋮\\ {a}_{n}^{\text{T}}\end{array}\right)$ , with $\forall j\in J$ , ${a}_{j}^{\text{T}}$ is the transpose of the jth column ${a}_{j}$ of the matrix A. The problem D has n general constraints, and $\left(m+2n\right)$ variables.

Like (P), the problem (D) has an optimal solution ${\lambda }^{0}=\left({y}^{0},{v}^{0},{w}^{0}\right)$ , and $F\left({x}^{0}\right)=\varphi \left({\lambda }^{0}\right)$ , where ${x}^{0}$ is the optimal solution of (P).

3.10. Definition 10

Any $\left(m+2n\right)$ -vector $\lambda =\left(y,v,w\right)$ satisfying all the constraints of the problem D is called a dual feasible solution.

3.11. Definition 11

Let ${J}_{B}$ be a support of the problem (P), and δ the corresponding support gradient defined in (10), the $\left(m+2n\right)$ -vector $\lambda =\left(y,v,w\right)$ which satisfies:

$\left\{\begin{array}{l}{y}^{\text{T}}={c}_{B}^{\text{T}}{A}_{B}^{-1}\hfill \\ {v}_{j}=-{\delta }_{j},{w}_{j}=0\text{ }\text{if}\text{\hspace{0.17em}}{\delta }_{j}\le 0\hfill \\ {v}_{j}=0,{w}_{j}={\delta }_{j}\text{ }\text{if}\text{\hspace{0.17em}}{\delta }_{j}>0\hfill \end{array}\left(j\in J\right)$ (31)

is called the dual feasible solution accompanying the support ${J}_{B}$ .
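Relations (31) translate directly into code: y is obtained from one linear solve ${A}_{B}^{\text{T}}y={c}_{B}$ (again, no explicit inverse), and δ is split by sign into v and w. A minimal sketch on the same hypothetical instance used earlier:

```python
import numpy as np

def accompanying_dual(A, c, JB, delta):
    """lambda = (y, v, w) from (31): y solves A_B^T y = c_B (a linear
    solve, not an explicit inverse); delta is split by sign into v, w."""
    y = np.linalg.solve(A[:, JB].T, c[JB])
    v = np.where(delta <= 0, -delta, 0.0)
    w = np.where(delta > 0, delta, 0.0)
    return y, v, w

# Hypothetical data reused from the Gamma/delta illustration.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([3.0, 2.0, 0.0, 0.0])
delta = np.array([3.0, 2.0, 0.0, 0.0])
y, v, w = accompanying_dual(A, c, [2, 3], delta)
```

Since ${A}^{\text{T}}y=c-\delta$ and $w-v=\delta$ , we recover ${A}^{\text{T}}y-v+w=c$ , as Proposition 3 asserts.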

3.12. Proposition 3

・ The dual feasible solution $\lambda =\left(y,v,w\right)$ accompanying a given support ${J}_{B}$ is indeed dual feasible, i.e.: ${A}^{\text{T}}y-v+w=c,v\ge 0,w\ge 0,y\in {R}^{m}$ .

・ For any support ${J}_{B}$ we have

${c}^{\text{T}}\chi =\varphi \left(\lambda \right)$ (32)

where λ and χ are, respectively, the dual feasible solution and the primal pseudo-feasible solution accompanying the support ${J}_{B}$ .

3.13. Definition 12

Let y be an arbitrary m-vector. The $\left(m+2n\right)$ -vector ${\lambda }^{*}=\left(y,{v}^{*},{w}^{*}\right)$ such that:

$\left\{\begin{array}{l}{v}_{j}^{*}=0,{w}_{j}^{*}=\left({c}_{j}-{a}_{j}^{\text{T}}y\right)\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}\left({c}_{j}-{a}_{j}^{\text{T}}y\right)\ge 0\hfill \\ {v}_{j}^{*}=-\left({c}_{j}-{a}_{j}^{\text{T}}y\right),{w}_{j}^{*}=0\text{ }\text{if}\text{\hspace{0.17em}}\left({c}_{j}-{a}_{j}^{\text{T}}y\right)\le 0\hfill \end{array}\left(j\in J\right)$ (33)

is called a coordinated dual feasible point to the m-vector y.
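The coordinated point of (33) splits the reduced costs ${c}_{j}-{a}_{j}^{\text{T}}y$ by sign, for any m-vector y. A minimal sketch with hypothetical data:

```python
import numpy as np

def coordinated_dual_point(A, c, y):
    """lambda* = (y, v*, w*) from (33): the reduced costs c_j - a_j^T y
    are split by sign, for an arbitrary m-vector y."""
    r = c - A.T @ y
    w = np.where(r >= 0, r, 0.0)
    v = np.where(r < 0, -r, 0.0)
    return y, v, w

# Hypothetical data: any y in R^m yields a dual feasible point.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([3.0, 2.0, 0.0, 0.0])
y = np.array([1.0, -1.0])
_, v, w = coordinated_dual_point(A, c, y)
```

By construction ${A}^{\text{T}}y-{v}^{*}+{w}^{*}=c$ with ${v}^{*},{w}^{*}\ge 0$ , which is the content of Proposition 4.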

3.14. Proposition 4

The coordinated dual feasible point ${\lambda }^{*}=\left(y,{v}^{*},{w}^{*}\right)$ to y is dual feasible, i.e.: ${A}^{\text{T}}y-{v}^{*}+{w}^{*}=c$ , ${v}^{*}\ge 0$ , ${w}^{*}\ge 0$ .

3.15. Decomposition of the Suboptimality Estimate of an SF-Solution $\left\{x,{J}_{B}\right\}$

Let $\left\{x,{J}_{B}\right\}$ be an SF-solution of the problem (P), let $\beta \left(x,{J}_{B}\right)$ be its suboptimality estimate calculated by (27), $\lambda =\left(y,v,w\right)$ the dual feasible solution accompanying the support ${J}_{B}$ , χ the primal pseudo-feasible solution accompanying the support ${J}_{B}$ , ${x}^{0}$ the optimal solution of the problem (P), and ${\lambda }^{0}$ the optimal solution of the dual problem (D).

From (9), (10), (29), (32), and proposition 1 we have:

$\begin{array}{c}\beta \left(x,{J}_{B}\right)={\delta }_{N}^{\text{T}}\left({\chi }_{N}-{x}_{N}\right)={\delta }^{\text{T}}\left(\chi -x\right)=\left({c}^{\text{T}}-{c}_{B}^{\text{T}}\Gamma \right)\left(\chi -x\right)\\ ={c}^{\text{T}}\chi -{c}^{\text{T}}x-{c}_{B}^{\text{T}}{A}_{B}^{-1}A\chi +{c}_{B}^{\text{T}}{A}_{B}^{-1}Ax={c}^{\text{T}}\chi -{c}^{\text{T}}x\\ =\varphi \left(\lambda \right)-F\left(x\right)=\varphi \left(\lambda \right)-\varphi \left({\lambda }^{0}\right)+F\left({x}^{0}\right)-F\left(x\right)\end{array}$ .

then

$\beta \left(x,{J}_{B}\right)=\beta \left(x\right)+\beta \left({J}_{B}\right)$ (34)

where: $\beta \left({J}_{B}\right)=\varphi \left(\lambda \right)-\varphi \left({\lambda }^{0}\right)$ is the degree of non-optimality of the support ${J}_{B}$ , and $\beta \left(x\right)=F\left({x}^{0}\right)-F\left(x\right)$ is the degree of non-optimality of the feasible solution x.

3.16. Proposition 5

・ The feasible solution x is optimal if $\beta \left(x\right)=0$ , and the support ${J}_{B}$ is optimal if $\beta \left({J}_{B}\right)=0$ ; the pair $\left\{x,{J}_{B}\right\}$ is an optimal SF-solution if $\beta \left(x\right)=\beta \left({J}_{B}\right)=0$ . Hence the optimality of the feasible solution x may not be detected when it is examined with an unsuitable support, because in that case $\beta \left({J}_{B}\right)\ne 0$ and therefore $\beta \left(x,{J}_{B}\right)\ne 0$ .

・ As $\beta \left({J}_{B}\right)=\varphi \left(\lambda \right)-\varphi \left({\lambda }^{0}\right)$ , the support ${J}_{B}$ is optimal if its accompanying dual feasible solution λ is an optimal solution of the dual problem (D).

3.17. Theorem 3 (Existence of an Optimal Support)

For each problem (P), there is always an optimal support.

Proof

Since (P) has an optimal solution, its dual (D) given by (30) also has an optimal solution; denote it $\stackrel{¯}{\lambda }=\left(\stackrel{¯}{y},\stackrel{¯}{v},\stackrel{¯}{w}\right)$ , and define the sets:

${I}^{+}=\left\{j\in J/\left({c}_{j}-{a}_{j}^{\text{T}}\stackrel{¯}{y}\right)\ge 0\right\}$ , ${I}^{-}=\left\{j\in J/\left({c}_{j}-{a}_{j}^{\text{T}}\stackrel{¯}{y}\right)\le 0\right\}$ ,

and

$\stackrel{¯}{C}=\left\{y\in {R}^{m}/\left({c}_{j}-{a}_{j}^{\text{T}}y\right)\ge 0,\forall j\in {I}^{+}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\left({c}_{j}-{a}_{j}^{\text{T}}y\right)\le 0,\forall j\in {I}^{-}\right\}$ ,

we have $\stackrel{¯}{y}\in \stackrel{¯}{C}$ .

Let $y\in \stackrel{¯}{C}$ , and define the following linear problem:

$\left(\stackrel{¯}{D}\right)\text{ }\left\{\begin{array}{l}min\psi \left(y\right)={b}^{\text{T}}y+\underset{j\in {I}^{+}}{\sum }{d}_{j}^{+}\left({c}_{j}-{a}_{j}^{\text{T}}y\right)+\underset{j\in {I}^{-}}{\sum }{d}_{j}^{-}\left({c}_{j}-{a}_{j}^{\text{T}}y\right)\hfill \\ y\in \stackrel{¯}{C}\subset {R}^{m}\hfill \end{array}$ (35)

We have $\psi \left(y\right)=\varphi \left(y,v,w\right)$ , where $\left(y,v,w\right)$ is the coordinated dual feasible point to the vector y, and $\stackrel{¯}{y}$ is an optimal feasible solution of the problem ( $\stackrel{¯}{D}$ ).

On the other hand, there exists a “vertex” ${y}^{*}$ of the set $\stackrel{¯}{C}$ that is an optimal feasible solution of ( $\stackrel{¯}{D}$ ), and this vertex is the intersection of at least m hyperplanes ${a}_{j}^{\text{T}}{y}^{*}={c}_{j},j\in \left({I}^{+}\cup {I}^{-}\right)$ .

Put ${J}_{B}=\left\{j\in \left({I}^{+}\cup {I}^{-}\right)/{a}_{j}^{\text{T}}{y}^{*}={c}_{j}\right\}$ where $|{J}_{B}|=m$ .

Then ${A}_{B}^{\text{T}}{y}^{*}={c}_{B}$ , and so ${y}^{*\text{T}}={c}_{B}^{\text{T}}{A}_{B}^{-1}$ ; furthermore, the coordinated dual feasible point to ${y}^{*}$ is the dual feasible solution accompanying the support ${J}_{B}$ , and it is an optimal solution of the dual problem (D). According to Proposition 5, we conclude that ${J}_{B}$ is an optimal support for (P).

4. The Description of the Pivot Adaptive Method “PAM”

Like the Simplex Method, the adaptive method (and therefore also the pivot adaptive method) solves the problem (P) in two phases: the initialization phase and the second phase; we detail each one in the following.

1) The initialization phase:

In this phase an initial support feasible solution (SF-solution) $\left\{x,{J}_{B}\right\}$ is determined, together with the matrix Γ defined in (9).

We assume that ${b}_{i}\ge 0,\forall i\in I$ , and ${d}_{j}^{-}=0,\forall j\in J$ .

Remark 1

Notice that each problem (P) can be reduced to this case, i.e., ${b}_{i}\ge 0,\forall i\in I$ , and ${d}_{j}^{-}=0,\forall j\in J$ . In fact:

a) if $\exists {i}^{0}\in I/{b}_{{i}^{0}}<0$ , then both sides of the corresponding constraint are multiplied by (−1).

b) if $\exists k\in J/{d}_{k}^{-}\ne 0$ , then we make the change of variable ${\left({x}^{\prime }\right)}_{k}={x}_{k}-{d}_{k}^{-}$ , so that:

$0\le {\left({x}^{\prime }\right)}_{k}\le {d}_{k}^{+}-{d}_{k}^{-}$ (36)

and the general constraints $Ax=b$ of (P) can be written as:

${a}_{1}{x}_{1}+{a}_{2}{x}_{2}+\cdots +{a}_{k-1}{x}_{k-1}+{a}_{k}\left({{x}^{\prime }}_{k}+{d}_{k}^{-}\right)+{a}_{k+1}{x}_{k+1}+\cdots +{a}_{n}{x}_{n}=b$ (37)

where $\forall j\in J$ , ${a}_{j}$ is the jth column of the matrix A, from this relation we obtain:

${a}_{1}{x}_{1}+{a}_{2}{x}_{2}+\cdots +{a}_{k-1}{x}_{k-1}+{a}_{k}{{x}^{\prime }}_{k}+{a}_{k+1}{x}_{k+1}+\cdots +{a}_{n}{x}_{n}=b-{a}_{k}{d}_{k}^{-}$ (38)

in (38), if a component of $\left(b-{a}_{k}{d}_{k}^{-}\right)$ is negative, we multiply both sides of the corresponding constraint by (−1).
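The reduction described in Remark 1 can be sketched as follows. The helper name and the sample instance are illustrative assumptions; the transformation itself is the shift (36)–(38) followed by the sign flip of rule a):

```python
import numpy as np

def normalize_problem(A, b, c, d_minus, d_plus):
    """Reduce (P) to the case b >= 0, d^- = 0: shift x' = x - d^- as in
    (36)-(38), then multiply any row with a negative right-hand side
    by -1.  Returns the shifted data and the constant c^T d^- that the
    objective picks up under the change of variable."""
    A = A.astype(float).copy()
    b = b - A @ d_minus          # right-hand side of (38)
    neg = b < 0                  # rule a): flip rows with negative rhs
    A[neg] *= -1.0
    b[neg] *= -1.0
    return A, b, d_plus - d_minus, float(c @ d_minus)

# Hypothetical instance with a nonzero lower bound on x1.
A0 = np.array([[1.0, -1.0]])
b0 = np.array([0.0])
A1, b1, d_plus1, shift = normalize_problem(A0, b0, np.array([1.0, 1.0]),
                                           np.array([1.0, 0.0]),
                                           np.array([3.0, 3.0]))
```

Here the shifted right-hand side is negative, so the single constraint is multiplied by (−1), leaving a problem with $b\ge 0$ and zero lower bounds.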

Then to determine an initial SF-solution to the problem (P), define the artificial problem of (P) as follows:

$\left(\stackrel{˜}{P}\right)\text{ }\left\{\begin{array}{l}\mathrm{max}F\left(\stackrel{˜}{x}\right)=-\underset{j=1}{\overset{j=m}{\sum }}\text{ }{\stackrel{˜}{x}}_{j}\hfill \\ Ax+\stackrel{˜}{x}=b\hfill \\ {d}^{-}\le x\le {d}^{+},0\le \stackrel{˜}{x}\le b\hfill \end{array}$ (39)

where ${b}_{i}\ge 0,\forall i\in I$ , and $\stackrel{˜}{x}=\stackrel{˜}{x}\left(I\right)=\left({\stackrel{˜}{x}}_{i},i\in I\right)=\left({\stackrel{˜}{x}}_{1},{\stackrel{˜}{x}}_{2},\cdots ,{\stackrel{˜}{x}}_{m}\right)$ are the m artificial variables added to the problem (P).

We solve the artificial problem $\stackrel{˜}{P}$ with the "pivot adaptive method", starting with the obvious initial feasible solution $\left(x,\stackrel{˜}{x}\right)$ , where $x={0}_{{R}^{n}}$ , $\stackrel{˜}{x}=b$ , with the canonical support (determined by the indices of the artificial variables), and with the matrix $\Gamma =A$ .

If the feasible set of (P) is not empty, then at the optimum of $\stackrel{˜}{P}$ we have $\stackrel{˜}{x}=0$ , $Ax=b$ , and ${d}^{-}\le x\le {d}^{+}$ ; hence x is a feasible solution of (P), and the optimal SF-solution of the problem $\stackrel{˜}{P}$ can be taken as the initial SF-solution for the problem (P).
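The construction of the artificial problem (39) can be sketched as follows (an illustrative Python helper; the names are ours):

```python
import numpy as np

def artificial_problem(A, b):
    """Build the artificial problem (39): append m artificial variables
    x_tilde, whose columns form the identity matrix.

    Returns (A_ext, c_ext, x0, x0_tilde): the extended constraint matrix,
    the artificial objective (maximize minus the sum of the artificials),
    and the obvious starting point x = 0, x_tilde = b (assuming b >= 0).
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    A_ext = np.hstack([A, np.eye(m)])               # [A | I]
    c_ext = np.concatenate([np.zeros(n), -np.ones(m)])
    return A_ext, c_ext, np.zeros(n), np.asarray(b, dtype=float).copy()
```

At the starting point the artificial objective equals minus the sum of the components of b; phase one drives it to zero exactly when (P) is feasible.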

2) The second phase:

Assume that an initial SF-solution $\left\{x,{J}_{B}\right\}$ of the problem (P) has been found in the initialization phase; let $\Gamma$ be the corresponding matrix computed by (9), and let $ϵ\ge 0$ be an arbitrary number. The problem is then solved with the pivot adaptive method that we describe in the following.

At each iteration of the pivot adaptive method, the transfer $\left\{x,{J}_{B}\right\}\to \left\{\stackrel{¯}{x},{\stackrel{¯}{J}}_{B}\right\}$ from one SF-solution to a new one is carried out in such a way that:

$\beta \left(\stackrel{¯}{x},{\stackrel{¯}{J}}_{B}\right)\le \beta \left(x,{J}_{B}\right)$ (40)

The transfer is realized by the two procedures making up the iteration:

i) The procedure of the feasible solution change $x\to \stackrel{¯}{x}$ :

During this procedure we decrease the degree of non-optimality of the feasible solution, $\beta \left(\stackrel{¯}{x}\right)\le \beta \left(x\right)$ , and thereby improve the primal criterion: $F\left(\stackrel{¯}{x}\right)\ge F\left(x\right)$ :

i.1) compute the support gradient δ by (10), and the suboptimality estimate by (27).

i.2) If $\beta \left(x,{J}_{B}\right)\le \epsilon$ , then stop the resolution with: x is an $\epsilon$-optimal feasible solution for (P).

i.3) If $\beta \left(x,{J}_{B}\right)>\epsilon$ , we compute the non-support components ${\chi }_{N}$ of the primal pseudo-feasible solution accompanying the support ${J}_{B}$ by (12), and the search direction $l=\left({l}_{B},{l}_{N}\right)$ with:

$\left\{\begin{array}{l}{l}_{N}={\chi }_{N}-{x}_{N}\hfill \\ {l}_{B}=-{\Gamma }_{N}{l}_{N}\hfill \end{array}$ (41)

i.4) Find the primal step length ${\theta }^{0}$ with:

${\theta }^{0}=\mathrm{min}\left\{1,{\theta }_{{j}_{0}}\right\},\text{where}:{\theta }_{{j}_{0}}=\underset{j\in {J}_{B}}{\mathrm{min}}{\theta }_{j},{\theta }_{j}=\left\{\begin{array}{l}\frac{{d}_{j}^{+}-{x}_{j}}{{l}_{j}},\text{ }\text{if}\text{\hspace{0.17em}}{l}_{j}>0;\hfill \\ \frac{{d}_{j}^{-}-{x}_{j}}{{l}_{j}},\text{ }\text{if}\text{\hspace{0.17em}}{l}_{j}<0;\hfill \\ \infty ,\text{ }\text{\hspace{0.17em}}\text{ }\text{if}\text{\hspace{0.17em}}{l}_{j}=0.\hfill \end{array}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(j\in {J}_{B}\right)$ (42)

Let ${j}_{00}$ be such that ${J}_{B}\left({j}_{00}\right)={j}_{0}$ , i.e., ${j}_{00}$ is the position of ${j}_{0}$ in the vector of indices ${J}_{B}$ .
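Steps i.3) and i.4) can be sketched numerically; the fragment below (illustrative Python with 0-based indices; the function names are ours) computes the direction (41) and the ratio test (42):

```python
import numpy as np

def search_direction(x, chi_N, JB, JN, Gamma):
    """(41): l_N = chi_N - x_N and l_B = -Gamma_N l_N.
    Row k of Gamma corresponds to the k-th basic index JB[k]."""
    l = np.zeros(len(x))
    l[JN] = chi_N - x[JN]
    l[JB] = -Gamma[:, JN] @ l[JN]
    return l

def primal_step(x, l, JB, d_minus, d_plus):
    """Ratio test (42): the largest theta in [0, 1] for which the basic
    components of x + theta*l stay within their bounds.
    Returns (theta0, j0); j0 is the blocking basic index, None if theta0 = 1."""
    theta0, j0 = 1.0, None
    for j in JB:
        if l[j] > 0:
            t = (d_plus[j] - x[j]) / l[j]
        elif l[j] < 0:
            t = (d_minus[j] - x[j]) / l[j]
        else:
            continue  # l[j] == 0 gives theta_j = infinity
        if t < theta0:
            theta0, j0 = t, j
    return theta0, j0
```

On the data of iteration 1 of the example in Section 6, this reproduces ${l}_{B}=\left(-110,-15/4,-945/2\right)$ and ${\theta }^{0}=1/15$ .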

i.5) Compute the new feasible solution: $\stackrel{¯}{x}=x+{\theta }^{0}l$ , and $F\left(\stackrel{¯}{x}\right)=F\left(x\right)+{\theta }^{0}\beta \left(x,{J}_{B}\right)$ .

i.6) If ${\theta }^{0}=1$ , then stop the resolution with: $\left\{\stackrel{¯}{x},{J}_{B}\right\}$ is the optimal SF-solution of (P).

i.7) If ${\theta }^{0}<1$ , then compute:

$\beta \left(\stackrel{¯}{x},{J}_{B}\right)=\left(1-{\theta }^{0}\right)\beta \left(x,{J}_{B}\right)$ (43)

i.8) If $\beta \left(\stackrel{¯}{x},{J}_{B}\right)\le ϵ$ , then stop the resolution with: $\left\{\stackrel{¯}{x},{J}_{B}\right\}$ is an $ϵ$-optimal SF-solution of (P).

i.9) if $\beta \left(\stackrel{¯}{x},{J}_{B}\right)>ϵ$ , then go to ii) to change the support ${J}_{B}$ to ${\stackrel{¯}{J}}_{B}$ .

ii) The procedure of the support change ${J}_{B}\to {\stackrel{¯}{J}}_{B}$ :

During this procedure we decrease the degree of non-optimality of the support, $\beta \left({\stackrel{¯}{J}}_{B}\right)\le \beta \left({J}_{B}\right)$ , and thereby improve the dual criterion $\varphi \left(\stackrel{¯}{\lambda }\right)\ge \varphi \left(\lambda \right)$ , where $\stackrel{¯}{\lambda }$ and $\lambda$ are the dual feasible solutions accompanying the supports ${\stackrel{¯}{J}}_{B}$ and ${J}_{B}$ , respectively.

In this procedure there are two rules to change the support: the rule of the "short step" and the rule of the "long step"; both rules are presented below.

ii.1) For the ${j}_{0}$ found in i.4), compute ${\chi }_{{j}_{0}}={x}_{{j}_{0}}+{l}_{{j}_{0}}$ , and ${\alpha }_{0}={\chi }_{{j}_{0}}-{\stackrel{¯}{x}}_{{j}_{0}}$ .

ii.2) compute the dual direction t with:

$\left\{\begin{array}{l}{t}_{{j}_{0}}=-\mathrm{sign}\left({\alpha }_{0}\right);\\ {t}_{j}=0,j\in {J}_{B}\setminus \left\{{j}_{0}\right\};\\ {t}_{N}^{\text{T}}={t}_{B}^{\text{T}}{\Gamma }_{N}.\end{array}$ (44)

ii.3) compute:

${\sigma }_{j}=\left\{\begin{array}{l}\frac{{\delta }_{j}}{{t}_{j}},\text{ }\text{ }\text{if}\text{\hspace{0.17em}}{\delta }_{j}{t}_{j}>0;\\ 0,\text{ }\text{if}\text{\hspace{0.17em}}{\delta }_{j}=0,{t}_{j}<0,{\xi }_{j}={d}_{j}^{-};\\ 0,\text{ }\text{if}\text{\hspace{0.17em}}{\delta }_{j}=0,{t}_{j}>0,{\xi }_{j}={d}_{j}^{+};\\ \infty ,\text{ }\text{in other cases}.\end{array}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(j\in {J}_{N}\right)$ (45)

and arrange the obtained values in increasing order as:

${\sigma }_{{j}_{1}}\le {\sigma }_{{j}_{2}}\le \cdots \le {\sigma }_{{j}_{p}},{j}_{k}\in {J}_{N},{\sigma }_{{j}_{k}}\ne \infty ,k\in \left\{1,\cdots ,p\right\}.$ (46)
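Steps ii.2) and ii.3) can also be sketched (illustrative Python with 0-based indices; for brevity only the case ${\delta }_{j}{t}_{j}>0$ of (45) is implemented, the degenerate cases with ${\delta }_{j}=0$ being omitted):

```python
import numpy as np

def dual_direction(Gamma, n, JB, j00, alpha0):
    """(44): t_{j0} = -sign(alpha0), t_j = 0 on the other basic indices,
    and t_N^T = t_B^T Gamma_N, i.e. t_{j0} times row j00 of Gamma on J_N."""
    t = np.zeros(n)
    t_j0 = -np.sign(alpha0)
    JN = [j for j in range(n) if j not in JB]
    t[JB[j00]] = t_j0
    t[JN] = t_j0 * Gamma[j00, JN]
    return t

def dual_ratio_test(delta, t, JN):
    """(45) and (47), simplified sketch: sigma_j = delta_j / t_j when
    delta_j * t_j > 0, infinity otherwise; return the minimum and its index."""
    sigma0, jstar = np.inf, None
    for j in JN:
        if delta[j] * t[j] > 0:
            s = delta[j] / t[j]
            if s < sigma0:
                sigma0, jstar = s, j
    return sigma0, jstar
```

On iteration 1 of the example in Section 6, this yields ${t}^{\text{T}}=\left(1/8,1/8,0,1,0\right)$ and ${\sigma }^{0}=520$ .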

As was said before, the change of support can be done with two different rules: to use the "short step rule" go to a), and to use the "long step rule" go to b).

a) Change of the support with the “short step rule”:

a.1) From (46) put:

${\sigma }^{0}={\sigma }_{{j}_{1}}={\sigma }_{{j}^{*}}=\underset{j\in {J}_{N}}{\mathrm{min}}{\sigma }_{j}.$ (47)

Let ${j}_{**}$ be such that ${J}_{N}\left({j}_{**}\right)={j}^{*}$ , i.e., ${j}_{**}$ is the position of ${j}^{*}$ in the vector of indices ${J}_{N}$ .

Put: ${\stackrel{¯}{J}}_{B}=\left({J}_{B}\setminus \left\{{j}_{0}\right\}\right)\cup \left\{{j}^{*}\right\}$ , ${\stackrel{¯}{J}}_{N}=\left({J}_{N}\setminus \left\{{j}^{*}\right\}\right)\cup \left\{{j}_{0}\right\}$ , then ${\stackrel{¯}{J}}_{B}\left({j}_{00}\right)={j}^{*}$ , ${\stackrel{¯}{J}}_{N}\left({j}_{**}\right)={j}_{0}$ .

a.2) Compute:

$\begin{array}{c}\stackrel{¯}{\delta }=\delta -{\sigma }^{0}t,\\ \beta \left(\stackrel{¯}{x},{\stackrel{¯}{J}}_{B}\right)=\beta \left(\stackrel{¯}{x},{J}_{B}\right)+{\sigma }^{0}{\alpha }_{0}\end{array}$ (48)

go to ii.4).

b) Change of the support with the “long step rule”:

b.1) for every ${j}_{k},k\in \left\{1,\cdots ,p\right\}$ of (46), compute

$\Delta {\alpha }_{{j}_{k}}=|{\Gamma }_{{j}_{00}{j}_{k}}|\left({d}_{{j}_{k}}^{+}-{d}_{{j}_{k}}^{-}\right)$ (49)

b.2) as ${j}^{*}$ we choose ${j}_{q}$ such that

${\alpha }_{{j}_{q-1}}=\left[{\alpha }_{0}+\underset{k=1}{\overset{k=q-1}{\sum }}\Delta {\alpha }_{{j}_{k}}\right]<0,{\alpha }_{{j}_{q}}=\left[{\alpha }_{0}+\underset{k=1}{\overset{k=q}{\sum }}\text{ }\Delta {\alpha }_{{j}_{k}}\right]\ge 0$ (50)

Let ${j}_{**}$ be such that ${J}_{N}\left({j}_{**}\right)={j}^{*}$ , i.e., ${j}_{**}$ is the position of ${j}^{*}$ in the vector of indices ${J}_{N}$ .

b.3) put: ${\sigma }^{0}={\sigma }_{{j}^{*}}$ , and ${\stackrel{¯}{J}}_{B}=\left({J}_{B}\setminus \left\{{j}_{0}\right\}\right)\cup \left\{{j}^{*}\right\}$ , ${\stackrel{¯}{J}}_{N}=\left({J}_{N}\setminus \left\{{j}^{*}\right\}\right)\cup \left\{{j}_{0}\right\}$ , then ${\stackrel{¯}{J}}_{B}\left({j}_{00}\right)={j}^{*}$ , ${\stackrel{¯}{J}}_{N}\left({j}_{**}\right)={j}_{0}$ .

b.4) Compute:

$\stackrel{¯}{\delta }=\delta -{\sigma }^{0}t$ .

$\beta \left(\stackrel{¯}{x},{\stackrel{¯}{J}}_{B}\right)=\beta \left(\stackrel{¯}{x},{J}_{B}\right)+\underset{k=1}{\overset{k=q}{\sum }}\text{ }{\alpha }_{{j}_{k-1}}\left({\sigma }_{{j}_{k}}-{\sigma }_{{j}_{k-1}}\right)$ where: ${\alpha }_{{j}_{0}}={\alpha }_{0}$ , ${\sigma }_{{j}_{0}}=0$ .

Go to ii.4).

ii.4) Compute $\stackrel{¯}{\Gamma }$ as follows:

$\left\{\begin{array}{l}{\stackrel{¯}{\Gamma }}_{{j}_{00}j}=\frac{{\Gamma }_{{j}_{00}j}}{{\Gamma }_{{j}_{00}{j}^{*}}},j\in J;\\ {\stackrel{¯}{\Gamma }}_{ij}={\Gamma }_{ij}-{\Gamma }_{i{j}^{*}}{\stackrel{¯}{\Gamma }}_{{j}_{00}j},i\in I\setminus \left\{{j}_{00}\right\},j\in J.\end{array}$ (51)

${\Gamma }_{{j}_{00}{j}^{*}}$ is “the pivot”. Go to ii.5).
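The update (51) is exactly a Gauss-Jordan pivot on the entry ${\Gamma }_{{j}_{00}{j}^{*}}$ ; a minimal sketch (illustrative Python, 0-based indices):

```python
import numpy as np

def pivot_update(Gamma, j00, jstar):
    """(51): divide row j00 by the pivot Gamma[j00, jstar], then
    eliminate column jstar from every other row."""
    G = np.asarray(Gamma, dtype=float).copy()
    G[j00, :] /= G[j00, jstar]          # pivot row
    for i in range(G.shape[0]):
        if i != j00:
            G[i, :] -= G[i, jstar] * G[j00, :]
    return G
```

This is the step that replaces the computation of the inverse of the basic matrix in the original Adaptive Method.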

ii.5) Set: $x=\stackrel{¯}{x}$ , ${J}_{B}={\stackrel{¯}{J}}_{B}$ , $\beta \left(x,{J}_{B}\right)=\beta \left(\stackrel{¯}{x},{\stackrel{¯}{J}}_{B}\right)$ , $\delta =\stackrel{¯}{\delta }$ , and $\Gamma =\stackrel{¯}{\Gamma }$ . Go to i.2).

5. The Pivot Adaptive Method

Let $\left\{x,{J}_{B}\right\}$ be an initial SF-solution for the problem (P), let $\Gamma$ be the matrix computed by (9), and let $ϵ\ge 0$ be given.

The "Pivot Adaptive Method" (with both the "short step" rule and the "long step" rule presented) is summarized in the following steps:

Algorithm 1 (The Pivot Adaptive Method)

1) compute the support gradient ${\delta }^{\text{T}}$ by (10), and $\beta \left(x,{J}_{B}\right)$ by (27).

2) if $\beta \left(x,{J}_{B}\right)\le ϵ$ , then STOP with: x is an $ϵ$-optimal feasible solution for (P).

3) if $\beta \left(x,{J}_{B}\right)>ϵ$ , then compute the non-support components ${\chi }_{N}$ of the primal pseudo-feasible solution accompanying the support ${J}_{B}$ by (12), and the search direction l with (41).

4) find the primal step length ${\theta }^{0}$ with (42). Let ${j}_{00}$ be such that ${J}_{B}\left({j}_{00}\right)={j}_{0}$ , i.e., ${j}_{00}$ is the position of ${j}_{0}$ in the vector of indices ${J}_{B}$ .

5) Compute the new feasible solution: $\stackrel{¯}{x}=x+{\theta }^{0}l$ , and $F\left(\stackrel{¯}{x}\right)=F\left(x\right)+{\theta }^{0}\beta \left(x,{J}_{B}\right)$ .

6) if ${\theta }^{0}=1$ , then STOP with: $\left\{\stackrel{¯}{x},{J}_{B}\right\}$ is the optimal SF-solution for (P).

7) if ${\theta }^{0}<1$ , then compute $\beta \left(\stackrel{¯}{x},{J}_{B}\right)=\left(1-{\theta }^{0}\right)\beta \left(x,{J}_{B}\right)$ .

8) if $\beta \left(\stackrel{¯}{x},{J}_{B}\right)\le ϵ$ , then STOP with: $\left\{\stackrel{¯}{x},{J}_{B}\right\}$ is an $ϵ$-optimal SF-solution for (P).

9) if $\beta \left(\stackrel{¯}{x},{J}_{B}\right)>ϵ$ , then change the support ${J}_{B}$ to ${\stackrel{¯}{J}}_{B}$ .

10) compute ${\chi }_{{j}_{0}}={x}_{{j}_{0}}+{l}_{{j}_{0}}$ , and: ${\alpha }_{0}={\chi }_{{j}_{0}}-{\stackrel{¯}{x}}_{{j}_{0}}$ .

11) compute the dual direction t with (44).

12) compute ${\sigma }_{j}$ , for $j\in {J}_{N}$ with (45), and arrange the obtained values in increasing order.

13) do the change of support by one of the rules:

a) Change of the support with the “short step rule” as follows:

a.1) compute ${\sigma }^{0}$ with (47), and set: ${\stackrel{¯}{J}}_{B}=\left({J}_{B}\setminus \left\{{j}_{0}\right\}\right)\cup \left\{{j}^{*}\right\}$ , ${\stackrel{¯}{J}}_{N}=\left({J}_{N}\setminus \left\{{j}^{*}\right\}\right)\cup \left\{{j}_{0}\right\}$ .

a.2) compute $\beta \left(\stackrel{¯}{x},{\stackrel{¯}{J}}_{B}\right)=\beta \left(\stackrel{¯}{x},{J}_{B}\right)+{\sigma }^{0}{\alpha }_{0}$ . Go to step 14).

b) Change of the support with the “long step rule” as follows:

b.1) for every ${j}_{k},k\in \left\{1,\cdots ,p\right\}$ of (46) compute $\Delta {\alpha }_{{j}_{k}}=|{\Gamma }_{{j}_{00}{j}_{k}}|\left({d}_{{j}_{k}}^{+}-{d}_{{j}_{k}}^{-}\right)$ .

b.2) as ${j}^{*}$ we choose ${j}_{q}$ such that (50) is verified.

b.3) set: ${\sigma }^{0}={\sigma }_{{j}^{*}}$ , and ${\stackrel{¯}{J}}_{B}=\left({J}_{B}\setminus \left\{{j}_{0}\right\}\right)\cup \left\{{j}^{*}\right\}$ , ${\stackrel{¯}{J}}_{N}=\left({J}_{N}\setminus \left\{{j}^{*}\right\}\right)\cup \left\{{j}_{0}\right\}$ .

b.4) compute $\beta \left(\stackrel{¯}{x},{\stackrel{¯}{J}}_{B}\right)=\beta \left(\stackrel{¯}{x},{J}_{B}\right)+\underset{k=1}{\overset{k=q}{\sum }}\text{ }{\alpha }_{{j}_{k-1}}\left({\sigma }_{{j}_{k}}-{\sigma }_{{j}_{k-1}}\right)$ where: ${\alpha }_{{j}_{0}}={\alpha }_{0}$ , ${\sigma }_{{j}_{0}}=0$ . Go to step 14).

14) compute $\stackrel{¯}{\Gamma }$ with (51) and $\stackrel{¯}{\delta }=\delta -{\sigma }^{0}t$ .

15) set: $x=\stackrel{¯}{x}$ , ${J}_{B}={\stackrel{¯}{J}}_{B}$ , $\beta \left(x,{J}_{B}\right)=\beta \left(\stackrel{¯}{x},{\stackrel{¯}{J}}_{B}\right)$ , $\delta =\stackrel{¯}{\delta }$ , and $\Gamma =\stackrel{¯}{\Gamma }$ . Go to step 2).

6. Example

In this section, a linear problem is solved with the Pivot Adaptive Method using the "short step rule"; in parallel, we explain how to carry out these calculations as a sequence of tables, as given in Figure 1.

Consider the following linear problem with bounded variables:

$\left(P\right)\text{ }\left\{\begin{array}{l}maxF\left(x\right)={c}^{\text{T}}x\hfill \\ Ax=b\hfill \\ {d}^{-}\le x\le {d}^{+}.\hfill \end{array}$ (52)

where: ${c}^{\text{T}}=\left(65,115,0,0,0\right)$ , $x={\left({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5}\right)}^{\text{T}}$ , $A=\left(\begin{array}{ccccc}5/2& 15/2& 1& 0& 0\\ 1/8& 1/8& 0& 1& 0\\ 35/2& 10& 0& 0& 1\end{array}\right)$ , $b={\left(240,5,595\right)}^{\text{T}}$ , ${d}^{-}={0}_{{R}^{5}}$ , ${d}^{+}={\left(34,34,240,5,595\right)}^{\text{T}}$ .

Put $J=\left\{1,2,3,4,5\right\}$ , $ϵ={10}^{-3}$ .

First phase (Initialization):

Let $\left\{{x}^{1},{J}_{B}^{1}\right\}$ be the initial SF-solution of the problem (P), where ${x}^{1}={\left(11,27,10,1/4,265/2\right)}^{\text{T}}$ , ${J}_{B}^{1}=\left\{3,4,5\right\}$ , so ${J}_{N}^{1}=\left\{1,2\right\}$ , ${A}_{B}^{1}=\left({a}_{3},{a}_{4},{a}_{5}\right)={I}_{3}$ , ${A}_{N}^{1}=\left({a}_{1},{a}_{2}\right)$ . The matrix ${\Gamma }_{1}=\left({\left({\Gamma }_{1}\right)}_{B},{\left({\Gamma }_{1}\right)}_{N}\right)$ is then given by: ${\left({\Gamma }_{1}\right)}_{B}={I}_{3}$ , ${\left({\Gamma }_{1}\right)}_{N}=\left(\begin{array}{cc}5/2& 15/2\\ 1/8& 1/8\\ 35/2& 10\end{array}\right)$ .

Note that for the initial support we have chosen the canonical support, and for the initial feasible solution we took an interior point of the feasible region of the problem (P).
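The data of this example and the feasibility of the chosen starting point can be verified numerically (a quick illustrative check in Python):

```python
import numpy as np

# Data of problem (52).
A = np.array([[2.5, 7.5, 1.0, 0.0, 0.0],
              [0.125, 0.125, 0.0, 1.0, 0.0],
              [17.5, 10.0, 0.0, 0.0, 1.0]])
b = np.array([240.0, 5.0, 595.0])
d_plus = np.array([34.0, 34.0, 240.0, 5.0, 595.0])

# Initial feasible solution x^1 = (11, 27, 10, 1/4, 265/2).
x1 = np.array([11.0, 27.0, 10.0, 0.25, 132.5])

assert np.allclose(A @ x1, b)                    # A x^1 = b
assert (x1 >= 0).all() and (x1 <= d_plus).all()  # d^- <= x^1 <= d^+
```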

・ In the preamble part of the tables (Figure 1), which has four rows, we put sequentially: ${c}^{\text{T}}$ , ${\left({d}^{-}\right)}^{\text{T}}$ , ${\left({d}^{+}\right)}^{\text{T}}$ , ${x}^{1}$ . The tables have 8 columns.

Second phase:

Starting with the SF-solution $\left\{{x}^{1},{J}_{B}^{1}\right\}$ and the matrix ${\Gamma }_{1}$ , we solve the problem (P) with the PAM using the short step rule.

・ The first table of the PAM (Figure 1), which represents the first iteration, has 8 rows: the first 3 rows contain the matrix ${\Gamma }_{1}$ (with the vector formed by ${A}_{B}^{1}$ indicated in the first column of the tables), and the remaining rows contain sequentially: δ, l, θ, ${x}^{2}$ , σ.

The support gradient ${\delta }_{1}^{\text{T}}=\left({\left({\delta }_{1}^{\text{T}}\right)}_{B},{\left({\delta }_{1}^{\text{T}}\right)}_{N}\right)$ where: ${\left({\delta }_{1}^{\text{T}}\right)}_{B}={0}_{{R}^{3}}$ , ${\left({\delta }_{1}^{\text{T}}\right)}_{N}=\left(65,115\right)$ , and $\beta \left({x}^{1},{J}_{B}^{1}\right)=2300>ϵ$ .

1) Iteration 1:

・ Change solution: we have ${\left({\chi }_{1}\right)}_{N}=\left(34,34\right)$ , ${\left({l}_{1}\right)}_{N}=\left(23,7\right)$ , ${\left({l}_{1}\right)}_{B}=\left(-110,-15/4,-945/2\right)$ , ${\theta }^{0}={\theta }_{4}=1/15$ , and $\left({J}_{B}^{1}\right)\left(2\right)=4$ (4 is the second component of the vector of indices ${J}_{B}^{1}$ ).

・ The cell of ${\theta }^{0}$ is marked in yellow in the tables (Figure 1); it corresponds to the second vector of ${J}_{B}^{1}$ , which is ${a}_{4}$ (the vector that will leave the basis).

Since ${\theta }^{0}<1$ , the new feasible solution is ${x}^{2}=\left(188/15,412/15,40/15,0,101\right)$ , and $\beta \left({x}^{2},{J}_{B}^{1}\right)=6440/3>ϵ$ .

・ Change support (with the short step rule):

${\chi }_{4}=-7/2$ , so ${\alpha }_{0}=-7/2$ ; the dual direction is ${t}^{\text{T}}=\left(1/8,1/8,0,1,0\right)$ , ${\sigma }^{0}={\sigma }_{1}=520$ , and $\left({J}_{N}^{1}\right)\left(1\right)=1$ (1 is the first component of the vector of indices ${J}_{N}^{1}$ ).

・ The cell of ${\sigma }^{0}$ is marked in green in the tables (Figure 1); its row corresponds to the vector ${a}_{1}$ , which will enter the basis.

So ${J}_{B}^{2}=\left({J}_{B}^{1}\setminus \left\{4\right\}\right)\cup \left\{1\right\}=\left\{3,1,5\right\}$ , ${J}_{N}^{2}=\left({J}_{N}^{1}\setminus \left\{1\right\}\right)\cup \left\{4\right\}=\left\{4,2\right\}$ .

Figure 1. Tables of the pivot adaptive method.

・ To obtain the 2nd table (Figure 1), we apply the pivoting rule, where the pivot is marked in red. The new support gradient is ${\delta }_{2}^{\text{T}}={\delta }_{1}^{\text{T}}-{\sigma }^{0}t=\left({\left({\delta }_{2}^{\text{T}}\right)}_{B},{\left({\delta }_{2}^{\text{T}}\right)}_{N}\right)$ where: ${\left({\delta }_{2}^{\text{T}}\right)}_{B}={0}_{{R}^{3}}$ , ${\left({\delta }_{2}^{\text{T}}\right)}_{N}=\left(-520,50\right)$ , and $\beta \left({x}^{2},{J}_{B}^{2}\right)=\beta \left({x}^{2},{J}_{B}^{1}\right)+{\sigma }^{0}{\alpha }_{0}=980/3$ .

As $\beta \left({x}^{2},{J}_{B}^{2}\right)>ϵ$ , we go to another iteration; we then compute ${\Gamma }_{2}=\left({\left({\Gamma }_{2}\right)}_{B},{\left({\Gamma }_{2}\right)}_{N}\right)$ where:

${\left({\Gamma }_{2}\right)}_{B}={I}_{3}$ , ${\left({\Gamma }_{2}\right)}_{N}=\left(\begin{array}{cc}-20& 5\\ 8& 1\\ -140& -15/2\end{array}\right)$ .

2) Iteration 2:

・ Change solution: we have ${\chi }_{N}=\left(0,34\right)$ , ${\left({l}_{2}\right)}_{N}=\left(0,98/15\right)$ , ${\left({l}_{2}\right)}_{B}=\left(-98/3,-98/15,49\right)$ , ${\theta }^{0}={\theta }_{3}=4/49$ , and $\left({J}_{B}^{2}\right)\left(1\right)=3$ (3 is the first component of the vector of indices ${J}_{B}^{2}$ ).

Since ${\theta }^{0}<1$ , the new feasible solution is ${x}^{3}=\left(12,28,0,0,105\right)$ , and $\beta \left({x}^{3},{J}_{B}^{2}\right)=300>ϵ$ .

・ Change support (with the short step rule): ${\chi }_{3}=-30$ , so ${\alpha }_{0}=-30$ ; the dual direction is ${t}^{\text{T}}=\left(0,5,1,-20,0\right)$ , ${\sigma }^{0}={\sigma }_{2}=10$ , and $\left({J}_{N}^{2}\right)\left(2\right)=2$ (2 is the second component of the vector of indices ${J}_{N}^{2}$ ).

So ${J}_{B}^{3}=\left({J}_{B}^{2}\setminus \left\{3\right\}\right)\cup \left\{2\right\}=\left\{2,1,5\right\}$ , ${J}_{N}^{3}=\left({J}_{N}^{2}\setminus \left\{2\right\}\right)\cup \left\{3\right\}=\left\{4,3\right\}$ .

The new support gradient ${\delta }_{3}^{\text{T}}={\delta }_{2}^{\text{T}}-{\sigma }^{0}t=\left({\left({\delta }_{3}^{\text{T}}\right)}_{B},{\left({\delta }_{3}^{\text{T}}\right)}_{N}\right)$ where: ${\left({\delta }_{3}^{\text{T}}\right)}_{B}={0}_{{R}^{3}}$ , ${\left({\delta }_{3}^{\text{T}}\right)}_{N}=\left(10,320\right)$ , and $\beta \left({x}^{3},{J}_{B}^{3}\right)=\beta \left({x}^{3},{J}_{B}^{2}\right)+{\sigma }^{0}{\alpha }_{0}=0$ .

Then $\left\{{x}^{3},{J}_{B}^{3}\right\}$ is the optimal SF-solution of the problem (P), and $F\left({x}^{3}\right)=4000$ .
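The optimality claim can be checked directly: ${x}^{3}$ satisfies the constraints and attains the value 4000 (an illustrative Python verification):

```python
import numpy as np

# Data of problem (52).
A = np.array([[2.5, 7.5, 1.0, 0.0, 0.0],
              [0.125, 0.125, 0.0, 1.0, 0.0],
              [17.5, 10.0, 0.0, 0.0, 1.0]])
b = np.array([240.0, 5.0, 595.0])
c = np.array([65.0, 115.0, 0.0, 0.0, 0.0])

# Optimal solution found by the PAM.
x3 = np.array([12.0, 28.0, 0.0, 0.0, 105.0])

assert np.allclose(A @ x3, b)       # x^3 is feasible
assert np.isclose(c @ x3, 4000.0)   # F(x^3) = 4000
```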

7. Brief Numerical Comparison between the PAM and the Simplex Method

To carry out a numerical comparison between the Simplex algorithm implemented in the function "linprog" of MATLAB version 7.14.0.739 (R2012a) and our algorithm (PAM), an implementation of the latter with the short step rule has been developed.

The comparison is carried out on the resolution of the Klee-Minty problem, which has the following form:

$\left({P}_{3}\right)\text{ }\left\{\begin{array}{l}\mathrm{max}\text{\hspace{0.17em}}{2}^{n-1}{x}_{1}+{2}^{n-2}{x}_{2}+\dots +2{x}_{n-1}+{x}_{n}\hfill \\ {x}_{1}\le 5\hfill \\ 4{x}_{1}+{x}_{2}\le 25\hfill \\ 8{x}_{1}+4{x}_{2}+{x}_{3}\le 125\hfill \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}⋮\hfill \\ {2}^{n}{x}_{1}+{2}^{n-1}{x}_{2}+\cdots +4{x}_{n-1}+{x}_{n}\le {5}^{n}\hfill \\ x\ge 0\hfill \end{array}$ (53)

where $x=\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$ . (P3) has n variables, n constraints, and ${2}^{n}$ vertices.

For a fixed n, to write (P3) in the form of the problem (P) given in (1), we added to each of the n constraints of (P3) a slack variable, and we determined, for each of the 2n variables of the resulting problem, a lower bound and an upper bound. The problem (P3) can then be written as a linear problem with bounded variables as follows:

$\left({P}_{4}\right)\text{ }\left\{\begin{array}{l}\mathrm{max}\text{\hspace{0.17em}}{2}^{n-1}{x}_{1}+{2}^{n-2}{x}_{2}+\cdots +2{x}_{n-1}+{x}_{n}\hfill \\ {x}_{1}+{x}_{n+1}=5\hfill \\ 4{x}_{1}+{x}_{2}+{x}_{n+2}=25\hfill \\ 8{x}_{1}+4{x}_{2}+{x}_{3}+{x}_{n+3}=125\hfill \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}⋮\hfill \\ {2}^{n}{x}_{1}+{2}^{n-1}{x}_{2}+\cdots +4{x}_{n-1}+{x}_{n}+{x}_{2n}={5}^{n}\hfill \\ 0\le {x}_{1}\le 5;0\le {x}_{2}\le 25;\cdots ;0\le {x}_{n}\le {5}^{n};\hfill \\ 0\le {x}_{n+1}\le 5;0\le {x}_{n+2}\le 25;\cdots ;0\le {x}_{2n}\le {5}^{n}.\hfill \end{array}$ (54)
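The construction of (P4) from (P3) is mechanical; the helper below (illustrative Python; the function name is ours) builds c, A, b and the bounds for a given n:

```python
import numpy as np

def klee_minty_bounded(n):
    """Build (P4): the Klee-Minty problem with n slack variables appended.

    Returns (c, A, b, d_minus, d_plus), with A of size n x 2n and
    bounds 0 <= x <= d_plus for both original and slack variables.
    """
    A = np.zeros((n, 2 * n))
    b = np.array([5.0 ** (i + 1) for i in range(n)])
    for i in range(n):
        for j in range(i):
            A[i, j] = 2.0 ** (i + 1 - j)   # coefficients 2^{i+1}, ..., 4
        A[i, i] = 1.0
        A[i, n + i] = 1.0                  # slack variable of constraint i
    c = np.concatenate([[2.0 ** (n - 1 - j) for j in range(n)], np.zeros(n)])
    d_minus = np.zeros(2 * n)
    d_plus = np.concatenate([b, b])        # 0 <= x_j <= 5^j, same for slacks
    return c, A, b, d_minus, d_plus
```

For n = 3 this reproduces the three constraints of (54), and the known optimal solution $x=\left(0,0,\cdots ,{5}^{n}\right)$ is feasible for the generated data.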

The Simplex algorithm chosen here takes the origin as its starting solution, so in our implementation of the (PAM) we impose the same initial feasible solution. For the initial support we took the canonical one, ${J}_{B}=\left\{n+1,n+2,\cdots ,2n\right\}$ , so that $\Gamma =A$ .

We have considered problems with a matrix A of size $n×n$ , where $n\in \left\{3,5,7,10,12,15,17,20,23,25,27,30,33,35,37,40,42,45,47,50\right\}$ ; for each size, we obtained the number of iterations and the resolution time, given in milliseconds.

These results are reported in Figure 2 and in Figure 3, where "Simplex(it)", "PAM(it)", "Simplex(tm)", and "PAM(tm)" represent, respectively, the number of iterations of the Simplex algorithm, the number of iterations of the Pivot Adaptive Method, the time of the Simplex algorithm, and the time of the Pivot Adaptive Method.

Figure 2. Number of iterations and time for Linprog-Simplex and PAM for Klee-Minty problem.

Figure 3. Time comparison between Linprog-Simplex and PAM for the Klee-Minty problem.

We observe that the (PAM) outperforms the Simplex algorithm both in computation time and in the number of iterations needed to solve the problems. Recall that the optimal solution of the problem (P3) is given by $x=\left(0,0,\cdots {,5}^{n}\right)$ .

8. Conclusions

The main contribution of this article is a new variant of the Adaptive Method that we have called the "Pivot Adaptive Method" (PAM). In this variant, we use the simplex pivoting rule in order to avoid computing the inverse of the basic matrix at each iteration. As shown in this work, the algorithm saves computation time compared to the original algorithm (AM), and allows us to present the resolution of a given problem as a sequence of tables, as seen in the example.

We have implemented our algorithm (PAM), both with the short step rule and with the long step rule, in MATLAB, and we have carried out a brief comparison with the primal simplex algorithm (using linprog-simplex in MATLAB) on the Klee-Minty problem. Indeed, the (PAM) is more efficient both in number of iterations and in computation time.

In a subsequent work, we shall carry out a more thorough comparison between the Simplex algorithm of Dantzig and the (PAM), and we shall extend the (PAM) to large-scale problems using the constraint selection technique.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1) Dantzig, G.B. (1951) Minimization of a Linear Function of Variables Subject to Linear Inequalities. John Wiley, New York.
2) Minoux, M. (1983) Programmation mathématique: Théorie et algorithmes, Tome 1. Dunod.
3) Culioli, J.C. (1994) Introduction à l'optimisation. Ellipses.
4) Dantzig, G.B. (1963) Linear Programming and Extensions. Princeton University Press, Princeton. https://doi.org/10.1515/9781400884179
5) Gabasov, R. and Kirillova, F.M. (1977, 1978, 1980) Method of Linear Programming. Vol. 1, 2 and 3, BGU Press, Minsk. (In Russian)
6) Bland, R.G. (1977) New Finite Pivoting Rules for the Simplex Method. Mathematics of Operations Research, 2, 103-107.
7) Klee, V. and Minty, G.J. (1972) How Good Is the Simplex Algorithm? In: Shisha, O., Ed., Inequalities III, Academic Press, New York, 159-175.
8) Khachiyan, L.G. (1979) A Polynomial Algorithm in Linear Programming. Soviet Mathematics Doklady, 20, 191-194.
9) Karmarkar, N.K. (1984) A New Polynomial-Time Algorithm for Linear Programming. Combinatorica, 4, 373-395.
10) Kojima, M., Megiddo, N., Noma, T. and Yoshise, A. (1989) A Unified Approach to Interior Point Algorithms for Linear Programming. In: Megiddo, N., Ed., Progress in Mathematical Programming: Interior Point and Related Methods, Springer-Verlag, New York, 29-47.
11) Ye, Y. (1997) Interior Point Algorithms, Theory and Analysis. John Wiley and Sons, Chichester.
12) Vanderbei, R.J. and Shanno, D.F. (1999) An Interior-Point Algorithm for Nonconvex Nonlinear Programming. Computational Optimization and Applications, 13, 231-252.
13) Peng, J., Roos, C. and Terlaky, T. (2001) A New and Efficient Large-Update Interior-Point Method for Linear Optimization. Journal of Computational Technologies, 6, 61-80.
14) Cafieri, S., D'Apuzzo, M., Marino, M., Mucherino, A. and Toraldo, G. (2006) Interior Point Solver for Large-Scale Quadratic Programming Problems with Bound Constraints. Journal of Optimization Theory and Applications, 129, 55-75.
15) Gabasov, R.F. (1994) Adaptive Method for Solving Linear Programming Problems. University of Bruxelles, Bruxelles.
16) Gabasov, R.F. (1994) Adaptive Method for Solving Linear Programming Problems. University of Karlsruhe, Institute of Statistics and Mathematics, Karlsruhe.
17) Gabasov, R., Kirillova, F.M. and Prischepova, S.V. (1995) Optimal Feedback Control. Springer-Verlag, London.
18) Gabasov, R., Kirillova, F.M. and Kostyukova, O.I. (1979) A Method of Solving General Linear Programming Problems. Doklady AN BSSR, 23, 197-200. (In Russian)
19) Kostina, E. (2002) The Long Step Rule in the Bounded-Variable Dual Simplex Method: Numerical Experiments. Mathematical Methods of Operations Research, 55, 413-429.
20) Radjef, S. and Bibi, M.O. (2011) An Effective Generalization of the Direct Support Method. Mathematical Problems in Engineering, 2011, Article ID: 374390.
21) Balashevich, N.V., Gabasov, R. and Kirillova, F.M. (2000) Numerical Methods of Program and Positional Optimization of the Linear Control Systems. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 40, 838-859.
22) Bentobache, M. and Bibi, M.O. (2012) A Two-Phase Support Method for Solving Linear Programs: Numerical Experiments. Mathematical Problems in Engineering, 2012, Article ID: 482193.
23) Bibi, M.O. and Bentobache, M. (2015) A Hybrid Direction Algorithm for Solving Linear Programs. International Journal of Computer Mathematics, 92, 201-216.


Copyright © 2020 by authors and Scientific Research Publishing Inc. This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.