A Modified Lagrange Method for Solving Convex Quadratic Optimization Problems

Twum B. Stephen^{1}, Avoka John^{2}, Christian J. Etwire^{1}

^{1}School of Mathematical Science, C. K. Tedam University of Technology and Applied Sciences, Navrongo, Ghana.

^{2}Department of Mathematics, University for Development Studies, Tamale, Ghana.

**DOI:** 10.4236/ojop.2024.131001

In this paper, a modified version of the Classical Lagrange Multiplier method is developed for convex quadratic optimization problems. The method, which evolves from the first order derivative test for optimality of the Lagrangian function with respect to the primary variables of the problem, decomposes the solution process into two independent stages: the primary variables are solved for first, and the secondary variables, which are the Lagrange multipliers, are solved for afterward. This innovation leads to solving independently two simpler systems of equations, one involving the primary variables only and the other the secondary ones. Solutions obtained for small-sized problems (as a preliminary test of the method) demonstrate that the new method is generally effective in producing the required solutions.

Keywords

Quadratic Programming, Lagrangian Function, Lagrange Multipliers, Optimality Conditions, Subsidiary Equations, Modified Lagrange Method

Share and Cite:

Stephen, T., John, A. and Etwire, C. (2024) A Modified Lagrange Method for Solving Convex Quadratic Optimization Problems. *Open Journal of Optimization*, **13**, 1-20. doi: 10.4236/ojop.2024.131001.

1. Introduction

Whenever a viable decision is made by selecting from a variety of alternatives that can be evaluated by some performance measure of quality, and there are restrictions involving the alternatives, there is an optimization problem. An optimization problem is therefore characterized by a set of decision alternatives called decision variables, at least one performance measure called the objective function (or objective functions), and often restrictions involving the decision variables called constraints [1] . The occurrence of optimization problems is an everyday phenomenon and cuts across diverse disciplines, including engineering, computer science, applied mathematics, economics, and management, to name a few. Interest in the study of optimization has grown immensely from its early days going back to those of Sir Isaac Newton, and was given new impetus in the 1940s by the seminal work of George B. Dantzig [2] in linear optimization. Today it attracts many researchers and practitioners from diverse fields and backgrounds, including engineers, systems analysts, operations researchers, numerical analysts, and management scientists.

Optimization problems are broadly classified as either linear or nonlinear, constrained or unconstrained [3] . Among constrained nonlinear optimization problems is a special subclass known as convex optimization, with convex quadratic optimization problems being a special subset of this subclass. Convex optimization problems in general have gained a lot of interest over the years, with applications discovered in many fields including computer science, robotics and signal processing [4] . Convex quadratic optimization problems, also called convex quadratic programming problems (which are the focus of this study), are characterized by a linear or convex quadratic objective function and linear or convex quadratic constraints. A convex quadratic programming problem is a type of nonlinear programming problem. A typical form of the problem (see [5] ) is:

$\begin{array}{l}\mathrm{min}{c}^{\text{T}}x+\frac{1}{2}{x}^{\text{T}}Qx\\ \text{subject}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}\text{\hspace{0.05em}}Ax=b\end{array}$ (1.1)

where *x* is a vector of *n* decision variables, *c* is an *n*-dimensional constant vector, *Q* and *A* are respectively *n* × *n* symmetric positive semi-definite and *m* × *n* constant matrices, and *b* is an *m*-dimensional constant vector [6] . If the objective function is linear and the constraints are a mix of linear, nonlinear, or convex quadratic equations, the problem becomes:

$\begin{array}{l}\mathrm{min}{c}^{\text{T}}x\\ \text{subject}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}{h}_{i}\left(x\right)={b}_{i},\text{\hspace{0.17em}}i=1,2,\cdots ,m\end{array}$ (1.2)

where ${h}_{i}\left(x\right)$ is linear, or nonlinear convex quadratic, for some $i\in \left\{1,2,\cdots ,m\right\}$ .
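For concreteness, a small instance of problem (1.1) can be set up and evaluated numerically. The data *Q*, *c*, *A*, *b* below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# A small hypothetical instance of problem (1.1):
#   min c^T x + (1/2) x^T Q x   subject to   Ax = b
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])   # symmetric positive semi-definite
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])   # a single linear equality constraint
b = np.array([3.0])

def objective(x):
    # c^T x + (1/2) x^T Q x
    return c @ x + 0.5 * x @ Q @ x

x = np.array([1.0, 2.0])     # a feasible point, since A @ x = b
print(A @ x, objective(x))
```

Any candidate solution method for (1.1) must return a point that is both feasible (`A @ x == b`) and minimizes `objective`.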

1.1. The Classical Lagrange Method

The Lagrange Multiplier Method, referred to also as the *Classical* *Lagrange* *Method* (CLM), is well-known for solving equality constrained nonlinear optimization problems (Hoffman *et al.*, 1992). The main idea of the method is to convert an equality constrained optimization problem into an unconstrained one, by aggregating the objective function and constraint functions into a composite function called the *Lagrangian* *function*, to which the first derivative test for optimality is applied [7] . The Lagrangian function can be seen as constituting a relationship between the objective function and the equality constraints, which simply gives a reformulation of the original problem as an unconstrained one [8] . For convex quadratic programming problems, the objective function and constraint functions are convex and so the Lagrangian function is convex [9] . Therefore, for the sake of this work, we can limit the discussion of the CLM to convex problems only.

Consider a convex optimization problem with equality constraints as:

$\begin{array}{l}\mathrm{min}f\left(x\right)\\ \text{subject}\text{\hspace{0.17em}}\text{to}:{h}_{i}\left(x\right)={b}_{i},\text{\hspace{0.17em}}{b}_{i}\in R,\text{\hspace{0.17em}}x\in {R}^{n},\text{\hspace{0.17em}}i=1,\cdots ,m,\end{array}$

where *f* and ${h}_{i}$ are, respectively, the convex quadratic objective function and the convex *i*^{th} constraint function. By letting ${g}_{i}\left(x\right)={h}_{i}\left(x\right)-{b}_{i}=0$ , the problem may be posed as:

$\begin{array}{l}\mathrm{min}f\left(x\right)\\ \text{subject}\text{\hspace{0.17em}}\text{to}:{g}_{i}\left(x\right)=0,\text{\hspace{0.17em}}x\in {R}^{n},\text{\hspace{0.17em}}{b}_{i}\in R,\text{\hspace{0.17em}}i=1,\cdots ,m.\end{array}$ (1.3)

The problem (1.3) is transformed into an unconstrained Lagrangian function *L* as:

$\mathrm{min}L\left(x,\lambda \right)=f\left(x\right)-{\lambda}^{\text{T}}g\left(x\right)$ (1.4)

where $g\left(x\right)$ is an *m* vector of constraint functions and *λ* is an *m* vector of scalar multipliers (called Lagrange Multipliers).

The first order necessary optimality conditions for the Lagrangian function, which are also sufficient for a minimum solution for convex unconstrained optimization problems [10] are:

${\nabla}_{x}L\left(x,\lambda \right)={\nabla}_{x}f\left(x\right)-{\lambda}^{\text{T}}{\nabla}_{x}g\left(x\right)=0$ (1.5a)

${\nabla}_{\lambda}L\left(x,\lambda \right)=-g\left(x\right)=0$ (1.5b)

Equations (1.5a) and (1.5b) imply that the gradients of the Lagrangian function with respect to *x* and *λ* independently vanish at the minimum solutions ${x}^{*}$ and ${\lambda}^{*}$ . Furthermore, it means that if minimum solutions ${x}^{*}$ and ${\lambda}^{*}$ exist for (1.4), they satisfy (1.5a) and (1.5b). However, for non-convex problems, not all points satisfying the first order necessary optimality conditions would be minimum solutions [10] ; such problems require that sufficient optimality conditions are also satisfied [10] . Observe also that (1.5a) and (1.5b) have *n* + *m* unknown variables (*i.e.*, *n* variables in *x* and *m* variables in *λ*) whose values have to be determined.

To find the minimum values ${x}^{*}$ , ${\lambda}^{*}$ , the *m* + *n* system of equations resulting from (1.5a) and (1.5b) is solved simultaneously in the CLM. The larger *m* and *n* are, the more computational effort and time are required to solve for the minimum values. Therefore, finding a method that reduces the computational effort and time required to find the solutions is interesting and important, since it can have a consequent positive impact on solving many real-world convex (quadratic) problems, which occur in many disciplines [4] . This, indeed, is the motivation behind the pursuit of a Modified Lagrange Method (MLM).

1.2. Some Previous Works

A few related previous works exist in the research area. [11] developed a Lagrange relaxation-based decomposition algorithm for an integrated offshore oil production planning optimization problem; the algorithm was used to provide compact bounds, replacing the large-scale problem with moderate-scale ones and thereby solving several single-batch subproblems simultaneously. An Adaptive Augmented Lagrangian method was developed by [12] for large-scale constrained optimization problems; they used a novel adaptive updating technique for the penalty parameters, motivated by techniques for exact penalty methods. A modified augmented Lagrangian multiplier method was proposed by [13] , who used an outer iteration approach in which the Lagrangian multipliers and various penalty parameters were updated and used in iterations involving constrained problems with simple bounds. Earlier, a class of Augmented Lagrangian methods for constrained optimization problems had been developed (see [14] [15] [16] ) which are similar to the Penalty methods and replace a constrained problem with a series of unconstrained ones, adding a penalty term to the objective function. Komzsik and Chiang [17] investigated the effects of the Lagrange multiplier method in solving large-scale problems in a parallel environment.

In the next section, the philosophy and methodology behind the MLM are presented and discussed. The subsequent section assesses the MLM, in this first instance, in terms of its ability to produce solutions for convex quadratic optimization problems as does the CLM, using selected test problems. The paper is concluded with a section that elucidates the findings of the study and outlines prospects and consequences of the MLM for further research.

2. The Modified Lagrange Method

The Classical Lagrange Method (CLM) is modified in this section with the intention of improving it. The modification centers on the evolution of a new approach, premised on a departure from the classical and conventional thinking about finding the solution of a convex quadratic equality constrained optimization problem. While the CLM solves directly the two systems of Equations (1.5a) and (1.5b) resulting from the application of the first order necessary optimality conditions to (1.4), the Modified Lagrange Method (MLM) avoids this path, and rather uses only one of the two sets of Equations (*i.e.*, (1.5a)) to develop a new approach to solving the problem. This is considered a major innovation that simplifies the solution process. The rest of the section develops the concept behind the MLM.

Consider the convex quadratic optimization problem given by (1.3). A scalar form of the Lagrangian function (1.4) which converts the problem (1.3) into an equivalent unconstrained problem and which we seek to minimize over *x* only, is:

${\mathrm{min}}_{x}L\left(x,\lambda \right)=f\left(x\right)-{\displaystyle {\sum}_{i=1}^{m}{\lambda}_{i}{g}_{i}}\left(x\right),\text{\hspace{0.17em}}x\in {R}^{n},\text{\hspace{0.17em}}{\lambda}_{i}\in R,\text{\hspace{0.17em}}i=1,2,\cdots ,m$ (2.1)

It is clear that if there exists some ${x}^{*}\in {R}^{n}$ for which the second term of Equation (2.1) vanishes for all ${g}_{i}\left(x\right)$ , $i=1,2,\cdots ,m$ , then the constraint equations of (1.3) are satisfied. At such a point, the functions $L\left(x,\lambda \right)$ and $f\left(x\right)$ become coincident, regardless of *λ*, and so ${x}^{*}$ is a critical vector value that minimizes $L\left(x,\lambda \right)$ and therefore $f\left(x\right)$ . In other words, the Lagrangian function is always equal to the objective function at any critical point ${x}^{*}$ for which ${g}_{i}\left({x}^{*}\right)=0$ for all *i*. It turns out, under the CLM, that such a point ${x}^{*}$ is the desired optimal (or minimum) solution of (2.1) or (1.3). This observation provides four (4) important facts that serve as bases for the evolution of the MLM. They are as follows: 1) At ${x}^{*}\in {R}^{n}$ , ${g}_{i}\left({x}^{*}\right)=0$ for all *i*, regardless of *λ*; 2) Fact (1) indicates that we may seek ${x}^{*}$ independently of *λ*; 3) Since ${g}_{i}\left({x}^{*}\right)=0$ identically for all *i* and does not vary from zero at ${x}^{*}$ , it follows that $\frac{\partial {g}_{i}\left({x}^{*}\right)}{\partial {x}_{j}}=0$ for all *j*; and 4) Having found ${x}^{*}$ independently, we may then independently find ${\lambda}^{*}$ , the optimal value of *λ*. These four observations derived from the CLM mean that we can instead find the optimal solutions ${x}^{*},{\lambda}^{*}$ of problem (1.3) and problem (2.1) by two independent processes, which first find ${x}^{*}$ and subsequently find ${\lambda}^{*}$ . This, in essence, is the philosophy or conceptual framework of the new method called the MLM.

By the CLM, the first order necessary optimality conditions (in scalar form) of (2.1), which are also sufficient for the minimum solutions ${x}^{*},{\lambda}^{*}$ , are:

$\frac{\partial f\left(x\right)}{\partial {x}_{j}}-{\displaystyle {\sum}_{i=1}^{m}{\lambda}_{i}\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{j}}}=0,\text{\hspace{0.17em}}j=1,2,\cdots ,n$ (2.2a)

$\frac{\partial L\left(x,\lambda \right)}{\partial {\lambda}_{i}}=-{g}_{i}\left(x\right)=0,\text{\hspace{0.17em}}i=1,2,\cdots ,m$ (2.2b)

It is noted that (2.2a) involves *n* equations in the *n* unknown variables of *x* and the *m* unknown variables of *λ*. Therefore, if we are able to eliminate ${\lambda}_{i}\left(i=1,2,\cdots ,m\right)$ from (2.2a), then we can solve (2.2a) independently of (2.2b). Secondly, we observe on the other hand that (2.2b) cannot be solved independently of (2.2a), since it has *m* equations in the *n* unknown variables of *x*, and for nontrivial cases of problem (1.3), $n>m$ . Thirdly, it is important to note that the solution ${x}^{*}$ which satisfies (2.2a) would not violate (2.2b), since ${g}_{i}\left({x}^{*}\right)=0$ for all $i\in \left\{1,2,\cdots ,m\right\}$ .

2.1. A Novel Process for Finding ${x}^{\ast}$

With the observations made, therefore, we proceed to eliminate ${\lambda}_{i}$ ( $i\in \left\{1,2,\cdots ,m\right\}$ ) from (2.2a) by some means, so that we obtain equations involving only the *n* unknown variables of *x*. Consider the system (2.2a) in the expanded form given by Equation (2.3).

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{j}}-{\lambda}_{1}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{j}}-{\lambda}_{2}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{j}}-\cdots -{\lambda}_{m-1}\frac{\partial {g}_{m-1}\left(x\right)}{\partial {x}_{j}}-{\lambda}_{m}\frac{\partial {g}_{m}\left(x\right)}{\partial {x}_{j}}\\ =0,\text{\hspace{0.17em}}j=1,2,\cdots ,n.\end{array}$ (2.3)

As was noted earlier, ${g}_{i}\left(x\right)$ and $\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{j}}$ both vanish at ${x}^{*}$ for any *i* and *j*, regardless of ${\lambda}_{i}$ . Therefore, ${\lambda}_{i}$ can be viewed as arbitrary for all *i*, as far as finding ${x}^{*}$ from (2.2a), and therefore from (2.3), is concerned. This means we can choose ${\lambda}_{i}$ arbitrarily in our quest to solve (2.2a) independently of (2.2b). By choosing ${\lambda}_{i}\ne 0$ for some $i\in \left\{1,2,\cdots ,m\right\}$ and ${\lambda}_{s}=0$ for all $s\ne i$ , $s\in \left\{1,2,\cdots ,m\right\}$ , we can obtain the result in (2.4).

$\frac{\partial f\left(x\right)}{\partial {x}_{j}}={\lambda}_{i}\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{j}},\text{\hspace{0.17em}}j=1,2,\cdots ,n,\text{\hspace{0.17em}}\forall i\in \left\{1,2,\cdots ,m\right\}$ (2.4)

From (2.4), we can eliminate ${\lambda}_{i}$ by taking ratios of the *j*^{th} and the ${\left(j+1\right)}^{\text{th}}$ equations $\left(j=1,2,\cdots ,n-1\right)$ , leading to:

$\begin{array}{l}\frac{\partial f\left(x\right)/\partial {x}_{1}}{\partial f\left(x\right)/\partial {x}_{2}}=\frac{\partial {g}_{i}\left(x\right)/\partial {x}_{1}}{\partial {g}_{i}\left(x\right)/\partial {x}_{2}},\frac{\partial f\left(x\right)/\partial {x}_{2}}{\partial f\left(x\right)/\partial {x}_{3}}=\frac{\partial {g}_{i}\left(x\right)/\partial {x}_{2}}{\partial {g}_{i}\left(x\right)/\partial {x}_{3}},\cdots ,\\ \frac{\partial f\left(x\right)/\partial {x}_{n-1}}{\partial f\left(x\right)/\partial {x}_{n}}=\frac{\partial {g}_{i}\left(x\right)/\partial {x}_{n-1}}{\partial {g}_{i}\left(x\right)/\partial {x}_{n}}\end{array}$

which is generalized as (2.5):

$\frac{\partial f\left(x\right)/\partial {x}_{j}}{\partial f\left(x\right)/\partial {x}_{j+1}}=\frac{\partial {g}_{i}\left(x\right)/\partial {x}_{j}}{\partial {g}_{i}\left(x\right)/\partial {x}_{j+1}},j=1,2,\cdots ,n-1,\text{\hspace{0.17em}}\forall i\in \left\{1,2,\cdots ,m\right\}$ (2.5)

The result in (2.5) leads to (2.6):

$\frac{\partial f\left(x\right)}{\partial {x}_{j}}\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{j+1}}-\frac{\partial f\left(x\right)}{\partial {x}_{j+1}}\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{j}}=0,\text{\hspace{0.17em}}j=1,2,\cdots ,n-1;\text{\hspace{0.17em}}\forall i\in \left\{1,2,\cdots ,m\right\}$ (2.6)

The result in (2.6) produces, for each *i*, *n* − 1 equations, which we refer to as *Subsidiary* *Constraint* Equations (SCE). The SCE become additional available equations, alongside the *m* original equations ${g}_{i}\left(x\right)=0$ of problem (1.3), for finding ${x}^{*}$ . Together, they provide $m+n-1$ equations in *x* only, and are therefore simpler to solve. They are generally of sufficient number for finding ${x}^{*}$ . Occasionally, through numerical experimental work, the authors have observed that some of the *n* − 1 SCE produced from (2.6) may be redundant equations, and depending on how many are redundant, there may not be a sufficient number of equations for finding ${x}^{*}$ . To deal with this phenomenon, the authors have further observed that, apart from taking ratios consecutively as in (2.5) for any $i\in \left\{1,2,\cdots ,m\right\}$ , the ratios can instead be taken in any arbitrary order, which means that a large number of possible forms of the SCE can be constructed. For the purpose of this work, we produce another ordering of the ratios, leading to other forms of the SCE that appear to avoid the redundancy phenomenon, at least for the current work. The ratios are given by (2.7):

$\begin{array}{l}\frac{\partial f\left(x\right)/\partial {x}_{j}}{\partial f\left(x\right)/\partial {x}_{j+1}}=\frac{\partial {g}_{i}\left(x\right)/\partial {x}_{j}}{\partial {g}_{i}\left(x\right)/\partial {x}_{j+1}},\frac{\partial f\left(x\right)/\partial {x}_{j}}{\partial f\left(x\right)/\partial {x}_{j+2}}=\frac{\partial {g}_{i}\left(x\right)/\partial {x}_{j}}{\partial {g}_{i}\left(x\right)/\partial {x}_{j+2}},\cdots ,\\ \frac{\partial f\left(x\right)/\partial {x}_{j}}{\partial f\left(x\right)/\partial {x}_{n}}=\frac{\partial {g}_{i}\left(x\right)/\partial {x}_{j}}{\partial {g}_{i}\left(x\right)/\partial {x}_{n}}\end{array}$ (2.7)

where $j\in \left\{1,2,\cdots ,n-1\right\}$ , $i\in \left\{1,2,\cdots ,m\right\}$ . The result in (2.7) is generalized as (2.8).

$\frac{\partial f\left(x\right)/\partial {x}_{j}}{\partial f\left(x\right)/\partial {x}_{j+s}}=\frac{\partial {g}_{i}\left(x\right)/\partial {x}_{j}}{\partial {g}_{i}\left(x\right)/\partial {x}_{j+s}},\text{\hspace{0.17em}}j\in \left\{1,2,\cdots ,n-1\right\},\text{\hspace{0.17em}}s=1,2,\cdots ,n-j\text{\hspace{0.17em}}\text{\hspace{0.05em}}\forall i$ (2.8)

It follows from (2.8) that:

$\frac{\partial f\left(x\right)}{\partial {x}_{j}}\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{j+s}}-\frac{\partial f\left(x\right)}{\partial {x}_{j+s}}\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{j}}=0,\text{\hspace{0.17em}}j\in \left\{1,2,\cdots ,n-1\right\},\text{\hspace{0.17em}}s=1,2,\cdots ,n-j\text{\hspace{0.17em}}\text{\hspace{0.05em}}\forall i$ (2.9)

The authors have observed further (through numerical experimental work) that taking aggregates of the set of SCE obtained from (2.9) is even more effective at avoiding the redundancy phenomenon. Therefore, summing from the *k*^{th} SCE of (2.9) ( $k=j\ge 1$ ), involving the equations $k,k+1,\cdots ,n-1,n$ , we obtain:

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{k}}\left(\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{k+1}}+\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{k+2}}+\cdots +\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{n}}\right)\\ -\left(\frac{\partial f\left(x\right)}{\partial {x}_{k+1}}+\frac{\partial f\left(x\right)}{\partial {x}_{k+2}}+\cdots +\frac{\partial f\left(x\right)}{\partial {x}_{n}}\right)\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{k}}=0\end{array}$

which is the same as:

$\frac{\partial f\left(x\right)}{\partial {x}_{k}}{\displaystyle {\sum}_{s=k+1}^{n}\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{s}}}-\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{k}}{\displaystyle {\sum}_{s=k+1}^{n}\frac{\partial f\left(x\right)}{\partial {x}_{s}}}=0,\text{\hspace{0.17em}}1\le k\le n-2$ (2.10)

It is noted that various sums, involving two (2) or more equations, may be obtained from (2.9) or (2.10), indicating further the variety of the forms of aggregates of the SCE that may be created from (2.9).
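The construction of the SCE in (2.6) can be carried out mechanically with a computer algebra system. The sketch below applies it to a small hypothetical convex problem (illustrative data, not one of the paper's test problems), using the SymPy library:

```python
import sympy as sp

# Hypothetical convex problem:
#   min x1^2 + x2^2 + x3^2   subject to   x1 + x2 + x3 - 3 = 0
x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = [x1, x2, x3]
f = x1**2 + x2**2 + x3**2
g = x1 + x2 + x3 - 3

# SCE as in (2.6): consecutive cross-products of partial derivatives,
# df/dx_j * dg/dx_{j+1} - df/dx_{j+1} * dg/dx_j = 0, j = 1..n-1
sce = [sp.diff(f, xs[j]) * sp.diff(g, xs[j + 1])
       - sp.diff(f, xs[j + 1]) * sp.diff(g, xs[j])
       for j in range(len(xs) - 1)]

# Solve the n-1 SCE together with the m original constraints, in x only
sol = sp.solve(sce + [g], xs, dict=True)
print(sol)
```

Here the SCE reduce to $2x_1 - 2x_2 = 0$ and $2x_2 - 2x_3 = 0$, which, with the constraint, give the minimizer without any reference to λ.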

The following observations are made about the SCE in general: 1) They are devoid of the unknown parameters ${\lambda}_{i}$ and produce equations in the unknown variables of *x* only; 2) They are derived from the first order necessary optimality condition given by (2.2a) only, which involves derivatives that are easy to compute, due to the convex nature of the functions; and 3) They are derived under the implicit assumption of the existence of the minimum solution; therefore, they embody the minimum point ${x}^{*}$ . The resulting equations, in conjunction with the original constraint equations of the problem, can therefore be used independently to find the minimum solution ${x}^{*}$ . The corresponding minimum objective function value $f\left({x}^{*}\right)$ then follows immediately.

2.2. Finding the Parameter *λ*

The optimal value of *λ* can also be obtained independently, from (2.2a) which is given by:

$\frac{\partial f\left(x\right)}{\partial {x}_{j}}-{\displaystyle {\sum}_{i=1}^{m}{\lambda}_{i}\frac{\partial {g}_{i}\left(x\right)}{\partial {x}_{j}}}=0,\text{\hspace{0.17em}}j=1,2,\cdots ,n$

By obtaining the required derivatives of *f* and ${g}_{i}$ , and evaluating them at ${x}^{*}$ , we obtain *n* equations involving the *m* unknowns ${\lambda}_{i}$ only, $i=1,2,\cdots ,m$ , which can therefore be solved independently for ${\lambda}^{*}$ .
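A minimal numerical sketch of this second stage, on a hypothetical problem (same illustrative example as before: minimize $\sum x_i^2$ subject to $\sum x_i = 3$, with ${x}^{*}=(1,1,1)$): evaluating (2.2a) at ${x}^{*}$ leaves a linear system in *λ* alone, solved here in the least-squares sense since there are *n* equations in *m* < *n* unknowns.

```python
import numpy as np

# At x*, (2.2a) reads: grad f(x*) = J(x*)^T λ, with J the constraint Jacobian.
x_star = np.array([1.0, 1.0, 1.0])
grad_f = 2.0 * x_star                 # ∇f at x* for f = Σ x_i^2
J = np.ones((1, 3))                   # ∇g at x* for g = Σ x_i - 3 (m = 1)

# n equations, m unknowns (n > m): a least-squares solve recovers λ*
lam_star, *_ = np.linalg.lstsq(J.T, grad_f, rcond=None)
print(lam_star)
```

Note that ${x}^{*}$ never needs updating here; the two stages are fully decoupled, which is the point of the MLM.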

2.3. The Solution Steps of the MLM

The solution process of the MLM is characterized by the following five (5) steps:

Step 1: Given problem (1.3), construct the Lagrangian function as in (2.1) and go to Step 2;

Step 2: Construct the first order necessary optimality condition as in (2.2a), and construct (2.4), then go to Step 3;

Step 3: Generate SCE as in (2.6), (2.9) or (2.10) and go to Step 4;

Step 4: Determine whether or not the set of SCE together with the original problem constraints is sufficient to find ${x}^{*}$ . If yes, then solve for ${x}^{*}$ using any suitable numerical algorithm; otherwise, form alternative SCE using other orderings of the ratios or aggregates of equations such as (2.10) and solve for ${x}^{*}$ ; then go to Step 5.

Step 5: Substitute ${x}^{*}$ in (2.2a) and solve for ${\lambda}^{*}$ , then stop.

We conclude the presentation of the MLM with the observation that, unlike the CLM, it does not require the first order necessary optimality condition given by (2.2b) in the solution of a problem (as given by (1.3)). Nevertheless, the solution ${x}^{*}$ obtained would not violate (2.2b). This, in addition to the novel procedures for finding ${x}^{*}$ and ${\lambda}^{*}$ , is a major simplification of the process of solving a given convex quadratic optimization problem. The next section provides numerical results obtained by hand computations on small-sized problems, as a first stage evaluation of the MLM. In a subsequent paper, the new method will be evaluated on fairly large-scale problems and its performance relative to the CLM assessed, to further demonstrate its capabilities as a viable method for solving equality constrained convex quadratic optimization problems.

3. Numerical Results

The new method (MLM) is applied to selected convex quadratic optimization problems to show that it produces the required solutions, as does the CLM. In this first stage of testing the method, we use small-sized problems for which manual (or hand) computation is sufficient to find the solution. In the subsequent paper, where the method will be assessed against the CLM in terms of relative performance on larger-sized problems, the use of software will be necessary.

Seven problems, selected from the literature and for which solutions by the CLM have already been obtained, are solved with the MLM. The solution of each problem by the MLM is obtained for several cases of the SCE generated using (2.6) and (2.10), together with the original problem constraints, to demonstrate the variety of ways of solving the problems using the MLM.

3.1. Problem 1

This problem is extracted from Schittkowski [18] , given by:

$\mathrm{min}f\left(x\right)={\left({x}_{1}-1\right)}^{2}+{\left({x}_{2}-{x}_{3}\right)}^{2}+{\left({x}_{4}-{x}_{5}\right)}^{2}$ (3.1a)

Subject to:

${g}_{1}\left(x\right)={x}_{1}+{x}_{2}+{x}_{3}+{x}_{4}+{x}_{5}-5=0$ (3.1b)

${g}_{2}\left(x\right)={x}_{3}-2\left({x}_{4}+{x}_{5}\right)+3=0$ (3.1c)

The solution by the CLM (see [18] ) is:

${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ , $f\left({x}^{*}\right)=0$ , ${\lambda}_{1}=0$ , ${\lambda}_{2}=0$

Solution by the MLM

Case 1: Using (2.6) with (3.1a) and (3.1b), we obtain the SCE given by (3.1d) to (3.1g).

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}-{x}_{2}+{x}_{3}=1$ (3.1d)

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow {x}_{2}={x}_{3}$ (3.1e)

$\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}-\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}=0\Rightarrow -{x}_{2}+{x}_{3}-{x}_{4}+{x}_{5}=0$ (3.1f)

$\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{5}}-\frac{\partial f\left(x\right)}{\partial {x}_{5}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}=0\Rightarrow {x}_{4}={x}_{5}$ (3.1g)

Notice that the equations given by (3.1e) and (3.1g) make (3.1f) redundant. However, substituting (3.1e) in (3.1d) gives ${x}_{1}=1$ . Substituting ${x}_{1}=1$ , (3.1e) and (3.1g) into the constraint Equations (3.1b) and (3.1c) gives $2{x}_{3}+2{x}_{4}=4$ and ${x}_{3}-4{x}_{4}+3=0$ , respectively. Multiplying the first of these by 2 and adding it to the second yields ${x}_{3}=1$ and therefore ${x}_{2}=1$ . Substituting ${x}_{1}={x}_{2}={x}_{3}=1$ and ${x}_{4}={x}_{5}$ into (3.1b), we obtain ${x}_{4}={x}_{5}=1$ , which gives the solution ${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ , satisfying both constraint equations ${g}_{1}\left(x\right)=0$ and ${g}_{2}\left(x\right)=0$ . The objective function value is obtained as $f\left({x}^{*}\right)={\left(1-1\right)}^{2}+{\left(1-1\right)}^{2}+{\left(1-1\right)}^{2}=0$ . The minimum values of the multipliers ${\lambda}_{1}$ and ${\lambda}_{2}$ are obtained by solving the system of Equations (3.1h) given by:

$\left[\begin{array}{c}2{x}_{1}-2\\ 2{x}_{2}-2{x}_{3}\\ -2{x}_{2}+2{x}_{3}\\ 2{x}_{4}-2{x}_{5}\\ -2{x}_{4}+2{x}_{5}\end{array}\right]-\left[\begin{array}{c}{\lambda}_{1}\\ {\lambda}_{1}\\ {\lambda}_{1}\\ {\lambda}_{1}\\ {\lambda}_{1}\end{array}\right]-\left[\begin{array}{c}0\\ 0\\ {\lambda}_{2}\\ -2{\lambda}_{2}\\ -2{\lambda}_{2}\end{array}\right]=\left[\begin{array}{c}0\\ 0\\ 0\\ 0\\ 0\end{array}\right]$ (3.1h)

Substituting the values of ${x}_{i}\left(i=1,2,3,4,5\right)$ into (3.1h) produces the results ${\lambda}_{1}={\lambda}_{2}=0$ . Hence the minimum solution for Problem 1 using the MLM is

${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ , ${\lambda}_{1}={\lambda}_{2}=0$ and $f\left({x}^{*}\right)=0$ .

Case 2: Alternatively, we use aggregates of the SCE formed as in (2.9) or (2.10), in conjunction with the constraint equations, to solve Problem 1, as follows:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}\right)=0\Rightarrow {x}_{1}=1$

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)=0\\ \Rightarrow 3{x}_{1}-{x}_{4}+{x}_{5}=3\end{array}$

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)=0\\ \Rightarrow 3{x}_{2}-3{x}_{3}-{x}_{4}+{x}_{5}=0\end{array}$

$\frac{\partial f\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{5}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{4}}+\frac{\partial f\left(x\right)}{\partial {x}_{5}}\right)=0\Rightarrow {x}_{2}={x}_{3}$

$\begin{array}{l}\left(\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{5}}-\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}\right)\frac{\partial f\left(x\right)}{\partial {x}_{5}}=0\\ \Rightarrow {x}_{2}-{x}_{3}-3{x}_{4}+3{x}_{5}=0\end{array}$

Substituting ${x}_{1}=1$ into $3{x}_{1}-{x}_{4}+{x}_{5}=3$ gives ${x}_{4}={x}_{5}$ , which also follows from $3{x}_{2}-3{x}_{3}-{x}_{4}+{x}_{5}=0$ together with ${x}_{2}={x}_{3}$ . Substituting ${x}_{2}={x}_{3}$ and ${x}_{4}={x}_{5}$ into (3.1b) and (3.1c) gives ${x}_{2}+{x}_{4}=2$ and ${x}_{2}-4{x}_{4}+3=0$ , respectively. Solving these two equations results in ${x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ .

Since the values ${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ satisfy both constraints (3.1b) and (3.1c), the objective function value is 0 as obtained previously. The values ${\lambda}_{1}={\lambda}_{2}=0$ follow accordingly from (3.1h).
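The hand computation for Problem 1 can be cross-checked mechanically. The sketch below (using SymPy, purely as an illustration) solves the three non-redundant Case 1 SCE together with the original constraints (3.1b) and (3.1c):

```python
import sympy as sp

# Cross-check of Problem 1: the non-redundant Case 1 SCE plus the
# constraints should recover x* = (1, 1, 1, 1, 1).
x1, x2, x3, x4, x5 = sp.symbols('x1:6')
eqs = [
    x1 - x2 + x3 - 1,                    # SCE: x1 - x2 + x3 = 1
    x2 - x3,                             # SCE: x2 = x3
    x4 - x5,                             # SCE: x4 = x5
    x1 + x2 + x3 + x4 + x5 - 5,          # g1(x) = 0, constraint (3.1b)
    x3 - 2*(x4 + x5) + 3,                # g2(x) = 0, constraint (3.1c)
]
sol = sp.solve(eqs, [x1, x2, x3, x4, x5], dict=True)
print(sol)
```

The solver returns the unique solution with every variable equal to 1, agreeing with the CLM solution reported in [18].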

Case 3: The required number of SCE in this case is generated with respect to *f* and ${g}_{2}$ using (2.6), leading to the following:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{1}}=0\Rightarrow 0=0$

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow {x}_{2}={x}_{3}$ ,

$\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{4}}-\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}=0\Rightarrow 2{x}_{2}-2{x}_{3}-{x}_{4}+{x}_{5}=0$ ,

$\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{5}}-\frac{\partial f\left(x\right)}{\partial {x}_{5}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{4}}=0\Rightarrow {x}_{4}={x}_{5}$ .

The SCE generated above, together with the constraint Equations (3.1b) and (3.1c), are not of sufficient number to solve Problem 1, because some of them are redundant. We therefore proceed to Case 4, where aggregates of the SCE are constructed to enable the solving of Problem 1.
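The redundancy can be made explicit: writing the three nontrivial SCE as coefficient rows over $\left({x}_{1},\dots,{x}_{5}\right)$, the second row is an exact linear combination of the other two, so the system contributes only two independent equations. A minimal illustrative check (the `rank` helper is written for this sketch):

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce over exact rationals and count the pivot rows."""
    a = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(a[0])):
        piv = next((i for i in range(r, len(a)) if a[i][col] != 0), None)
        if piv is None:
            continue
        a[r], a[piv] = a[piv], a[r]
        for i in range(len(a)):
            if i != r and a[i][col] != 0:
                f = a[i][col] / a[r][col]
                a[i] = [x - f * y for x, y in zip(a[i], a[r])]
        r += 1
    return r

# Coefficient rows over (x1, ..., x5) for the three nontrivial SCE of Case 3.
e1 = [0, 1, -1,  0, 0]   # x2 - x3 = 0
e2 = [0, 2, -2, -1, 1]   # 2x2 - 2x3 - x4 + x5 = 0
e3 = [0, 0,  0,  1, -1]  # x4 - x5 = 0 (from x4 = x5)

# e2 = 2*e1 - e3, so only two of the three equations are independent.
assert e2 == [2 * a - b for a, b in zip(e1, e3)]
print(rank([e1, e2, e3]))  # -> 2
```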

Case 4: The aggregates of SCE produced according to (2.10) yield the following equations:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}\right)=0\Rightarrow {x}_{1}=1$

$\frac{\partial f\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{5}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{4}}+\frac{\partial f\left(x\right)}{\partial {x}_{5}}\right)=0\Rightarrow {x}_{2}-{x}_{3}=0$

$\begin{array}{l}\left(\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{5}}-\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}\right)\frac{\partial f\left(x\right)}{\partial {x}_{5}}=0\\ \Rightarrow {x}_{2}-{x}_{3}-3{x}_{4}+3{x}_{5}=0\end{array}$

Substituting ${x}_{1}=1$ in (3.1b), multiplying the resulting equation by 2 and adding it to (3.1c) gives $2{x}_{2}+3{x}_{3}=5$ . Substituting ${x}_{2}={x}_{3}$ in $2{x}_{2}+3{x}_{3}=5$ results in ${x}_{3}=1$ and so ${x}_{2}=1$ . Substituting ${x}_{3}=1$ in (3.1c) gives ${x}_{4}+{x}_{5}=2$ . Substituting ${x}_{2}={x}_{3}=1$ in ${x}_{2}-{x}_{3}-3{x}_{4}+3{x}_{5}=0$ gives $-{x}_{4}+{x}_{5}=0$ . Solving ${x}_{4}+{x}_{5}=2$ and $-{x}_{4}+{x}_{5}=0$ simultaneously results in ${x}_{4}={x}_{5}=1$ . Therefore, we have ${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ , as was obtained in the earlier cases. The rest of the results follow accordingly.

3.2. Problem 2

This is an extract from [19] and given by:

$\mathrm{min}f\left(x\right)={x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}$ (3.2a)

subject to

${g}_{1}\left(x\right)=2{x}_{1}+{x}_{2}+2{x}_{3}-9=0$ (3.2b)

${g}_{2}\left(x\right)=5{x}_{1}+5{x}_{2}+7{x}_{3}-29=0$ (3.2c)

The solution obtained by CLM as in [19] , is:

${x}_{1}=2,\text{\hspace{0.17em}}{x}_{2}=1,\text{\hspace{0.17em}}{x}_{3}=2,\text{\hspace{0.17em}}{\lambda}_{1}=-2$ and ${\lambda}_{2}=0,\text{\hspace{0.17em}}f\left(x\right)=9$

Solution of Problem 2 by the MLM

Case 1: The nonredundant SCE produced according to (2.6) and using ${g}_{1}\left(x\right)$ yield:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}=2{x}_{2}$ ,

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow {x}_{3}=2{x}_{2}$ ,

Substituting ${x}_{1}=2{x}_{2}$ and ${x}_{3}=2{x}_{2}$ in (3.2b) results in ${x}_{2}=1$ . From ${x}_{1}=2{x}_{2}$ and ${x}_{3}=2{x}_{2}$ we obtain ${x}_{1}=2$ and ${x}_{3}=2$ . The solution is therefore ${x}_{1}=2$ , ${x}_{2}=1$ , ${x}_{3}=2$ , which satisfies both constraints. The objective function value is $f\left({x}^{*}\right)={2}^{2}+{1}^{2}+{2}^{2}=9$ . We solve for ${\lambda}_{1}$ and ${\lambda}_{2}$ as follows:

$\nabla f\left(x\right)+{\lambda}_{1}\nabla {g}_{1}\left(x\right)+{\lambda}_{2}\nabla {g}_{2}\left(x\right)=0$

$\Rightarrow \left[\begin{array}{c}2{x}_{1}\\ 2{x}_{2}\\ 2{x}_{3}\end{array}\right]+\left[\begin{array}{c}2{\lambda}_{1}\\ {\lambda}_{1}\\ 2{\lambda}_{1}\end{array}\right]+\left[\begin{array}{c}5{\lambda}_{2}\\ 5{\lambda}_{2}\\ 7{\lambda}_{2}\end{array}\right]=\left[\begin{array}{c}0\\ 0\\ 0\end{array}\right]$

This system of equations yields the solution ${\lambda}_{1}=-2$ , ${\lambda}_{2}=0$ .
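The two-stage MLM computation for this case (primal variables first, multipliers second) can be traced in a few lines of exact arithmetic. This is an illustrative sketch, not part of the paper's presentation:

```python
from fractions import Fraction

# Primal step. Case 1 SCE give x1 = 2*x2 and x3 = 2*x2; substituting into
# (3.2b): 2*(2*x2) + x2 + 2*(2*x2) = 9, i.e. 9*x2 = 9.
x2 = Fraction(9, 9)
x1, x3 = 2 * x2, 2 * x2
assert 2*x1 + x2 + 2*x3 == 9       # (3.2b) holds
assert 5*x1 + 5*x2 + 7*x3 == 29    # (3.2c) holds
f = x1**2 + x2**2 + x3**2          # objective value

# Multiplier step: grad f + l1*grad g1 + l2*grad g2 = 0, i.e.
#   2*x1 + 2*l1 + 5*l2 = 0
#   2*x2 +   l1 + 5*l2 = 0
#   2*x3 + 2*l1 + 7*l2 = 0
# Subtracting the first equation from the third gives 2*l2 = 2*x1 - 2*x3.
l2 = x1 - x3
l1 = -(2 * x1 + 5 * l2) / 2
assert 2 * x2 + l1 + 5 * l2 == 0   # remaining equation is consistent
print(x1, x2, x3, f, l1, l2)  # -> 2 1 2 9 -2 0
```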

Case 2: Aggregates of the SCE are this time produced according to (2.10), involving the constraint ${g}_{1}\left(x\right)$ , as follows:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}\right)=0\Rightarrow 3{x}_{1}-2{x}_{2}-2{x}_{3}=0$

$\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{1}}\right)-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\right)=0\Rightarrow 2{x}_{1}+2{x}_{2}-3{x}_{3}=0$

Solving $3{x}_{1}-2{x}_{2}-2{x}_{3}=0$ simultaneously with (3.2b) and (3.2c) results in ${x}_{1}=2$ , ${x}_{2}=1$ , ${x}_{3}=2$ which is the required solution. The corresponding values of the parameters ${\lambda}_{1}$ and ${\lambda}_{2}$ and the objective function follow accordingly. Alternatively, we can obtain the same solution by solving simultaneously $2{x}_{1}+2{x}_{2}-3{x}_{3}=0$ , (3.2b) and (3.2c).
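The 3 × 3 linear system formed by the first aggregate SCE and the two constraints can be checked directly; a minimal sketch (`solve` is a small Gauss-Jordan helper written for this illustration):

```python
from fractions import Fraction

def solve(aug):
    """Gauss-Jordan elimination on an augmented matrix, over exact rationals."""
    n = len(aug)
    a = [[Fraction(v) for v in row] for row in aug]
    for c in range(n):
        p = next(r for r in range(c, n) if a[r][c] != 0)
        a[c], a[p] = a[p], a[c]
        for r in range(n):
            if r != c and a[r][c] != 0:
                f = a[r][c] / a[c][c]
                a[r] = [u - f * v for u, v in zip(a[r], a[c])]
    return [a[i][n] / a[i][i] for i in range(n)]

# One aggregate SCE plus the two constraints of Problem 2:
x = solve([
    [3, -2, -2, 0],   # 3x1 - 2x2 - 2x3 = 0
    [2,  1,  2, 9],   # (3.2b)
    [5,  5,  7, 29],  # (3.2c)
])
print(x)  # -> [2, 1, 2]
```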

Case 3: In this case, SCE are produced using (2.6), but this time using ${g}_{2}\left(x\right)$ , leading to the following equations:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}={x}_{2}$ ,

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow {x}_{2}=\frac{5}{7}{x}_{3}$ ,

Substituting ${x}_{1}=\frac{5}{7}{x}_{3}$ and ${x}_{2}=\frac{5}{7}{x}_{3}$ in (3.2c), the result is ${x}_{3}=\frac{203}{99}$ . From ${x}_{1}={x}_{2}=\frac{5}{7}{x}_{3}$ , we obtain ${x}_{1}={x}_{2}=\frac{145}{99}$ . Since these values do not satisfy both constraints, they are discarded. Alternatively, substituting ${x}_{1}={x}_{2}$ in both constraints and solving the resulting equations simultaneously results in ${x}_{3}=-3$ , ${x}_{2}=5$ . Substituting these values in (3.2c) gives ${x}_{1}=5$ .

The values ${x}_{1}=5,\text{\hspace{0.17em}}{x}_{2}=5,\text{\hspace{0.17em}}{x}_{3}=-3$ satisfy both constraints and yield a corresponding objective function value of $f\left({x}^{*}\right)=59$ . This value is higher than the value $f\left({x}^{*}\right)=9$ obtained earlier; so the current solution, though feasible, fails to be the minimum solution.
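The screening step used here, checking a candidate for feasibility and then comparing objective values, is mechanical; a minimal illustrative check of the Case 3 alternative candidate:

```python
# Constraint residuals and objective for Problem 2 (zero residual = feasible).
def g1(x1, x2, x3): return 2*x1 + x2 + 2*x3 - 9
def g2(x1, x2, x3): return 5*x1 + 5*x2 + 7*x3 - 29
def f(x1, x2, x3):  return x1**2 + x2**2 + x3**2

cand = (5, 5, -3)
assert g1(*cand) == 0 and g2(*cand) == 0  # feasible for both constraints
print(f(*cand))  # -> 59, worse than the minimum value 9, so it is discarded
```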

Case 4: In this case, aggregates of the SCE formed according to (2.10) and using ${g}_{2}\left(x\right)$ produce the following equations:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}\right)-\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}\right)=0\Rightarrow 7{x}_{1}-5{x}_{2}-5{x}_{3}=0$

$\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{1}}\right)-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{1}}\right)=0\Rightarrow 3{x}_{1}-2{x}_{2}-2{x}_{3}=0$

Solving (3.2b) and (3.2c) simultaneously with the first SCE results in the solution ${x}_{1}=2.1154$ , ${x}_{2}=1.1538$ , ${x}_{3}=1.8077$ , which satisfies both constraints. The corresponding objective function value is $f\left({x}^{*}\right)=9.0740$ . Alternatively, solving (3.2b), (3.2c) and the second equation simultaneously gives ${x}_{1}=2.0826$ , ${x}_{2}=1.1101$ , ${x}_{3}=1.8624$ , which also satisfies both constraints, with corresponding objective function value $f\left({x}^{*}\right)=9.0381$ . Both values are marginally different from the value of 9 obtained previously.

The results obtained under this case reveal interesting outcomes that require further investigation to understand the phenomena responsible for them. We observe, nevertheless, that the solution (2.1154, 1.1538, 1.8077) compares well with (2, 1, 2), which is the required solution; the objective function values also compare well with 9, the required minimum value.

3.3. Problem 3

The problem is taken from [20] and given by:

$\mathrm{min}f\left(x\right)={x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}+{x}_{4}^{2}$ (3.3a)

subject to

${g}_{1}\left(x\right)={x}_{1}+{x}_{2}+{x}_{3}+{x}_{4}-10=0$ (3.3b)

${g}_{2}\left(x\right)={x}_{1}-{x}_{2}+{x}_{3}+3{x}_{4}-6=0$ (3.3c)

The solution by the CLM is:

${x}_{1}=\frac{5}{2},\text{\hspace{0.17em}}{x}_{2}=\frac{7}{2},\text{\hspace{0.17em}}{x}_{3}=\frac{5}{2},\text{\hspace{0.17em}}{x}_{4}=\frac{3}{2},\text{\hspace{0.17em}}f=27,\text{\hspace{0.17em}}{\lambda}_{1}=3,\text{\hspace{0.17em}}{\lambda}_{2}=-\frac{1}{2}.$

Solution of Problem 3 by the MLM

Case 1: SCE according to (2.6) are generated involving ${g}_{1}\left(x\right)$ as follows:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}-{x}_{2}=0$ ,

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow {x}_{2}-{x}_{3}=0$ ,

$\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}-\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}=0\Rightarrow {x}_{3}-{x}_{4}=0$ .

The equations ${x}_{1}-{x}_{2}=0$ $\Rightarrow $ ${x}_{1}={x}_{2}$ and ${x}_{2}-{x}_{3}=0$ $\Rightarrow $ ${x}_{2}={x}_{3}$ $\Rightarrow $ ${x}_{1}={x}_{2}={x}_{3}$ . Substituting ${x}_{1}={x}_{3}$ in (3.3b) and (3.3c) gives the equations ${x}_{2}+2{x}_{3}+{x}_{4}=10$ and $-{x}_{2}+2{x}_{3}+3{x}_{4}=6$ . Adding them gives ${x}_{3}+{x}_{4}=4$ . Also, adding ${x}_{1}-{x}_{2}=0$ to ${x}_{3}-{x}_{4}=0$ and subtracting the result from (3.3c) yields ${x}_{4}=\frac{3}{2}$ . Putting ${x}_{4}=\frac{3}{2}$ in ${x}_{3}+{x}_{4}=4$ gives ${x}_{3}=\frac{5}{2}$ , and putting the results of ${x}_{3}$ and ${x}_{4}$ in $-{x}_{2}+2{x}_{3}+3{x}_{4}=6$ gives ${x}_{2}=\frac{7}{2}$ . Finally, substituting ${x}_{2}=\frac{7}{2}$ , ${x}_{3}=\frac{5}{2}$ and ${x}_{4}=\frac{3}{2}$ in (3.3b) or (3.3c) results in ${x}_{1}=\frac{5}{2}$ . The values ${x}_{1}=\frac{5}{2}$ , ${x}_{2}=\frac{7}{2}$ , ${x}_{3}=\frac{5}{2}$ , ${x}_{4}=\frac{3}{2}$ satisfy both constraints (3.3b) and (3.3c), and the corresponding objective function value is $f\left({x}^{*}\right)=27$ . The values of ${\lambda}_{1}$ , ${\lambda}_{2}$ are obtained from the equation $\nabla f\left(x\right)+{\lambda}_{1}\nabla {g}_{1}\left(x\right)+{\lambda}_{2}\nabla {g}_{2}\left(x\right)=0$ which is the system:

$\left[\begin{array}{c}2{x}_{1}\\ 2{x}_{2}\\ 2{x}_{3}\\ 2{x}_{4}\end{array}\right]+\left[\begin{array}{c}{\lambda}_{1}\\ {\lambda}_{1}\\ {\lambda}_{1}\\ {\lambda}_{1}\end{array}\right]+\left[\begin{array}{c}{\lambda}_{2}\\ -{\lambda}_{2}\\ {\lambda}_{2}\\ 3{\lambda}_{2}\end{array}\right]=\left[\begin{array}{c}0\\ 0\\ 0\\ 0\end{array}\right]$ (3.3d)

Solving Equation (3.3d) produces the result ${\lambda}_{1}=-6$ , ${\lambda}_{2}=1$ .
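The multiplier system (3.3d) is overdetermined (four equations, two unknowns) but consistent at the optimum; solving two of its rows and verifying the rest confirms the values. A minimal sketch in exact arithmetic:

```python
from fractions import Fraction

# Optimal primal values from Case 1 of Problem 3.
x = [Fraction(5, 2), Fraction(7, 2), Fraction(5, 2), Fraction(3, 2)]
g1 = [1, 1, 1, 1]    # gradient of g1(x)
g2 = [1, -1, 1, 3]   # gradient of g2(x)

# Rows of (3.3d): 2*x_i + l1*g1_i + l2*g2_i = 0. Subtracting the second
# row from the first gives 2*x[0] - 2*x[1] + 2*l2 = 0, so l2 = x[1] - x[0].
l2 = x[1] - x[0]
l1 = -2 * x[0] - l2
# All four rows of (3.3d) must vanish at the optimum:
assert all(2 * xi + l1 * a + l2 * b == 0 for xi, a, b in zip(x, g1, g2))
print(l1, l2)  # -> -6 1
```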

Case 2: The SCE, this time, are constructed according to (2.10) involving the constraint ${g}_{1}\left(x\right)$ , leading to the following:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}\right)=0\Rightarrow 2{x}_{1}-{x}_{2}-{x}_{3}=0$ (3.3e)

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)=0\\ \Rightarrow 3{x}_{1}-{x}_{2}-{x}_{3}-{x}_{4}=0\end{array}$ (3.3f)

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)=0\Rightarrow 2{x}_{2}-{x}_{3}-{x}_{4}=0$ (3.3g)

$\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}\right)\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}-\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\right)\frac{\partial f\left(x\right)}{\partial {x}_{4}}=0\Rightarrow {x}_{2}+{x}_{3}-2{x}_{4}=0$ (3.3h)

Solving Equations (3.3b), (3.3c), (3.3e) and (3.3f) simultaneously, we have ${x}_{1}=\frac{5}{2}$ , ${x}_{2}=\frac{9}{2}$ , ${x}_{3}=\frac{1}{2}$ , ${x}_{4}=\frac{5}{2}$ , which satisfy the constraints (3.3b) and (3.3c). The corresponding objective function value is $f\left({x}^{*}\right)=33$ . Comparing this with the objective function value of 27 obtained in Case 1, we see that 27 is the least. Therefore, the minimum solution for Problem 3 is ${x}_{1}=\frac{5}{2}$ , ${x}_{2}=\frac{7}{2}$ , ${x}_{3}=\frac{5}{2}$ , ${x}_{4}=\frac{3}{2}$ , ${\lambda}_{1}=-6$ , ${\lambda}_{2}=1$ , $f\left({x}^{*}\right)=27$ .
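The 4 × 4 linear system of this case can be solved exactly to confirm the feasible (but non-optimal) point and its objective value; an illustrative sketch (`solve` is a small Gauss-Jordan helper written for this check):

```python
from fractions import Fraction

def solve(aug):
    """Gauss-Jordan elimination on an augmented matrix, over exact rationals."""
    n = len(aug)
    a = [[Fraction(v) for v in row] for row in aug]
    for c in range(n):
        p = next(r for r in range(c, n) if a[r][c] != 0)
        a[c], a[p] = a[p], a[c]
        for r in range(n):
            if r != c and a[r][c] != 0:
                f = a[r][c] / a[c][c]
                a[r] = [u - f * v for u, v in zip(a[r], a[c])]
    return [a[i][n] / a[i][i] for i in range(n)]

x = solve([
    [1,  1,  1,  1, 10],  # (3.3b)
    [1, -1,  1,  3, 6],   # (3.3c)
    [2, -1, -1,  0, 0],   # (3.3e)
    [3, -1, -1, -1, 0],   # (3.3f)
])
print(x)                     # -> [5/2, 9/2, 1/2, 5/2]
print(sum(v**2 for v in x))  # -> 33, which exceeds the Case 1 value 27
```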

Case 3: In this case, SCE are produced using (2.6) in conjunction with the constraint ${g}_{2}\left(x\right)$ as follows:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}+{x}_{2}=0$ ,

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow {x}_{2}+{x}_{3}=0$ ,

$\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{4}}-\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}=0\Rightarrow 3{x}_{3}-{x}_{4}=0$ ,

From ${x}_{1}+{x}_{2}=0$ and ${x}_{2}+{x}_{3}=0$ we have ${x}_{2}=-{x}_{1}=-{x}_{3}$ , so that ${x}_{1}={x}_{3}$ . Substituting ${x}_{1}={x}_{3}$ in (3.3b) and (3.3c) gives the equations ${x}_{2}+2{x}_{3}+{x}_{4}=10$ and $-{x}_{2}+2{x}_{3}+3{x}_{4}=6$ respectively. Adding these results in ${x}_{3}+{x}_{4}=4$ . Adding $3{x}_{3}-{x}_{4}=0$ to ${x}_{3}+{x}_{4}=4$ results in ${x}_{3}=1$ , ${x}_{4}=3$ . Therefore, ${x}_{1}=1$ , since ${x}_{1}={x}_{3}$ . Putting ${x}_{1}=1$ , ${x}_{3}=1$ , ${x}_{4}=3$ in (3.3c) gives ${x}_{2}=5$ . The values ${x}_{1}=1$ , ${x}_{2}=5$ , ${x}_{3}=1$ , ${x}_{4}=3$ satisfy both constraints (3.3b) and (3.3c), with a corresponding objective function value of 36. This value, however, is higher than the 27 obtained under Cases 1 and 2; therefore, the solution, while feasible, is not optimal.

3.4. Problem 4

This problem is taken from [18] and given by:

$\mathrm{min}f\left(x\right)={\left({x}_{1}-{x}_{2}\right)}^{2}+{\left({x}_{2}+{x}_{3}-2\right)}^{2}+{\left({x}_{4}-{x}_{5}\right)}^{2}+{\left({x}_{4}-1\right)}^{2}+{\left({x}_{5}-1\right)}^{2}$

subject to:

${g}_{1}\left(x\right)={x}_{1}+3{x}_{2}-4=0$ (3.4a)

${g}_{2}\left(x\right)={x}_{3}+{x}_{4}-2{x}_{5}=0$ (3.4b)

${g}_{3}\left(x\right)={x}_{2}-{x}_{5}=0$ (3.4c)

The solution of the problem by CLM is:

${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1,\text{\hspace{0.17em}}{\lambda}_{1}=0,\text{\hspace{0.17em}}{\lambda}_{2}=0$ and ${\lambda}_{3}=0,\text{\hspace{0.17em}}f\left({x}^{*}\right)=0$ (Schittkowski, 2009).

Solution of Problem 4 by the MLM

Case 1: The SCE according to (2.6) and involving the constraint ${g}_{1}\left(x\right)$ are:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}=0\Rightarrow 4{x}_{1}-5{x}_{2}-{x}_{3}=-2$ (3.4d)

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow {x}_{2}+{x}_{3}=2$ , (3.4e)

Solving (3.4a), (3.4b), (3.4c), (3.4d) and (3.4e) simultaneously results in ${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ , which satisfy all the three constraints with corresponding objective function value $f\left({x}^{*}\right)=0$ . The first order necessary optimality conditions lead to the system:

$\left[\begin{array}{c}2{x}_{1}-2{x}_{2}\\ -2{x}_{1}+4{x}_{2}+2{x}_{3}-4\\ 2{x}_{2}+2{x}_{3}-4\\ 4{x}_{4}-2{x}_{5}-2\\ -2{x}_{4}+4{x}_{5}-2\end{array}\right]+\left[\begin{array}{c}{\lambda}_{1}\\ 3{\lambda}_{1}\\ 0\\ 0\\ 0\end{array}\right]+\left[\begin{array}{c}0\\ 0\\ {\lambda}_{2}\\ {\lambda}_{2}\\ -2{\lambda}_{2}\end{array}\right]+\left[\begin{array}{c}0\\ {\lambda}_{3}\\ 0\\ 0\\ -{\lambda}_{3}\end{array}\right]=\left[\begin{array}{c}0\\ 0\\ 0\\ 0\\ 0\end{array}\right]$ (3.4f)

Solving the system of Equations (3.4f) results in ${\lambda}_{1}=0$ , ${\lambda}_{2}=0$ , ${\lambda}_{3}=0$ .
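The primal step of this case reduces to a 5 × 5 linear system, the constraints (3.4a)-(3.4c) together with the two SCE (3.4d) and (3.4e), which can be solved exactly; an illustrative sketch (`solve` is a small Gauss-Jordan helper written for this check):

```python
from fractions import Fraction

def solve(aug):
    """Gauss-Jordan elimination on an augmented matrix, over exact rationals."""
    n = len(aug)
    a = [[Fraction(v) for v in row] for row in aug]
    for c in range(n):
        p = next(r for r in range(c, n) if a[r][c] != 0)
        a[c], a[p] = a[p], a[c]
        for r in range(n):
            if r != c and a[r][c] != 0:
                f = a[r][c] / a[c][c]
                a[r] = [u - f * v for u, v in zip(a[r], a[c])]
    return [a[i][n] / a[i][i] for i in range(n)]

# Rows are [coefficients of x1..x5 | right-hand side].
x = solve([
    [1,  3,  0, 0,  0, 4],   # (3.4a)
    [0,  0,  1, 1, -2, 0],   # (3.4b)
    [0,  1,  0, 0, -1, 0],   # (3.4c)
    [4, -5, -1, 0,  0, -2],  # (3.4d)
    [0,  1,  1, 0,  0, 2],   # (3.4e)
])
print(x)  # -> all five components equal 1
```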

Case 2: The SCE this time are produced according to (2.10) as:

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}\right)=0\\ \Rightarrow 2{x}_{1}-3{x}_{2}-{x}_{3}=-2\end{array}$ (3.4g)

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)=0\\ \Rightarrow 4{x}_{1}-6{x}_{2}-2{x}_{3}-2{x}_{4}+{x}_{5}=-5\end{array}$ (3.4h)

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{5}}\right)\\ -\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}+\frac{\partial f\left(x\right)}{\partial {x}_{5}}\right)=0\\ \Rightarrow 4{x}_{1}-6{x}_{2}-2{x}_{3}-{x}_{4}-{x}_{5}=-6\end{array}$ (3.4i)

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)=0\\ \Rightarrow -{x}_{2}-{x}_{3}-2{x}_{4}+{x}_{5}=-3\end{array}$ (3.4j)

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{4}}+\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{5}}\right)-\frac{\partial {g}_{1}\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}+\frac{\partial f\left(x\right)}{\partial {x}_{5}}\right)=0\\ \Rightarrow {x}_{2}+{x}_{3}+{x}_{4}+{x}_{5}=4\end{array}$ (3.4k)

Multiplying (3.4g) by 2 and subtracting the resulting equation from (3.4h) gives $2{x}_{4}-{x}_{5}=1$ . Subtracting (3.4h) from (3.4i) results in $-{x}_{4}+2{x}_{5}=1$ . Solving simultaneously the equations $2{x}_{4}-{x}_{5}=1$ and $-{x}_{4}+2{x}_{5}=1$ yields ${x}_{4}={x}_{5}=1$ . Substituting ${x}_{5}=1$ in (3.4c) gives ${x}_{2}=1$ . Substituting ${x}_{4}={x}_{5}=1$ in (3.4b) yields ${x}_{3}=1$ . Substituting ${x}_{2}=1$ in (3.4a) gives ${x}_{1}=1$ . Therefore, the minimum solution is ${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ with minimum objective function value of zero.

Case 3: The SCE in this case are produced according to (2.6), but this time involving the constraint ${g}_{2}\left(x\right)$ as follows:

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow -{x}_{1}+2{x}_{2}+{x}_{3}=2$ , (3.4l)

$\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{4}}-\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{3}}=0\Rightarrow {x}_{2}+{x}_{3}-2{x}_{4}+{x}_{5}=1$ , (3.4m)

$\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{5}}-\frac{\partial f\left(x\right)}{\partial {x}_{5}}\frac{\partial {g}_{2}\left(x\right)}{\partial {x}_{4}}=0\Rightarrow {x}_{4}=1$ .

Solving (3.4a), (3.4b), (3.4c), (3.4l), and (3.4m) simultaneously results in ${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ which is the required minimum solution, with objective function value 0.

Case 4: The SCE are produced using (2.6) involving ${g}_{3}\left(x\right)$ this time around, as:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}-{x}_{2}=0$ , (3.4n)

$\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{3}}-\frac{\partial f\left(x\right)}{\partial {x}_{3}}\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{2}}=0\Rightarrow {x}_{2}+{x}_{3}=2$ , (3.4o)

$\frac{\partial f\left(x\right)}{\partial {x}_{4}}\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{5}}-\frac{\partial f\left(x\right)}{\partial {x}_{5}}\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{4}}=0\Rightarrow -2{x}_{4}+{x}_{5}=-1$ . (3.4p)

Solving constraint (3.4a) and Equation (3.4n) simultaneously results in ${x}_{1}={x}_{2}=1$ . Substituting ${x}_{2}=1$ in constraint (3.4c) results in ${x}_{5}=1$ . Substituting ${x}_{2}=1$ in ${x}_{2}+{x}_{3}=2$ and ${x}_{5}=1$ in $-2{x}_{4}+{x}_{5}=-1$ gives ${x}_{3}=1$ and ${x}_{4}=1$ respectively. Therefore, ${x}_{1}={x}_{2}={x}_{3}={x}_{4}={x}_{5}=1$ is the required minimum solution, with objective function value zero as previously. The parameter values obtained previously therefore follow accordingly.

Case 5: The SCE in this case are generated according to (2.10), involving the constraint ${g}_{3}\left(x\right)$ , as follows:

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{2}}+\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{3}}\right)-\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{1}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{2}}+\frac{\partial f\left(x\right)}{\partial {x}_{3}}\right)=0\\ \Rightarrow {x}_{1}-{x}_{2}=0\end{array}$ (3.4q)

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{4}}\right)-\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}\right)=0\\ \Rightarrow {x}_{2}+{x}_{3}+2{x}_{4}-{x}_{5}=3\end{array}$ (3.4r)

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{3}}+\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{4}}+\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{5}}\right)-\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{2}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{3}}+\frac{\partial f\left(x\right)}{\partial {x}_{4}}+\frac{\partial f\left(x\right)}{\partial {x}_{5}}\right)=0\\ \Rightarrow {x}_{1}-3{x}_{2}-2{x}_{3}-{x}_{4}-{x}_{5}=-6\end{array}$ (3.4s)

$\begin{array}{l}\frac{\partial f\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{4}}+\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{5}}\right)-\frac{\partial {g}_{3}\left(x\right)}{\partial {x}_{3}}\left(\frac{\partial f\left(x\right)}{\partial {x}_{4}}+\frac{\partial f\left(x\right)}{\partial {x}_{5}}\right)=0\\ \Rightarrow {x}_{2}+{x}_{3}=2\end{array}$ (3.4t)

Solving the constraint Equation (3.4a) and Equation (3.4q) simultaneously yields ${x}_{1}={x}_{2}=1$ . Substituting ${x}_{2}=1$ in (3.4c) results in ${x}_{5}=1$ . Substituting ${x}_{2}=1$ in (3.4t) gives ${x}_{3}=1$ . Substituting ${x}_{2}={x}_{3}={x}_{5}=1$ in (3.4r) results in ${x}_{4}=1$ , which is what was obtained in the earlier cases of Problem 4. The objective function and parameter values, therefore, follow accordingly.

3.5. Problem 5

This is an extract from [20] . In this case we have a linear objective function being minimized subject to a nonlinear convex quadratic constraint equation.

$\mathrm{min}f\left(x\right)=2{x}_{1}+{x}_{2}$

subject to:

$g\left(x\right)={x}_{1}^{2}+{x}_{2}^{2}-4=0$

The solution obtained by the CLM is:

${x}_{1}=\frac{-4}{\sqrt{5}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{2}=\frac{-2}{\sqrt{5}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\lambda =\frac{\sqrt{5}}{4},\text{\hspace{0.17em}}\text{\hspace{0.17em}}f\left({x}^{*}\right)=-2\sqrt{5}$

Solution of Problem 5 by the MLM

Case 1: The only SCE produced for Problem 5 using (2.6) and involving the only constraint $g\left(x\right)$ is:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial g\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial g\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}=2{x}_{2}$ .

Substituting ${x}_{1}=2{x}_{2}$ in ${x}_{1}^{2}+{x}_{2}^{2}-4=0$ the result is ${x}_{2}=\pm \frac{2}{\sqrt{5}}$ . Substituting ${x}_{2}=\frac{2}{\sqrt{5}}$ and ${x}_{2}=-\frac{2}{\sqrt{5}}$ in ${x}_{1}=2{x}_{2}$ results in ${x}_{1}=\frac{4}{\sqrt{5}}$ and ${x}_{1}=-\frac{4}{\sqrt{5}}$ respectively. The objective function values at ( $\frac{4}{\sqrt{5}}$ , $\frac{2}{\sqrt{5}}$ ) and ( $\frac{-4}{\sqrt{5}}$ , $\frac{-2}{\sqrt{5}}$ ) are respectively $2\sqrt{5}$ and $-2\sqrt{5}$ . Hence, the minimum value is $-2\sqrt{5}$ and attained at ( $\frac{-4}{\sqrt{5}}$ , $\frac{-2}{\sqrt{5}}$ ). From the first order optimality condition, we have the system:

$\left[\begin{array}{c}2\\ 1\end{array}\right]+\lambda \left[\begin{array}{c}2{x}_{1}\\ 2{x}_{2}\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right]$

which produces the result $\lambda =\frac{\sqrt{5}}{4}$ .
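For this nonlinear constraint, the MLM still reduces to a one-dimensional substitution followed by a comparison of the two stationary candidates; a minimal numerical sketch:

```python
import math

# SCE for Problem 5: x1 = 2*x2. Substituting into x1^2 + x2^2 = 4 gives
# 5*x2^2 = 4, i.e. x2 = +/- 2/sqrt(5).
f = lambda x1, x2: 2 * x1 + x2
candidates = [(2 * t, t) for t in (2 / math.sqrt(5), -2 / math.sqrt(5))]
x1, x2 = min(candidates, key=lambda p: f(*p))   # pick the smaller objective
assert math.isclose(f(x1, x2), -2 * math.sqrt(5))

# Multiplier from the first-order condition 2 + lambda*2*x1 = 0:
lam = -1 / x1
assert math.isclose(lam, math.sqrt(5) / 4)
# The second condition 1 + lambda*2*x2 = 0 holds as well:
assert math.isclose(1 + lam * 2 * x2, 0, abs_tol=1e-12)
print(x1, x2, lam)
```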

3.6. Problem 6

This problem is an extract from [21] . Like Problem 5, it also involves minimizing a linear objective function subject to a nonlinear convex quadratic constraint, and is given by:

$\mathrm{min}f\left(x\right)=2{x}_{2}+{x}_{1}$

subject to:

$g\left(x\right)={x}_{2}^{2}+{x}_{1}{x}_{2}-1=0$

The solution (see [21] ) obtained by the CLM is:

${x}_{1}=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{2}=-1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\lambda =1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}f\left(x\right)=-2$

Solution of Problem 6 by the MLM

Case 1: The only SCE obtained from (2.6) for this problem is:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial g\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial g\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}=0$

Substituting ${x}_{1}=0$ in ${x}_{2}^{2}+{x}_{1}{x}_{2}-1=0$ results in ${x}_{2}=1$ or ${x}_{2}=-1$ . Substituting ${x}_{1}=0$ and ${x}_{2}=1$ , the objective function value is 2. Also, substituting ${x}_{1}=0$ , ${x}_{2}=-1$ , the objective function value is −2. Therefore, minimum objective function value is −2 and attained at ${x}_{1}=0$ , ${x}_{2}=-1$ . The first order optimality condition leads to the system:

$\left[\begin{array}{c}1\\ 2\end{array}\right]+\lambda \left[\begin{array}{c}{x}_{2}\\ {x}_{1}+2{x}_{2}\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right]$

Substituting ${x}_{1}=0$ , ${x}_{2}=-1$ results in $\lambda =1$ .
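The same candidate-comparison step can be traced for Problem 6; an illustrative sketch:

```python
# SCE gives x1 = 0; the constraint x2^2 + x1*x2 - 1 = 0 then yields x2 = +/- 1.
f = lambda x1, x2: 2 * x2 + x1
candidates = [(0.0, 1.0), (0.0, -1.0)]
x1, x2 = min(candidates, key=lambda p: f(*p))   # pick the smaller objective

# Multiplier from the first-order condition 1 + lambda*x2 = 0:
lam = -1 / x2
# The second condition 2 + lambda*(x1 + 2*x2) = 0 holds as well:
assert 2 + lam * (x1 + 2 * x2) == 0
print(x1, x2, f(x1, x2), lam)  # -> 0.0 -1.0 -2.0 1.0
```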

3.7. Problem 7

In this problem, we consider minimizing a quadratic function, subject to a linear constraint. The problem is taken from [21] .

$\mathrm{min}f\left(x\right)={x}_{1}{x}_{2}$

subject to:

$g\left(x\right)=2{x}_{1}+2{x}_{2}-20=0$

Solution by the CLM is:

${x}_{1}=5,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{2}=5,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\lambda =2.5,\text{\hspace{0.17em}}\text{\hspace{0.17em}}f\left({x}^{*}\right)=25$

Solution of Problem 7 by the MLM

Case 1: The only SCE obtained from (2.6) is:

$\frac{\partial f\left(x\right)}{\partial {x}_{1}}\frac{\partial g\left(x\right)}{\partial {x}_{2}}-\frac{\partial f\left(x\right)}{\partial {x}_{2}}\frac{\partial g\left(x\right)}{\partial {x}_{1}}=0\Rightarrow {x}_{1}={x}_{2}$

Substituting ${x}_{1}={x}_{2}$ in $2{x}_{1}+2{x}_{2}-20=0$ results in ${x}_{1}={x}_{2}=5$ . The objective function value is therefore 25, attained at ${x}_{1}={x}_{2}=5$ . The first order optimality condition leads to the system:

$\left[\begin{array}{c}{x}_{2}\\ {x}_{1}\end{array}\right]+\lambda \left[\begin{array}{c}2\\ 2\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right]$

which gives $\lambda =-2.5$ ; the sign differs from the CLM value owing to the sign convention adopted for the multiplier term in the Lagrangian.
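As an independent check (an editor's sketch, not part of the original development), the MLM solution of Problem 7 can be verified numerically: the SCE, the constraint, and the first order optimality condition should all vanish at ${x}_{1}={x}_{2}=5$ , $\lambda =-2.5$ .

```python
# Numerical verification of the MLM solution to Problem 7.
# We check that x = (5, 5), lambda = -2.5 satisfies the subsidiary
# constraint equation (SCE), the constraint g(x) = 0, and the first
# order optimality condition grad(f) + lambda * grad(g) = 0.

def grad_f(x1, x2):
    # f(x) = x1 * x2, so grad f = (x2, x1)
    return (x2, x1)

def grad_g(x1, x2):
    # g(x) = 2*x1 + 2*x2 - 20, so grad g = (2, 2)
    return (2.0, 2.0)

x1, x2, lam = 5.0, 5.0, -2.5

fx1, fx2 = grad_f(x1, x2)
gx1, gx2 = grad_g(x1, x2)

sce = fx1 * gx2 - fx2 * gx1           # SCE: must vanish at the solution
constraint = 2 * x1 + 2 * x2 - 20     # g(x): must vanish at the solution
stationarity = (fx1 + lam * gx1, fx2 + lam * gx2)

print(sce, constraint, stationarity)  # -> 0.0 0.0 (0.0, 0.0)
```

All three quantities vanish, confirming that the decision variables obtained from the SCE and the multiplier obtained afterward jointly satisfy the classical optimality conditions.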

4. Conclusions

This paper has reported a new way of solving equality constrained convex quadratic optimization problems, based on what the authors have called the Modified Lagrange Method (MLM). Unlike the classical Lagrange Method (CLM), the MLM decomposes the task of solving for the unknown decision variable values and the values of the Lagrange multipliers into two independent procedures. The novelty of the MLM lies in the construction of what the authors have called Subsidiary Constraint Equations (SCE), which allow the decision variables to be solved for independently of the multipliers. The arbitrarily large variety of ways of constructing the SCE is noted to be interesting and remains an area for further research.

The seven problems solved in this paper were intended as preliminary results demonstrating the ability of the MLM to find the same optimal solutions as the CLM. In future work, the authors will seek to demonstrate the superior performance of the MLM over the CLM in solving convex quadratic equality constrained problems, and to extend the approach to inequality constrained convex quadratic problems as well as to convex problems that are not necessarily quadratic.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Luenberger, D.G. and Ye, Y. (2008) Dual and Cutting Plane Methods. In: Linear and Nonlinear Programming, Vol. 116, Springer, New York, 435-468. https://doi.org/10.1007/978-0-387-74503-9_14

[2] Koopmans, T.C. (2003) Activity Analysis of Production and Allocation. Wiley & Chapman-Hall, New York, 339-347.

[3] Bazaraa, M.S., Sherali, H.D. and Shetty, C.M. (2006) Nonlinear Programming: Theory and Algorithms. John Wiley & Sons, New York. https://doi.org/10.1002/0471787779

[4] Boyd, S. and Vandenberghe, L. (2004) Convex Optimization. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511804441

[5] Nocedal, J. and Wright, S.J. (2006) Quadratic Programming. In: Numerical Optimization, Springer, New York, 448-492.

[6] Shustrova, A. (2015) Primal-Dual Interior Point Methods for Quadratic Programming. Doctoral Thesis, University of California, San Diego.

[7] Hoffman, L.D. and Bradley, G.L. (1992) Calculus for Business, Economics, and the Social and Life Sciences. McGraw-Hill, New York.

[8] Beavis, B. and Dobbs, I. (1990) Static Optimization. In: Optimization and Stability Theory for Economic Analysis, Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511559402

[9] Hillier, F.S. and Price, C.C. (2012) International Series in Operations Research & Management Science. Springer, Berlin/Heidelberg.

[10] Jakovetic, D., Xavier, J. and Moura, J.M.F. (2014) Fast Distributed Gradient Methods. IEEE Transactions on Automatic Control, 59, 1131-1146. https://doi.org/10.1109/TAC.2014.2298712

[11] Gao, X., Hao, Y.Z., Wang, Y., Zuo, X. and Chen, T. (2021) A Lagrange Relaxation-Based Decomposition Algorithm for Large-Scale Offshore Oil Production Planning Optimization. Processes, 9, Article 1257. https://doi.org/10.3390/pr9071257

[12] Curtis, F.E., Jiang, H. and Robinson, D.P. (2015) An Adaptive Augmented Lagrangian Method for Large-Scale Constrained Optimization. Mathematical Programming, Series A, 152, 201-245. https://doi.org/10.1007/s10107-014-0784-y

[13] Liang, X., Hu, J., Zhong, W. and Qian, J. (2008) A Modified Augmented Lagrange Multiplier Method for Large-Scale Optimization. Developments in Chemical Engineering and Mineral Processing, 9, 115-124. https://doi.org/10.1002/apj.5500090214

[14] Burman, E., Hansbo, P. and Larson, M.G. (2023) The Augmented Lagrangian Method as a Framework for Stabilised Methods in Computational Mechanics. Archives of Computational Methods in Engineering, 30, 2579-2604. https://doi.org/10.1007/s11831-022-09878-6

[15] Bertsekas, D.P. (1996) Constrained Optimization and Lagrange Multiplier Methods. Athena Scientific, Nashua.

[16] Fortin, M. and Glowinski, R. (1983) Augmented Lagrangian Methods: Numerical Solution of Boundary-Value Problems. Elsevier Science Publishers, New York.

[17] Komzsik, L. and Chiang, K. (1993) The Effects of a Lagrange Multiplier Approach in MSC/NASTRAN on Large-Scale Parallel Applications. Computing Systems in Engineering, 4, 399-403. https://doi.org/10.1016/0956-0521(93)90008-K

[18] Schittkowski, K. (2012) More Test Examples for Nonlinear Programming Codes. Vol. 282, Springer Science & Business Media, Berlin.

[19] LibreTexts (2023) Constrained Optimization: Lagrange Multipliers. https://math.libretexts.org/

[20] Trench, W.F. (2012) The Method of Lagrange Multipliers. Harper & Row, New York.

[21] Berkeley Math (2020) University of California.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.