Guignard’s Constraint Qualification (GCQ) and Multiobjective Optimisation Problems
1. Introduction
Many authors derive first order necessary conditions for multiple objective functions with the same techniques as those used for a scalar-valued objective function [1] [2] [3] [4] [5], but do not provide a useful tool for developing necessary and sufficient conditions for the multiobjective optimisation problem.
In this paper, we recall and generalise first order optimality conditions. The objective function is taken to be a vector function, classical inequality and equality constraints are considered, and the involved functions are supposed to satisfy suitable differentiability assumptions.
Many papers [6] - [13] have been devoted to studying first order necessary conditions for multiobjective problems with a set constraint; the basic idea is to approximate the constraints with the contingent cone. We review these results carefully, stressing the meaningful differences with the scalar-objective case.
Scalarisation techniques are often used in multiobjective optimisation problems. Many papers have been published in which scalarisation approaches are used and non-negative Lagrange multipliers are associated with the vector-valued objective functions. It is then possible that, due to some zero multipliers, the corresponding components of the vector-valued objective function play no role in the necessary conditions of multiobjective problems. To get positive Lagrange multipliers, Maeda [14] first, and then Preda [15], introduced some special sets and derived generalised regularity conditions for the first-order KKT-type necessary conditions that ensure the existence of positive Lagrange multipliers in first order multiobjective optimality conditions.
This paper recalls first order necessary and sufficient conditions for multiobjective optimisation problems. To derive necessary conditions, we use well-known constraint qualifications such as the Abadie constraint qualification and the Guignard constraint qualification, which are among the most general assumptions used in the literature to establish optimality conditions.
In Section 2 of this paper, the basic notation used throughout our analysis is presented. Then, in Section 3, first order necessary conditions are introduced, whereas constraint qualifications, with counterexamples showing the gap between scalar and multiobjective optimisation problems, are presented in Section 4.
2. Basic Notations
This section introduces some notations and definitions used throughout the paper.
For $x, y \in \mathbb{R}^n$, where $\mathbb{R}^n$ is $n$-dimensional Euclidean space, we use the following relations to compare $n$-dimensional points:
$x \leqq y$, if and only if $x_i \le y_i$ for all $i = 1, \dots, n$;
$x \le y$, if and only if $x \leqq y$ and $x \ne y$;
$x < y$, if and only if $x_i < y_i$ for all $i = 1, \dots, n$.
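The three componentwise orderings above can be sketched directly in code. The following is an illustrative helper (our own, not from the paper), treating points of $\mathbb{R}^n$ as tuples:

```python
# Sketch of the three componentwise orderings on R^n used to compare
# objective vectors in multiobjective optimisation.

def leq(x, y):
    """The relation x <= y in every component (written x "less-or-equal" y)."""
    return all(xi <= yi for xi, yi in zip(x, y))

def le(x, y):
    """leq(x, y) and x != y: no worse everywhere, strictly better somewhere."""
    return leq(x, y) and x != y

def lt(x, y):
    """Strictly smaller in every component."""
    return all(xi < yi for xi, yi in zip(x, y))
```

For instance, `leq((1, 2), (1, 3))` and `le((1, 2), (1, 3))` hold, while `lt((1, 2), (1, 3))` does not, since the first components are equal.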
Now, we consider the following multiobjective optimisation Problem P:
Minimise $f(x)$, subject to the constraint set $X$ such that
$X = \{ x \in \mathbb{R}^n : g(x) \leqq 0,\ h(x) = 0 \}$.
Now, let $f : \mathbb{R}^n \to \mathbb{R}^p$, $g : \mathbb{R}^n \to \mathbb{R}^m$ and $h : \mathbb{R}^n \to \mathbb{R}^q$ be continuously differentiable vector-valued functions defined by $f = (f_1, \dots, f_p)$, $g = (g_1, \dots, g_m)$ and $h = (h_1, \dots, h_q)$, where $f_i : \mathbb{R}^n \to \mathbb{R}$ for $i = 1, \dots, p$, $g_j : \mathbb{R}^n \to \mathbb{R}$ for $j = 1, \dots, m$ and $h_k : \mathbb{R}^n \to \mathbb{R}$ for $k = 1, \dots, q$. Assume that the active constraints set at a feasible point $x_0$ is $I(x_0) = \{ j : g_j(x_0) = 0 \}$, for $j = 1, \dots, m$.
A solution of Problem (P) is called an efficient point; a more general notion of solution of (P) is the weakly efficient point. The following definitions are the standard ones used for multiobjective optimisation problems.
Definition 2.1. A point $x_0 \in X$ is called an efficient solution to Problem (P) if there is no $x \in X$ such that $f(x) \le f(x_0)$.
A point $x_0 \in X$ is called a weakly efficient solution to Problem (P) if there is no $x \in X$ such that $f(x) < f(x_0)$.
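Definition 2.1 can be tested by brute force on a finite sample of the feasible set. The bi-objective function and grid below are an illustrative choice of ours, not the paper's; the efficient set of this problem is $[0, 2]$:

```python
# Hedged sketch: testing Definition 2.1 on a sampled feasible set.
# The bi-objective function f is an illustrative choice, not the paper's.

def dominates(fx, fy):
    # fx "dominates" fy: no worse in every component, strictly better in one
    return all(a <= b for a, b in zip(fx, fy)) and fx != fy

def is_efficient(x0, feasible, f):
    fx0 = f(x0)
    return not any(dominates(f(x), fx0) for x in feasible)

f = lambda x: (x**2, (x - 2)**2)          # efficient set is [0, 2]
grid = [i / 10 for i in range(-20, 41)]   # finite sample of the feasible set
```

Here `is_efficient(1.0, grid, f)` holds, since improving one squared distance worsens the other, while `is_efficient(3.0, grid, f)` fails because the point `2.0` dominates it.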
Because of the ordering relations, a problem may have all points as efficient solutions, or no efficient solution at all, as the following example shows.
Example 2.1 Consider a problem of the form (P) in which all feasible points are efficient solutions (see Figure 1), and a second problem in which no feasible point is an efficient solution (see Figure 2).
Figure 1. Efficient points.
Figure 2. Shaded feasible region.
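One classical instance of the first situation (our own illustration, not necessarily the paper's Example 2.1) is $f(x) = (x, -x)$ on the real line: improving one component always worsens the other, so every feasible point is efficient. A brute-force check over a sample:

```python
# Illustrative instance (our choice): for f(x) = (x, -x) on R, every
# feasible point is an efficient solution, since decreasing one objective
# necessarily increases the other.

def dominates(fx, fy):
    return all(a <= b for a, b in zip(fx, fy)) and fx != fy

f = lambda x: (x, -x)
grid = [i / 10 for i in range(-30, 31)]

all_efficient = all(
    not any(dominates(f(y), f(x)) for y in grid) for x in grid
)
```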
3. First Order Necessary Conditions
The direct study of the constraint set $X$ and its image $f(X)$ is a difficult task and, therefore, it is reasonable to consider suitable approximations of these sets. Thus, the following concept plays a vital role in developing optimisation theories [6] [14] [16] [17].
Definition 3.1 Let $X$ be a subset of $\mathbb{R}^n$. The contingent cone to $X$ at $x_0 \in \operatorname{cl} X$ is the set defined by
$T(X, x_0) = \{ d \in \mathbb{R}^n : \exists\, t_n \downarrow 0,\ d_n \to d \text{ such that } x_0 + t_n d_n \in X \text{ for all } n \}$,
where $\operatorname{cl} X$ denotes the closure of $X$. $T(X, x_0)$ is a nonempty closed cone and enjoys some important properties [6] [14] [17] [18]: the contingent cone is isotone, that is, $T(X_1, x_0) \subseteq T(X_2, x_0)$ whenever $X_1 \subseteq X_2$, and it is convex if the original set is convex.
The analysis of optimality conditions can be deepened by developing the connection between the contingent cone and the following linearising cone (see [1] [18] [19]).
Definition 3.2 The linearising cone to $X$ at $x_0 \in X$ is the set defined by
$L(X, x_0) = \{ d \in \mathbb{R}^n : \nabla g_j(x_0)^T d \le 0 \text{ for } j \in I(x_0),\ \nabla h_k(x_0)^T d = 0 \text{ for } k = 1, \dots, q \}$.
Here $L(X, x_0)$ is a nonempty closed convex cone.
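Membership in the linearising cone is checkable directly from the gradients of the active constraints. The following sketch uses illustrative placeholder gradients (our own, not from the paper):

```python
# Sketch of a membership test for the linearising cone L(X, x0) of
# Definition 3.2.  The gradient data below are illustrative placeholders.

def in_linearising_cone(d, active_g_grads, h_grads, tol=1e-9):
    # each active inequality gradient must make a non-positive inner
    # product with d; each equality gradient must be orthogonal to d
    ok_ineq = all(sum(gi * di for gi, di in zip(g, d)) <= tol
                  for g in active_g_grads)
    ok_eq = all(abs(sum(hi * di for hi, di in zip(h, d))) <= tol
                for h in h_grads)
    return ok_ineq and ok_eq

# Example data: one active inequality gradient and one equality gradient.
active_g = [(1.0, 0.0)]   # requires d1 <= 0
eqs = [(0.0, 1.0)]        # requires d2 == 0
```

With this data, $L$ is the ray $\{ d : d_1 \le 0,\ d_2 = 0 \}$, so $(-1, 0)$ belongs to it while $(1, 0)$ and $(-1, 0.5)$ do not.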
The following lemma is a well-known established property for scalar and multiobjective optimisation problems; the property and its proof can be found in the literature, see [14] [15] [20] [21] [22]. However, we have included the statement and proof here for the reader's convenience.
Lemma 3.1 If $x_0$ is an efficient solution of (P), then for any direction $d \in T(X, x_0)$ the system

$\nabla f_i(x_0)^T d < 0, \quad i = 1, \dots, p$ (1)

has no solution $d \in T(X, x_0)$.

Proof. Let $d \in T(X, x_0)$, that is, $d = \lim_{n \to \infty} (x_n - x_0)/t_n$, where $x_n \in X$, $x_n \ne x_0$ for each $n$, $x_n \to x_0$ and $t_n \downarrow 0$. By differentiability of $f$ at $x_0$, we get

$f(x_n) - f(x_0) = t_n \nabla f(x_0)^T d + t_n \alpha(x_n)$, (2)

where $\alpha(x_n) \to 0$ as $x_n \to x_0$. Since $x_0$ is an efficient solution, there is no $x_n \in X$ with $f(x_n) \le f(x_0)$, and so, from (2), for each $n$ there is some index $i$ with $f_i(x_n) - f_i(x_0) \ge 0$; passing to a subsequence, we may assume the same index $i$ works for all $n$. Since $t_n > 0$, dividing by $t_n$ and taking the limit as $n \to \infty$, the above inequality implies $\nabla f_i(x_0)^T d \ge 0$. It implies that (1) has no solution. This completes the proof.
The converse is no longer true, as the following example shows.
Example 3.1 Consider a problem of the form (P) with a feasible point $x_0$ at which system (1) has no solution $d \in T(X, x_0)$, yet $x_0$ is not an efficient solution to the problem; it is only a weakly efficient point (see Figure 3).
The relation between the contingent cone and the linearising cone is presented as follows.
Lemma 3.2 Let $x_0 \in X$. Then $T(X, x_0) \subseteq L(X, x_0)$ (see [14] for its proof).
Now we introduce the Motzkin theorem of the alternative, which is used to derive the necessary conditions of optimality. Theorems of this type relate the unsolvability of one system to the solvability of another one; see [11] [23] for the proof.
Theorem 3.1 (Motzkin Theorem of the Alternative)
Let $A$, $B$, and $C$ be given matrices, with $A$ being nonvacuous. Then either
1) $Ax < 0$, $Bx \leqq 0$ and $Cx = 0$ has a solution $x$,
or
2) $A^T y_1 + B^T y_2 + C^T y_3 = 0$, $y_1 \ge 0$, $y_2 \geqq 0$ has a solution $y_1$, $y_2$, $y_3$,
but never both.
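A tiny numeric illustration of the alternative, with hand-picked toy matrices of our own (not from the paper): the two rows of $A$ demand $x_1 < 0$ and $-x_1 < 0$ simultaneously, so system 1) is unsolvable, and multipliers for system 2) can be exhibited explicitly:

```python
# Toy illustration of Theorem 3.1: system 1) Ax < 0, Bx <= 0, Cx = 0 is
# unsolvable (rows of A are contradictory), so system 2) must be solvable;
# we verify A^T y1 + B^T y2 + C^T y3 = 0 with y1 >= 0, y1 != 0, y2 >= 0.

A = [[1.0, 0.0], [-1.0, 0.0]]
B = [[0.0, 1.0]]
C = [[0.0, 1.0]]

def matvec_T(M, y):
    """Transpose-vector product M^T y (pure python)."""
    n = len(M[0])
    return [sum(M[i][j] * y[i] for i in range(len(M))) for j in range(n)]

y1 = [1.0, 1.0]   # y1 >= 0 componentwise and not all zero
y2 = [0.0]        # y2 >= 0
y3 = [0.0]

residual = [a + b + c for a, b, c in zip(
    matvec_T(A, y1), matvec_T(B, y2), matvec_T(C, y3))]
```

The residual is the zero vector, confirming that alternative 2) holds for this data.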
4. Constraint Qualifications
Our aim is to obtain Lagrange multipliers associated with the objective functions which are all positive, or at least such that $\lambda \ne 0$, that is, at least one multiplier is non-zero. To ensure that the multipliers associated with the objective functions are not all equal to zero, and to establish necessary conditions for optimality, we require some assumptions called constraint qualifications (CQ). This is because if $\lambda = 0$, then the objective function disappears from the multiplier rule and any other function could have played its role. In our analysis, we consider the two well-known constraint qualifications introduced below.
Definition 4.1 The constraint set $X$ satisfies the Abadie Constraint Qualification (ACQ) at $x_0 \in X$ if $L(X, x_0) \subseteq T(X, x_0)$.
Definition 4.2 The constraint set $X$ satisfies the Guignard’s Constraint Qualification (GCQ) at $x_0 \in X$ if $L(X, x_0) \subseteq \operatorname{cl}\operatorname{conv} T(X, x_0)$.
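The gap between the two qualifications can be probed numerically on a classic set from the literature (our own illustration, not the paper's elided example): $X = \{ x \in \mathbb{R}^2 : -x_1 \le 0,\ -x_2 \le 0,\ x_1 x_2 \le 0 \}$, the union of the two nonnegative coordinate axes. At the origin, the contingent cone is the (nonconvex) union of the two axes, while the linearising cone is the whole first quadrant, so ACQ fails while GCQ holds:

```python
# Heuristic sketch (toy set, our own choice): at the origin of
# X = {x : x1 >= 0, x2 >= 0, x1*x2 <= 0}, T(X,0) is the union of the two
# nonnegative axes and L(X,0) is the first quadrant, so ACQ fails at 0
# while GCQ holds (cl conv T(X,0) is the first quadrant).

def feasible(x, tol=1e-12):
    return x[0] >= -tol and x[1] >= -tol and x[0] * x[1] <= tol

def in_contingent_cone(d, t=1e-3, tol=1e-12):
    # heuristic: d is tangent here if a small step along d stays feasible
    return feasible((t * d[0], t * d[1]), tol)

def in_linearising_cone(d):
    # active gradients at 0: (-1, 0), (0, -1) and (0, 0) for x1*x2
    return -d[0] <= 0 and -d[1] <= 0

d = (1.0, 1.0)   # lies in L(X, 0) but not in T(X, 0): ACQ fails at 0
acq_fails = in_linearising_cone(d) and not in_contingent_cone(d)
# d is a convex combination of the tangent directions (2, 0) and (0, 2),
# so it lies in cl conv T(X, 0): consistent with GCQ holding at 0.
```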
Lemma 4.1 Let $x_0$ be any feasible solution to Problem (P) and assume that the ACQ holds at $x_0$. If $x_0$ is an efficient solution to Problem (P), then the system

$\nabla f_i(x_0)^T d < 0, \quad i = 1, \dots, p, \quad d \in L(X, x_0)$ (3)

has no solution.

Proof. By Lemma 3.1, since $x_0$ is an efficient solution, system (1) has no solution $d \in T(X, x_0)$. If the ACQ holds at $x_0$, then $L(X, x_0) \subseteq T(X, x_0)$. This completes the proof that (3) has no solution.
Theorem 4.1 Suppose that the ACQ holds at $x_0 \in X$. If $x_0$ is an efficient solution to Problem (P), then there exist vectors $\lambda \in \mathbb{R}^p$, $\mu \in \mathbb{R}^m$ and $\nu \in \mathbb{R}^q$ such that

$\lambda^T \nabla f(x_0) + \mu^T \nabla g(x_0) + \nu^T \nabla h(x_0) = 0$, (4)

$\mu^T g(x_0) = 0$, (5)

$\lambda \ge 0, \quad \mu \geqq 0$. (6)

Proof. Using Theorem 3.1 and Lemma 4.1, there exist $\bar{y}_1 \in \mathbb{R}^p$ with $\bar{y}_1 \ge 0$, $\bar{y}_2 \geqq 0$ and $\bar{y}_3 \in \mathbb{R}^q$ such that

$\bar{y}_1^T \nabla f(x_0) + \sum_{j \in I(x_0)} \bar{y}_{2j} \nabla g_j(x_0)^T + \bar{y}_3^T \nabla h(x_0) = 0$.

By setting $\lambda = \bar{y}_1$, $\mu_j = \bar{y}_{2j}$ for $j \in I(x_0)$, $\mu_j = 0$ for $j \notin I(x_0)$ and $\nu = \bar{y}_3$, we have (4) and (6). Since $g_j(x_0) = 0$ for $j \in I(x_0)$ and $\mu_j = 0$ for $j \notin I(x_0)$, we have $\mu^T g(x_0) = 0$, which completes the proof.
Therefore, the ACQ and the Motzkin theorem of the alternative allow restricting the multiplier rule to $\lambda \ge 0$.
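The multiplier rule (4)-(6) can be illustrated on a toy unconstrained bi-objective problem of our own choosing (so $\mu$ and $\nu$ are vacuous): minimise $(x^2, (x-1)^2)$ over $\mathbb{R}$, whose efficient set is $[0, 1]$. At $x_0 = 0.25$ the two gradients point in opposite directions and are balanced by strictly positive multipliers:

```python
# Illustrative check of (4)-(6) on an unconstrained bi-objective toy
# problem (our own choice): minimise (x^2, (x-1)^2) over R.  At
# x0 = 0.25 the gradients are 0.5 and -1.5, balanced by lambda = (3, 1).

x0 = 0.25
grad_f = (2 * x0, 2 * (x0 - 1))       # (0.5, -1.5)
lam = (3.0, 1.0)                      # lambda >= 0, not all zero

stationarity = sum(l * g for l, g in zip(lam, grad_f))   # equation (4)
```

Since both multipliers are strictly positive, both objectives genuinely participate in the necessary condition at $x_0$.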
Example 4.1 Consider a problem of the form (P) for which, at a feasible point $x_0$ (see Figure 4), it is easily verified that:
1) $x_0$ is an efficient solution to the problem;
2) $L(X, x_0) \not\subseteq T(X, x_0)$, so the Abadie Constraint Qualification (ACQ) does not hold at $x_0$;
3) $L(X, x_0) \subseteq \operatorname{cl}\operatorname{conv} T(X, x_0)$, and thus Guignard’s Constraint Qualification (GCQ) holds at $x_0$;
4) $x_0$ does not satisfy Theorem 4.1.
Remarks: In scalar optimisation the gradient of the objective lies in an open half-space, and as a result the multiplier rule holds under both the ACQ and the GCQ when $p = 1$. For $p > 1$, Theorem 4.1 does hold under the ACQ; however, the assumption of GCQ alone does not guarantee that Theorem 4.1 holds. In this instance, the GCQ cannot be used in multiobjective optimisation problems. In Example 4.1 we see that, even though the GCQ holds at $x_0$, the KKT conditions do not hold at that point.
Under certain situations the GCQ can be used for multiobjective optimisation problems, as illustrated by the following examples.
Example 4.2 Consider a problem of the form (P) for which it is easily verified that:
1) $x_0$ is a weakly efficient solution to the problem;
2) the GCQ holds at $x_0$;
3) the conclusion of Lemma 4.1 holds at $x_0$.
Example 4.3 Consider a problem of the form (P) for which it is easily verified that:
1) $x_0$ is an efficient solution to the problem (see Figure 5);
2) the GCQ holds at $x_0$;
3) the conclusion of Lemma 4.1 holds at $x_0$.
5. Sufficient Conditions for Efficiency
To establish the necessary conditions, we need some sort of constraint qualification but no convexity assumptions. Necessary conditions generally do not turn out to be also sufficient unless additional assumptions hold. In this section, we use the concepts of quasiconvexity (quasiconcavity) and pseudoconvexity for the sufficient condition; for the definitions of quasi- and pseudoconvexity we refer to [17]. Many authors have generalised the sufficient conditions by using weaker assumptions such as generalised convex functions; see [1] [6] [23] [24].
Now we will check the sufficiency of Lemma 4.1; let us state it as follows.
Lemma 5.1 If the system

$\nabla f_i(x_0)^T d < 0, \quad i = 1, \dots, p, \quad d \in T(X, x_0)$ (7)

has no solution, then $x_0$ is an efficient solution to Problem (P). Lemma 5.1 is no longer true for multiobjective optimisation problems, as the following example shows.
Example 5.1 Recall Example 3.1. At the point $x_0$ it is easily verified that:
1) system (7) has no solution $d \in T(X, x_0)$;
2) $T(X, x_0) = L(X, x_0)$, so the Abadie Constraint Qualification (ACQ) holds at $x_0$, and therefore Guignard’s Constraint Qualification (GCQ) also holds at $x_0$;
3) $x_0$ is not an efficient solution to the problem; it is only a weakly efficient point (see Figure 3).
Unfortunately, there is no suitable theorem of the alternative which allows turning Lemma 5.1 into a sufficient multiplier rule: convexity assumptions have to be added to achieve this type of result [8] [10].
Theorem 5.1 Let $x_0$ be a feasible solution of Problem (P). Let $K^+ = \{ k : \nu_k > 0 \}$ and $K^- = \{ k : \nu_k < 0 \}$. Suppose that $f$ is pseudoconvex at $x_0$, the $g_j$, $j \in I(x_0)$, are quasiconvex at $x_0$, the $h_k$ for $k \in K^+$ are quasiconvex at $x_0$ and the $h_k$ for $k \in K^-$ are quasiconcave at $x_0$. If there exist $\lambda \ge 0$, $\mu \geqq 0$ and $\nu$ such that (4) and (5) hold at $x_0$, then $x_0$ is an efficient solution for Problem (P) on $X$.

Proof. Let $x \in X$. Since $g_j(x) \le 0$ and $g_j(x_0) = 0$ for $j \in I(x_0)$, we have $g_j(x) \le g_j(x_0)$ for $j \in I(x_0)$. It follows, by the quasiconvexity of the $g_j$, $j \in I(x_0)$, at $x_0$, that

$\nabla g_j(x_0)^T (x - x_0) \le 0, \quad j \in I(x_0)$. (8)

Similarly, since $h_k(x) = h_k(x_0) = 0$ for all $k$, the $h_k$ for $k \in K^+$ are quasiconvex at $x_0$ and the $h_k$ for $k \in K^-$ are quasiconcave at $x_0$, we have

$\nabla h_k(x_0)^T (x - x_0) \le 0 \quad \text{for } k \in K^+$ (9)

and

$\nabla h_k(x_0)^T (x - x_0) \ge 0 \quad \text{for } k \in K^-$. (10)

Note that, by (5) and $\mu \geqq 0$, we have $\mu_j = 0$ for $j \notin I(x_0)$. Multiplying (8) by $\mu_j \ge 0$, (9) by $\nu_k > 0$ and (10) by $\nu_k < 0$, and adding, we get

$\big( \mu^T \nabla g(x_0) + \nu^T \nabla h(x_0) \big) (x - x_0) \le 0$. (11)

Multiplying (4) by $(x - x_0)$ and comparing with (11), we write

$\lambda^T \nabla f(x_0) (x - x_0) \ge 0 \quad \text{for all } x \in X$,

which, by the pseudoconvexity of $f$ at $x_0$, implies that there is no $x \in X$ with $f(x) \le f(x_0)$. Hence $x_0$ is an efficient solution and the proof is complete.
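The conclusion of Theorem 5.1 can be checked numerically on a convex toy problem of our own choosing: $f(x) = (x^2, (x-1)^2)$ is convex, hence pseudoconvex, and $x_0 = 0.25$ satisfies the multiplier rule (4) with $\lambda = (3, 1) > 0$ (there are no constraints, so $\mu$ and $\nu$ are vacuous). A brute-force search confirms that no sampled point dominates $x_0$:

```python
# Sketch verifying Theorem 5.1's conclusion on a convex toy problem
# (our own choice, not from the paper): since x0 = 0.25 satisfies the
# multiplier rule with positive lambda and f is convex, x0 should be
# efficient; we confirm no sampled point dominates it.

f = lambda x: (x * x, (x - 1) * (x - 1))

def dominates(fx, fy):
    return all(a <= b for a, b in zip(fx, fy)) and fx != fy

x0 = 0.25
fx0 = f(x0)
grid = [i / 100 for i in range(-200, 301)]
no_dominator = not any(dominates(f(x), fx0) for x in grid)
```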
6. Conclusion
This paper reviewed first-order optimality conditions for multiple objective functions. Combining the results of [25] [26], we have derived KKT-type necessary conditions. Furthermore, the concepts of constraint qualification required in the necessary conditions are introduced. The main contribution of this paper concerns the applicability of the GCQ to multiobjective optimisation problems: counterexamples are provided to show the restrictions that do not allow one to use the well-known GCQ in multiobjective optimisation problems. In our future research, we intend to investigate the sufficiency of the KKT conditions for multiobjective optimisation problems under more general assumptions.