Generating Epsilon-Efficient Solutions in Multiobjective Optimization by Genetic Algorithm

Abstract

We develop a new evolutionary method of generating epsilon-efficient solutions of a continuous multiobjective programming problem. This is achieved by discretizing the problem and then using a genetic algorithm with some derived probabilistic stopping criteria to obtain all minimal solutions for the discretized problem. We prove that these minimal solutions are epsilon-efficient solutions of the original problem. We also present some computational examples illustrating the efficiency of our method.

1. Introduction

The goal of multiobjective optimization, also called vector optimization, is to find a certain set of optimal (efficient) elements of a nonempty subset of a partially ordered linear space. However, finding an exact description of this set often turns out to be practically impossible or computationally too expensive. Therefore, many researchers have focused their efforts on approximation procedures and approximate solutions (see e.g. [1] [2] and references therein).

More than three decades ago, the notion of ε-efficiency was introduced by Loridan [3] for multiobjective optimization problems (MOPs). Since then, this concept has been used, e.g., in [2] [4] [5]. To deal with a continuous multiobjective optimization problem, one has to consider a finite discretization of the set of feasible points (see Section 3 below). Discretization of the search space is one of the effective techniques for obtaining approximate solutions of MOPs (e.g. [6] [7]). The aim of the present paper is to develop a method of generating ε-efficient solutions (as defined in [4]) of a continuous MOP. This is achieved by discretizing the problem and then using a genetic algorithm according to the scheme described in [8]. In this way, some probabilistic stopping criteria are obtained for this procedure. They are given in the form of an upper bound on the number of iterations necessary to ensure finding all minimal elements of a finite partially ordered set with a prescribed probability. Supporting theoretical results are established and some computational examples are provided.

2. Stopping Criteria for Genetic Algorithms

In this section we review the results of [8] on probabilistic stopping criteria, which will be applied later, in Section 4, to a continuous multiobjective optimization problem.

2.1. Random Heuristic Search

The RHS (Random Heuristic Search) algorithm, described in [9], is defined by a fixed initial population $\hat{p}$ and a transition rule $\tau$ which, for a given population $p$, determines a new population $\tau(p)$. Iterating $\tau$, we obtain a sequence of populations:

$\hat{p},\ \tau(\hat{p}),\ \tau^{2}(\hat{p}),\ \ldots$ (1)

Each population consists of a finite number of individuals which are elements of a given finite set $\Omega$ called the search space. Populations are multisets, which means that the same individual may appear more than once in a given population.

To simplify the notation, it is convenient to identify $\Omega$ with a subset of integers: $\Omega = \{0, 1, \ldots, n-1\}$. The number $n$ is called the size of the search space. Then a population can be represented as an incidence vector (see [10], p. 141):

$v = (v_0, v_1, \ldots, v_{n-1})^{T},$ (2)

where $v_i$ is the number of copies of individual $i \in \Omega$ in the population ($v_i = 0$ if the $i$-th individual does not appear in the population). The size of population $v$ is the number

$r = \sum_{i=0}^{n-1} v_i.$ (3)

We assume that all the populations appearing in sequence (1) have the same size $r$. Dividing each component of the incidence vector (2) by $r$, we obtain the population vector

$p = (p_0, p_1, \ldots, p_{n-1})^{T},$ (4)

where $p_i = v_i / r$ is the proportion of individual $i \in \Omega$ in the population. In this way, we obtain a more general representation of the population which is independent of population size. It follows that each vector $p$ of this type belongs to the set

$\Lambda := \left\{ x \in \mathbb{R}^{n} : x_i \geq 0 \ (\forall i),\ \sum_{i=0}^{n-1} x_i = 1 \right\},$ (5)

which is a simplex in $\mathbb{R}^{n}$. However, not all points of this simplex correspond to finite populations. For a fixed $r$, the following subset of $\Lambda$ consists of all populations of size $r$ (see [9], p. 7):

$\Lambda_r := \frac{1}{r}\left\{ x \in \mathbb{Z}^{n} : x_i \in \{0, 1, 2, \ldots\} \ (\forall i),\ \sum_{i=0}^{n-1} x_i = r \right\}.$ (6)

We now define the mapping

$G : \Lambda \to \Lambda,$

called heuristic ([9], p. 9) or generational operator ([10], p. 144), in the following way: for a vector $p \in \Lambda$ representing the current population, $G(p)$ is the probability distribution that is sampled independently $r$ times (with replacement) to produce the next population after $p$. For each of these $r$ choices, the probability of selecting an individual $i \in \Omega$ is equal to $G(p)_i$, the $i$-th component of $G(p)$.

A transition rule $\tau$ is called admissible if it is a composition of a heuristic $G$ with drawing a sample in the way described above. Symbolically,

$\tau(p) = \mathrm{sample}(G(p)), \quad p \in \Lambda.$ (7)

Of course, a transition rule defined this way is nondeterministic, i.e., by applying it repeatedly to the same vector $p$, we can obtain different results. It should also be noted that, although $G(p)$ may not belong to $\Lambda_r$, the result of drawing an $r$-element sample is always a population of size $r$; therefore, it follows from (7) that $\tau(p) \in \Lambda_r$.
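To make the sampling step concrete, the following short Python sketch (ours, not part of the paper) simulates one admissible transition of the form (7): a hypothetical heuristic value $G(p)$ is supplied as a probability vector over $\Omega$, and the $r$-fold sampling with replacement is a single multinomial draw.

```python
import numpy as np

def sample_next_population(G_p, r, rng=None):
    """Simulate one RHS transition tau(p) = sample(G(p)) as in (7).

    G_p : length-n probability vector, the heuristic value G(p) over Omega.
    r   : population size.
    Returns the population vector of the sampled population (an element of Lambda_r).
    """
    rng = rng or np.random.default_rng()
    counts = rng.multinomial(r, G_p)   # incidence vector v of the new population
    return counts / r                  # population vector p = v / r

# Hypothetical example: search space of size n = 4, uniform heuristic, r = 10.
print(sample_next_population(np.ones(4) / 4, r=10))  # e.g. [0.2 0.3 0.1 0.4]
```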

2.2. The Case of a Genetic Algorithm

In this subsection we consider a genetic algorithm as a particular case of the RHS. We assume that a single iteration of the genetic algorithm produces the next population from the current population as follows:

1) Choose two parents from the current population by using a selection method which can be described by some heuristic (see [9] , 4.2).

2) Crossover the two parents to obtain a child.

3) Mutate the child.

4) Put the mutated child into the next population.

5) If the next population contains fewer than $r$ members, return to step 1.

The only difference between the iteration described above and the iteration of the Simple Genetic Algorithm defined in ([9], p. 44) is that in our version mutation is done after crossover.
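Schematically, one such iteration (steps 1-5 above) can be written as the following Python skeleton. This is only a structural sketch of our own; `select_two`, `crossover` and `mutate` stand in for whatever concrete operators are used.

```python
def ga_iteration(population, r, select_two, crossover, mutate, rng):
    """One iteration of the genetic algorithm described above (steps 1-5).

    population : list of r individuals (elements of the search space Omega)
    select_two, crossover, mutate : operator callables (implementation-dependent)
    """
    next_population = []
    while len(next_population) < r:                       # step 5
        parent_a, parent_b = select_two(population, rng)  # step 1: selection
        child = crossover(parent_a, parent_b, rng)        # step 2: crossover
        child = mutate(child, rng)                        # step 3: mutation
        next_population.append(child)                     # step 4
    return next_population
```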

To derive our stopping criteria, we will use some properties of mutation, which is generally understood as changing one element of the search space to another with a certain probability. The way selection and crossover are implemented is not important for our model, so we do not discuss them here (we refer the reader to [10], Chapter 5). The only requirement is that the composition of the three operations (selection, crossover, mutation) can be described in terms of some heuristic.

We assume that mutation consists in replacing a given individual from $\Omega$ by another individual, with a prescribed probability. Let us denote by $u_{i,j}$ the probability that individual $i$ mutates to $j$. In this way, we obtain an $n \times n$ matrix $U = [u_{i,j}]_{i,j \in \Omega}$. We denote by $\Pr(q \mid p) = \Pr(\tau(p) = q)$ the probability of obtaining a population $q$ in the current iteration of the RHS algorithm provided the previous population is $p$, and by $\Pr([j] \mid p) = G(p)_j$ the probability of selecting an individual $j \in \Omega$ by a single sampling of the probability distribution $G(p)$. In particular, the probability of generating individual $j$ from population $p$ by successive application of selection, crossover and mutation is equal to (see [8], formula (7))

$G(p)_j = \Pr([j] \mid p)_{scm} = \sum_{i=0}^{n-1} u_{i,j} \Pr([i] \mid p)_{sc},$ (8)

where the subscript sc means that we are dealing with the composition of selection and crossover, and the subscript scm indicates the composition of selection, crossover and mutation. To get a whole new population, one should draw an $r$-element sample from the probability distribution (8). The probability of generating a population $q$ in this way is then equal to (see [8], formula (8))

$\Pr(q \mid p)_{scm} = r!\,\prod_{j=0}^{n-1} \frac{\bigl(\Pr([j] \mid p)_{scm}\bigr)^{r q_j}}{(r q_j)!}.$ (9)
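The two formulas translate directly into code. In the sketch below (our own, with a hypothetical mutation matrix and an arbitrary selection-crossover distribution), `heuristic_scm` evaluates (8) and `prob_next_population` evaluates the multinomial expression (9) for a given population vector $q$.

```python
import numpy as np
from math import factorial, prod

def heuristic_scm(pr_sc, U):
    """Formula (8): G(p)_j = sum_i u_{i,j} * Pr([i]|p)_sc.

    pr_sc : length-n vector of selection/crossover probabilities Pr([i]|p)_sc
    U     : n x n mutation matrix, U[i, j] = probability that i mutates to j
    """
    return pr_sc @ U

def prob_next_population(q, p_scm, r):
    """Formula (9): probability of obtaining population vector q (counts r*q_j)."""
    counts = np.rint(r * np.asarray(q)).astype(int)
    return factorial(r) * prod(p ** int(c) / factorial(int(c))
                               for p, c in zip(p_scm, counts))

# Hypothetical data: n = 3, r = 4, uniform mutation matrix (beta = 1/3).
U = np.full((3, 3), 1 / 3)
p_scm = heuristic_scm(np.array([0.5, 0.3, 0.2]), U)        # here (1/3, 1/3, 1/3)
print(prob_next_population([0.5, 0.25, 0.25], p_scm, r=4))  # multinomial pmf, about 0.148
```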

2.3. Stopping Criteria for Finding All Minimal Elements of Ω

We now consider the following multiobjective optimization problem. Let $\Omega$ be the finite search space defined in Subsection 2.1, and let $f : \Omega \to F$ be a function being minimized, where $F = \{f(\omega) : \omega \in \Omega\}$ and $(F, \leq)$ is a partially ordered set. An element $\bar{x} \in F$ is called a minimal element of $(F, \leq)$ if there is no $x \in F$ such that $x \prec \bar{x}$, where the relation $\prec$ is defined by

$(x \prec y) \ :\Longleftrightarrow\ (x \leq y \ \wedge\ x \neq y).$ (10)

The set of all minimal elements of $F$ is denoted by $\operatorname{Min}(F, \leq)$. We define the set of optimal solutions in our multiobjective problem as follows:

$\Omega^{*} = \operatorname{Min}_f(\Omega, \leq) := \{\omega \in \Omega : f(\omega) \in \operatorname{Min}(f(\Omega), \leq)\}.$ (11)

In particular, if $F$ is a finite subset of the Euclidean space $\mathbb{R}^{k}$, and $f = (f_1, \ldots, f_k)$, where each component of $f$ is being minimized independently, then the relation $\leq$ in $F$ can be defined by

$(x \leq y) \ :\Longleftrightarrow\ (x_i \leq y_i,\ i = 1, \ldots, k).$

In this case, $\Omega^{*}$ is the set of all Pareto optimal solutions of the respective multiobjective optimization problem.
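As a concrete illustration (our own sketch, not the classification procedure used later in Section 5), the set $\operatorname{Min}_f(\Omega', \leq)$ of a finite set $\Omega'$ can be computed by a straightforward pairwise dominance test based on relation (10):

```python
import numpy as np

def dominates(a, b):
    """Relation (10) for the componentwise order: a <= b componentwise and a != b."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def min_f(points, f):
    """Min_f(points, <=): the points whose objective vectors are minimal in f(points)."""
    values = [np.asarray(f(x)) for x in points]
    return [x for i, x in enumerate(points)
            if not any(dominates(values[j], values[i])
                       for j in range(len(points)) if j != i)]

# Example with f(x) = (x^2, (x - 2)^2): only the points in [0, 2] are minimal.
print(min_f([-1.0, 0.0, 1.0, 2.0, 3.0], lambda x: (x**2, (x - 2)**2)))  # [0.0, 1.0, 2.0]
```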

In this section, we assume that the goal of RHS is to find all elements of $\Omega^{*}$. Suppose that the set $\Omega^{*}$ of minimal solutions has the following form:

$\Omega^{*} = \{j_1, j_2, \ldots, j_m\},$ (12)

where the (possibly unknown) number $m$ of these solutions is bounded from above by some known positive integer $M$. We say that all elements of $\Omega^{*}$ have been found in the first $t$ iterations if, for each $l \in \{1, \ldots, m\}$, there exists $s \in \{1, \ldots, t\}$ such that $\tau^{s}(\hat{p})_{j_l} > 0$. This means that each minimal solution is a member of some population generated in the first $t$ iterations.

Theorem 1 ([8], Thm. 6.1) We consider the general model of genetic algorithm, described in Subsection 2.2, being a special case of the RHS algorithm with the heuristic $G$ given by (8). Suppose that there exists a number $\beta \in (0,1)$ satisfying

$u_{i,j} \geq \beta, \quad \forall i \in \Omega,\ j \in \Omega.$ (13)

Let $M$ and $t$ be two positive integers satisfying the inequality

$M(1-\beta)^{rt} < 1.$ (14)

Let $\Omega^{*}$ be of the form (12) with $m \leq M$. Then the probability of finding all elements of $\Omega^{*}$ in the first $t$ iterations is at least

$1 - M(1-\beta)^{rt}.$ (15)

Corollary 2 ([8], Cor. 2) We consider the same model of algorithm as in Theorem 1. Suppose that condition (13) holds for some $\beta \in (0,1)$. Let $M$ be a given upper bound for the cardinality of $\Omega^{*}$. For any $\delta \in (0,1)$, we denote by $t_{\min}(\delta)$ the smallest number of iterations required to guarantee that all elements of $\Omega^{*}$ have been found with probability $\delta$. Then

$t_{\min}(\delta) \leq \left\lceil \frac{\ln(1-\delta) - \ln M}{r \ln(1-\beta)} \right\rceil,$ (16)

where $\lceil x \rceil$ is the smallest integer greater than or equal to $x$.
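The bound (16) is easy to evaluate; the following small function (our own) returns its right-hand side for given $\delta$, $M$, $r$ and $\beta$.

```python
import math

def t_min_bound(delta, M, r, beta):
    """Right-hand side of (16): an upper bound on t_min(delta)."""
    return math.ceil((math.log(1 - delta) - math.log(M)) / (r * math.log(1 - beta)))

# With the values used later in Example 7 (M = 64001, beta = 1/M, r = 200,
# delta = 0.99) this returns 5016, the iteration count reported there.
print(t_min_bound(0.99, 64001, 200, 1 / 64001))  # 5016
```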

2.4. Construction of the Set of Minimal Elements

The results of Section 2.3 give no practical way of constructing the set $\Omega^{*}$. Different elements of this set are members of different populations generated by the genetic algorithm, and cannot be easily identified. To give an effective way of constructing $\Omega^{*}$, some modification of the RHS is necessary.

The algorithm presented below is a combination of the RHS and the base VV (van Veldhuizen) algorithm described in ([11], 3.1). Suppose we have some RHS satisfying the assumptions of Theorem 1. It generates a sequence (1) of populations, all of them being members of $\Lambda_r$. For each $p \in \Lambda_r$, we define the set of individuals represented in population $p$:

$\operatorname{set}(p) := \{\omega \in \Omega : p_\omega \neq 0\}.$ (17)

Then we construct a sequence $\{D_t\}$ of subsets of $\Omega$ as follows:

$D_t := \operatorname{set}(\tau^{t}(\hat{p})), \quad t = 0, 1, \ldots,$ (18)

where $\tau^{0} := \mathrm{id}$ is the identity mapping. Finally, we define another sequence $\{E_t\}$ of sets recursively by

$E_0 := \operatorname{Min}_f(D_0, \leq),$ (19)

$E_{t+1} := \operatorname{Min}_f(E_t \cup D_{t+1}, \leq), \quad t = 0, 1, \ldots,$ (20)

where we have used the notation $\operatorname{Min}_f$ as in (11). Formulas (19) and (20) define the VV algorithm.
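In code, one step of the recursion (20) is a nondominated-archive update; the self-contained sketch below (ours) uses the componentwise order and relation (10). Points with identical objective vectors are all retained here; an implementation would typically remove duplicates as well.

```python
import numpy as np

def vv_update(E_t, D_next, f):
    """One VV step (20): E_{t+1} = Min_f(E_t union D_{t+1}, <=)."""
    candidates = list(E_t) + list(D_next)
    values = [np.asarray(f(x)) for x in candidates]
    dom = lambda a, b: bool(np.all(a <= b) and np.any(a < b))   # relation (10)
    return [x for i, x in enumerate(candidates)
            if not any(dom(values[j], values[i])
                       for j in range(len(candidates)) if j != i)]
```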

It is shown in ([11], Prop. 1) that the sets $f(E_t)$ converge with probability 1 to $\operatorname{Min}(F, \leq)$ in the sense of some metric. However, as the authors comment, “The size of the sets $E_t$ will finally grow to the size of the set of minimal elements. Since this size may be huge, this base algorithm offers only limited usefulness in practice”. In fact, our considerations can have practical value only if the cardinality of $\Omega^{*}$ is relatively small. For continuous multiobjective optimization problems, such a situation can be achieved by choosing a suitable discretization.

Our final result is the following theorem, which shows that, with a prescribed probability, the sets $E_t$ constructed by the VV algorithm are equal to $\Omega^{*}$ for $t$ sufficiently large.

Theorem 3 ([8], Thm. 7.1) Let the assumptions of Corollary 2 be satisfied. Then, with probability $\delta$, we have

$\Omega^{*} = E_t, \quad \forall t \geq t_{\min}(\delta).$ (21)

3. Generation of ε-Efficient Solutions for a Continuous Problem

Let $f : X \to \mathbb{R}^{l}$ be a given mapping, where $X$ is a closed and bounded subset of $\mathbb{R}^{k}$. We consider the following multiobjective optimization problem:

$\min\{f(x) : x \in X\}.$ (22)

To solve this problem means to find all Pareto optimal (efficient) points of $X$ with respect to the partial order relation in $\mathbb{R}^{l}$ defined by

$(u \leq v) \ :\Longleftrightarrow\ (u_i \leq v_i,\ i = 1, \ldots, l).$ (23)

However, in practical situations this can be very difficult or even impossible. Therefore, we shall consider a discretized version of problem (22).

For any given $\eta > 0$, we say that a subset $\Omega$ of $\mathbb{R}^{k}$ is an $\eta$-discretization of $X$ if

$\Omega \subseteq X \quad \text{and} \quad X \subseteq \bigcup_{z \in \Omega} B(z, \eta),$ (24)

where $B(x, \eta) := \{y \in \mathbb{R}^{k} : \|y - x\| < \eta\}$. Since $X$ is compact, we can always find a finite $\eta$-discretization of $X$. The discretized multiobjective optimization problem can now be formulated as follows:

$\min\{f(x) : x \in \Omega\},$ (25)

where the same relation (23) is considered, but now in the finite set $f(\Omega)$.

It is natural to ask whether an exact solution of problem (25) yields some sort of approximate solution of problem (22). One of the cases where a positive answer can be given is described in the proposition below. Before formulating it, we must define ε-efficient solutions, following ([4], Definition 2.3 (ii)).

Let $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_l) \in \mathbb{R}^{l}$ be such that $\varepsilon_i > 0$ $(i = 1, \ldots, l)$. We say that a point $\bar{x} \in X$ is an $\varepsilon$-efficient solution of problem (22) if there is no $x \in X$ such that

$f(x) \prec f(\bar{x}) - \varepsilon,$ (26)

where the relation $\prec$ is defined by formula (10).
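For a finite sample of feasible points, condition (26) can be tested directly; the sketch below (our own) returns False as soon as some sampled $x$ satisfies $f(x) \prec f(\bar{x}) - \varepsilon$. Note that the definition quantifies over all of $X$, so a check over a finite sample is only a necessary test.

```python
import numpy as np

def is_eps_efficient(x_bar, sample_points, f, eps):
    """Test condition (26) over a finite sample: return False if some x satisfies
    f(x) <= f(x_bar) - eps componentwise with at least one strict inequality."""
    target = np.asarray(f(x_bar)) - np.asarray(eps)
    for x in sample_points:
        fx = np.asarray(f(x))
        if np.all(fx <= target) and np.any(fx < target):
            return False
    return True
```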

Proposition 4 Let $f = (f_1, \ldots, f_l) : X \to \mathbb{R}^{l}$, where $X$ is compact and each function $f_i$ is Lipschitz continuous with constant $K_i > 0$ $(i = 1, \ldots, l)$. Let $\varepsilon \in \mathbb{R}^{l}$ be such that $\varepsilon_i > 0$ $(i = 1, \ldots, l)$. Define

$\eta := \min\left\{\frac{\varepsilon_i}{K_i} : i = 1, \ldots, l\right\},$ (27)

and let $\Omega$ be an $\eta$-discretization of $X$. Denote by $\Omega^{*}$ the set of all Pareto optimal solutions of problem (25) (i.e., $\Omega^{*} = \operatorname{Min}_f(\Omega, \leq)$ as in formula (11)). Then every point $\bar{x} \in \Omega^{*}$ is an $\varepsilon$-efficient solution of problem (22).

Proof. Let $\bar{x} \in \Omega^{*}$. Suppose to the contrary that $\bar{x}$ is not an $\varepsilon$-efficient solution of (22). Then there exists $x \in X$ such that (26) holds. In particular, we have

$f_i(x) \leq f_i(\bar{x}) - \varepsilon_i, \quad \text{for all } i \in \{1, \ldots, l\}.$ (28)

By the second inclusion in (24), there exists $z \in \Omega$ such that $\|z - x\| < \eta$. Therefore, using (27) and (28), we obtain, for all $i \in \{1, \ldots, l\}$,

$f_i(z) \leq f_i(x) + |f_i(z) - f_i(x)| \leq f_i(x) + K_i \|z - x\| < f_i(x) + K_i \eta \leq f_i(\bar{x}) - \varepsilon_i + K_i \eta \leq f_i(\bar{x}),$

which contradicts the assumption that $\bar{x} \in \Omega^{*}$.

4. The Main Algorithm

Consider the multiobjective optimization problem (22), where the constraint set $X$ is a box defined by

$X := \prod_{i=1}^{k} [\alpha_i, \beta_i],$ (29)

where $\alpha_i < \beta_i$ $(i = 1, \ldots, k)$. Suppose that the assumptions of Proposition 4 are satisfied. We want to specify an $\eta$-discretization of $X$. We construct the set $\Omega$ as follows:

$\Omega := \left\{ x \in \mathbb{R}^{k} : x_i = \alpha_i + \frac{t_i}{k_i}(\beta_i - \alpha_i),\ t_i = 0, 1, \ldots, k_i,\ i = 1, \ldots, k \right\},$ (30)

where $k_i$ $(i = 1, \ldots, k)$ are given positive integers.
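Generating the grid (30) is straightforward; the following sketch (ours) builds $\Omega$ for a box $X = \prod_{i=1}^{k}[\alpha_i, \beta_i]$ and given subdivision counts $k_i$.

```python
import itertools
import numpy as np

def build_grid(alpha, beta, k):
    """Construct the discretization (30): coordinates alpha_i + (t_i/k_i)(beta_i - alpha_i),
    t_i = 0, 1, ..., k_i, for i = 1, ..., k."""
    axes = [np.linspace(a, b, ki + 1) for a, b, ki in zip(alpha, beta, k)]
    return [np.array(point) for point in itertools.product(*axes)]

# Example: the grid used later for problem FON (Example 8), with 51^3 points.
omega = build_grid([-4, -4, -4], [4, 4, 4], [50, 50, 50])
print(len(omega))  # 132651
```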

Proposition 5 For every given $\eta > 0$, it is possible to find the numbers $k_i$ so large that the set $\Omega$ defined by (30) is an $\eta$-discretization of $X$.

Proof. The inclusion $\Omega \subseteq X$ is obvious. We now prove the second inclusion in (24). For simplicity, we consider the $\ell_\infty$ norm in $\mathbb{R}^{k}$:

$\|x\| := \max_{1 \leq i \leq k} |x_i|.$ (31)

Let us choose $k_i$ so that

$\frac{1}{k_i}(\beta_i - \alpha_i) < 2\eta.$ (32)

Take any $x \in X$. Then, for each $i \in \{1, \ldots, k\}$, there exists $s_i \in \{0, 1, \ldots, k_i\}$ such that the number $z_i$ defined by

$z_i := \alpha_i + \frac{s_i}{k_i}(\beta_i - \alpha_i)$ (33)

satisfies

$|x_i - z_i| \leq \frac{1}{2 k_i}(\beta_i - \alpha_i) < \eta.$

Then the vector $z := (z_1, \ldots, z_k) \in \Omega$ is such that

$\|x - z\| = \max_{1 \leq i \leq k} |x_i - z_i| < \eta,$

which completes the proof.

In the sequel we consider the following simple evolutionary algorithm, which is a special case of the algorithm described in Subsection 2.4. It does not use selection or crossover. The mutation process is very simple and consists in replacing the current population by another randomly chosen population of the same size. However, the stopping criteria described above still hold for this algorithm, because their proofs make use of the properties of the mutation alone.

Algorithm 1 Parameters: $\delta > 0$ (for the stopping criterion), $\varepsilon \in \mathbb{R}^{l}$ (for defining the $\eta$-discretization).

1) Set $t := 0$.

2) Choose randomly a population $D_0$ consisting of $r$ elements of $\Omega$.

3) Construct the set $E_0$ by formula (19).

4) Mutate the population $D_t$ by replacing it with another randomly chosen population $D_{t+1}$ consisting of $r$ elements of $\Omega$.

5) Construct the set $E_{t+1}$ by formula (20).

6) If $t + 1 \geq t_{\min}(\delta)$, then stop and define $\bar{\Omega} := E_{t+1}$.

7) Increase $t$ by 1 and go to Step 4.
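A compact Python rendition of Algorithm 1 (our own sketch): the nondominated filter implements $\operatorname{Min}_f$ from (11), the stopping bound is (16), and, as in Proposition 6 below, we take $M := \operatorname{card}\,\Omega$ and $\beta := 1/M$. Applied to the discretized SCH problem of Example 7 (with $r = 200$, $\delta = 0.99$), the loop runs for the 5016 iterations computed there.

```python
import math
import numpy as np

def algorithm_1(omega, f, r, delta, rng=None):
    """Algorithm 1: random-population evolutionary search with stopping bound (16).

    omega : list of grid points (the eta-discretization of X)
    f     : objective mapping, returning a tuple/array of objective values
    """
    rng = rng or np.random.default_rng()
    M = len(omega)                      # M := card(Omega), beta := 1/M (Proposition 6)
    beta = 1.0 / M
    t_min = math.ceil((math.log(1 - delta) - math.log(M)) / (r * math.log(1 - beta)))

    def min_f(points):
        # Drop duplicates, then keep points nondominated in the sense of (10).
        unique = list({tuple(np.atleast_1d(np.asarray(x, dtype=float))): x
                       for x in points}.values())
        values = [np.asarray(f(x)) for x in unique]
        dom = lambda a, b: bool(np.all(a <= b) and np.any(a < b))
        return [x for i, x in enumerate(unique)
                if not any(dom(values[j], values[i])
                           for j in range(len(unique)) if j != i)]

    def random_population():             # steps 2 and 4: r draws with replacement
        return [omega[i] for i in rng.integers(0, M, size=r)]

    E = min_f(random_population())       # steps 1-3: D_0 and E_0
    for _ in range(t_min):               # steps 4-7, stopping after t_min(delta) steps
        E = min_f(E + random_population())   # E_{t+1} = Min_f(E_t union D_{t+1})
    return E
```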

Proposition 6 After stopping Algorithm 1, the equality $\bar{\Omega} = \Omega^{*}$ holds with probability $\delta$, and consequently, $\bar{\Omega}$ consists entirely of $\varepsilon$-efficient solutions of problem (22) with probability $\delta$.

Proof. Apply Theorem 3 and Corollary 2 with $M := \operatorname{card}\,\Omega$ and $\beta := 1/M$ (we assume the equal probability $1/M$ of mutating $i$ to $j$ for every $i, j \in \Omega$).

5. Computational Examples

Below we report on testing the algorithm described above on some examples taken from the literature. To find the set of minimal (i.e., nondominated) elements of finite sets in Steps 3 and 5, we have used the simple algorithm for classifying a population according to non-domination; see Section 4.3.1 of [12].

Example 7 (Problem SCH in Table I of [13])

$\min\,(f_1(x), f_2(x)),$

where $f_1(x) = x^{2}$, $f_2(x) = (x-2)^{2}$, $x \in [-10^{3}, 10^{3}]$.

As stated in Table I of [13], any point $x \in [0, 2]$ is a Pareto optimal solution of this problem. Let $X = [-10^{3}, 10^{3}]$. Since each of the functions $f_i$, $i = 1, 2$, is continuously differentiable on $X$, which is closed and bounded, each $f_i$ is Lipschitz continuous on $X$. Here $\frac{\mathrm{d}f_1(x)}{\mathrm{d}x} = 2x$ and $\frac{\mathrm{d}f_2(x)}{\mathrm{d}x} = 2(x-2)$. Hence, $\sup_{x \in X} \left|\frac{\mathrm{d}f_1(x)}{\mathrm{d}x}\right| \leq 2000$ and $\sup_{x \in X} \left|\frac{\mathrm{d}f_2(x)}{\mathrm{d}x}\right| \leq 2004$. Therefore, we can take the Lipschitz constants $K_i = 2004$, $i = 1, 2$, such that

$|f_i(y) - f_i(z)| \leq K_i |y - z|, \quad \text{for all } y, z \in X.$

Let $\varepsilon = (\varepsilon_1, \varepsilon_2) = (50, 50)$. Then, from (27), we have $\eta = \frac{25}{1002}$. In formula (30), let $k_1 = 64 \times 10^{3}$. Hence the cardinality of $\Omega$ is $\operatorname{card}(\Omega) = 64 \times 10^{3} + 1$ and $\frac{1}{k_1}(\beta_1 - \alpha_1) = \frac{1}{32}$, and therefore inequality (32) is satisfied. Suppose that the population size is $r = 200$. For the stopping criterion, we take $\delta = 0.99$ and compute $t_{\min}(\delta) = 5016$. After 5016 iterations of Algorithm 1, the resulting set $\bar{\Omega}$ is the following:

$\bar{\Omega} = \Bigl\{0,\ 1,\ 2,\ \tfrac{1}{32},\ \tfrac{1}{16},\ \tfrac{3}{32},\ \tfrac{1}{8},\ \tfrac{5}{32},\ \tfrac{3}{16},\ \tfrac{7}{32},\ \tfrac{1}{4},\ \tfrac{9}{32},\ \tfrac{5}{16},\ \tfrac{11}{32},\ \tfrac{3}{8},\ \tfrac{13}{32},\ \tfrac{7}{16},\ \tfrac{15}{32},\ \tfrac{1}{2},\ \tfrac{17}{32},\ \tfrac{9}{16},\ \tfrac{19}{32},\ \tfrac{5}{8},\ \tfrac{21}{32},\ \tfrac{11}{16},\ \tfrac{23}{32},\ \tfrac{3}{4},\ \tfrac{25}{32},\ \tfrac{13}{16},\ \tfrac{27}{32},\ \tfrac{7}{8},\ \tfrac{29}{32},\ \tfrac{15}{16},\ \tfrac{31}{32},\ \tfrac{33}{32},\ \tfrac{17}{16},\ \tfrac{35}{32},\ \tfrac{9}{8},\ \tfrac{37}{32},\ \tfrac{19}{16},\ \tfrac{39}{32},\ \tfrac{5}{4},\ \tfrac{41}{32},\ \tfrac{21}{16},\ \tfrac{43}{32},\ \tfrac{11}{8},\ \tfrac{45}{32},\ \tfrac{23}{16},\ \tfrac{47}{32},\ \tfrac{3}{2},\ \tfrac{49}{32},\ \tfrac{25}{16},\ \tfrac{51}{32},\ \tfrac{13}{8},\ \tfrac{53}{32},\ \tfrac{27}{16},\ \tfrac{55}{32},\ \tfrac{7}{4},\ \tfrac{57}{32},\ \tfrac{29}{16},\ \tfrac{59}{32},\ \tfrac{15}{8},\ \tfrac{61}{32},\ \tfrac{31}{16},\ \tfrac{63}{32}\Bigr\},$ (34)

i.e., $\bar{\Omega}$ consists of all multiples of $\tfrac{1}{32}$ lying in the interval $[0, 2]$.
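The parameter choices in this example can be verified with a few lines of Python (our own quick check; it reproduces $\eta$, inequality (32), and $t_{\min}(\delta) = 5016$ from the bound (16) with $M := \operatorname{card}(\Omega)$ and $\beta := 1/M$).

```python
import math

K, eps = 2004, 50.0
eta = eps / K                                   # (27): eta = 25/1002, about 0.02495
k1, spacing = 64_000, 2000.0 / 64_000           # (beta_1 - alpha_1)/k_1 = 1/32
assert spacing < 2 * eta                        # inequality (32) holds

M = k1 + 1                                      # card(Omega) = 64001
t_min = math.ceil((math.log(1 - 0.99) - math.log(M)) / (200 * math.log(1 - 1 / M)))
print(t_min)                                    # 5016
```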

Remarks:

1) One should remember that the number $t_{\min}(\delta)$ depends on the prescribed probability $\delta$. We have run Algorithm 1 many times for up to 10,000 iterations and have observed the following behaviour of the set $E_{t+1}$: it has always become the set (34) somewhere between iterations 1155 and 1330, and has not changed in the later iterations. This means that the theoretically computed number of 5016 iterations gives the correct set $\bar{\Omega}$ (in the sense that it cannot be further improved), but in fact far fewer iterations are sufficient to obtain the same result.

2) The cardinality of $\bar{\Omega}$ is $\operatorname{card}(\bar{\Omega}) = 65$. Each element of $\bar{\Omega}$ belongs to the interval $[0, 2]$, and hence is a Pareto efficient solution.

3) According to the performance measure Diversity Metric $\Delta$ (see Section B, page 188 in [13]), the mean and variance of $\Delta$ for Algorithm 1 are 0.1014490343 and 0.09539251009, respectively, where $d_f = d_l = 0$. Hence our algorithm finds a better spread of solutions than any other algorithm included in Table III of [13] (see Figure 1), because this mean is the smallest one.

Example 8 (Problem FON in Table I of [13]). Consider the following optimization problem:

$\min\,(f_1(x), f_2(x)),$

where

$f_1(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i - \frac{1}{\sqrt{3}}\right)^{2}\right), \qquad f_2(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i + \frac{1}{\sqrt{3}}\right)^{2}\right),$

with variable bounds $x_1, x_2, x_3 \in [-4, 4]$.

Table I of [13] states that every point $(x_1, x_2, x_3)$ satisfying the condition

$x_1 = x_2 = x_3 \in \left[-\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}\right]$ (35)

is a Pareto optimal solution of this problem. Let $X = [-4, 4]^{3}$. Since each of the functions $f_i$, $i = 1, 2$, is continuously differentiable on $X$, which is closed and bounded, each $f_i$ is Lipschitz continuous on $X$. We denote by $\nabla f_i(x)$ the gradient vector of $f_i$ at $x$:

$\nabla f_i(x) = \left(\frac{\partial f_i(x)}{\partial x_1}, \frac{\partial f_i(x)}{\partial x_2}, \frac{\partial f_i(x)}{\partial x_3}\right)^{T}, \quad i = 1, 2.$

Then

$\|\nabla f_i(x)\| = \max_{1 \leq j \leq 3} \left|\frac{\partial f_i(x)}{\partial x_j}\right|, \quad i = 1, 2.$

Figure 1. True PF and nondominated solutions by New Algorithm on SCH.

Note that $\sup_{x \in X} \|\nabla f_i(x)\| \leq 1$ for $i = 1, 2$. For any $y, z \in X$, there exists $u \in [y, z]$ such that

$|f_i(y) - f_i(z)| = |\langle \nabla f_i(u), y - z \rangle| = \left|\sum_{j=1}^{3} \frac{\partial f_i(u)}{\partial x_j}(y_j - z_j)\right| \leq \sum_{j=1}^{3} \left|\frac{\partial f_i(u)}{\partial x_j}\right| |y_j - z_j| \leq 3\,\|\nabla f_i(u)\|\,\|y - z\| \leq 3 \sup_{x \in X} \|\nabla f_i(x)\|\,\|y - z\|.$ (36)

Therefore, we can take the Lipschitz constants $K_i = 3$, $i = 1, 2$, such that

$|f_i(y) - f_i(z)| \leq K_i \|y - z\|, \quad \text{for all } y, z \in X.$ (37)

Let $\varepsilon = (\varepsilon_1, \varepsilon_2) = \left(\frac{3}{5}, \frac{3}{5}\right)$. Then, from (27), we have $\eta = \frac{1}{5}$. In formula (30), let $k_i = 50$, $i = 1, 2, 3$. Hence the cardinality of $\Omega$ is $\operatorname{card}(\Omega) = 51^{3} = 132651$ and $\frac{1}{k_i}(\beta_i - \alpha_i) = \frac{4}{25}$, and therefore inequality (32) is satisfied. Suppose that the population size is $r = 200$. For the stopping criterion, we take $\delta = 0.99$ and compute $t_{\min}(\delta) = 10878$. After 10878 iterations of Algorithm 1, the resulting set $\bar{\Omega}$ is the following:

(38)

Remarks:

1) In practical tests, the set $E_{t+1}$ has always become the set (38) somewhere between iterations 3475 and 3500, and has not changed in the later iterations.

2) The cardinality of $\bar{\Omega}$ is $\operatorname{card}(\bar{\Omega}) = 57$. The points in $\bar{\Omega}$ which satisfy condition (35) are Pareto optimal, but the other elements of $\bar{\Omega}$ are not optimal. However, it follows from Proposition 6 that all elements of $\bar{\Omega}$ are $\varepsilon$-efficient solutions with probability $\delta$.

3) According to the performance measure Diversity Metric $\Delta$ (see Section B, page 188 in [13]), the mean and variance of $\Delta$ for Algorithm 1 are 0.06078996663 and 0.4859115201, respectively, where $d_f = d_l = 0.01343253265$. Hence our algorithm finds a better spread of solutions than any other algorithm included in Table III of [13] (see Figure 2), because this mean is the smallest one.

Example 9 (Problem POL in Table I of [13]).

$\min\,(f_1(x), f_2(x)),$

where

$f_1(x) = \left[1 + (A_1 - B_1)^{2} + (A_2 - B_2)^{2}\right], \qquad f_2(x) = \left[(x_1 + 3)^{2} + (x_2 + 1)^{2}\right],$

$A_1 = 0.5\sin 1 - 2\cos 1 + \sin 2 - 1.5\cos 2, \qquad A_2 = 1.5\sin 1 - \cos 1 + 2\sin 2 - 0.5\cos 2,$

$B_1 = 0.5\sin x_1 - 2\cos x_1 + \sin x_2 - 1.5\cos x_2,$

$B_2 = 1.5\sin x_1 - \cos x_1 + 2\sin x_2 - 0.5\cos x_2,$

with variable bounds $x_1, x_2 \in [-\pi, \pi]$.

POL is a problem with two nonconvex Pareto fronts that are disconnected in both the objective and decision variable spaces; see [13]. The true set of Pareto-optimal solutions is difficult to determine for this problem. Figure 3 illustrates that Algorithm 1 is able to discover the two disconnected Pareto fronts that lie on the boundaries of the search space.

Let $X = [-\pi, \pi] \times [-\pi, \pi]$. Since each of the functions $f_i$, $i = 1, 2$, is continuously differentiable on $X$, which is closed and bounded, each $f_i$, $i = 1, 2$, is Lipschitz continuous on $X$. By using a computer program, it is possible to show that $\sup_{x \in X} \|\nabla f_1(x)\| \leq 34$ and $\sup_{x \in X} \|\nabla f_2(x)\| \leq 13$. Using an estimate similar to (36), but with two variables, we find that, for the Lipschitz constants $K_1 = 68$ and $K_2 = 26$, we have

$|f_i(y) - f_i(z)| \leq K_i \|y - z\|, \quad \text{for all } y, z \in X,\ i = 1, 2.$

Figure 2. True PF and nondominated solutions by New Algorithm on FON.

Let $\varepsilon = (\varepsilon_1, \varepsilon_2) = (5/2, 1)$. Then, from (27), we have $\eta = \frac{5}{136}$. In formula (30), let $k_i = 100$, $i = 1, 2$. Hence the cardinality of $\Omega$ is $\operatorname{card}(\Omega) = 101^{2} = 10201$ and $\frac{1}{k_i}(\beta_i - \alpha_i) = \frac{\pi}{50}$, and therefore inequality (32) is satisfied. Suppose that the population size is $r = 200$. For the stopping criterion, we take $\delta = 0.99$ and compute $t_{\min}(\delta) = 706$. After 706 iterations of Algorithm 1, the resulting set $\bar{\Omega}$ is the following:

(39)

Remarks:

1) In practical tests, the set $E_{t+1}$ has always become the set (39) somewhere between iterations 285 and 350, and has not changed in the later iterations.

Figure 3. True PF and nondominated solutions by New Algorithm on POL.

2) The cardinality of $\bar{\Omega}$ is $\operatorname{card}(\bar{\Omega}) = 75$. It follows from Proposition 6 that all elements of $\bar{\Omega}$ are $\varepsilon$-efficient solutions with probability $\delta$.

3) According to the performance measure Diversity Metric $\Delta$ (see Section B, page 188 in [13]), the mean and variance of $\Delta$ for Algorithm 1 are 0.5021982345 and 0.7382353788, respectively, where $d_f = 0.1063348336$ and $d_l = 0.01974325126$ for the left Pareto front in Figure 3, and $d_f = 0.0139738762$ and $d_l = 0.1941428847$ for the right Pareto front in Figure 3. Hence, among the algorithms in Table III of [13], a better spread of solutions is achieved only by the real-coded NSGA-II; the spread of solutions obtained by our algorithm is the next best for this problem.

6. Conclusion

We have presented a new evolutionary method for generating ε-efficient solutions of a continuous multiobjective programming problem. This was achieved by discretizing the problem and then using a genetic algorithm. Some probabilistic stopping criteria were used for this procedure to obtain, with a prescribed probability, all minimal solutions for the discretized problem, which are ε-efficient solutions for the original problem. This article contains the main underlying theory and only some preliminary numerical computations pertaining to this method.

Acknowledgements

The authors are grateful to an anonymous referee for his/her comments which have improved the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Ruzika, S. and Wiecek, M.M. (2005) Approximation Methods in Multiobjective Programming. Journal of Optimization Theory and Applications, 126, 473-501.
https://doi.org/10.1007/s10957-005-5494-4
[2] Ghaznavi-Ghosoni, B.A., Khorram, E. and Soleimani-damaneh, M. (2013) Scalarization for Characterization of Approximate Strong/Weak/Proper Efficiency in Multi-Objective Optimization. Optimization, 62, 703-720.
https://doi.org/10.1080/02331934.2012.668190
[3] Loridan, P. (1984) ε-Solutions in Vector Minimization Problems. Journal of Optimization Theory and Applications, 42, 265-276.
https://doi.org/10.1007/BF00936165
[4] Engau, A. and Wiecek, M.M. (2007) Generating ε-Efficient Solutions in Multiobjective Programming. European Journal of Operational Research, 177, 1566-1579.
https://doi.org/10.1016/j.ejor.2005.10.023
[5] Engau, A. and Wiecek, M.M. (2007) Exact Generation of Epsilon-Efficient Solutions in Multiple Objective Programming. OR Spectrum, 29, 335-350.
https://doi.org/10.1007/s00291-006-0044-5
[6] Laumanns, M., Thiele, L., Deb, K. and Zitzler, E. (2002) Combining Convergence and Diversity in Evolutionary Multiobjective Optimization. Evolutionary Computation, 10, 263-282.
https://doi.org/10.1162/106365602760234108
[7] Schutze, O., Laumanns, M., Tantar, E., Coello Coello, C.A. and Talbi, E.-G. (2007) Convergence of Stochastic Search Algorithms to Gap-Free Pareto Front Approximations. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2007), 892-899.
[8] Studniarski, M. (2011) Finding All Minimal Elements of a Finite Partially Ordered Set by Genetic Algorithm with a Prescribed Probability. Numerical Algebra, Control and Optimization, 1, 389-398.
https://doi.org/10.3934/naco.2011.1.389
[9] Vose, M.D. (1999) The Simple Genetic Algorithm: Foundations and Theory. MIT Press, Cambridge.
[10] Reeves, C.R. and Rowe, J.E. (2003) Genetic Algorithms—Principles and Perspectives: A Guide to GA Theory. Kluwer, Boston.
[11] Rudolph, G. and Agapie, A. (2000) Convergence Properties of Some Multi-Objective Evolutionary Algorithms. In: Zalzala, A., et al., Eds., Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), Vol. 2, IEEE Press, Piscataway, 1010-1016.
[12] Osman, M.S., Abo-Sinna, M.A. and Mousa, A.A. (2005) An Effective Genetic Algorithm Approach to Multiobjective Resource Allocation Problems (MORAPs). Applied Mathematics and Computation, 163, 755-768.
https://doi.org/10.1016/j.amc.2003.10.057
[13] Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T. (2002) A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6, 182-197.
https://doi.org/10.1109/4235.996017
