A Min-Max Strategy to Aid Decision Making in a Bi-Objective Discrete Optimization Problem Using an Improved Ant Colony Algorithm

Douglas Yenwon Kparib^{}, Stephen Boakye Twum^{}, Douglas Kwasi Boah^{}

Department of Mathematics, Faculty of Mathematical Sciences, University for Development Studies, Navrongo, Ghana.

**DOI:** 10.4236/ajor.2019.94010


A multi-objective optimization problem has two or more objectives to be minimized or maximized simultaneously. It is usually impossible to arrive at a solution that optimizes every objective, so the best way of dealing with such a problem is to obtain a set of good solutions from which the decision maker selects the one that best serves his/her interest. In this paper, a ratio min-max strategy is incorporated (after Pareto optimal solutions are obtained) under a weighted sum scalarization of the objectives to aid the process of identifying a best compromise solution. The bi-objective discrete optimization problem, which has distance and social cost (in rail construction, say) as the criteria, was solved by an improved Ant Colony System algorithm developed by the authors. The model and methodology were applied to hypothetical networks of fourteen nodes and twenty-eight edges, and twenty nodes and ninety-seven edges, as test cases. Pareto optimal solutions and their maximum margins of error were obtained for the problems to assist in decision making. The proposed model and method are user-friendly and provide the decision maker with information on the quality of each of the Pareto optimal solutions obtained, thus facilitating decision making.

Keywords

Optimization, Discrete, Bi-Objective, Ratio Min-Max, Network, Pareto Optimal

Share and Cite:

Kparib, D., Twum, S. and Boah, D. (2019) A Min-Max Strategy to Aid Decision Making in a Bi-Objective Discrete Optimization Problem Using an Improved Ant Colony Algorithm. *American Journal of Operations Research*, **9**, 161-174. doi: 10.4236/ajor.2019.94010.

1. Introduction

Optimization is the process of finding the best solution to a problem, referred to as the global optimal solution, from a given set of possible (feasible) solutions [1]. Applications of optimization are found in Engineering, Science, Economics, Finance, Medicine and Mathematics [1].

In today’s business environment, where the cost of production is high and competition is intense due to globalization, single-objective optimization problems have become less popular, and much attention has shifted to multi-objective optimization problems. A multi-objective optimization problem has a set of objectives that are usually in competition or conflict: an improvement in one objective results in the deterioration of another [2]. It is very difficult or impossible to obtain a solution that is optimal for each of the objectives; the best one can do is to find a solution that is adjudged the best compromise.

The aim of this paper is to develop a model and methodology that generate a set of Pareto optimal solutions, together with a way of assisting the decision maker in choosing the best compromise solution. The next section outlines the multi-objective optimization model and solution methods, with specific focus on the normed and weighted sum methods; solution algorithms are also given some general attention, with particular emphasis on the Ant Colony System algorithm, to which the authors have made some improvements [3]. A weighted sum ratio min-max strategy is proposed in the section that follows, after which some test cases are considered to generate numerical results. The results are discussed in the succeeding section, while the last section concludes the paper.

2. Methodology

2.1. Multi-Objective Optimization

Multi-objective optimization is concerned with minimizing or maximizing more than one objective function at the same time, subject to some constraints [4] . A general mathematical model of multi-objective optimization is given as:

$\mathrm{min}\left\{{f}_{1}\left(x\right),{f}_{2}\left(x\right),\cdots ,{f}_{n}\left(x\right)\right\}$

subject to ${g}_{i}\left(x\right)\le {b}_{i},i=1,2,\cdots ,m$ (1)

where $x\in X$ is a vector of decision variables, ${f}_{j}$ is the jth objective function $\left(j=1,2,\cdots ,n\right)$, ${g}_{i}$ is the ith constraint function $\left(i=1,2,\cdots ,m\right)$, and ${b}_{i}$ is the limit on the ith constraint. The problem (1) is nonlinear if at least one objective function or constraint is nonlinear; otherwise it is linear. It is continuous if the decision variables can assume continuous values, discrete if they can only assume discrete values, and mixed if both continuous and discrete values are admissible; in the last two cases, additional notation to those effects is included in (1).

2.2. Concept of Optimality

The problem defined in (1) belongs to the class of constrained multi-objective optimization problems; within this class there are the linear, nonlinear, continuous, discrete and mixed integer types [5]. Generally, there is no unique minimum, but rather a set of equally good solutions [6]. Notable notions of optimality are Pareto, min-max, and lexicographic, the most fundamental of these being Pareto optimality, which is derived from the concept of dominance.

A decision vector ${x}^{*}\in X$ is said to dominate a decision vector $x\in X$ if ${f}_{i}\left({x}^{*}\right)\le {f}_{i}\left(x\right)$ for all $i=1,2,\cdots ,n$ and ${f}_{j}\left({x}^{*}\right)<{f}_{j}\left(x\right)$ for at least one $j\in \left\{1,2,\cdots ,n\right\}$; a decision vector is Pareto optimal if no other vector dominates it. For a fuller discussion of the subject the reader is referred to [6] [7]. This definition indicates that there may be more than one Pareto optimal solution; indeed there could be many, or even infinitely many, depending on the type and nature of the problem [8] [9]. The question of which is the most acceptable solution is subjective, and the final choice of the decision maker is always based on a trade-off between objectives. Even so, obtaining a Pareto optimal solution set is preferable to having just a single solution, due to its applicability to real-life problems.
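As an illustration, dominance between two objective vectors (for minimization) can be checked directly from this definition. The sketch below is illustrative only; the objective values used are hypothetical, not data from the paper:

```python
def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors from a list."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical bi-objective values (distance, social cost):
front = pareto_front([(14, 11), (12, 7), (12, 10), (15, 9)])
```

Here `(12, 7)` dominates every other vector, so the front reduces to a single point.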

2.3. Solution Methods

There are a number of methods designed to assist the decision maker in arriving at the best compromise solution. The most classical of the methods use a number of schemes to convert the multiple objectives into a scalar one [10] [11] (these are the focus of our discussions in this paper) and apply standard scalar optimization algorithms, which generate one Pareto optimal solution with each run [12]. The unique optimum which results is considered Pareto optimal under certain conditions required by the particular method. There are algorithms today, however, such as the meta-heuristic ones, that can generate portions of, or even the entire, Pareto optimal solution set in a single run due to their global search approach. One such algorithm is the Ant Colony algorithm, which is the algorithm of choice in this work due to its suitability, as observed in the next section. Obtaining a Pareto optimal solution set is preferable to a single solution, since it provides a basis upon which to make value judgments in order to settle on a final solution.

An important attribute of the methods in the classical class is that they can be used interactively by incorporating user preferences directly into the algorithm at specified stages of the solution process (before, during, or after the run of the algorithm) [13] [14] in order to find the user's best compromise solution. This means that it may not be necessary to produce the entire Pareto optimal solution set where the user's preference is available. There is the risk, however, of obtaining a solution that is not Pareto optimal with this approach [15]. In this section, attention is devoted to two methods that are related to this work: the Global Criterion (or Normed) method and the weighted sum method.

The normed method, also called compromise programming, the global criterion method, or the utopian point method [14] [16], minimizes the aggregate relative distance from a prospective solution to the individual optimal solutions of the objectives [17]. The method is defined as:

$\mathrm{min}{\left(\underset{j=1}{\overset{n}{{\displaystyle \sum}}}{\left|\frac{{f}_{j}-{f}_{j}^{*}}{{f}_{j}^{*}}\right|}^{p}\right)}^{\frac{1}{p}},x\in X,1\le p\le \infty $ (2)

where ${f}_{j}^{*},j=1,2,\cdots ,n$ is the optimal value of the jth objective function when minimized individually, and the utopia solution is given by the vector ${\left({f}_{1}^{*},{f}_{2}^{*},\cdots ,{f}_{n}^{*}\right)}^{\text{T}}$ [18]. A utopia solution is a vector of the individual optima of the objectives and is generally infeasible. The power p offers various ways of calculating the relative distance; the most often used values of p are 1, 2 and ∞. When p = ∞, the resulting scalar problem is called min-max optimization as defined by [16]. Pareto optimal solutions are obtained by incorporating weights in (2), with the weighted sum approach then used to find them.
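For instance, the aggregate relative distance in (2) can be computed as follows. This is a minimal sketch; `f` is a candidate's objective vector and `f_star` the utopia vector, and both names are illustrative:

```python
def global_criterion(f, f_star, p=2):
    """Aggregate relative distance of objective values f from the
    utopia vector f_star, as in Equation (2); p may be 1, 2, or inf."""
    if p == float("inf"):
        # p = infinity gives the min-max (Chebyshev) form.
        return max(abs((fj - fs) / fs) for fj, fs in zip(f, f_star))
    return sum(abs((fj - fs) / fs) ** p for fj, fs in zip(f, f_star)) ** (1 / p)
```

With hypothetical values f = (14, 11) and f* = (12, 7), the p = ∞ criterion is max(2/12, 4/7) = 4/7.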

The Weighted-Sum method: Since the development of the concept of Pareto optimality in Economics by Vilfredo Pareto in 1906, and its application in Engineering and Science by Stadler in the 1970s [19], the weighted sum method has been the most commonly used for solving multi-objective optimization problems. The objective functions are weighted and aggregated into a single objective function as in (3):

$\mathrm{min}F\left(x\right)=\underset{j=1}{\overset{n}{{\displaystyle \sum}}}{w}_{j}{f}_{j}\left(x\right)$

subject to $x\in X,{\displaystyle {\sum}_{j=1}^{n}{w}_{j}}=1,{w}_{j}\ge 0,\forall j$ (3)

where ${w}_{j}$ is the weight of the j^{th} objective function [20]. The weights are assigned to the objectives by the decision maker or analyst and indicate the relative importance of the objective functions to the user. Each set of weights used results in a single optimal solution [21]. For a fuller and more detailed discussion of the existing methods for solving problem (1), the reader is referred to [14].

Since the objective functions in a given problem may in general be dimensionally unequal, and to avoid ambiguity in the weights assigned to them, the objective functions may be normalized (i.e., the value of each objective function is mapped onto the interval [0, 1]). For specific examples the reader is referred to [22] [23].
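A simple min-max scaling is one common way of carrying out such a mapping; the sketch below is illustrative and is not code from the paper:

```python
def normalize(values):
    """Map one objective's values onto [0, 1] by min-max scaling, so
    dimensionally unequal objectives carry comparable weights."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # A constant objective carries no discriminating information.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

Each objective is normalized separately before the weights in (3) are applied.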

3. Bi-Objective Problem

Let $G\left(V,E\right)$ be an undirected graph consisting of an indexed set of nodes V with $n=\left|V\right|$ and a spanning set of edges (arcs) E with $m=\left|E\right|$, where n and m are the numbers of nodes and edges respectively. Each arc is represented as a pair of nodes, from node i to node j, and denoted by $\left(i,j\right)$. Let each arc be associated with two numbers: the distance ${d}_{ij}$ and the social cost ${A}_{ij}$ of constructing a rail line to link the nodes (towns), the arcs representing the streets.

The bi-objective optimization problem is thus:

$\mathrm{min}{f}_{1}\left(p\right)=\underset{\left(i,j\right)\in E}{{\displaystyle \sum}}{d}_{ij}{x}_{ij}$

$\mathrm{min}{f}_{2}\left(p\right)=\underset{\left(i,j\right)\in E}{{\displaystyle \sum}}{A}_{ij}{x}_{ij}$

subject to ${x}_{ij}\in \left\{0,1\right\},\forall \left(i,j\right)\in E,i=1,2,\cdots ,n-1$ and $j=1,2,\cdots ,n$ (4)

where ${x}_{ij}$ is the decision variable for selecting, or not, the edge $\left(i,j\right)$, p is the path from the source to the destination, and ${f}_{1}$ and ${f}_{2}$ are respectively the distance function and the social cost function, with $f\left(p\right)=\left({f}_{1},{f}_{2}\right)$. The problem (4) belongs to the class of discrete multi-objective optimization problems. The techniques for solving continuous multi-objective optimization problems are also applicable to this type of problem; however, there are some limitations on the ability of the methods to find the entire Pareto optimal solution set [24]. The computational challenges for such problems are also similar to those encountered in continuous problems in the application of any scalar method.
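To make the evaluation of (4) concrete, a path can be scored by summing the two edge attributes along it. The edge data below are purely hypothetical (they are not taken from Figure 1); only the two-sum structure follows the model:

```python
# Hypothetical edge attributes: (distance d_ij, social cost A_ij) per edge (i, j).
edges = {(1, 4): (3, 2), (4, 7): (4, 1), (7, 11): (2, 2), (11, 12): (2, 1), (12, 14): (1, 1)}

def path_objectives(path):
    """Evaluate f1 (total distance) and f2 (total social cost) of a path,
    i.e. the two sums in (4) with x_ij = 1 on the chosen edges."""
    f1 = sum(edges[(i, j)][0] for i, j in zip(path, path[1:]))
    f2 = sum(edges[(i, j)][1] for i, j in zip(path, path[1:]))
    return f1, f2
```

A bi-objective solver then searches over feasible paths for ones whose `(f1, f2)` pairs are non-dominated.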

It must be observed that in problem (4) the objectives are not inherently in conflict; conflict can only arise in the data used. For example, it is quite possible for both the distance and the social cost to decrease (or increase) at the same time. There is therefore the possibility of obtaining a unique optimal solution.

Solution Technique

The weighted sum is applied as the solution method to obtain Pareto optimal solutions, following which a post-optimality ratio min-max strategy is employed as an aid in the process of identifying a best compromise solution for the decision maker. Suppose K is the set of Pareto optimal solutions, such that $p\in K$ is a Pareto optimal path, then the proposed ratio min-max strategy is given by:

$\mathrm{min}\left\{F\left(p,w\right)\right\}=\mathrm{min}\left\{\mathrm{max}\left|\frac{{F}_{j}^{*}\left(p,w\right)}{{f}_{j}^{*}\left(p\right)}-1\right|,\forall j:p\in K\right\}$

where ${f}_{j}^{*}\left(p\right)$ is the unique optimal solution value of the j^{th} objective function, ${F}_{j}^{*}\left(p,w\right)$ is the value of the j^{th} objective function at a Pareto optimal solution after the weighted sum optimization, and $F\left(p,w\right)$ is therefore a measure of the maximum margin of error of the objective functions at the Pareto optimal solutions.

The following steps depict the process towards finding the best compromise solution:

Step 1: Compute ${f}_{j}^{*}\left(p\right)$, the unique optimal value of the j^{th} objective function, for all $p\in K$;

Step 2: Perform the weighted sum optimization with a set of generated weights;

Step 3: Compute ${F}_{j}^{*}\left(p,w\right)$, the value of the j^{th} objective function, for all $p\in K$;

Step 4: Compute the absolute values of the ratios of ${F}_{j}^{*}\left(p,w\right)$ to ${f}_{j}^{*}\left(p\right)$ minus one, i.e. $\left|\frac{{F}_{j}^{*}\left(p,w\right)}{{f}_{j}^{*}\left(p\right)}-1\right|$, $\forall j$ and $p\in K$;

Step 5: For each $p\in K$, select the maximum of the values computed in Step 4 over all j;

Step 6: Choose the minimum value from Step 5.

The proposed strategy finds the margins of error of the Pareto optimal objective function values relative to their utopian ones, to assist the decision maker in choosing the result which best suits his/her interest.
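Steps 1-6 above can be sketched compactly as follows, where `pareto` is a list of Pareto optimal objective vectors (the output of Steps 2-3) and `utopia` the vector of individually optimal values; both names are illustrative:

```python
def best_compromise(pareto, utopia):
    """Steps 4-6: for each Pareto solution, compute |F_j/f_j* - 1| per
    objective, take the maximum over j (its maximum margin of error,
    MME), then pick the solution with the smallest MME."""
    def mme(F):
        return max(abs(Fj / fj - 1) for Fj, fj in zip(F, utopia))
    return min(pareto, key=mme), min(mme(F) for F in pareto)

# Hypothetical Pareto set with utopia vector (12, 7):
best, err = best_compromise([(14, 11), (12, 7), (16, 9)], (12, 7))
```

A solution equal to the utopia vector has a maximum margin of error of zero, as in Test Case 1.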

4. Solution Algorithm

The Ant Colony algorithm is in the class of meta-heuristic algorithms and was primarily designed to handle network problems. It has demonstrated its ability to obtain the best solutions for large combinatorial optimization problems [25] [26], and it is also flexible. The development of the Ant Colony Optimization (ACO) meta-heuristic was inspired by the social lifestyle exhibited by ants in search of food and other provisions [27] [28]. Individual ants are considered unintelligent and are practically blind, but through their social structure they are able to determine the shortest path to a food source after discovering it [29].

Ants, in the course of looking for food and other provisions, deposit a chemical called pheromone on their trail whenever they come across food, to attract other ants to the place [30]. Ants that move along the shortest path thus accumulate greater amounts of pheromone per unit length: the amount of pheromone an ant deposits is inversely proportional to the distance travelled, so the higher the density of pheromone, the shorter the path [30] [31].

In the Ant Colony algorithm, artificial ants decide to move from node to node by a stochastic process dictated by the magnitude of the pheromone on the route.

There are three main kinds of Ant Colony Optimization algorithms: 1) the Ant System (AS) by [32] , 2) the Max-Min Ant System (MMAS) by [33] and 3) the Ant Colony System (ACS) by [34] as cited in [35] . The contributions of the authors are with respect to the third. The specific improvements made to the algorithm are discussed next.

An Improved Ant Colony System Algorithm

The ACS algorithm involves seven main steps: 1) setting parameters, 2) initializing pheromone trails, 3) calculating the heuristic information, 4) building the ant solution by using the stochastic state transition rule, 5) updating the local pheromone, 6) applying local search to improve the solution constructed by an ant, and 7) updating the global pheromone information. The contributions of the authors [3] are in steps (2), (3) and (4). The modifications made in these three specific areas of the ACS (a detailed discussion of which is presented in [3]) result in an improvement of the algorithm in terms of the number of iterations required for convergence. The proposed modifications are as follows:

The initial pheromone ${\tau}_{ij}\left(0\right)$ deposited along $\left(i,j\right)$ at the beginning of the search, usually a small positive constant, is proposed as:

${\tau}_{ij}\left(0\right)=\frac{1}{\left(n-1\right){\stackrel{\xaf}{L}}_{ij}}$

where n is the number of nodes (i.e., the problem size), and ${\stackrel{\xaf}{L}}_{ij}$ is the ratio of the weighted sum of the distance and social cost linking node i to node j to the total from the source node to the current node.

In the literature, the heuristic information is calculated from the current node i, where the ant is, to the next node j. This approach, however, fails to provide the ant with information on the nature of the path from the source node to the current node i. This shortcoming is addressed in the reformulation: the heuristic information is calculated from the source node to the present node i and from node i to the next node j as:

${\eta}_{ij}={w}_{1}{\eta}_{0j}^{1}+{w}_{2}{\eta}_{0j}^{2}$

${\eta}_{0j}^{1}=\frac{1}{\left({d}_{0i}+{d}_{ij}\right)}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\eta}_{0j}^{2}=\frac{1}{\left({A}_{0i}+{A}_{ij}\right)}$

${w}_{1},{w}_{2}\in w$ and $0\le w\le 1$ ,

where ${d}_{0i}$ is the total distance from the source node to the present node i, ${A}_{0i}$ is the total social cost from the source node to the present node i.
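The proposed heuristic can be sketched as below; the function name and argument order are illustrative, and equal weights are assumed by default:

```python
def heuristic(d0i, A0i, d_ij, A_ij, w1=0.5, w2=0.5):
    """Proposed heuristic information: combine visibility on distance and
    social cost, measured from the source through current node i to j."""
    eta1 = 1.0 / (d0i + d_ij)   # distance component, eta^1_{0j}
    eta2 = 1.0 / (A0i + A_ij)   # social-cost component, eta^2_{0j}
    return w1 * eta1 + w2 * eta2
```

Including `d0i` and `A0i` is what lets the ant account for the quality of the partial path already built, not just the next hop.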

The aim of the local pheromone update is to reduce the pheromone content along visited routes so as to encourage other ants to generate new paths. The practice has been to deduct a constant amount along all routes. Even so, care needs to be taken to ensure that paths far from optimal are not created. To address this, the local pheromone update model is reformulated so that the reduction in pheromone levels during the construction of paths takes smaller amounts from the shorter routes compared to the longer ones. The proposed ratio approach is:

${\tau}_{ij}\left(t+1\right)=\left(1-\rho \right)\frac{{\tau}_{ij}\left(t\right)}{{L}_{ij}}+\rho {\tau}_{ij}\left(0\right),$

where ${L}_{ij}={\displaystyle \sum {\tau}_{ij}\left(t\right)}$ and $\rho $ is the rate of evaporation of the pheromone.
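The proposed ratio-based local update is then a one-line computation; the sketch below simply transcribes the formula, with illustrative parameter values:

```python
def local_update(tau_t, tau_init, L_ij, rho=0.1):
    """Ratio-based local pheromone update:
    tau(t+1) = (1 - rho) * tau(t) / L_ij + rho * tau(0)."""
    return (1 - rho) * tau_t / L_ij + rho * tau_init
```

For example, with tau(t) = 2.0, tau(0) = 0.5, L_ij = 4.0 and rho = 0.1, the updated trail is 0.9·(2.0/4.0) + 0.1·0.5 = 0.5.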

Also, to speed up the process of arriving at the optimal solution, a new approach is proposed for calculating the pheromone increment. This is the global pheromone update, which is modeled as:

${\tau}_{ij}\left(t+1\right)=\left(1-\rho \right){\tau}_{ij}\left(t\right)+\frac{1}{\rho}{\displaystyle \underset{k=1}{\overset{m}{\sum}}\Delta {\tau}_{ij}^{k}}$

$\Delta {\tau}_{ij}^{k}=\{\begin{array}{ll}\frac{1}{\stackrel{\xaf}{L}}\cdot \frac{{L}^{k}-{L}_{ij}}{{L}^{k}},&\text{if}\text{\hspace{0.17em}}\left(i,j\right)\in {U}_{i}\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}{I}_{i}\\ 0,&\text{otherwise}\end{array}$

where ${L}_{ij}$ is the weighted sum of distance and social cost $\left({w}_{1}{d}_{ij}+{w}_{2}{A}_{ij}\right)$ of the edge $\left(i,j\right)$, ${L}^{k}$ is the weighted sum of distance and social cost of the solution generated by ant k, and $\stackrel{\xaf}{L}$ is the ratio of the weighted sums of the two criteria of the solutions.
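A sketch of the global update follows; `increments` stands for the per-ant quantities $\Delta {\tau}_{ij}^{k}$ computed as above, and the names are illustrative:

```python
def global_update(tau_t, increments, rho=0.1):
    """Global pheromone update: evaporate the current trail, then add
    1/rho times the summed per-ant increments Delta tau_ij^k."""
    return (1 - rho) * tau_t + (1 / rho) * sum(increments)
```

For instance, with tau(t) = 1.0, rho = 0.1 and two ant increments of 0.05 each, the new trail level is 0.9 + 10·0.1 = 1.9.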

In constructing the feasible solutions, the decision by the ants to move from one node to the next is based on a stochastic probability rule. The proposed initial pheromone trail, heuristic information, local pheromone update and global pheromone trail update are embedded in the expression below, as in [36] :

${P}_{ij}^{k}\left(t\right)=\{\begin{array}{ll}1&\text{if}\text{\hspace{0.17em}}q\le {q}_{0}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}j={j}^{*}\\ 0&\text{if}\text{\hspace{0.17em}}q\le {q}_{0}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}j\ne {j}^{*}\\ \frac{{\left({\tau}_{ij}\left(t\right)\right)}^{\alpha}{\left({\eta}_{ij}\left(t\right)\right)}^{\beta}}{{{\displaystyle \sum}}_{r\in {N}_{i}^{k}}{\left({\tau}_{ir}\left(t\right)\right)}^{\alpha}{\left({\eta}_{ir}\left(t\right)\right)}^{\beta}}&\text{otherwise}\end{array}$

where ${j}^{*}=\mathrm{arg}{\mathrm{max}}_{r\in {N}_{i}^{k}}\left({\left({\tau}_{ir}\left(t\right)\right)}^{\alpha}{\left({\eta}_{ir}\left(t\right)\right)}^{\beta}\right)$ identifies the unvisited node in ${N}_{i}^{k}$ that maximizes ${P}_{ij}^{k}\left(t\right)$.

${N}_{i}^{k}\subseteq {N}_{i}$ is the set of nodes which are neighbours of node i and have not yet been visited by ant k (nodes in ${N}_{i}^{k}$ are obtained from those in ${N}_{i}$ using ant k’s private memory ${H}_{i}^{k}$, which stores the nodes already visited by the ant). ${N}_{i}$ is the set of nodes directly linked to node i by an edge (i.e., the neighbours of node i). ${\tau}_{ij}\left(t\right)$ is the quantity of pheromone trail laid along the edge linking node i and node j. ${\eta}_{ij}\left(t\right)$ is the heuristic information for the ant visibility measure. $\alpha $ is the parameter controlling the influence of ${\tau}_{ij}$, and $\beta $ is the parameter controlling the influence of ${\eta}_{ij}$. ${q}_{0}$ is a pre-defined parameter ( $0\le {q}_{0}\le 1$ ), and q is a uniformly distributed random number, $q\in \left[0,1\right]$, which determines the relative importance of exploitation versus exploration.
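The pseudo-random proportional rule above can be sketched as follows; the dictionary representation of $\tau $ and $\eta $ and the function signature are assumptions made for illustration:

```python
import random

def choose_next(i, candidates, tau, eta, alpha=1.0, beta=2.0, q0=0.9, rng=random):
    """ACS pseudo-random proportional rule: with probability q0, exploit
    the best arc j*; otherwise sample j in proportion to tau^alpha * eta^beta."""
    attract = {j: (tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta) for j in candidates}
    if rng.random() <= q0:
        return max(attract, key=attract.get)   # exploitation: j = j*
    total = sum(attract.values())              # exploration: biased roulette wheel
    r, acc = rng.random() * total, 0.0
    for j, a in attract.items():
        acc += a
        if acc >= r:
            return j
    return j
```

With `q0 = 1.0` the rule is purely greedy; lowering `q0` increases exploration of less-reinforced edges.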

5. Numerical Test Cases

Two example networks are presented and their results discussed in this section.

5.1. Test Case 1

The first network has fourteen (14) nodes and twenty-eight (28) edges with the node labeled 1 being the source and the one labeled 14 being the destination (see Figure 1). The aim is to find a path from 1 to 14 which minimizes the distance and the social cost simultaneously.

Figure 1. A network of 14 nodes and 28 edges with distances and social costs values.

In Table 1, the first and second columns give the names of the paths and their composites (path sets). The third column gives the values of the weights. The fourth column gives the Pareto optimal objective function values and the fifth gives the min-max ratio values. The sixth column gives the margin of error (ME) and the seventh the maximum margin of error (MME) values. The eighth and ninth columns give the numbers of iterations that the existing ACS (E) and the improved ACS (I) algorithms each take to converge to the optimal solution [3]. The last row gives the utopia solution of each of the objective functions.

The first row of Table 1 shows that the path 1-4-8-10-13-14 yields a minimum distance of 14 km and a minimum social cost of 11 units; the min-max ratios for the distance and social cost values are approximately 1.17 and 1.47 respectively, leading to margins of error of 0.17 and 0.47 respectively, and hence a maximum margin of error of approximately 0.47. The margin of error and maximum margin of error (MME) figures show that whereas the minimum distance was 17% short of the ideal solution for distance, the minimum social cost value was 47% short of the ideal solution. The new algorithm requires fewer iterations than the existing one, which is an improvement. Similar interpretations apply to the rest of the rows of the table.

Table 1. Results of test Case 1.

The results support the view expressed above that the objectives of the multi-objective optimization model are not inherently in conflict and that conflict could only arise in the data. This is evidenced by the objective function values of (12, 7) obtained, which are clearly the same as the utopian ones (see the last row of the table). This indicates that the objectives are not in conflict for most of the data used in this network. The objective function values of (12, 7) dominate all the others in the table, except (12, 10), which they weakly dominate. This observation explains the optimal objective function values, margin of error and maximum margin of error values obtained in Table 1. The unique solution therefore is (12, 7), with the shortest path being (1-4-7-11-12-14).

The other solutions, which are dominated by the unique solution (as seen in Table 1), may nevertheless be worth considering by a decision maker if the problem had a time window constraint (the need for alternative routes to cater for traffic during the day and other unforeseen problems), for instance, or other similar conditions. The time window constraint may arise when the distance and social cost values are worse than the optimal but offer a better time window.

5.2. Test Case 2

This consists of a larger network of twenty (20) nodes (from 1 as source to 20 as destination), as shown in Figure 2. Interpretations of this case are similar to those of Test Case 1.

The first row of Table 2 shows that the path 1-2-8-16-20 yields a minimum distance of 16 units and a minimum social cost of 11 units; the min-max

Figure 2. A network of 20 nodes showing the associated distances and social costs.

Table 2. Results of test Case 2.

ratio values for the distance and cost are approximately 1.33 and 1.22 respectively, leading to margins of error of 0.33 and 0.22 respectively, and hence a maximum margin of error of approximately 0.33. The margin of error and maximum margin of error (MME) figures show that whereas the minimum distance was 33% short of the ideal solution for distance, the minimum social cost value was 22% short of the ideal solution. Similar interpretations apply to the rest of the table.

The results confirm the notion in the literature that, in the weighted sum method, varying the weights does not necessarily produce different solutions. Also, the size of the Pareto optimal solution set does not depend on the size of the network. A limitation of the new method is that different paths (solutions) may have the same maximum margin of error while not having the same objective function values (as with paths ${p}_{7}$ and ${p}_{8}$ in Table 2). Finally, the zero maximum margin of error obtained in Case 1 signifies that the solution obtained is the best for both objectives.

6. Conclusion

A ratio min-max strategy has been proposed to help a decision maker identify a best compromise solution in a bi-objective discrete optimization problem solved by the weighted sum method. The problem was illustrated in the context of finding the shortest distance and least social cost in hypothetical rail construction linking a source and a destination. An improved Ant Colony System algorithm was used to find solutions for two hypothetical network problems. The proposed method is user-friendly and also provides the decision maker with the quality of each of the Pareto optimal solutions obtained, making it easy to identify the best compromise solution. A future direction would be to apply the model and methodology to a real network.

Conflicts of Interest

The authors declare no conflicts of interest.



Copyright © 2021 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.