An Evolutionary Many-Objective Algorithm Based on a Novel Tournament Selection Strategy

Abstract

Recent studies in scientific research and engineering practice increasingly employ evolutionary algorithms to solve multi-objective optimization problems (MOPs), with promising results. In the evolutionary process, mating selection, which prepares individuals for exchanging information, plays an important role in multi-objective evolutionary algorithms (MOEAs). However, existing MOEAs usually form their mating pools by random selection. This way of generating offspring introduces considerable randomness, which can degrade the quality of the offspring and thereby the effectiveness of the algorithm. To address this issue, we propose a novel tournament selection strategy, in which a binary tournament based on the grid dominance relation and density information is used to select individuals for variation. The experimental results indicate that the proposed method performs well in terms of convergence and diversity, and is particularly beneficial in high-dimensional objective spaces.


1. Introduction

Multi-objective optimization is undoubtedly a research topic that receives a great deal of attention from engineers [1]. Without loss of generality, multi-objective optimization problems (MOPs) involve two or more conflicting objectives to be optimized, and can be formulated as follows:

$$\text{minimize } F(x) = \big(f_1(x), f_2(x), \ldots, f_m(x)\big) \quad \text{subject to } x \in \Omega \qquad (1)$$

where $\Omega \subseteq \mathbb{R}^D$ is the decision space, $m$ is the number of objectives, and $F: \Omega \rightarrow \mathbb{R}^m$ represents the objective function. It is worth noting that optimization problems involving a large number of objectives appear widely in real-world applications. In particular, an MOP with more than three objectives is often called a many-objective optimization problem (MaOP) [2]. As shown in Equation (1), an MOP/MaOP involves multiple conflicting objectives to be optimized simultaneously. Therefore, MOPs have no single solution that optimizes all the objectives, and the optimum is instead represented by a set of trade-off solutions. With the great success of evolutionary algorithms in recent years, a variety of promising multi-objective evolutionary algorithms (MOEAs) have been developed, such as NSGA-III [3], MOEA/D [4], and IBEA [5]. According to their different search strategies, these algorithms can be roughly classified into three categories.
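To make the formulation concrete, the following minimal Python sketch defines a simple bi-objective instance of Equation (1); the particular objective functions and decision space are illustrative choices of ours, not problems used in this paper.

```python
import numpy as np

def F(x):
    """A toy MOP with m = 2 conflicting objectives over the decision
    space Omega = [-2, 2]^2 (illustrative example only)."""
    f1 = np.sum(x ** 2)          # minimized at the origin
    f2 = np.sum((x - 1.0) ** 2)  # minimized at (1, 1)
    return np.array([f1, f2])

x = np.array([0.5, 0.5])
print(F(x))  # a trade-off point: neither objective is at its individual optimum
```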

The first category directly adopts Pareto dominance to distinguish and select candidate solutions. NSGA-II [6] and SPEA2 [7] are two representative MOEAs of this type, in which all non-dominated solutions are identified first and then enter the next-generation population. Another idea in this category is to combine Pareto dominance with an additional metric; MOEAs of this kind include SPEA/R [8], NSGA-II/SDR [9], and PREA [10].

The second category covers the various indicator-based MOEAs. These methods generally use a performance indicator that measures solution quality to select solutions from the current population for the next generation. Representative approaches of this category are the indicator-based evolutionary algorithm [11] and the S-metric selection based evolutionary multi-objective optimization algorithm [12].

The third category comprises the decomposition-based MOEAs, which convert an MOP into a number of single-objective optimization problems (SOPs) and solve them simultaneously in a collaborative manner. MOEA/D is a representative algorithm of this class. In MOEA/D, Das and Dennis's systematic approach [13] is adopted to construct a set of weight vectors; a decomposition function then decomposes the high-dimensional problem into a number of scalarized sub-problems that are solved collaboratively. The decomposition-based idea has also been exploited in some recently developed MOEAs, such as MOEA/D-CMA and ENS-MOEA/D.
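As a reference for the weight-vector construction mentioned above, the sketch below implements Das and Dennis's simplex-lattice approach in Python; the function name and the choice of h = 12 divisions are illustrative assumptions of ours.

```python
from itertools import combinations
import numpy as np

def das_dennis_weights(m, h):
    """Generate uniformly spread weight vectors on the unit simplex:
    each component is a multiple of 1/h and the components sum to 1
    (a stars-and-bars enumeration of all such vectors)."""
    weights = []
    for dividers in combinations(range(h + m - 1), m - 1):
        prev, counts = -1, []
        for d in dividers:
            counts.append(d - prev - 1)
            prev = d
        counts.append(h + m - 2 - prev)
        weights.append([c / h for c in counts])
    return np.array(weights)

# Example: 3 objectives with h = 12 divisions gives C(14, 2) = 91 vectors.
W = das_dennis_weights(3, 12)
print(W.shape)  # (91, 3)
```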

Although the existing MOEAs have shown competitive performance in solving MOPs, they find it difficult to tackle MaOPs. One reason for their poor convergence is that random mating selection loses its search efficiency on MaOPs: this way of generating offspring is largely random, which may lead to a severe deterioration of performance as the number of objectives increases. To address this issue, we propose in this paper an evolutionary many-objective algorithm based on a novel tournament selection strategy for handling MaOPs, called NTSEA. The main contributions of this article are highlighted as follows:

1) A novel tournament selection strategy is proposed for NTSEA: a binary tournament selection strategy based on the grid dominance relation and density information is exploited to select individuals for variation.

2) Based on the proposed framework, a novel MOEA is implemented by adopting a general MOEA as the optimizer for evolving the population. To assess the performance of the proposed method, we compared NTSEA with several state-of-the-art approaches on a set of benchmark problems. Case studies and experimental results demonstrate that the proposed method is robust on general MaOPs.

The remainder of this paper is organized as follows. Section 2 describes the proposed NTSEA framework. The experimental results and discussion are presented in Section 3. Finally, Section 4 concludes the paper.

2. Proposed Framework

2.1. Main Procedure of NTSEA

Algorithm 1 gives the framework of NTSEA. The basic procedure is similar to that of most generational many-objective algorithms such as [14]. First, N solutions are randomly generated to form an initial population P, and N uniformly distributed m-dimensional unit vectors Z are generated. Then, the grid environment is constructed for the current population P, and each solution in P is assigned a fitness value according to its location in the grid. Next, solutions with large fitness values are selected from the population during mating selection to form a mating pool MP, and crossover and mutation are applied to generate the offspring population P'. Finally, the environmental selection procedure retains the N best solutions from the combined population Q.
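The following Python sketch mirrors the overall flow of Algorithm 1 in a heavily simplified form: the grid-based mating selection of Algorithm 2 and the reference-vector-based environmental selection of Algorithm 3 are replaced by trivial stand-ins so that the skeleton runs end to end. All helper names and the toy problem are assumptions of ours, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(x):
    # Toy bi-objective problem used only to exercise the skeleton.
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def variation(parents, lb, ub):
    # Simplified stand-in for SBX crossover + polynomial mutation.
    return np.clip(parents + rng.normal(0.0, 0.1, parents.shape), lb, ub)

def environmental_selection(Q, FQ, N):
    # Simplified stand-in for Algorithm 3: keep the N solutions with the
    # smallest objective sum (the real procedure uses non-dominated sorting
    # and reference-vector association).
    idx = np.argsort(FQ.sum(axis=1))[:N]
    return Q[idx], FQ[idx]

def ntsea_skeleton(N=50, D=10, max_gen=100):
    lb, ub = 0.0, 1.0
    P = rng.uniform(lb, ub, (N, D))                    # initial population
    F = np.array([evaluate(x) for x in P])
    for _ in range(max_gen):
        # Mating selection (Algorithm 2) would pick parents by grid
        # dominance and GCD; random pairing is used here as a placeholder.
        parents = P[rng.integers(0, N, N)]
        O = variation(parents, lb, ub)                 # offspring P'
        FO = np.array([evaluate(x) for x in O])
        Q, FQ = np.vstack([P, O]), np.vstack([F, FO])  # combined population
        P, F = environmental_selection(Q, FQ, N)       # keep the N best
    return P, F

pop, objs = ntsea_skeleton()
print(objs.min(axis=0))
```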

2.2. Mating Selection

Mating selection forms a mating pool by selecting promising solutions from the current population. In this paper, a selection strategy based on the grid dominance relation and density information is used to select the solutions that undergo crossover and mutation. Algorithm 2 gives the detailed steps of this strategy. First, two solutions P1 and P2 are randomly selected from the population. If one of them Pareto-dominates or grid-dominates the other, the dominating solution is chosen. Otherwise, the two solutions are mutually non-dominated under both the Pareto dominance and grid dominance relations; in this case, the solution with the lower density estimate, i.e., the smaller grid crowding distance (GCD), is chosen. Specifically, the density estimator GCD of a solution x is defined as

$$GCD(x) = \sum_{y \in N(x)} \big(M - GD(x, y)\big) \qquad (2)$$

where $N(x)$ denotes the set of grid neighbors of x, $GD(x, y)$ denotes the grid difference between x and y, and M is the number of objectives. Finally, if the GCD still cannot distinguish the two solutions, one of them is selected at random.
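The sketch below illustrates the binary tournament of Algorithm 2 together with a simple grid construction and the GCD of Equation (2). The grid setup (equal-width boxes per objective) and the neighborhood rule GD(x, y) < M follow common grid-based conventions and are assumptions of ours; the paper's exact definitions may differ.

```python
import numpy as np

def grid_coordinates(F, div=10):
    """Map objective vectors to integer grid coordinates by splitting each
    objective's range into `div` equal-width boxes (assumed grid setup)."""
    lb, ub = F.min(axis=0), F.max(axis=0)
    width = np.where(ub > lb, (ub - lb) / div, 1.0)
    return np.minimum((F - lb) // width, div - 1).astype(int)

def pareto_dominates(fa, fb):
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def grid_dominates(ga, gb):
    return bool(np.all(ga <= gb) and np.any(ga < gb))

def gcd(i, G):
    """Grid crowding distance of solution i, Equation (2): neighbors are
    solutions whose grid difference GD is smaller than the number of
    objectives M, and each neighbor contributes M - GD."""
    M = G.shape[1]
    diff = np.abs(G - G[i]).sum(axis=1)  # GD(i, j) for every j
    mask = diff < M
    mask[i] = False                      # exclude the solution itself
    return np.sum(M - diff[mask])

def binary_tournament(F, G, rng):
    """Algorithm 2: pick two random solutions; prefer Pareto or grid
    dominance, then the smaller GCD, otherwise choose at random."""
    i, j = rng.choice(len(F), size=2, replace=False)
    if pareto_dominates(F[i], F[j]) or grid_dominates(G[i], G[j]):
        return i
    if pareto_dominates(F[j], F[i]) or grid_dominates(G[j], G[i]):
        return j
    ci, cj = gcd(i, G), gcd(j, G)
    if ci != cj:
        return i if ci < cj else j
    return rng.choice([i, j])

rng = np.random.default_rng(1)
F = rng.random((20, 3))  # objectives of 20 solutions, 3 objectives
G = grid_coordinates(F)
mating_pool = [binary_tournament(F, G, rng) for _ in range(20)]
```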

2.3. Environmental Selection

Algorithm 3 gives the main steps of environmental selection. First, Pareto dominance is adopted as the selection criterion, and non-dominated sorting is performed on the merged population Q to obtain its Pareto fronts. Then, the critical layer, i.e., the l-th front, is identified. If the first l fronts contain exactly N solutions, they are returned directly. Otherwise, the solutions in the first l − 1 fronts are retained first; each solution in the l-th front is then associated with its nearest reference vector in Z, the distance between the solution and the reference vector is calculated, and the N − K closest solutions (where K is the number of solutions already retained) are selected, thereby maintaining both the population size and the diversity of the population.
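A minimal sketch of Algorithm 3 following the description above: whole fronts are accepted while they fit, and the critical front is truncated by distance to the nearest reference vector. The use of perpendicular distance and all helper names are assumptions of ours.

```python
import numpy as np

def nondominated_sort(F):
    """Return a list of fronts (arrays of indices), best front first."""
    N = len(F)
    dominated_by = [[] for _ in range(N)]
    dom_count = np.zeros(N, dtype=int)
    for i in range(N):
        for j in range(N):
            if i != j and np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].append(j)   # i dominates j
            elif i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dom_count[i] += 1           # i is dominated by j
    fronts, current = [], [i for i in range(N) if dom_count[i] == 0]
    while current:
        fronts.append(np.array(current))
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

def environmental_selection(F, Z, N):
    fronts = nondominated_sort(F)
    chosen, l = [], 0
    while l < len(fronts) and len(chosen) + len(fronts[l]) <= N:
        chosen.extend(fronts[l])            # accept the whole front
        l += 1
    if len(chosen) == N or l == len(fronts):
        return np.array(chosen[:N])
    K, last = len(chosen), fronts[l]        # critical (l-th) front
    Fn = F[last] / (np.linalg.norm(F[last], axis=1, keepdims=True) + 1e-12)
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    cos = (Fn @ Zn.T).max(axis=1)           # cosine to the nearest reference vector
    dist = np.linalg.norm(F[last], axis=1) * np.sqrt(np.maximum(1 - cos ** 2, 0))
    chosen.extend(last[np.argsort(dist)[:N - K]])
    return np.array(chosen)

rng = np.random.default_rng(2)
F = rng.random((40, 3))   # objectives of a combined population Q
Z = rng.random((15, 3))   # reference vectors (see the earlier weight-vector sketch)
print(len(environmental_selection(F, Z, N=20)))  # 20
```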

3. Performance Evaluation

To empirically investigate the effectiveness of the proposed NTSEA, we compare it with the following four representative algorithms: 1) MOEA/D [4]; 2) KnEA [15]; 3) RVEA [16]; and 4) RMMEDA [17].

3.1. Experimental Settings

In the experiments, the WFG test suite, whose number of objectives can be scaled arbitrarily, is used. This paper studies the cases where M is 3, 5, 8, and 10. The number of WFG decision variables is set to $n = k + \lambda$, where $k = 2 \times (M - 1)$ is the number of position-related variables and $\lambda = 20$ is the number of distance-related variables.
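For reference, the resulting numbers of decision variables under these settings are straightforward to reproduce:

```python
# n = k + lambda with k = 2 * (M - 1) position-related variables and
# lambda = 20 distance-related variables, as described above.
for M in (3, 5, 8, 10):
    k, lam = 2 * (M - 1), 20
    print(M, k + lam)  # n = 24, 28, 34, 38
```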

The genetic operators used to generate offspring are simulated binary crossover (SBX) and polynomial mutation (PM). The termination condition of each run is a maximum number of generations, which is set to 1000 for all test problems regardless of the number of objectives. In the experimental study, the inverted generational distance (IGD) is used as the evaluation indicator. The IGD indicator is defined as the average of the distances between each reference point and its nearest individual in the obtained population. The formula is defined as follows:

$$IGD(P^*, P) = \frac{\sum_{x \in P^*} \min_{y \in P} dis(x, y)}{|P^*|} \qquad (3)$$

In this formula, $dis(x, y)$ denotes the Euclidean distance between a reference point and an individual, $P^*$ is the set of reference points obtained by uniformly sampling the true Pareto front, P is the population of solutions obtained by the algorithm, and $|P^*|$ is the number of reference points. The smaller the IGD value, the closer the obtained solutions are to the true Pareto front and the more evenly the individuals are distributed. The IGD indicator therefore evaluates both convergence and diversity.
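A direct NumPy sketch of Equation (3); the reference points and population in the usage example are synthetic values for illustration only.

```python
import numpy as np

def igd(ref_points, population):
    """Inverted generational distance: average, over the reference points
    sampled from the true Pareto front, of the Euclidean distance to the
    nearest obtained solution."""
    d = np.linalg.norm(ref_points[:, None, :] - population[None, :, :], axis=2)
    return d.min(axis=1).mean()

ref = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])   # sampled true front
pop = np.array([[0.1, 1.0], [0.55, 0.5], [1.0, 0.1]])  # obtained solutions
print(igd(ref, pop))
```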

All algorithms were run independently 20 times on each test problem, and the results obtained by NTSEA and the comparison algorithms were compared using the Wilcoxon rank-sum test with a significance level of 5%. According to the Wilcoxon rank-sum test, "+" means that the result of the comparison algorithm is better than that of NTSEA, "−" means that it is worse than that of NTSEA, and "=" means that there is no significant difference between them.
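The "+/−/=" marks described above can be reproduced, for example, with SciPy's rank-sum test as sketched below; the synthetic IGD samples and the mean-based direction check are illustrative assumptions of ours.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# IGD values over 20 independent runs for NTSEA and one comparison
# algorithm (synthetic numbers for illustration only).
igd_ntsea = rng.normal(0.20, 0.01, 20)
igd_other = rng.normal(0.22, 0.01, 20)

stat, p = ranksums(igd_other, igd_ntsea)
if p >= 0.05:
    mark = "="                                                  # no significant difference
else:
    mark = "+" if igd_other.mean() < igd_ntsea.mean() else "-"  # smaller IGD is better
print(mark, p)
```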

3.2. Experimental Results and Analysis

In this experiment, WFG1-WFG9 are selected to evaluate the performance of the algorithms, and the results are analyzed with reference to Table 1. As shown in Table 1, the proposed NTSEA produces competitive results compared with the other state-of-the-art approaches: it obtains the best or comparable-to-best results on 20 out of the 36 test instances. Since IGD reflects both the diversity and the convergence of a population, the comparison of IGD values shows that NTSEA performs well in terms of convergence and diversity. WFG1 has a flat bias and a mixed Pareto front, and the performance of NTSEA on it is generally good. WFG2 has a disconnected Pareto front; on this problem, NTSEA outperforms the other algorithms in the 3-objective, 8-objective, and 10-objective cases. The Pareto front of WFG3 is linear and degenerate, and NTSEA is less effective on problems with degenerate Pareto fronts. The remaining six test problems share the same Pareto front in the objective space but pose different difficulties in the decision variable space. NTSEA achieves the best performance on the 3-objective and 8-objective instances of WFG4-WFG9, while RVEA performs better on the 10-objective instances, where NTSEA remains competitive. Figure 1 plots, in parallel coordinates, the output population with the smallest IGD value over 20 runs of each algorithm on the 5-objective WFG7. It can be seen from Figure 1 that all algorithms converge to the Pareto front and that, in terms of diversity, NTSEA is superior to the other algorithms on the 5-objective WFG problem. Based on the above analysis, we conclude that NTSEA performs well on MaOPs, maintains its performance across test cases with different numbers of objectives, and has good generality.

Table 1. Mean and standard deviation of the IGD values obtained by each algorithm over 20 independent runs on the WFG test suite. "+" indicates that the algorithm is superior to NTSEA, "−" indicates that it is inferior to NTSEA, and "=" indicates performance similar to NTSEA.


Figure 1. Output populations of five algorithms on 5-objective WFG7. (a) MOEA/D on WFG7. (b) KnEA on WFG7. (c) RVEA on WFG7. (d) RMMEDA on WFG7. (e) NTSEA on WFG7.

4. Conclusion

In this paper, we have proposed a novel tournament selection strategy based on the grid dominance relation and density information. Based on this strategy, we have implemented an algorithm called NTSEA for handling MaOPs. In the proposed NTSEA, grid dominance is introduced to compare individuals during mating selection. In this way, well-converged and well-distributed solutions enter the mating pool to generate promising offspring solutions. We ran the NTSEA algorithm on the WFG test problems to conduct a comprehensive comparison with several state-of-the-art approaches. The experimental results show that the proposed NTSEA is competitive with the other algorithms in terms of the diversity of solutions and convergence speed, and is particularly beneficial in high-dimensional objective spaces.

In the future, we will further enhance the performance of NTSEA and apply it to some real-world problems, such as optimization of deep neural networks and path planning of robots.

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable suggestions on improving this paper. Additionally, this work is supported in part by the National Natural Science Foundation of China (61866026, 61772255, 61866025 and 62066031), the Natural Science Foundation of Jiangxi Province (20181BAB202025), the Advantage Subject Team Project of Jiangxi Province (20165BCB19007), the Aeronautical Science Foundation of China (2018ZC56008), and the Outstanding Young Scientist Project of Jiangxi Province (20192BCB23011).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Wu, M., Li, K., Kwong, S. and Zhang, Q. (2020) Evolutionary Many-Objective Optimization Based on Adversarial Decomposition. IEEE Transactions on Cybernetics, 50, 753-764. https://doi.org/10.1109/TCYB.2018.2872803
[2] Duan, X. (2019) MCEDA: A Novel Many-Objective Optimization Approach Based on Model and Clustering. Applied Soft Computing, 74, 274-290. https://doi.org/10.1016/j.asoc.2018.10.039
[3] Deb, K. and Jain, H. (2014). An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems with Box Constraints. IEEE Transactions on Evolutionary Computation, 18, 577-601. https://doi.org/10.1109/TEVC.2013.2281535
[4] Zhang, Q. and Li, H. (2007) MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Transactions on Evolutionary Computation, 11, 712-731. https://doi.org/10.1109/TEVC.2007.892759
[5] Zitzler, E. and Künzli, S. (2004) Indicator-Based Selection in Multiobjective Search. Proceedings of the 8th International Conference on Parallel Problem Solving from Nature (PPSN VIII), Birmingham, U.K., 832-842.
[6] Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T. (2002) A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6, 182-197. https://doi.org/10.1109/4235.996017
[7] Zitzler, E., Laumanns, M. and Thiele, L. (2001) SPEA2: Improving the Strength Pareto Evolutionary Algorithm for Multiobjective Optimization. Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems. Proceedings of the EUROGEN’2001. Athens, September 19-21.
[8] Jiang, S. and Yang, S. (2017) A Strength Pareto Evolutionary Algorithm Based on Reference Direction for Multi-Objective and Many-Objective Optimization. IEEE Transactions on Evolutionary Computation, 21, 329-346. https://doi.org/10.1109/TEVC.2016.2592479
[9] Tian, Y., Cheng, R., Zhang, X., Su, Y. and Jin, Y. (2019) A Strengthened Dominance Relation Considering Convergence and Diversity for Evolutionary Many-Objective Optimization. IEEE Transactions on Evolutionary Computation. https://doi.org/10.1109/TEVC.2018.2866854
[10] Yuan, J., Liu, H. L., Gu, F., Zhang, Q. and He, Z. (2020) Investigating the Properties of Indicators and an Evolutionary Many-Objective Algorithm Based on a Promising Region. IEEE Transactions on Evolutionary Computation, 25, 75-86. https://doi.org/10.1109/TEVC.2020.2999100
[11] Sun, Y., Yen, G.G. and Yi, Z. (2019) IGD Indicator-Based Evolutionary Algorithm for Many-Objective Optimization Problems. IEEE Transactions on Evolutionary Computation, 23, 173-187. https://doi.org/10.1109/TEVC.2018.2791283
[12] Li, M., Yang, S. and Liu, X. (2016) Pareto or Non-Pareto: Bi-Criterion Evolution in Multi-Objective Optimization. IEEE Transactions on Evolutionary Computation, 20, 645-665.
[13] Das, I. and Dennis, J.E. (1998) Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems. SIAM Journal on Optimization, 8, 631-657.
[14] Chen, H., Tian, Y., Pedrycz, W., Wu, G., Wang, R. and Wang, L. (2019) Hyperplane Assisted Evolutionary Algorithm for Many-Objective Optimization Problems. IEEE Transactions on Cybernetics, 50, 3367-3380.
[15] Zhang, X., Tian, Y. and Jin, Y. (2015) A Knee Point-Driven Evolutionary Algorithm for Many-Objective Optimization. IEEE Transactions on Evolutionary Computation, 19, 761-776. https://doi.org/10.1109/TEVC.2014.2378512
[16] Cheng, R., Jin, Y., Olhofer, M. and Sendhoff, B. (2016) A Reference Vector Guided Evolutionary Algorithm for Many-Objective Optimization. IEEE Transactions on Evolutionary Computation, 20, 773-791. https://doi.org/10.1109/TEVC.2016.2519378
[17] Zhang, Q., Zhou, A. and Jin, Y. (2008) RM-MEDA: A Regularity Model-Based Multi-Objective Estimation of Distribution Algorithm. IEEE Transactions on Evolutionary Computation, 12, 41-63.
